TPC for System z
ibm.com/redbooks
International Technical Support Organization

Tivoli Storage Productivity Center for Replication for System z

December 2013
SG24-7563-00
Note: Before using this information and the product it supports, read the information in Notices on page xi.
First Edition (December 2013) This edition applies to Version 5, Release 1 of Tivoli Storage Productivity Center for Replication for System z (product number 5698-Z11). This document was created or updated on December 30, 2013.
Copyright International Business Machines Corporation 2013. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices
Trademarks

Preface
The team who wrote this book
Now you can become a published author, too!
Comments welcome
Stay connected to IBM Redbooks

Chapter 1. Tivoli Storage Productivity Center for Replication introduction
  1.1 TPC for Replication overview
    1.1.1 Replication task management and automation
  1.2 IBM Tivoli Storage Productivity Center for Replication for System z
    1.2.1 TPC for Replication management overview
    1.2.2 Application design
  1.3 Terminology
    1.3.1 TPC for Replication session types and commands
  1.4 New functions in TPC for Replication V5.1
    1.4.1 Failover operations that are managed by other applications
    1.4.2 Additional support for space-efficient volumes in remote copy
    1.4.3 Reflash After Recover option for Global Mirror Failover/Failback with Practice sessions
    1.4.4 No Copy option for Global Mirror with Practice and Metro Global Mirror with Practice sessions
    1.4.5 Recovery Point Objective Alerts option for Global Mirror sessions
    1.4.6 Enable Hardened Freeze option for Metro Mirror sessions
    1.4.7 StartGC H1->H2 command for Global Mirror sessions
    1.4.8 Export Global Mirror Data command for Global Mirror role pairs

Chapter 2. WebSphere Application Server OEM Edition for z/OS install
  2.1 Configuration overview
  2.2 WAS OEM Configuration procedure
    2.2.1 Stage 1 - Configuration
    2.2.2 Stage 2 - Security Setup
    2.2.3 Stage 3 - Server instance creation

Chapter 3. Tivoli Storage Productivity Center for Replication install on z/OS
  3.1 Hardware and software requirements
    3.1.1 Software requirements
    3.1.2 Minimum hardware requirements
    3.1.3 GUI Client software requirements
  3.2 Connect servers to storage subsystems
    3.2.1 Physical planning and firewall considerations
    3.2.2 TCP/IP ports used by TPC for Replication
    3.2.3 TPC for Replication Server and ESS 800 connectivity
    3.2.4 TPC for Replication server and DS6000 connectivity
    3.2.5 TPC for Replication server and DS8000 connectivity
  3.3 Pre-installation steps
  3.4 TPC for Replication database repositories
  3.5 Install TPC for Replication using embedded Derby database
    3.5.1 Configure Derby (zero-administration database)
    3.5.2 Start TPC for Replication installation with IWNINSTL job
    3.5.3 Logon to TPC for Replication server
  3.6 Installing TPC for Replication using DB2
    3.6.1 Preparing and configuring DB2
    3.6.2 Creating TPC for Replication database
    3.6.3 DB2 for z/OS customization
    3.6.4 Start TPC for Replication installation with IWNINSTL job
    3.6.5 Logging in TPC for Replication console
  3.7 Upgrading from TPC for Replication Basic Edition for System z to TPC for Replication for System z

Chapter 4. Configuring DS8000 storage system for use with Tivoli Storage Productivity Center for Replication
  4.1 Configuring IBM ESS and DS Storage Servers
    4.1.1 Preparing the IBM ESS 800
    4.1.2 Preparing the IBM DS6800
    4.1.3 Preparing the IBM DS8000
  4.2 Adding IBM ESS or DS Storage Server to Tivoli Storage Productivity Center for Replication server
    4.2.1 Adding an IBM Storage Server using the Tivoli Storage Productivity Center for Replication GUI
  4.3 Enable/Disable heartbeat for DS8000 storage systems
  4.4 Tivoli Storage Productivity Center for Replication Volume Protection
  4.5 Removing a storage subsystem from Tivoli Storage Productivity Center for Replication
  4.6 Refreshing the storage system configuration

Chapter 5. Tivoli Storage Productivity Center for Replication general administration and high availability
  5.1 GUI overview
    5.1.1 Health Overview panel
  5.2 Accessing TPC for Replication through the CLI
    5.2.1 Configuring the command-line interface
    5.2.2 Setting up automatic login to the CLI
    5.2.3 Using CLI in z/OS environment
    5.2.4 TPC for Replication CLI overview
  5.3 TPC for Replication user administration
    5.3.1 TPC for Replication role based access control
    5.3.2 Managing access control
  5.4 TPC for Replication Advanced Tools
  5.5 TPC for Replication Console
  5.6 TPC for Replication High Availability
    5.6.1 Set up a TPC for Replication standby server
    5.6.2 Takeover
  5.7 Using CSV files for importing and exporting sessions
    5.7.1 Exporting CSV files
    5.7.2 Importing CSV file
    5.7.3 Working with CSV files under Microsoft Excel
  5.8 Starting and stopping the TPC for Replication server
    5.8.1 Starting WebSphere Application Server OEM Edition on z/OS
    5.8.2 Stopping WebSphere Application Server OEM Edition
  5.9 TPC for Replication SNMP setup

Chapter 6. Basic HyperSwap customization and use
  6.1 z/OS HyperSwap overview
  6.2 Basic HyperSwap - it is not so basic anymore
    6.2.1 Planned and Unplanned HyperSwap
  6.3 Customization
    6.3.1 SYS1.PARMLIB
    6.3.2 Address spaces
  6.4 Commands
  6.5 Basic HyperSwap considerations and requirements
    6.5.1 Sysplex requirements
    6.5.2 Disk subsystems
    6.5.3 JES3 considerations
    6.5.4 JES2 considerations
    6.5.5 HCD and IPL
    6.5.6 Allocation and Esoteric Names
    6.5.7 Sysplex CDS
    6.5.8 Hardware reserves
    6.5.9 Concurrent Copy
    6.5.10 Cache Fast Write
    6.5.11 Products which swap UCB pointers
    6.5.12 Automation
    6.5.13 Audit trail
  6.6 Use scenario
    6.6.1 Setting up a HyperSwap session in TPC-R
    6.6.2 HyperSwap phases
    6.6.3 Create Basic HyperSwap session
    6.6.4 Add Copy Sets to a Basic HyperSwap session
    6.6.5 Start H1->H2 in a Basic HyperSwap session
    6.6.6 HyperSwap command
    6.6.7 Start H2->H1 in a Basic HyperSwap session
    6.6.8 HyperSwap command (after Start H2->H1)
    6.6.9 Stop Basic HyperSwap session
    6.6.10 Terminate a Basic HyperSwap session
    6.6.11 Testing
    6.6.12 Manual Recovery

Chapter 7. Using Tivoli Storage Productivity Center for Replication for DS8000
  7.1 Configuring logical paths
    7.1.1 Creating logical paths
    7.1.2 Removing logical path
  7.2 FlashCopy session using GUI
    7.2.1 Create FlashCopy session
    7.2.2 Add Copy Sets to a FlashCopy session
    7.2.3 Initiate Flash action against FlashCopy session
    7.2.4 Initiate Background Copy against FlashCopy session
    7.2.5 Terminate FlashCopy session
  7.3 Metro Mirror Single Direction session using GUI
    7.3.1 Create Metro Mirror Single Direction session
    7.3.2 Add Copy Sets to a Metro Mirror session
    7.3.3 JES3 considerations
    7.3.4 Start Metro Mirror session
    7.3.5 Switch to Global Copy
    7.3.6 Suspend Metro Mirror session
    7.3.7 Recover Metro Mirror session
    7.3.8 Stop Metro Mirror session
    7.3.9 Terminate Metro Mirror session
  7.4 Global Mirror Single Direction session using GUI
    7.4.1 Create Global Mirror Single Direction session
    7.4.2 Add Copy Sets to a Global Mirror session
    7.4.3 Start Global Mirror session
    7.4.4 Suspend Global Mirror session
    7.4.5 Recover Global Mirror session
    7.4.6 Terminate Global Mirror session
  7.5 Metro Mirror Failover/Failback using GUI
    7.5.1 Create Metro Mirror Failover/Failback session
    7.5.2 Add Copy Sets to a Metro Mirror Failover/Failback session
    7.5.3 Start H1 to H2 replication
    7.5.4 Hyperswap
    7.5.5 Suspend Metro Mirror session
    7.5.6 Recover Metro Mirror session
    7.5.7 Enable Copy to Site 1
    7.5.8 Start H2->H1 replication
    7.5.9 Enable Copy to Site 2
    7.5.10 Stop Metro Mirror session
    7.5.11 Terminate Metro Mirror session
  7.6 Metro Mirror Failover/Failback w/ Practice
    7.6.1 Create Metro Mirror Failover/Failback w/ Practice session
    7.6.2 Add Copy Sets to a Metro Mirror Failover/Failback w/ Practice session
    7.6.3 Starting the session
    7.6.4 Flash Metro Mirror session
    7.6.5 Suspend Metro Mirror session
    7.6.6 Recover Metro Mirror session
    7.6.7 Enable Copy to Site 1
    7.6.8 Start H2->H1 Metro Mirror session
    7.6.9 Enable Copy to Site 2
    7.6.10 Stop Metro Mirror session
    7.6.11 Terminate Metro Mirror session
  7.7 Global Mirror Failover/Failback session using GUI
    7.7.1 Create Global Mirror Failover/Failback session
    7.7.2 Add Copy Sets to a Global Mirror Failover/Failback session
    7.7.3 Start the Global Mirror session
    7.7.4 Suspend Global Mirror session
    7.7.5 Recover Global Mirror session
    7.7.6 Enable Copy to Site 1
    7.7.7 Start H2->H1
    7.7.8 Suspend H2 to H1 replication
    7.7.9 Recover Global Mirror session (after Start H2->H1)
    7.7.10 Enable Copy to Site 2
    7.7.11 Terminate Global Mirror session
  7.8 Global Mirror Failover/Failback with Practice session
    7.8.1 Create Global Mirror Failover/Failback w/ Practice session
    7.8.2 Add Copy Sets to a Global Mirror session
    7.8.3 Start H1->H2 Global Mirror session
    7.8.4 Flash Global Mirror session
    7.8.5 Initiate Background Copy
    7.8.6 Suspend Global Mirror session
    7.8.7 Recover Global Mirror session
    7.8.8 Enable Copy to Site 1
    7.8.9 Start H2->H1
    7.8.10 Suspend Global Mirror session (after Start H2->H1)
    7.8.11 Recover Global Mirror session (after Start H2->H1)
    7.8.12 Enable Copy to Site 2
    7.8.13 Terminate Global Mirror session
  7.9 Metro Global Mirror using GUI
    7.9.1 Create Metro Global Mirror session
    7.9.2 Add Copy Sets to a Metro Global Mirror session
    7.9.3 Start H1->H2->H3 Metro Global Mirror session
    7.9.4 SuspendH2H3 Metro Global Mirror session
    7.9.5 RecoverH3 Metro Global Mirror session
    7.9.6 Suspend Metro Global Mirror session
    7.9.7 Release I/O in a Metro Global Mirror session
    7.9.8 RecoverH2 in a Metro Global Mirror session
    7.9.9 Enable Copy to Site 1
    7.9.10 Start H2->H3 Metro Global Mirror session (after RecoverH2)
    7.9.11 Start H2->H1->H3 Metro Global Mirror session (after RecoverH2)
    7.9.12 Suspend Metro Global Mirror session (after Start H2->H1->H3)
    7.9.13 RecoverH1 in a Metro Global Mirror session (after suspending H2->H1->H3 Metro Global Mirror session)
    7.9.14 Enable Copy to Site 2
    7.9.15 Start H1->H3 in a Metro Global Mirror session
    7.9.16 Suspend Metro Global Mirror session (after Start H1->H3)
    7.9.17 Recover Metro Global Mirror session (after suspending H1->H3 Global Mirror session)
    7.9.18 Enable Copy to Site 1 (after Recover H3)
    7.9.19 Start H3->H1->H2 in a Metro Global Mirror session (after recovering H3 volumes in a H1->H3 Global Mirror session)
    7.9.20 Suspend Metro Mirror session (after Start H3->H1->H2)
    7.9.21 Recover Metro Global Mirror (after suspending H3->H1->H2 Metro Global Mirror session)
    7.9.22 Terminate Metro Global Mirror session

Chapter 8. DS8000 recovery scenarios
  8.1 Metro Mirror Single Direction planned outages
    8.1.1 Planned outage of H1 site
    8.1.2 Planned outage of H2 site
  8.2 Metro Mirror Single Direction unplanned outages
    8.2.1 Unplanned outage of H1 site
    8.2.2 Unplanned outage of H2 site
  8.3 Global Mirror Single Direction planned outages
    8.3.1 Planned outage of H1 site
    8.3.2 Planned outage of H2 site
  8.4 Global Mirror Single Direction unplanned outages
    8.4.1 Unplanned outage of H1 site
    8.4.2 Unplanned outage of H2 site
  8.5 Metro Mirror Failover/Failback planned outages
    8.5.1 Planned outage of H1 site
    8.5.2 Planned outage of H2 site
  8.6 Metro Mirror Failover/Failback unplanned outages
    8.6.1 Unplanned outage of H1 site
    8.6.2 Unplanned outage of H2 site
  8.7 Global Mirror Failover/Failback planned outages
    8.7.1 Planned outage of H1 site
    8.7.2 Planned outage of H2 site
  8.8 Global Mirror Failover/Failback unplanned outages
    8.8.1 Unplanned outage of H1 site
    8.8.2 Unplanned outage of H2 site
  8.9 Metro Mirror Failover/Failback w/ Practice practice
    8.9.1 Practice on H2 site
  8.10 Metro Mirror Failover/Failback w/ Practice planned outages
    8.10.1 Planned outage of H1 site
    8.10.2 Planned outage of H2 site
  8.11 Metro Mirror Failover/Failback w/ Practice unplanned outages
    8.11.1 Unplanned outage of H1 site
    8.11.2 Unplanned outage of H2 site
  8.12 Global Mirror Failover/Failback w/ Practice practice
    8.12.1 Practice on H2 site
  8.13 Global Mirror Failover/Failback w/ Practice planned outages
    8.13.1 Planned outage of H1 site
    8.13.2 Planned outage of H2 site
  8.14 Global Mirror Failover/Failback w/ Practice unplanned outages
    8.14.1 Unplanned outage of H1 site
    8.14.2 Unplanned outage of H2 site
  8.15 Metro Global Mirror planned outages
    8.15.1 Planned outage of local H1 site, with a production move to intermediate H2 site and return
    8.15.2 Planned outage of intermediate H2 site and return
    8.15.3 Planned outage of local H1 site and intermediate H2 site in normal configuration with production move to remote H3 site and return
    8.15.4 Planned outage of remote H3 site and return
  8.16 Metro Global Mirror unplanned outages
    8.16.1 Unplanned outage of local H1 site, production move to intermediate H2 site and return
    8.16.2 Unplanned outage of intermediate H2 site and return
    8.16.3 Unplanned outage of local H1 site and intermediate H2 site, with production move to remote H3 site and return
    8.16.4 Unplanned outage of remote H3 site and return
  8.17 Metro Global Mirror practice scenarios at H3 site
    8.17.1 Overview of Metro Global Mirror scenarios for practicing at H3 site
    8.17.2 Practice scenario 1: while practicing, disaster occurs at H1 site, recover to H2 site
    8.17.3 Practice scenario 2: while practicing, planned outage at H1 site, recover to H2 site
    8.17.4 Practice scenario 3: while practicing with production at H2 site, move production back to H1 site
  8.18 Basic HyperSwap planned outages
    8.18.1 Planned outage of H1 site storage subsystem (H1 volumes)
    8.18.2 Planned outage of H2 site storage subsystem (H2 volumes)
  8.19 Basic HyperSwap unplanned outages
    8.19.1 Unplanned outage of H1 site storage subsystem (H1 volumes)
    8.19.2 Unplanned outage of H2 site storage subsystem (H2 volumes)

Related publications

Index
Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX, DB2, Distributed Relational Database Architecture, DRDA, DS6000, DS8000, ECKD, Enterprise Storage Server, FICON, FlashCopy, GDPS, HyperSwap, IBM, MVS, NetView, Parallel Sysplex, RACF, Redbooks, Redpaper, Redbooks (logo), S/390, System p, System Storage, System z, TDMF, Tivoli, WebSphere, XIV, z/OS, z/VM
The following terms are trademarks of other companies:

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM Tivoli Storage Productivity Center for Replication provides support for the advanced copy services capabilities on the DS8000 and DS6000, in addition to the support for SAN Volume Controller. This support focuses on automating the administration and configuration of these services, providing operational control (starting, suspending, and resuming) of copy services tasks, and monitoring and managing the copy services sessions. In addition to the support for FlashCopy and Metro Mirror, Tivoli Storage Productivity Center for Replication supports Global Mirror on the DS8000 and SAN Volume Controller hardware platforms. Advanced disaster recovery functions are also supported with failover/failback (planned and unplanned) from a primary site to a disaster recovery site.

A new product, IBM Tivoli Storage Productivity Center for Replication Basic Edition for System z, enables Basic HyperSwap on z/OS, which allows the management of disk replication services using an intuitive GUI on z/OS systems.

Tivoli Storage Productivity Center for Replication can also monitor the performance of the copy services, providing a measurement of the amount of replication and the amount of time that is required to complete the replication operations.

This IBM Redbooks publication provides the information you need to install Tivoli Storage Productivity Center for Replication V5.1 and to create and manage replication sessions on a z/OS platform. Scenarios are provided that document the work performed in our laboratory setting, using the GUI and CLI.
Mauro Galindo
Stefan Lein
Nilzemar Macedo
Curtis Neal

Thanks to the following people for their contributions to this project:

Rich Conway
Bob Haimowitz
Sangam Racherla
International Technical Support Organization, Poughkeepsie Center

Craig Gordon
Rosemary McCutchen
IBM ATS Gaithersburg

Paulina Acevedo
Randy Blea
Steven Kern
Billy Olsen
Wayne Sun
IBM Tucson

Kevin Kelly
Tariq Hanif
Tri Hoang
IBM Poughkeepsie
Comments welcome
Your comments are important to us! We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:

Use the online Contact us review Redbooks form found at:
ibm.com/redbooks

Send your comments in an email to:
redbooks@us.ibm.com

Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1. Tivoli Storage Productivity Center for Replication introduction
FlashCopy
The IBM FlashCopy feature is a point-in-time copy capability that can be used to help reduce application outages caused by backups and other data copy activities. FlashCopy is designed to enable data to be copied in the background while making both the source and the copied data available to users almost immediately. With its copy-on-write capability, the only data copied is that which is about to be changed or overlaid. Copies can be made quickly, after which data can be backed up and capacity reallocated. This form of replication creates a replica (or T-zero copy) of the source within the same physical storage subsystem; both the source and target volumes exist within the same storage subsystem.

The DS8000 products provide multiple logical subsystems (LSSs) within a single physical subsystem box. These products support local (same box) point-in-time copy where the source volume is in one LSS and the target volume is in another LSS. For a FlashCopy session in TPC for Replication, Figure 1-2 shows the volume relationship established as part of the session creation.
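To make the copy-on-write idea concrete, here is a small conceptual sketch in Python. It is not TPC for Replication or DS8000 code, and the class and method names are invented for illustration; it only models the behavior described above: the target is usable immediately, and a source track is physically copied only when it is about to be overwritten (or when the optional background copy runs).

```python
class FlashCopyRelation:
    """Conceptual model of a copy-on-write point-in-time copy.

    At flash time the target is established logically in an instant; a track
    is physically copied to the target only when the source track is about to
    be overwritten, or when the optional background copy runs.
    """

    def __init__(self, source):
        self.source = source              # dict: track number -> data
        self.target = {}                  # physically copied tracks only
        self.pending = set(source)        # tracks not yet copied

    def read_target(self, track):
        # A target read is satisfied from the source until the track is copied.
        return self.target[track] if track in self.target else self.source[track]

    def write_source(self, track, data):
        # Copy-on-write: preserve the point-in-time image before overwriting.
        if track in self.pending:
            self.target[track] = self.source[track]
            self.pending.discard(track)
        self.source[track] = data

    def background_copy(self):
        # Background copy drains the remaining tracks to the target.
        for track in list(self.pending):
            self.target[track] = self.source[track]
            self.pending.discard(track)


relation = FlashCopyRelation({0: "A", 1: "B"})
relation.write_source(0, "A'")            # only track 0 is copied at this point
assert relation.read_target(0) == "A"     # the target still shows the T-zero image
```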
Metro Mirror
Metro Mirror is designed to constantly maintain an up-to-date copy of the primary location data at a remote site within the metropolitan area. Synchronous mirroring techniques are designed to maintain data currency between two sites. Because mirrored data is a time-consistent image of the original data, this can help you avoid a long and complicated data recovery process before restoring business operations. A Metro Mirror session is a form of synchronous remote replication designed to operate over distances under 300 kilometers. With Metro Mirror, the source is located in one subsystem and the target is located in another subsystem. Metro Mirror replication maintains identical data in both the source and target. In synchronous replication, changes made to the source data are propagated to the target before the write is committed to the requesting host. Figure 1-3 shows two available Metro Mirror session icons in TPC for Replication.
Figure 1-3 Metro Mirror and Metro Mirror with Practice sessions
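The defining property of a Metro Mirror (synchronous) relationship is that a host write completes only after the data is secured on both the source and the target. The following minimal Python sketch models just that ordering; it is purely conceptual, and all names in it are invented rather than taken from the product.

```python
class Volume:
    def __init__(self, name):
        self.name = name
        self.tracks = {}

    def write(self, track, data):
        self.tracks[track] = data


def metro_mirror_write(host_write, primary, secondary):
    """Synchronous (Metro Mirror style) handling of a single host write.

    The write is hardened on both the primary and the secondary volume before
    control returns to the application, so the secondary is always an exact,
    time-consistent image of the primary.
    """
    track, data = host_write
    primary.write(track, data)        # 1. write to the H1 (source) volume
    secondary.write(track, data)      # 2. propagate to the H2 (target) volume
    return "write complete"           # 3. only now is the host I/O completed


h1, h2 = Volume("H1"), Volume("H2")
status = metro_mirror_write((0, "payroll record"), h1, h2)
assert status == "write complete" and h1.tracks == h2.tracks
```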
Global Mirror
Global Mirror is designed to help maintain data currency at a remote site within a few seconds of the local site, regardless of distance. It includes exceptional capabilities such as self-managed cross-system data consistency groups, which help protect data integrity for large applications across a wide variety of flexible system configurations. These copying and mirroring capabilities are designed to help give users constant access to critical information during both planned and unplanned local outages, and TPC for Replication provides key configuration, administration, and monitoring tools to manage these capabilities. For businesses in on demand fields, these capabilities are essential for managing data availability and resiliency and sustaining business continuity.

Global Mirror is a method of continuous asynchronous replication. It is intended to enable data replication at distances over 300 kilometers. When a write is issued to the source copy, the change is propagated to the target copy, but subsequent changes are allowed to the source copy before the target copy verifies that it has received the change. However, because data changes are not applied synchronously, you can potentially lose some data. Figure 1-4 shows icons for all available Global Mirror session types in TPC for Replication.
Figure 1-4 Global Mirror (with Failover/Failback), Global Mirror Failover/Failback with Practice, Global Mirror Either Direction with Two Site Practice
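By contrast, Global Mirror acknowledges the host as soon as the write reaches the source and drains the changes to the remote site afterwards, which is why a small amount of data can be exposed if a disaster strikes between consistency groups. The sketch below is a conceptual illustration only (invented names, not product code):

```python
from collections import deque


class GlobalMirrorPair:
    """Asynchronous (Global Mirror style) handling of host writes.

    Host writes complete against the primary immediately; changes are queued
    and sent to the remote site later, so the remote copy trails the primary
    by a small, consistent amount (the recovery point).
    """

    def __init__(self):
        self.primary = {}
        self.remote = {}
        self.out_of_sync = deque()        # changes not yet sent to the remote site

    def host_write(self, track, data):
        self.primary[track] = data
        self.out_of_sync.append((track, data))
        return "write complete"           # acknowledged before the remote has the data

    def form_consistency_group(self):
        # Periodically drain the queue; the remote copy jumps to a new
        # consistent point in time each time a group is applied.
        while self.out_of_sync:
            track, data = self.out_of_sync.popleft()
            self.remote[track] = data


pair = GlobalMirrorPair()
pair.host_write(0, "order 1001")
exposed = len(pair.out_of_sync)           # data that would be lost in a disaster right now
pair.form_consistency_group()             # the remote catches up to a consistent image
assert pair.remote == pair.primary and exposed == 1
```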
Metro/Global Mirror
The Metro/Global Mirror function enables a three-site, high availability disaster recovery solution. It combines the capabilities of both the Metro Mirror and Global Mirror functions for greater protection against planned and unplanned outages. Metro/Global Mirror is supported across zSeries and open-systems environments, and is supported on the DS8000 only.

Metro/Global Mirror uses synchronous replication to mirror data between a local site and an intermediate site, and asynchronous replication to mirror data from an intermediate site to a remote site. In this configuration, a Metro Mirror pair is established between two nearby sites (local and intermediate) to protect from local site disasters. The Global Mirror volumes can be located thousands of miles away and continue to be updated if the original local site has suffered a disaster and I/O has to be failed over to the intermediate site. In the case of a local-site-only disaster, Metro/Global Mirror can provide a zero-data-loss recovery at the remote site as well as at the intermediate site.

The Metro/Global Mirror function provides the following combination of synchronous and asynchronous mirroring:

- A nearby two-site synchronous copy that can protect from local disasters.
- A longer distance asynchronous copy, at a third site, that can protect from larger scale regional disasters. The third site provides an extra layer of data protection.

Metro/Global Mirror is an extension of Global Mirror, which is based on the existing Global Copy (formerly known as PPRC XD) and FlashCopy functions. Global Mirror, running at the intermediate site and using a master storage unit, internally manages data consistency, removing the need for external software to form consistency groups at the remote site. Metro/Global Mirror should be used when two recovery sites are required. Its support for the DS8000 provides the following capabilities:

- Failover and failback support
- HyperSwap function in a Metro Mirror session between the local and intermediate site
- Fast re-establishment of three-site mirroring
- Quick re-synchronization of mirrored sites using incremental changes only
- Data currency at the remote site

Incremental Resync for Metro/Global Mirror is used in a Metro/Global Mirror Peer-to-Peer Remote Copy configuration in order to maintain a backup site if one of the three sites is lost. The purpose of the Incremental Resync function is to avoid always having to do a full volume resynchronization between the local and remote sites if an outage occurs at your intermediate site. When the intermediate site is lost, the local and remote sites can be connected, copying only a subset of the data on the volumes to maintain a backup site. The microcode records the bytes in flight at the local site. If the intermediate site is lost, the local site volume can be established as the primary to the volume at the remote site. Figure 1-5 shows a Metro/Global Mirror session icon in TPC for Replication.
HyperSwap
IBM Tivoli Storage Productivity Center for Replication for System z (including the Basic Edition version) enables Basic HyperSwap on z/OS to provide a single-site or multi-site,
high-availability disk solution, which allows the configuration of disk-replication services using a GUI from z/OS. The intention is that with Basic HyperSwap function enabled, seamless swapping between primary and secondary disk volumes in the event of planned and unplanned outages such as hardware maintenance, testing, or device failure, can be accomplished from z/OS. When using Basic HyperSwap, Tivoli Storage Productivity Center for Replication helps eliminate single disk failures as a source of application outages by enabling you to specify a set of storage volumes to be synchronously mirrored. For example, in the event of a permanent I/O error, I/O requests can be automatically switched to the secondary copy, thereby masking the failure from the application and minimizing the need to restart the application (or system) after the failure. You can also initiate a planned failover to a secondary disk for the purpose of initiating hardware maintenance on primary storage controllers, or simply to periodically test the function. You can switch back to your preferred configuration via the GUI or operator commands. Figure 1-6 shows a Basic HyperSwap session.
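Conceptually, a HyperSwap transparently re-points application I/O from the primary volumes to their Metro Mirror secondaries, either on command (planned) or when a permanent I/O error occurs (unplanned). The Python sketch below illustrates only that idea; the real Basic HyperSwap implementation swaps UCB pointers inside z/OS and is not represented by this code.

```python
class PermanentIOError(Exception):
    """Stands in for a permanent I/O error on the primary volume."""


class Volume:
    def __init__(self, name, failed=False):
        self.name, self.failed, self.tracks = name, failed, {}

    def write(self, track, data):
        if self.failed:
            raise PermanentIOError(self.name)
        self.tracks[track] = data


class HyperSwapDevice:
    """Conceptual stand-in for Basic HyperSwap behavior.

    Real Basic HyperSwap swaps UCB pointers inside z/OS so the switch is
    invisible to the application; here the same idea is modeled as a failover
    between two mirrored volume objects.
    """

    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary
        self.active = primary                     # where application I/O goes now

    def swap(self):
        # Planned (maintenance, test) or unplanned (error-driven) swap.
        self.active = self.secondary if self.active is self.primary else self.primary

    def write(self, track, data):
        try:
            self.active.write(track, data)
        except PermanentIOError:
            self.swap()                           # mask the failure from the application
            self.active.write(track, data)


device = HyperSwapDevice(Volume("H1", failed=True), Volume("H2"))
device.write(0, "application data")               # succeeds on H2 after the swap
assert device.active.name == "H2"
```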
1.2 IBM Tivoli Storage Productivity Center for Replication for System z
As we have seen, Tivoli Storage Productivity Center for Replication is a member of the IBM Tivoli Storage Productivity Center product family. The TPC for Replication family is designed to help simplify the management of advanced copy services by automating the administration and configuration of these services with wizard-based session and copy set definitions, providing simple operational control of copy services tasks (including starting, suspending, and resuming), and offering tools for monitoring and managing copy sessions. The basic functions of TPC for Replication provide management for the following advanced copy functions on IBM storage systems:

- Metro Mirror
- Global Mirror
- Metro Global Mirror
- FlashCopy
TPC for Replication can also monitor the performance of the copy services, providing a measurement of the amount of replication that has been done as well as the amount of time needed to complete the replication. Automated failover is designed to keep your critical data online and available to your users even if your primary site fails. When the primary site comes back online, the software manages failback to the default configuration as well. TPC for Replication offers:

- Support for redundant TPC for Replication servers (active-standby).
- Support for the following additional session types for the ESS model 800, DS6000, and/or DS8000:
  - Two-site Metro Mirror, with the ability to control the session in both directions with practice (from site 1 to site 2 or vice versa).
  - Two-site Global Mirror, with the ability to control the session in both directions with practice (from site 1 to site 2 or vice versa).
  - Three-site Metro Global Mirror configurations with practice.
- Disaster recovery configurations that can be set up to indicate the copy type (FlashCopy, Metro Mirror, Global Mirror, Metro Global Mirror) and the number of separate copies and sites to be involved.
- Replication performance monitoring showing the progress towards completion of hardware replication:
  - For Metro Mirror, the progress towards getting all data to full duplex.
  - For Global Mirror, the progress towards getting all data fully joined in the session.
  - For FlashCopy without persistent specified, the progress of getting the background copy complete.

TPC for Replication offers a high availability capability, so you can manage your replication even if the main TPC for Replication server experiences a failure. With a second server operating as an active standby, services can switch quickly to the backup server to maintain copy services operations if the primary server goes offline.

TPC for Replication is designed to simplify disaster recovery management through planned and unplanned failover and failback automation for the IBM DS8000 using its Metro Global Mirror (MGM) feature. This allows for a synchronous copy between two sites combined with an asynchronous copy link to a distant site. TPC for Replication supports fast failover and failback, fast re-establishment of three-site mirroring, data currency at the remote site with minimal lag behind the local site, and quick resynchronization of mirrored sites using incremental changes only.

A three-site Metro Global Mirror configuration provides the following recovery options at the alternate sites if a failure occurs:

- If an outage occurs at the local site, recovery operations can begin at the intermediate site. Global Mirror continues to mirror updates between the intermediate and remote sites, maintaining the recovery capability at the remote site.
- If an outage occurs at the local site, recovery operations can begin at the remote site, and preparations can be made to resynchronize the local site when it recovers from its disaster. Once in recovery mode at the remote site, another Global Mirror session can be set up and put into operation using the former intermediate site as its new remote site. This new Global Mirror session provides an additional disaster recovery solution while operating at the remote site.
- If an outage occurs at the intermediate site, data at the local storage unit is not affected. Applications continue to run normally.
- If an outage occurs at the remote site, data at the local and intermediate sites is not affected. Applications continue to run normally. The intermediate storage unit maintains a consistent up-to-date copy.
- If both the local and intermediate sites are lost, the scenario is similar to a two-site scenario when access to the local storage unit is lost. Recovery must be achieved using the last consistent point-in-time copy at the remote site.
(Figure: TPC for Replication interface layers - FlashCopy, DS Interface, ESS Interface - and the managed storage: DS8000/DS6000, ESS 800, and SVC disk)
Commands from the TPC for Replication server are passed from the CSM to the hardware layer and then packaged via the relevant subsystem interface. These packets are then passed to the destination storage subsystem via the IP or FICON communications path established as part of the storage subsystem add process. In Figure 1-8, we can see the communications from the CSM to an ESS or DS storage subsystem. In this instance, a command from the CSM is received by the ESS or DS interface and packaged as a CCW packet. This CCW packet is then sent to the CCW server, which then passes the CCW commands to the functional code residing in the storage subsystem. The return journey is achieved in the same way.
(Figure 1-8: Communication path from the CSM through the CCW server to the ESS or DS storage subsystem)
1.3 Terminology
It is essential that you understand the following concepts and how they are used to enable the functionality of the replication environment. The terminology is captured here along with a brief explanation of each term.

- Role: A volume's role is the function it assumes in the copy set, and is composed of the intended use and, for Global Mirror and Metro Mirror, the volume's site location. Every volume in a copy set is assigned a role. A role can assume the functions of a host volume, journal volume, or target volume. For example, a host volume at the primary site has the role of Host1, while a journal volume at the secondary site has the role of Journal2.
- Role pair: A role pair is the association of two roles in a session that take part in a copy relationship. For example, in a Metro Mirror session, the role pair can be the association between the volume roles of Host1 and Host2.
- Copy set: A set of volumes that represent copies of the same data. All volumes in a copy set must be of the same type and size. The number of volumes in a copy set, and the role that each volume plays in the replication session, is determined by the session policy.
- Session: The replication session is the fundamental concept upon which Tivoli Storage Productivity Center for Replication is built. The copy sets within a session form a consistency group. Actions taken against the session are taken against all of the copy sets within the session. The session policy determines what type of replication is controlled by the session and which actions and states are allowed in the session.
- Source: A copy set role, used in hardware-support type sessions. The volume that plays this role is the source volume of the copy set.
- Target: A copy set role, used in hardware-support type sessions. The volume that plays this role is the target volume of the copy set (T1 in a FlashCopy session only).
- HostSite1 (H1): A copy set role. The volume that plays this role is the volume that is mounted and online to the application when the session has site 1 as the production site.
- HostSite2 (H2): A copy set role. The volume that plays this role is the volume that is mounted and online to the application when the session has site 2 as the production site.
- JournalSite2 (J2): A copy set role. The volume that plays this role is used to maintain Global Mirror consistency when production is on site 1.
- IntermediateSite2 (I2): A copy set role. The volume at site 2 that receives data from the primary host volume during a Metro Mirror or Global Mirror with Practice session. During a practice, data on the intermediate volumes is copied with FlashCopy to the practice host volumes.
- HostSite3 (H3): A copy set role. The volume that plays this role is the volume that is mounted and online to the application when the session has site 3 as the production site.
- JournalSite3 (J3): A copy set role. The volume that plays this role is used to maintain Global Mirror consistency at site 3.
- IntermediateSite3 (I3): A copy set role. The volume at site 3 that receives data from the primary host volume at site 2 during a Metro Global Mirror with Practice session. During a practice, data on the intermediate volumes is copied with FlashCopy to the practice host volumes.

Figure 1-9 shows the terms and how they relate to each other.
Figure 1-9 Relationship of copy sets, role pairs, and volume roles (Host 1, Host 2, Intermediate 2, Journal 2, Intermediate 3, Journal 3) in Metro Global Mirror and Global Mirror sessions
In addition, sessions themselves can exist in different states depending on the situation:
- Defined: Session created, with or without copy sets, but not started.
- Preparing: Started and in the process of initialization or re-initialization. Automatically transitions to Prepared when all pairs are initialized (prepared).
- Prepared: All volumes are initialized (prepared).
- Suspending: Transitory state caused by a Suspend command or a suspending event. In the process of suspending copy operations.
- Suspended: Copying has stopped. For Metro Mirror, the application can continue writes. An additional recoverable flag indicates whether the data is consistent and recoverable.
- TargetAvailable: Recover command processing has completed. The target volumes are write enabled. An additional recoverable flag indicates whether the data is consistent and recoverable.
Figure 1-10 shows the transitional relationship of these session states for a continuous replication session.
Figure 1-10 Session state transition - continuous replication
Figure 1-11 on page 12 shows the transitional relationship of these session states for a FlashCopy session.
Figure 1-11 Session state transition - point in time
It is important to understand these transitions because they determine which TotalStorage Productivity Center for Replication commands are required to move to the next state.
TPC for Replication for System z supports the following session types:
- Basic HyperSwap
- Metro Mirror Single Direction
- Metro Mirror Failover/Failback (Bidirectional)
- Metro Mirror Failover/Failback with Practice (Bidirectional)
- Global Mirror Single Direction
- Global Mirror Failover/Failback (Bidirectional)
- Global Mirror Failover/Failback with Practice (Bidirectional)
- Global Mirror Either Direction with Two Site Practice (Bidirectional)
- Metro Global Mirror (Three Site)
- Metro Global Mirror with Practice (Three Site)
Metro Mirror Failover/Failback with Practice and Global Mirror Failover/Failback with Practice provide all the functions available in Metro Mirror and Global Mirror sessions, with added support for a third volume to allow users to practice recovery procedures. This additional third volume is both the Metro Mirror or Global Mirror target and a FlashCopy source to the H2 target volume.
Session commands
The following tables show the commands that can be issued against any defined session. The commands are presented as they appear in the GUI; the equivalent CLI commands may require specific syntax to be valid. Table 1-1 contains the FlashCopy commands.
Table 1-1 FlashCopy commands
Flash: Perform the FlashCopy operation using the specified options.
Initiate Background Copy: Copy all tracks from the source to the target immediately, instead of waiting until the source track is written to. This command is valid only when the background copy is not already running.
Terminate: Removes all physical copies from the hardware. This command can be issued at any point during an active session. If you want the targets to be data consistent before removing their relationship, you must issue the Initiate Background Copy command if NOCOPY was specified, and then wait for the background copy to complete by checking the copying status of the pairs.
The Metro Mirror commands are Start, Start H1 H2, Start H2 H1, Stop, Suspend, and Terminate:
Start H2 H1: Indicates the direction of a failover/failback between two hosts in a Metro Mirror session. If the session has been recovered with the failover/failback function such that the production site is now H2, you can issue the Start H2 H1 command to start production on H2 and provide protection. This command is not supported for SVC.
Stop: Suspends updates to all the targets of pairs in a session. This command can be issued at any point during an active session. Note, however, that updates are not considered to be consistent.
Suspend: Causes all target volumes to remain at a data-consistent point and stops all data that is moving to the target volumes. This command can be issued at any point during a session when the data is actively being copied.
Terminate: Removes all physical copies from the hardware during an active session. If you want the targets to be data consistent before removing their relationship, you must issue the Suspend command, the Recover command, and then the Terminate command.
A similar set of commands (Start, Start H1 H2, Start H2 H1, Suspend, and Terminate) applies to Global Mirror sessions.
Table 1-4 Metro/Global Mirror commands
Start H1 H2 H3: Metro/Global Mirror initial start command. This command creates Metro Mirror relationships between H1 and H2, and Global Mirror relationships between H2 and H3. For Metro/Global Mirror, this includes the J3 volume to complete the Global Mirror configuration. (The J3 volume role is the journal volume at site 3.) Start H1 H2 H3 can be used from some Metro/Global Mirror configurations to transition back to the starting H1 H2 H3 configuration.
Start H1 H3: From the H1 H2 H3 configuration, this command changes the session configuration to a Global Mirror-only session between H1 and H3, with H1 as the source. Use this command in case of an H2 failure, with transition bitmap support provided by incremental resynchronization. It can be used when the session is in the preparing, prepared, or suspended state because there is not a source host change involved. This command allows you to bypass the H2 volume in case of an H2 failure and copy only the changed tracks and tracks in flight from H1 to H3. After the incremental resynchronization is performed, the session is running Global Mirror from H1 to H3 and thus loses the near-zero data loss protection achieved with Metro Mirror when running H1 H2 H3. However, data consistency is still maintained at the remote site with the Global Mirror solution. From the H2 H1 H3 configuration, this command changes the session configuration to a Global Mirror-only session between H1 and H3, with H1 as the source. Use this command when the source site has a failure and production is moved to the H1 site. This can be done for unplanned HyperSwap. The Global Mirror session is continued. This is a host-volume change, so this command is valid only when restarting the H1 H3 configuration or from the TargetAvailable H2 H1 H3 state.
Start H3 H2: Metro Global Mirror command to start Global Copy from the disaster recovery site back to the H2 volumes. This is a host-volume change, so this command is valid only when restarting the H3 H2 configuration or from the TargetAvailable H1 H2 H3 state.
Start H2 H3: From the H1 H2 H3 configuration, this command moves the session configuration to a Global Mirror-only session between H2 and H3, with H2 as the source. Use this command when the source site has a failure and production is moved to the H2 site. This can be done for unplanned HyperSwap. The Global Mirror session is continued. This is a host-volume change, so this command is valid only when restarting the H2 H3 configuration or from the TargetAvailable H1 H2 H3 state. From the H2 H1 H3 configuration, this command transitions the session configuration to a Global Mirror-only session between H2 and H3, with H2 as the source. Use this command in case of an H1 failure, with transition bitmap support provided by incremental resynchronization. It can be used when the session is in the preparing, prepared, or suspended state because there is not a source-host change involved.
Start H2 H1 H3: Metro/Global Mirror start command. This is the configuration that completes the HyperSwap processing. This command creates Metro Mirror relationships between H2 and H1 and Global Mirror relationships between H1 and H3. For Metro/Global Mirror, this includes the J3 volume to complete the Global Mirror configuration. Start H2 H1 H3 can be used from some Metro Global Mirror configurations to transition back to the starting H2 H1 H3 configuration.
Start H3 H1 H2: After recovering to H3, this command sets up the hardware to allow the application to begin writing to H3, and the data is copied back to H1 and H2. However, issuing this command does not guarantee consistency in the case of a disaster because only Global Copy relationships are established to cover the long-distance copy back to site 1. To move the application back to H1, you can issue a suspend while in this state to drive all the relationships to a consistent state and then issue a freeze to make the session consistent. You can then issue a recover followed by a Start H1 H2 H3 to go back to the original configuration.
Suspend H2 H3: When running H1 H2 H3, this command issues a pause to the Global Mirror master and causes the Global Mirror master to stop forming consistency groups. This command is valid only when the session is in a prepared state.
Suspend H1 H3: When running H2 H1 H3, this command issues a pause to the Global Mirror master and causes the Global Mirror master to stop forming consistency groups. This command is valid only when the session is in a prepared state.
Recover H1: Specifying H1 makes the H1 volume TargetAvailable. Metro/Global Mirror (when running H2 H1 H3) can move production to either the H1 or H3 set of volumes. TPC for Replication processing is different depending on the recovery site. Therefore, the site designation is added to the Recover command so TPC for Replication can set up for the failback.
Recover H2: Specifying H2 makes the H2 volume TargetAvailable. Metro/Global Mirror (when running H1 H2 H3) can move production to either the H2 or H3 set of volumes. TPC for Replication processing is different depending on the recovery site. Therefore, the site designation is added to the Recover command so TPC for Replication can set up for the failback.
Recover H3: Specifying H3 makes the H3 volume TargetAvailable. Metro/Global Mirror (when running H1 H2 H3) can then move production to the H3 set of volumes. Because TPC for Replication processing is different depending on the recovery site, the site designation is added to the Recover command so that TPC for Replication can set up for the failback. This command sets up H3 so that you can start the application on H3. H3 becomes the active host, and you then have the option to start up H3 H1 H2 to perform a Global Copy copy back.
HyperSwap: Causes a site switch, equivalent to a suspend and recover for a Metro Mirror session with Failover/Failback. Because of this, individual suspend and recover commands are not available.
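In addition to the GUI, the session actions described above can also be driven from the TPC for Replication command-line interface (csmcli). The lines below are a minimal sketch only: the session name ITSO_MM is hypothetical, and the exact -action keywords depend on the session type and product release, so verify them with the csmcli help for cmdsess or the CLI reference before use.

# List the defined sessions and their current states
csmcli lssess
# Issue session commands against a hypothetical Metro Mirror session named ITSO_MM
# (the action keywords shown are assumptions; confirm with 'csmcli help cmdsess')
csmcli cmdsess -action start_h1:h2 ITSO_MM
csmcli cmdsess -action suspend ITSO_MM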
1.4.3 Reflash After Recover option for Global Mirror Failover/Failback with Practice sessions
You can use the Reflash After Recover option with System Storage DS8000 version 4.2 or later. Use this option to create a FlashCopy replication between the Intermediate Site 2 (I2) and Journal Site 2 (J2) volumes after the recovery of a Global Mirror Failover/Failback with Practice session. If you do not use this option, a FlashCopy replication is created only between the I2 and H2 volumes.
1.4.4 No Copy option for Global Mirror with Practice and Metro Global Mirror with Practice sessions
You can use the No Copy option with System Storage DS8000 version 4.2 or later. Use this option if you do not want the hardware to write the background copy until the source track is written to. Data is not copied to the Intermediate Site 2 (I2) volume until the blocks or tracks of the H2 volume are modified. The point-in-time volume image is composed of the unmodified data on the H1 volume and the data that was copied to the T1 volume. If you want a complete point-in-time copy of the H1 volume to be created on the T1 volume, do not use the No Copy option. This option causes the data to be asynchronously copied from the H1 volume to the T1 volume. Although you can select any space-efficient volume as the target, you cannot change the Permit Space Efficient Target flag. This flag is always set. When selecting space-efficient volumes as targets, you might receive an x0FBD error message when you start a full background copy. To avoid this message, select the No Copy option.
1.4.5 Recovery Point Objective Alerts option for Global Mirror sessions
You can use the Recovery Point Objective Alerts option with IBM TotalStorage Enterprise Storage Server Model 800, System Storage DS8000, and System Storage DS6000. Use this option to specify the length of time that you want to set for the recovery point objective (RPO) thresholds. The values determine whether a Warning or Severe alert is generated when the RPO threshold is exceeded for a role pair. The RPO represents the length of time in seconds of data exposure that is acceptable in the event of a disaster.
1.4.6 Enable Hardened Freeze option for Metro Mirror sessions

If you select the Enable Hardened Freeze option, the following actions can occur:
- IOS can freeze volumes regardless of whether the TPC for Replication server is started or stopped.
- You can include z/OS system volumes such as paging, database, and IBM WAS hierarchical file system (HFS) volumes as Metro Mirror volumes in the session.
When you select the Enable Hardened Freeze option, IOS manages the freeze operations for all Metro Mirror volumes in the session, which prevents TPC for Replication from freezing the volumes and possibly freezing itself. The Enable Hardened Freeze option does not enable IOS to manage freeze operations for Global Mirror volumes.
Note: The Enable Hardened Freeze option requires the z/OS address spaces Basic HyperSwap Management and Basic HyperSwap API.
1.4.8 Export Global Mirror Data command for Global Mirror role pairs
You can use this option to export data for a Global Mirror role pair that is in a session to a comma-separated value (CSV) file. You can then use the data in the CSV file to analyze trends in your storage environment that affect your recovery point objective (RPO). For example, the file that contains data for the RPO might show that the RPO threshold is often exceeded on a particular day and time. You can then view the file that contains data for logical subsystem (LSS) out-of-sync tracks to see whether a particular LSS or set of LSSs have high out-of-sync track values for that day and time.
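As a simple illustration of how the exported data might be analyzed, the following sketch assumes a hypothetical CSV layout with the timestamp in the first column and the RPO value in seconds in the second column (the actual column layout of the exported file may differ); it prints the intervals in which the RPO exceeded a 300-second threshold:

# rpo_export.csv is a hypothetical export; adjust the column numbers to the real layout
awk -F',' 'NR > 1 && $2+0 > 300 {print $1, $2}' rpo_export.csv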
Chapter 2. WebSphere Application Server OEM Edition for z/OS install
Figure 2-1 IBM WebSphere Application Server (WAS) OEM Edition basic design structure

In Figure 2-1, you can see the high-level WAS OEM design structure, where SR stands for Servant Region and CR for Controller Region. The Servant Region represents the application infrastructure. The Controller Region does not contain any application code, but it contains the TCP listeners and accepts commands issued from z/OS. The Node is simply the logical collection of application servers on a z/OS LPAR. The Cell is a logical construct that controls the node. The Daemon provides access to modules held in storage and the Location Name Service for remote client requests.
If you are not an advanced WAS user, typical mode is the recommended one. It is the appropriate configuration process to follow for most z/OS systems, and all default settings match the system requirements for most z/OS systems.

Note: In this chapter we discuss the typical mode configuration only. Refer to the IBM WebSphere Application Server OEM Edition for z/OS Configuration Guide (V7.0.x), GA32-0631-07, for the advanced mode configuration procedure.

The WASOEM.sh script can be invoked from an OMVS or telnet/rlogin session. You cannot run this script from under ISHELL. Log on to the z/OS system where you received and applied IBM WAS OEM Edition and start an OMVS session to begin the configuration procedure described below. Before you invoke the WASOEM.sh script, you need to set the $PATH value from OMVS, as shown in Example 2-1.
Example 2-1 Set $PATH value
export PATH=.:/usr/lpp/zWebSphereOEM/V7R0/bin:$PATH

Where /usr/lpp/zWebSphereOEM/V7R0 is the default WebSphere mount point path. Setting this value gives all of the WAS OEM Edition for z/OS scripts system-wide access to required items. In addition, you need to update the WASOEM_CONFIG_WORKDIR environment variable with the name of the directory under which you want the IBM WAS OEM Edition for z/OS configuration files and instance configuration working directories to be located. WASOEM.sh uses the value specified for this variable to locate these files and working directories during the IBM WAS OEM Edition for z/OS configuration process. Use the following command (as shown in Example 2-2) to perform this update:
Example 2-2 Define WAS OEM working directory
export WASOEM_CONFIG_WORKDIR=/etc

Where /etc is the top-level directory under which WASOEM.sh creates the following IBM WAS OEM Edition for z/OS product configuration files:
zWebSphereOEM/V7R0/conf
zWebSphereOEM/V7R0/conf/wasOEMOverride.responseFile
zWebSphereOEM/V7R0/conf/wasOEM_env.sh
The working directories for each configuration of IBM WAS OEM Edition for z/OS are also located in this directory. This WASOEM_CONFIG_WORKDIR path is required when you start the configuration by invoking the WASOEM.sh script.

Note: The default setting for this variable is the /etc/ directory. The value you specify on this export statement for the WASOEM_CONFIG_WORKDIR variable remains in effect only for the current WASOEM.sh session.

If this is the first time that IBM WAS OEM is being configured since it was installed, issue the WASOEM.sh command (as shown in Example 2-3) from an OMVS session to copy the two required configuration files from the IBM WAS OEM installation location to a predetermined location in the file system. This action is required once per product installation. Example 2-3 includes the output of the WASOEM.sh command.
Example 2-3 WASOEM.sh first run output
>WASOEM.sh
BBN0400I:The current top-level WebSphere Application Server OEM configuration working directory is set to: /etc
BBN0400I:The WebSphere Application Server OEM configuration working directories will be located under: /etc/zWebSphereOEM/V7R0/conf
BBN0400I:This location setting will be valid for this session only.
BBN0235I:Creating configuration directory /etc/zWebSphereOEM/V7R0/conf
BBN0233I:/usr/lpp/zWebSphereOEM/V7R0/zOS-config/zpmt/samples/wasOEM_env.sh was copied to /etc/zWebSphereOEM/V7R0/conf/wasOEM_env.sh. Review the contents of /etc/zWebSphereOEM/V7R0/conf/wasOEM_env.sh before continuing.
BBN0234I:/usr/lpp/zWebSphereOEM/V7R0/zOS-config/zpmt/samples/wasOEMOverride.responseFile was copied to /etc/zWebSphereOEM/V7R0/conf/wasOEMOverride.responseFile. Review the contents of /etc/zWebSphereOEM/V7R0/conf/wasOEMOverride.responseFile before continuing.

If you issue this command and the files have already been copied to the predetermined location in the file system, the help message for the WASOEM.sh command displays.

Note: If you cannot use the /etc/ directory to store the configuration file and the default environment files, you can specify a different working directory for these files when you receive the following prompts. These prompts only appear the first time you issue the WASOEM.sh command:
BBN0400I:The current WebSphere Application Server OEM configuration working directory is not set BBN0094I:Enter a file system location under which the WebSphere Application Server OEM working directories will be located, or press Return to accept (/etc):
>cd /usr/lpp/zWebSphereOEM/V7R0/bin
>ls WASOEM.sh
WASOEM.sh

Once you find the WASOEM.sh script, issue the WASOEM.sh command using the parameters shown in Example 2-5.
>WASOEM.sh -config -mode typical

The first prompt asks you to provide the working directory path. The default is /etc; as you can see in Example 2-6, we accept the default working directory.
Example 2-6 Specify WAS OEM working directory
BBN0094I:Enter a filesystem location under which the WebSphere Application Server OEM working directories will be located, or press return to accept (/etc):
===> /etc
BBN0400I:This location setting will be valid for this session only.
BBN0014I:Located /etc/zWebSphereOEM/V7R0/conf/wasOEM_env.sh - Setting global defaults.

The BBN0014I message tells you where the wasOEM_env.sh script is located. This script is used to configure the IBM WAS OEM Edition for z/OS configuration environment. As part of its processing, the WASOEM.sh shell script performs various file creations and writes logs of its activity to the file system. The locations of these various files are set by the wasOEM_env.sh shell script and can be changed by an administrator.

Note: Make a note of the directory above, where the wasOEM_env.sh script is located. Once the WASOEM.sh script completes, open the wasOEM_env.sh file to find the location of the relevant WAS OEM environment files generated by the WASOEM.sh script.
BBN0002W:Log directory /var/zWebSphereOEM/V7R0/logs does not exist. Would you like to create it? (Y/N) Y BBN0229I: * Your response was Y Log directory has been set to /var/zWebSphereOEM/V7R0/logs. WASOEM.sh log will be written to /var/zWebSphereOEM/V7R0/logs/WASOEM_070212_135839.log. IBM WebSphere Application Server OEM Edition for z/OS Version 7, Release 0, configuration request is being processed. WASOEM.sh started.
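While WASOEM.sh runs, you can watch its progress from a second OMVS or telnet session; the following is a minimal sketch that uses the log path reported in the messages above (the timestamp portion of the log file name will differ on your system):

# Follow the current WASOEM.sh log as it is written
tail -f /var/zWebSphereOEM/V7R0/logs/WASOEM_070212_135839.log
# Confirm that /tmp has free space available for WASOEM.sh processing
df -k /tmp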
Note: Make a note of the log file name and its location. If the WASOEM.sh script fails, you can find the error message and the reason for the failure in this log. A new log file is allocated every time you start the WASOEM.sh script.

As part of its initial processing, the WASOEM.sh script creates a series of subdirectories for its use, one of which is a /tmp directory. This temporary directory requires a minimum of 25 megabytes of free space and is used during WASOEM.sh processing. The default mount point for the configuration file system requires approximately 1 GB of storage. Therefore, WASOEM.sh creates and allocates the space for this mount point as a separate file system. After WASOEM.sh processing completes, the objects created in this directory are no longer needed and can be deleted.

The WASOEM.sh script invokes the zpmt.sh script. The zpmt.sh script generates the components, or system jobs, that are used to create and configure the WAS OEM Edition for z/OS instance based on the content of the response file. The zpmt.sh script uses the /tmp subdirectory as its work area during the installation process. As shown in Example 2-8, a warning message indicates that this subdirectory does not exist. You need to accept creation of this directory; otherwise, you will receive an error message indicating that processing cannot continue. Once you accept it, the BBN0022I message confirms creation of the directory, followed by the WASOEM.sh startup information.
Example 2-8 Create /tmp/zWebSphereOEM/V7R0/zpmt/work directory
BBN0004W:ZPMT_WORK_ROOT directory (/tmp/zWebSphereOEM/V7R0/zpmt/work) does not exist. Would you like to create it? (Y/N) * Your response was Y BBN0022I:ZPMT_WORK_ROOT directory has been set to /tmp/zWebSphereOEM/V7R0/zpmt/work. ******************************************************************************* * IBM WebSphere Application Server OEM Edition for z/OS Version 7, Release 0 * Version 1.13 * Date updated 10/06/2010 * Options selected: * -v 0 * -noclear 1 * -nooverride 0 * -fastpath 0 * -showmsgprefix 0 * -nocustom 1 * -create 0 * -config 1 *******************************************************************************
BBN0025I:Enter a 1-12 character name to call this configuration, or press Return to accept (CONFIG1):

The following prompts, shown in Example 2-10, define the high-level qualifier (HLQ) for the location of the partitioned data sets where the generated customization jobs and files will be stored. When WAS OEM Edition for z/OS uploads a z/OS customization definition to the target z/OS system, the customization jobs and files are written to these data sets. Press Return to accept the default value or specify your own HLQ if preferred.
BBN0114I:When a z/OS customization definition is uploaded to the target z/OS system, the customization jobs and files are written to a pair of partitioned data sets.
BBN0115I:Enter a high-level qualifier for the target z/OS data sets that will contain the generated jobs and instructions (BBN.V7R0.CONFIG1.ZPMTJOBS):
===> WASOEM.V7R0.CONFIG1.ZPMTJOBS
* Your response was WASOEM.V7R0.CONFIG.ZPMTJOBS
issued tso listcat entry('WASOEM.V7R0.CONFIG1.ZPMTJOBS.CNTL') all
listcat entry('WASOEM.V7R0.TPCRSC70.ZPMTJOBS.CNTL') all
RC(4)
issued tso listcat entry('WASOEM.V7R0.TPCRSC70.ZPMTJOBS.DATA') all
listcat entry('WASOEM.V7R0.TPCRSC70.ZPMTJOBS.DATA') all
RC(4)
no HLQ datasets found...will allocate

The input required for WAS OEM customization, as shown in Example 2-11, is the volume serial number of the DASD that contains the file system data set. If SMS automatic class selection (ACS) routines are set up to handle data set allocation automatically, you can specify * to let SMS select a volume for you. If SMS is not set up to handle data set allocation automatically, and you do not want to use the default volume, you must specify a specific volume. In Example 2-11, we let SMS handle the volume selection.
Example 2-11 Specify the volume serial for DASD
BBN0133I:Enter the volume name to allocate the DATA and CNTL datasets on, or enter * to select SMS managed :
===> *
* Your response was *
issued tso alloc dsn('WASOEM.V7R0.CONFIG1.ZPMTJOBS.DATA') TRACKS DIR(30) BLKSIZE(0) space(5,5) LRECL(255) new cyl recfm(V,B) unit(SYSALLDA)
alloc dsn('WASOEM.V7R0.CONFIG1.ZPMTJOBS.DATA') TRACKS DIR(30) BLKSIZE(0) space(5,5) LRECL(255) new cyl recfm(V,B) unit(SYSALLDA)
issued tso alloc dsn('WASOEM.V7R0.CONFIG1.ZPMTJOBS.CNTL') TRACKS DIR(30) BLKSIZE(0) space(15,15) LRECL(80) new cyl recfm(F,B) unit(SYSALLDA)
alloc dsn('WASOEM.V7R0.CONFIG1.ZPMTJOBS.CNTL') TRACKS DIR(30) BLKSIZE(0) space(15,15) LRECL(80) new cyl recfm(F,B) unit(SYSALLDA)
issued tso free dsn('WASOEM.V7R0.CONFIG1.ZPMTJOBS.CNTL')
free dsn('WASOEM.V7R0.CONFIG1.ZPMTJOBS.CNTL')
RC(12)
issued tso free dsn('WASOEM.V7R0.CONFIG1.ZPMTJOBS.DATA')
free dsn('WASOEM.V7R0.CONFIG1.ZPMTJOBS.DATA')
RC(12)
WASOEM.V7R0.CONFIG1.ZPMTJOBS.DATA and WASOEM.V7R0.CONFIG1.ZPMTJOBS.CNTL data sets have been allocated.

The next required input defines the WAS OEM file system directory. It is the read-only file system mount point. If you want to use a symbolic link mount point, enter that mount point instead of the absolute mount point in response to this prompt. If you specify a symbolic link mount point, during WASOEM.sh -create processing symbolic links are created in the configuration file system that point to the product files mounted at the location that you specified. In Example 2-12, we accepted the default directory.
BBN0238I:Enter the WebSphere Application Server Product File System directory, or press return to accept (/usr/lpp/zWebSphereOEM/V7R0): ===> accept default
BBN0112I:Enter the name of the configuration file system dataset to create, or press Return to accept (BBN.V7R0.CONFIG1.ZFS):
===> WASOEM.V7R0.CONFIG1.ZFS
* Your response was WASOEM.V7R0.CONFIG.ZFS

The next required input is the type of configuration file system you will use to run WAS OEM Edition for z/OS. You must use either a ZFS file system or an HFS file system. We use ZFS, as shown in Example 2-14.
Example 2-14 Define configuration file system type
BBN0109I:Enter the type of the configuration file system (ZFS/HFS), or press Return to accept (ZFS):
===> ZFS
* Your response was WASOEM.V7R0.TPCRSC70.ZFS

At this point, you need to provide the volume serial where you want to allocate the configuration file system. Alternatively, specify * if you want SMS to handle the allocation, as we did in Example 2-15.
Example 2-15 Specify the volume serial number for file system allocation
BBN0110I:To allocate the configuration file system on a particular volume, enter the volser here (enter * for SMS managed), or press return to accept (BBNVOL):
===> *
* Your response was *

The next required input is the read/write file system directory mount point where application data and environment files will be written. The customization process creates this mount point if it does not already exist. In Example 2-16, we accept the default path.
Example 2-16 Define file system directory mount point
BBN0108I:Enter the path of the configuration file system mount point, or return to accept (/zWebSphereOEM/V7R0/config1): ===> /zWebSphereOEM/V7R0/config1
BBN0999I:Would you like to use the WebSphere Application Server OEM default values for the cell, cluster and system identifiers? (Y/N) ===> Y * Your response was Y Assigning default values
BBN0511I:Using AUTOUID will help avoid UID collisions with existing installations of WAS, but SHARED.IDS and BPX.NEXT.USER must be defined. Would you like to allow OS security to automatically assign unique UIDs for you with AUTOUID? (Y/N) ===> Y * Your response was Y
Hostname prompt
The following prompt asks for the TCP/IP network name for the TCP/IP stack within the z/OS operating system on which WAS OEM Edition for z/OS is configured. The override response file sets this variable to @HOSTNAME. Therefore, the scripts do a hostname lookup on the system during the configuration process and present your hostname for you to accept. As shown in Example 2-19, our specific DNS hostname is WTSC70.ITSO.IBM.COM.
Example 2-19 Specify DNS hostname
BBN0096I:Enter the DNS hostname for TCP/IP, or press return to accept (WTSC70.ITSO.IBM.COM): ===> WTSC70.ITSO.IBM.COM
Port Selections. A total of 15 free ports are needed for your configuration.
Review the following options before making your selection.
* Use the values from your response file.
* Automatically generate 15 ports using a base number. This process will select the next 15 free ports from the provided base
Here are the port values currently defined for your configuration
Location Service Daemon port ............ (32200)
Location Service Daemon SSL port ........ (32201)
JMX SOAP connector port ................. (32202)
ORB port ................................ (32203)
ORB SSL port ............................ (32204)
HTTP transport port ..................... (32207)
HTTPS transport port .................... (32208)
Administrative interprocess communication port .......
High Availability Manager Communications port ........
Service Integration port .............................
Service Integration Secure port ......................
Service Integration MQ Interoperability port .........
Service Integration MQ Interoperability Secure port ..
Session Initiation Protocol (SIP) port ...............
Session Initiation Protocol (SIP) secure port ........
Simply press return to accept the values from your response file. Or, enter a base port to automatically assign the 15 ports. For example, if you enter 32200, 32200 and the next 14 free ports will be assigned. If 32200 is in use then it will start at the next free port. Make a selection: ===> 32600
* Your response was 32600 Automatically assigning ports using 32600 as a base value.
BBN0097I:Enter the system name, or press return to accept ........... (SC70):
===> SC70
accept default

The next required input is the sysplex name for the target z/OS system on which you will be configuring WAS OEM Edition for z/OS. If you are not sure of the sysplex name, issue the D SYMBOLS command on the target z/OS system. This command displays the system and sysplex names for that target z/OS system. As with the system name in the previous example, if you do not provide a sysplex name, the scripts do a sysplex name lookup on the system during the configuration process. In our case, the sysplex name is SNDBOX, as shown in Example 2-22.
Example 2-22 Define Sysplex Name
BBN0098I:Enter the sysplex name, or press return to accept .......... (SNDBOX):
===> SNDBOX
accept default

The following message prompts you to provide the PROCLIB PDS name of an existing procedure library to which the WAS OEM Edition for z/OS cataloged procedures will be added. The user ID that runs the WASOEM.sh -create script must have permission to update this PROCLIB PDS. In Example 2-23, we change the HLQ to WASOEM.

Note: Allocate WASOEM.V7R0.TPCRSC70.PROCLIB manually and then continue by pressing Enter.
Example 2-23 Define PROCLIB PDS name
BBN0113I:Enter the name of a cataloged PROCLIB PDS into which to copy the started task procs, or press return to accept (BBN.V7R0.CONFIG1.PROCLIB): WASOEM.V7R0.CONFIG1.PROCLIB * Your response was WASOEM.V7R0.CONFIG1.PROCLIB Press enter if done with this section, or enter an N if not. ===>
BBN0122I:Invoking /usr/lpp/zWebSphereOEM/V7R0/bin/zpmt.sh -workspace /tmp/zWebSphereOEM/V7R0/zpmt/work -transfer -responseFile /etc/zWebSphereOEM/V7R0/conf/zWebSphereOEM/V7R0/conf/CONFIG1/CONFIG1.responseFile BBN0007I:Wait ... ===>
> Customization definition successfully written to /etc/zWebSphereOEM/V7R0/conf/CONFIG1/zpmt Copying CNTL files to WASOEM.V7R0.TPCRSC70.ZPMTJOBS.CNTL... Copy successful. Copying DATA files to WASOEM.V7R0.TPCRSC70.ZPMTJOBS.DATA... Copy successful. BBN0143I:Success: customization jobs have been created successfully. ===> BBN0144I:Submit the following jobs before running WASOEM.sh -create BBN0225I:First, submit WASOEM.V7R0.TPCRSC70.ZPMTJOBS.CNTL(BBOSBRAK) - Make sure that you select BBOSBRAK. BBN0236I:After WASOEM.V7R0.TPCRSC70.ZPMTJOBS.CNTL(BBOSBRAK) completes, submit the following. BBN0226I: WASOEM.V7R0.TPCRSC70.ZPMTJOBS.CNTL(BBOSBRAM) BBN0227I: WASOEM.V7R0.TPCRSC70.ZPMTJOBS.CNTL(BBOCBRAK)
BBN0500I:NOTE: BBN0501I:RACF SPECIAL authority is required for the user that submits the BBOSBRAK and BBOCBRAK jobs. BBN0502I:Additionally, the user that submits the BBOSBRAM job must EITHER be uid=0 or have the following authority: BBN0503I:CONTROL access to SUPERUSER.FILESYS BBN0504I:READ access to SUPERUSER.FILESYS.CHOWN BBN0505I:READ access to SUPERUSER.FILESYS.CHANGEPERMS
BBN0506I:Finally, if you opted to have UID or GID values automatically assigned and did not specify a base value:
BBN0507I: - The RACF profile SHARED.IDS must be defined.
BBN0508I: - The RACF profile BPX.NEXT.USER must be defined and used to indicate the ranges from which UID and GID values are to be selected.
Run /usr/lpp/zWebSphereOEM/V7R0/bin/WASOEM.sh -create /etc/zWebSphereOEM/V7R0/conf/CONFIG1
WASOEM.sh has completed
Check log file /var/zWebSphereOEM/V7R0/logs/WASOEM_070212_142053.log for more information

In addition to the information that displays on the console, every time that you run the WASOEM.sh script, two log files are written to the directory that you chose when the WASOEM.sh script was started for the first time (see Example 2-7 on page 26). The name of this log file displays on the console when you run the script file, as shown in Example 2-24 above. Open the log file and scroll down to the end of the log. As shown in Example 2-25, there is information about the ports you specified during the installation, but more importantly, you can find the MVS commands used to start and stop WAS.
The following ports have been set, ensure that they are added to the reserved port list: zDaemonPort 32600 zDaemonSslPort 32601 zSoapPort 32602 zOrbListenerPort 32603 zOrbListenerSslPort 32604 zHttpTransportPort 32605 zHttpTransportSslPort 32606 zAdminLocalPort 32607 zHighAvailManagerPort 32608 zServiceIntegrationPort 32609 zServiceIntegrationSecurePort 32610 zServiceIntegrationMqPort 32611 zServiceIntegrationSecureMqPort 32612 zSessionInitiationPort 32613 zSessionInitiationSecurePort 32614 BBN0152I:To start the application server, issue the MVS command: BBN0153I:START BBN7ACR,JOBNAME=BBNS001,ENV=BBNBASE.BBNNODE.BBNS001 BBN0154I:To stop the application server, enter the MVS command: BBN0155I:STOP BBN7ACRS BBN0148I:WASOEM.sh has completed
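One way to confirm that the range is reserved is to inspect the active TCP/IP profile. The following is a minimal sketch only; the profile data set name is a hypothetical example, and your data set and member names, and the job name on the reservation statement, will differ:

# Check the TCP/IP profile for existing PORT or PORTRANGE reservations around 32600
cat "//'TCPIP.SC70.TCPPARMS(PROFILE)'" | grep -iE "PORTRANGE|PORT " | grep 326
# If the range is not reserved, a statement such as the following could be added
# (BBNS001 is the controller region job name shown in the start command above):
#   PORTRANGE 32600 15 TCP BBNS001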
Note: After each job completes, carefully check the output. Errors might exist even if all of the Return codes are zero.
If these group and user IDs have already been created during a previous IBM WAS OEM configuration, and are in all target system RACF databases, you do not have to run this job.

Note: This job creates the administrator ID (zAdminUserid) without a password or password phrase. You must assign this user ID a password or password phrase that complies with your standards. If you are using a different security system, make sure that the administrator ID has a password or password phrase.
Enter the following RACF command to assign a password:
ALTUSER WOEMADM PASSWORD(password) NOEXPIRED
To use RACF password phrase support, your target system must be running z/OS Version 1.9 or higher. Enter the following RACF command to assign a password phrase:
ALTUSER WOEMADM PHRASE(password phrase) NOEXPIRED

If you receive error messages from this job, such as messages that indicate that the user is invalid because a user ID, group, or profile is already defined, make sure that the existing user ID, group, or profile has the same characteristics as the user ID, group, or profile that the BBOSBRAK job is creating. If the characteristics are not the same, use the Profile Management Tool to change the values that are causing the conflict, and then upload the updated customization jobs and restart the process. When this job completes, all groups and user IDs listed in the previous table for job BBOSBRAK are defined in the RACF database on each target system for the cell. Before proceeding to the next security job, verify that the IBM WAS OEM administrator user ID has the configuration group WSCFG1 as its default OMVS group.
This EXEC completed with Return Code 0
Created the following directories:
==================================
None. No directories were created.
Following directories already exist:
====================================
/var/
/var/zWebSphereOEM/
/var/zWebSphereOEM/V7R0/
/var/zWebSphereOEM/V7R0/home/
/var/zWebSphereOEM/V7R0/home/WSCFG1/
/var/zWebSphereOEM/V7R0/home/WSSR1/
/var/zWebSphereOEM/V7R0/home/WSCLGP/
Problems creating the following directories:
============================================
No problems while creating the directories.
End of EXEC.

After this job finishes, verify that the directories exist on each system and have the correct permissions.
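A quick way to perform that verification from an OMVS session is to list the home directories that the job reports; a minimal sketch using the paths shown above:

# Verify owner, group, and permission bits of the WAS OEM home directories
ls -ld /var/zWebSphereOEM/V7R0/home/WSCFG1 \
       /var/zWebSphereOEM/V7R0/home/WSSR1 \
       /var/zWebSphereOEM/V7R0/home/WSCLGP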
If you receive error messages from this job, such as messages that indicate that the user is invalid because a user ID, group, or profile is already defined, make sure that the existing user ID, group, or profile has the same characteristics as the user ID, group, or profile that the BBOCBRAK job is creating. When this job completes, all user IDs listed in the previous table for job BBOCBRAK should be defined in the RACF database on each target system for the cell.
WASOEM.sh -create CONFIG1

BBN0094I:Enter a filesystem location under which the WebSphere Application Server OEM working directories will be located, or press return to accept (/etc):
===> /etc/zWebSphereOEM/V7R0/conf/CONFIG1

When this step completes, a series of messages similar to those displayed in Example 2-28 appears.
Example 2-28 IBM WAS OEM server instance created
BBN2006I:Exiting updateConfigWASOEM.py
updating ownership of files...
BBN0016I:Success: Update of configuration completed.
The following ports have been set, ensure that they are added to the reserved port list:
zDaemonPort 32600
zDaemonSslPort 32601
zSoapPort 32602
zOrbListenerPort 32603
zOrbListenerSslPort 32604
zHttpTransportPort 32605
zHttpTransportSslPort 32606
zAdminLocalPort 32607
zHighAvailManagerPort 32608
zServiceIntegrationPort 32609
zServiceIntegrationSecurePort 32610
zServiceIntegrationMqPort 32611
To start the application server, issue the MVS command:
START BBN7ACR,JOBNAME=BBNS001,ENV=BBNBASE.BBNNODE.BBNS001
To stop the application server, enter the MVS command:
STOP BBN7ACRS
WASOEM.sh has completed
Check log file /var/zWebSphereOEM/V7R0/logs/WASOEM_070212_192359.log for more information

Keep a record of the start command that is specified at the end of this group of messages. You are now ready to issue this command to start the IBM WAS OEM server instance that you installed.
Chapter 3. Tivoli Storage Productivity Center for Replication install on z/OS
Tivoli Storage Productivity Center for Replication Basic Edition for System z
Operating system:
- IBM z/OS V1.11 with APAR OA39124 and APAR OA37632
- IBM z/OS V1.12 with APAR OA39124 and APAR OA37632
- IBM z/OS V1.13 with APAR OA39124 and APAR OA37632
Note: APAR OA37632 is required for the new Basic HyperSwap and Metro Mirror functions.
- IBM SMP/E for z/OS V3.03.0, or later
- Any one of the following databases (if not using the EMBEDDED database):
  - IBM DB2 V9 for z/OS, or later
  - IBM DB2 V9 for z/OS Value Unit Edition, or later
- Any one of the following WebSphere Application Server editions:
  - IBM WebSphere Application Server for z/OS V7 or V8
  - IBM WebSphere Application Server OEM Edition for z/OS V7
Note: IBM WebSphere Application Server is a prerequisite for Tivoli Storage Productivity Center for Replication Basic Edition for z/OS. If you currently have a supported version of IBM WebSphere Application Server, you may use it. If you do not, IBM WebSphere Application Server OEM Edition for z/OS V7.0 is shipped with Tivoli Storage Productivity Center for Replication for z/OS (including TPC-R Basic Edition), V5.1.
- z/OS UNIX System Services (USS)
- TCP/IP connection to manage storage systems and Fixed Block volumes
z/OS
- System z architecture CPU
- Minimum disk space requirements to run are documented in the Program Directory for Tivoli Storage Productivity Center for Replication for System z, V5.1.0, GI13-2228-00.
Also ensure that your ICAT server is authenticated to enable communication with the TPC for Replication server. The firewall may time out and block communication, so re-authentication is necessary to allow communication between the servers.
Figure 3-2 ESS communications (TPC for Replication server to the IBM 2105-800 over TCP/IP, TCP port 2433)
The ESS subsystem will have established IP connectivity as part of its deployment. TPC for Replication needs access to the IP network to enable the two environments to communicate. Figure 3-3 shows the attachment schema once again.
Note that TPC for Replication does not connect to the SMC. The SMC, as an external server, provides the interface to the DS6000; through its software stack, it offers access to the DS6000 controllers through the GUI or the DSCLI. Both applications execute against the SMC. As Figure 3-5 on page 46 shows, TPC for Replication connects to the DS6000 differently. TPC for Replication shares the same internal DS6000 network that the SMC already utilizes, but TPC for Replication communicates directly with the DS6000 servers, server0 and server1, as shown in Figure 3-5 on page 46.
Figure 3-5 TPC for Replication server connection to DS6000
After the TPC for Replication server connects to the DS6000 network, define the storage server (in this case the DS6000) to the TPC for Replication server.
Direct connection
Direct connection between the TPC for Replication server and the storage servers is based on particular Ethernet ports in the DS8000 internal controllers. This particular Ethernet card is available only on selected DS8000 models (921, 922, 931, and 932 with feature code 1801), and it slides into the first of the four or five slots in the System p controllers in the rear of the DS8000 base frame, as Figure 3-6 on page 47 shows.
Figure 3-6 Defining the DS8000 Ethernet ports (port IDs I9801/I9802 and I9B01/I9B02) on the HMC, via the DSCLI or GUI, for direct connection to server0 and server1 over the private IP network
The HMC is used to configure these new Ethernet ports, either through the DSCLI or through the GUI. Note that this only assigns an IP address to the ports and defines the internal DS8000 network; it does not actually connect the TPC for Replication server to the ports. Figure 3-6 shows that only the upper port in each Ethernet card is used and defined; the TPC for Replication server later connects to these ports and communicates directly with the DS8000 servers, server0 and server1.
A DS8000 HMC can have multiple DS8000 storage systems connected to it. When you add an HMC to the TPC for Replication configuration, all DS8000 storage systems that are behind the HMC are also added. You cannot add or remove individual storage systems that are behind an HMC. You can also add a dual-HMC configuration, in which you have two HMCs for redundancy (see Figure 3-7). You must configure both HMCs identically, including the user ID and password.
c. WAS_GROUP can be found in the WAS OEM response file. Look for the zServantGroup value. The default is WSSR1.
d. WAS_SERVER can be found in the WAS OEM response file. Look for the serverName value. The default is server1.
e. WAS_NODE can be found in the WAS OEM response file. Look for the nodeName value. The default is bbnnode.
f. WAS_CELL can be found in the WAS OEM response file. Look for the cellName value. The default is bbnbase.
6. The DB_TYPE specification is EMBEDDED, as we are going to use the embedded Apache Derby database.
7. The #was_servant_user in the RACF PERMIT ANT.REPLICATIONMANAGER can be found in the WAS OEM response file. Look for the zServantUserid value. The default is WSSRU1.
8. You can browse the files install_RM.log and install_RM_err.log to monitor the installation progress and look for any errors that may occur. These files are pointed to by the STDOUT and STDERR DD names in the IWNINSTL job stream.
Example 3-1 is an example of the IWNINSTL JCL as we ran it on our system, with the variables set to values appropriate for our system.
Example 3-1 IWNINSTL job
//IWNINSTL JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M,
// MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1)
/*JOBPARM L=999,SYSAFF=SC70
//IWNINSTL EXEC PGM=BPXBATCH
//STDOUT DD PATH='/etc/install_RM510.log',
// PATHOPTS=(OCREAT,OTRUNC,OWRONLY),
// PATHMODE=(SIRWXU),
// PATHDISP=KEEP
//STDERR DD PATH='/etc/install_RM510_err.log',
// PATHOPTS=(OCREAT,OTRUNC,OWRONLY),
// PATHMODE=(SIRWXU),
// PATHDISP=KEEP
//STDPARM DD *
SH /usr/lpp/Tivoli/RM/scripts/installRM.sh
//STDENV DD *
CLASSPATH=/usr/lpp/Tivoli/RM/scripts
WAS_HOME=/zWebSphereOEM/V7R0/tpcr510/AppServer
WAS_USER=WOEMADM
WAS_PASSWD=woemadm
WAS_GROUP=WSSR1
WAS_SERVER=server1
WAS_NODE=TPCNODE
WAS_CELL=TPCBASE
JAVA_HOME=/zWebSphereOEM/V7R0/tpcr510/AppServer/java
TPCR_InstallRoot=/usr/lpp/Tivoli/RM
TPCR_ProductionRoot=/var/Tivoli/RM
DB_TYPE=EMBEDDED
/*
//ANTRAC EXEC PGM=IKJEFT01
//SYSLBC DD DSN=SYS1.BRODCAST,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
RDEFINE FACILITY ANT.REPLICATIONMANAGER UACC(NONE)
PERMIT ANT.REPLICATIONMANAGER CLASS(FACILITY)+
ID(WSSRU1) ACCESS(CONTROL)
SETROPTS RACLIST(FACILITY) REFRESH
/*

This job needs to complete with a return code of 0. You must check the allocation messages to verify that the data sets are allocated and cataloged as expected.
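If the job fails because of a mismatch in the WAS_* values, you can cross-check them against the WAS OEM response file before resubmitting. The following is a minimal sketch; the response file path assumes the CONFIG1 instance configured in Chapter 2 and may differ on your system:

# Pull the values referenced by the STDENV DD from the WAS OEM response file
grep -E "zServantGroup|serverName|nodeName|cellName|zServantUserid" \
     /etc/zWebSphereOEM/V7R0/conf/CONFIG1/CONFIG1.responseFile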
Error logs
Once you submit the job, it should take several minutes to complete. Installation progress can be monitored from USS by browsing the installation logs located at the paths defined in STDOUT and STDERR. For the job in Example 3-1 on page 51, the following installation logs were created:
/etc/install_RM510.log
/etc/install_RM510_err.log
Check these logs to ensure successful completion of the installation. The message at the end of install_RM.log shown in Example 3-2 indicates that the installation completed successfully and without errors.
Example 3-2 The end of the install_RM.log
DONE CONFIGURING NATIVE LIBRARIES The following Library objects are configured: ------------------------------------------------------------------JWLLib(cells/cl6641/nodes/nd6641/servers/ws6641|libraries.xml#Library_1189600722 RMSharedLibraries(cells/cl6641/nodes/nd6641/servers/ws6641|libraries.xml#Library Done installing applications
Done.
******************************** Bottom of Data ********************************

Note: After the IWNINSTL job has run successfully, you need to:
1. Edit the csmConnections.properties file located in the /zWebSphereOEM/V7R0/config1/AppServer/profiles/default/properties/ directory.
2. Change localhost to the IP address or hostname of the machine and the port number; in our case we used server=WTSC64.ITSO.IBM.COM and port=5110.
3. Restart the GUI so that it can connect to the local machine.
4. Configure the CLI configuration file /var/Tivoli/RM/CLI/repcli.properties to point to the IP address or hostname instead of the localhost that it defaults to, as shown in the sketch that follows.
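A minimal sketch of that post-installation check, assuming both properties files use server and port keys (verify against your own copies):

# GUI connection properties edited in steps 1 and 2
grep -E "^(server|port)" /zWebSphereOEM/V7R0/config1/AppServer/profiles/default/properties/csmConnections.properties
# Expected after editing (our ITSO values): server=WTSC64.ITSO.IBM.COM and port=5110
# CLI connection properties edited in step 4
grep -E "^(server|port)" /var/Tivoli/RM/CLI/repcli.properties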
If you get an Unable to connect to the server message when trying to log on to TPC for Replication, check the following:
- The port number you typed matches the zHttpTransportSslPort value chosen during the WAS OEM installation.
- There are no ICH408I RACF messages in the WAS OEM servant address space job log.
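To confirm the port number, you can look up the zHttpTransportSslPort value that was recorded during the WAS OEM configuration. This is a minimal sketch; the log and response file names shown are the ones used in Chapter 2 and will differ on your system:

# The assigned port list is written to the WASOEM.sh log (and to the response file)
grep zHttpTransportSslPort /var/zWebSphereOEM/V7R0/logs/WASOEM_070212_192359.log
grep -i zHttpTransportSslPort /etc/zWebSphereOEM/V7R0/conf/CONFIG1/CONFIG1.responseFile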
Pre-install tasks
Before you start the installation, we recommend that you complete the following tasks:
1. Create image copies of the DB2 directory and catalog using the DSNTIJIC sample job provided with DB2.
2. Ensure that you have created a database plan and that a storage group exists. By default, the storage group is SYSDEFLT. You can use SYSDEFLT or create your own storage group. Be sure that this storage group has sufficient space and, if possible, mount it on a separate volume. To find the name of an existing DB2 storage group, issue the following SQL command via SPUFI:
SELECT * FROM SYSIBM.SYSSTOGROUP;
Or use JCL as shown in Example 3-3:
Example 3-3 Executing SQL commands using DSNTEP2
//MLTPCR21 JOB (999,POK),'TPC-R 340',CLASS=A,MSGLEVEL=(1,1),
// MSGCLASS=T,NOTIFY=&SYSUID,REGION=0M,TIME=NOLIMIT
/*JOBPARM L=999,SYSAFF=SC74
//DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DB8Y)
RUN PROGRAM(DSNTEP2) PLAN(DSNTEP81) LIB('DB8YU.RUNLIB.LOAD')
//SYSIN DD *
SELECT * FROM SYSIBM.SYSSTOGROUP;

Note: The job must return a JCL return code of 0.

3. Ensure that the TPC for Replication administrative ID that you are going to use has DB2 SYSADM authority. This ID should also have authority to access the WebSphere Administration Server Console to set up a JDBC data source. In our installation, we use the following SQL command:
GRANT SYSADM TO your_DB2_ID;
Note: your_DB2_ID is the WebSphere administrator user ID used to connect to the target DB2 subsystem. The SQL command can be issued from SPUFI or by using JCL similar to Example 3-3.
Note: The job must return a JCL return code of 0. All SQL codes returned have to be 000.
4. Bind your DB2 instance to the TCP/IP server with the JDBC bind utility provided with DB2. There are two ways to bind the JCC drivers: run the supplied batch job using the AOPBATCH utility (refer to Example 3-4 on page 56), or complete the following steps, which require UNIX System Services (USS) access.
Update your CLASSPATH and PATH (make sure that you have the correct CLASSPATH and PATH settings to run the JCC binder):
PATH=/usr/lpp/Printsrv/bin:/bin
PATH=/usr/lpp/java/J6.0/bin:$PATH
PATH=/pp/db2v9/D100722/db2910_jdbc/bin:$PATH
export PATH
CLASSPATH=/pp/db2v9/D100722/db2910_jdbc/classes/db2jcc_license_cisuz.jar
CLASSPATH=/pp/db2v9/D100722/db2910_jdbc/classes/db2jcc.jar:$CLASSPATH
CLASSPATH=/usr/lpp/Printsrv/classes:$CLASSPATH
export CLASSPATH
Invoke the JDBC bind from the command line:
java com.ibm.db2.jcc.DB2Binder \
 -url jdbc:db2://wtsc70.itso.ibm.com:38320/DB9C \
 -user WOEMADM \
 -password woemadm \
 -collection NULLID
In this command:
- wtsc70.itso.ibm.com is the WebSphere server host name.
- 38320 is the DB2 port number.
- DB9C is the DB2 subsystem.
- WOEMADM is the WebSphere user ID used to connect to the target DB2 subsystem. It must have bind authority.
Use the following command to list and ensure that your user ID has the necessary SYSADM authority:
SELECT * FROM SYSIBM.SYSUSERAUTH;
Example 3-4 shows the job used to bind the DB2 instance.
Example 3-4 Binding the JDBC driver packages with AOPBATCH utility
//MGDSAOP JOB (999,POK),'TPC-R 340',CLASS=A,MSGCLASS=T,
// NOTIFY=&SYSUID,TIME=1440,REGION=0M,MSGLEVEL=(1,1)
//JOBLIB DD DSN=CEE.SCEERUN,DISP=SHR
//JBINDER EXEC PGM=AOPBATCH,PARM='sh -L'
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//STDERR DD SYSOUT=*
//STDOUT DD SYSOUT=*
//STDIN DD *
PATH=/usr/lpp/Printsrv/bin:/bin
PATH=/usr/lpp/java/J6.0/bin:$PATH
PATH=/pp/db2v9/D100722/db2910_jdbc/bin:$PATH
export PATH
CLASSPATH=/pp/db2v9/D100722/db2910_jdbc/classes/db2jcc_license_cisuz.jar
CLASSPATH=/pp/db2v9/D100722/db2910_jdbc/classes/db2jcc.jar:$CLASSPATH
CLASSPATH=/usr/lpp/Printsrv/classes:$CLASSPATH
export CLASSPATH
echo $PATH
echo $CLASSPATH
echo $LIBPATH
java com.ibm.db2.jcc.DB2Binder \
 -url jdbc:db2://wtsc70.itso.ibm.com:38320/DB9C \
 -user MHLRES1 \
 -password kan3da \
 -collection NULLID
/*

Note: The job must return a JCL return code of 0. Review the entire job log to verify that the message Package "xxxxxxxx": Bind succeeded (where xxxxxxxx is the package name) appears for all packages. Verify also that there are no SQL error codes returned and that DB2Binder finished is the final message. For usage and help binding your JDBC driver packages on your z/OS systems, refer to the following Web site:
http://publib.boulder.ibm.com/infocenter/mptoolic/v1r0/index.jsp?topic=/com.ibm.db2tools.aeu.doc.ug/ahxutcfg500.htm

5. Check which DB2 buffer pools you have on your system by using the following command, issued directly to the DB2 subsystem from z/OS SDSF:

-DB9C dis bpool(BP32K)

Where -DB9C is our DB2 subsystem recognition character. If the BP32K buffer pool listed has a virtual pool size of less than 22000, increase the size as follows:

-DB9C ALTER BPOOL(BP32K) VPSIZE(22000)

Where BP32K is the buffer pool name and VPSIZE is the size of the virtual pool. The following confirmation is displayed:

DSNB522I -DB9C VPSIZE FOR BP32K HAS BEEN SET TO 22000
DSN9022I -DB9C DSNB1CMD -ALTER BPOOL NORMAL COMPLETION

6. Ask your DB2 administrator to ensure that the DB2 sample job DSNTIJUZ specifies a large value for the IDTHTOIN (idle thread timeout) parameter, or sets it to zero to disable the idle thread timeout function.
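If you want to confirm that the bind in step 4 was successful before continuing, the JCC packages can also be checked in the DB2 catalog. This query is only an illustration (it is not part of the documented procedure); it uses the standard SYSIBM.SYSPACKAGE catalog table together with the NULLID collection that was specified on the DB2Binder command:

SELECT COLLID, NAME, VALID, OPERATIVE
  FROM SYSIBM.SYSPACKAGE
 WHERE COLLID = 'NULLID'
 ORDER BY NAME;

Each package that the binder reported as Bind succeeded should appear with VALID and OPERATIVE set to 'Y'.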
612 - A duplicate column name was detected in the object definition or ALTER TABLE statement.

All other SQL error codes must be examined and analyzed.

Note: The jobs are not ready to run as-is. They need appropriate job cards for the system on which they run, and possibly other modifications. We recommend that you do not edit the original copies of the jobs in HLQ.SAMPLIB: HLQ.SAMPLIB is one of the data sets managed by SMP/E when installing and maintaining products, and it should not be modified by anything other than SMP/E. Either edit the jobs in HLQ.ASAMPLIB or copy them to a data set of your choosing (a sample copy job is shown after Table 3-1).

Before executing any of the jobs, ensure that WebSphere Application Server (WAS) and DB2 are started. Always run the installation jobs from the system where your WebSphere and DB2 are started. Use the affinity parameter /*JOBPARM SYSAFF=xxx, where xxx is the system name, to ensure that the jobs run on the system where WebSphere and DB2 are installed. Run the jobs in the order in which they are listed in Table 3-1.
Table 3-1   TPC for Replication database installation jobs

Job name    Purpose of the job
IWNDBALO    Creates the TPC for Replication database and the CSMTS table space used by the rest of the jobs.
IWNDBSHL    Creates the SVC hardware layer tables.
IWNDBELM    Creates the element catalog tables.
IWNDBHWL    Creates the hardware layer tables.
IWNDBREP    Creates the Tivoli Storage Productivity Center for Replication tables.
IWNDBHAE    Creates the high availability backup tables for the element catalog.
IWNDBHAH    Creates the high availability backup tables for the hardware layer.
IWNDBHAR    Creates the high availability backup tables for the Tivoli Storage Productivity Center for Replication tables.
IWNDBHAS    Creates the high availability backup tables for the SVC hardware layer.
IWNDBMIG    Updates any table changes that have occurred from release to release of Tivoli Storage Productivity Center for Replication.
IWNDB2ZZ    Sets the initial user that will have access to Tivoli Storage Productivity Center for Replication. It also sets the communication default for the server to the Tivoli Storage Productivity Center for Replication CLI and GUI.
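The following IEBCOPY job is a minimal sketch (it is not one of the shipped samples) that copies the installation members from HLQ.SAMPLIB into a working data set so that the SMP/E-managed copies are left untouched. The job card values and the data set name YOUR.TPCR.JCL are placeholders that you must adapt to your environment.

//COPYJOBS JOB (999,POK),'COPY TPCR JOBS',CLASS=A,MSGCLASS=T,
//         NOTIFY=&SYSUID,REGION=0M
//* Copy the TPC-R database jobs to a private library for editing
//COPY    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//IN       DD DISP=SHR,DSN=HLQ.SAMPLIB
//OUT      DD DISP=SHR,DSN=YOUR.TPCR.JCL
//SYSIN    DD *
  COPY OUTDD=OUT,INDD=IN
  SELECT MEMBER=(IWNDBALO,IWNDBSHL,IWNDBELM,IWNDBHWL,IWNDBREP)
  SELECT MEMBER=(IWNDBHAE,IWNDBHAH,IWNDBHAR,IWNDBHAS,IWNDBMIG)
  SELECT MEMBER=(IWNDB2ZZ)
/*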
We use the following parameters for all jobs; you will need to customize them based on your system naming conventions and configuration:
- DB9D9.SDSNLOAD is the data set name of the DB2 system load library.
- D9D4 is the DB2 subsystem identifier.
- DSNTIA91 is the defined PLAN parameter for DB2.
- DB9DU.RUNLIB.LOAD is the DB2 runtime library.
- SYSDEFLT is the DB2 storage group default; you can replace it to match the DB2 storage group configured in the IWNDBALO job.
- TPCR510 is the name of the ITSO RM database. Ensure that this name is unique in your environment.

The following examples describe each job. The parameters that you need to change based on your system configuration are highlighted.
IWNDBALO job
1. The IWNDBALO job (see Example 3-5) creates the underlying database for the rest of the jobs.
Example 3-5 IWNDBALO job
//IWNDBALO JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M,
// MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1)
/*JOBPARM L=999,SYSAFF=SC70
//DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(D9D4)
RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) -
    PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD')
//******************************************************************
//*** CREATE CSM DATABASE AND CSMTS TABLESPACE *********************/
//******************************************************************
//SYSIN DD *
CREATE DATABASE TPCR510 STOGROUP SYSDEFLT BUFFERPOOL BP0 CCSID UNICODE;
CREATE TABLESPACE CSMTS IN TPCR510
  USING ...
  PRIQTY ...
  SECQTY ...
  SEGSIZE ...
  LOCKSIZE ...
  LOCKMAX ...
  BUFFERPOOL ...
  CLOSE ...
  CCSID ...;
COMMIT;
/*
Note: The job must return a JCL return code of 0. Review the entire job log to verify that the SQL codes returned are 000 or otherwise acceptable.
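The values of the PRIQTY, SECQTY, SEGSIZE, LOCKSIZE, LOCKMAX, BUFFERPOOL, CLOSE, and CCSID clauses for the CSMTS table space did not reproduce in the listing above (shown as "..."); take them from the IWNDBALO member in your SAMPLIB. For orientation only, a CREATE TABLESPACE statement of this shape typically looks like the following sketch, in which every value is an assumption and not the shipped definition:

CREATE TABLESPACE CSMTS IN TPCR510
  USING STOGROUP SYSDEFLT    -- assumed storage group, as used elsewhere in these jobs
  PRIQTY 3200                -- assumed primary space quantity
  SECQTY 3200                -- assumed secondary space quantity
  SEGSIZE 4                  -- assumed segment size
  LOCKSIZE ANY               -- assumed lock size
  LOCKMAX SYSTEM             -- assumed lock escalation limit
  BUFFERPOOL BP0             -- buffer pool used by the other objects in this chapter
  CLOSE NO                   -- assumed
  CCSID UNICODE;             -- matches the CREATE DATABASE statement above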
IWNDBSHL job
2. The IWNDBSHL job (see Example 3-6) creates the SVC Hardware layer database.
Example 3-6 IWNDBSHL job
//IWNDBSHL JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M,
// MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1)
/*JOBPARM L=999,SYSAFF=SC70
//DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(D9D4)
RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) -
    PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD')
//****************************************************************
//****CREATE CSM DATABASE AND CSMTS TABLESPACE *******************
//****************************************************************
//SYSIN DD *
DROP TABLE SVCHWL.LSSERVERLOCATIONMAPPING;
COMMIT;
CREATE TABLE SVCHWL.LSSERVERLOCATIONMAPPING (SERVERLOCATION VARCHAR(250) NOT NULL, USERNAME VARCHAR(250), PASSWORD VARCHAR(250), UNIQUENAME VARCHAR(250), NOTIFICATIONPORT INTEGER, SITELOCATIONID SMALLINT NOT NULL DEFAULT 0) IN TPCR510.CSMTS;
CREATE UNIQUE INDEX SVCHWL.LSSERVERLOCATIONMAPPING1 ON SVCHWL.LSSERVERLOCATIONMAPPING (SERVERLOCATION ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO
BUFFERPOOL BP0 CLOSE NO; ALTER TABLE SVCHWL.LSSERVERLOCATIONMAPPING ADD CONSTRAINT PK_LSSERVERLOCATI1 PRIMARY KEY (SERVERLOCATION); DROP TABLE SVCHWL.CLUSTERSERVERMAPPING; COMMIT; CREATE TABLE SVCHWL.CLUSTERSERVERMAPPING (CLUSTERNAME VARCHAR(250) NOT NULL, SERVERLOCATION VARCHAR(250) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX SVCHWL.CLUSTERSERVERMAPPING1 ON SVCHWL.CLUSTERSERVERMAPPING (CLUSTERNAME ASC, SERVERLOCATION ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE SVCHWL.CLUSTERSERVERMAPPING ADD CONSTRAINT PK_CLUSTERSERVERM2 PRIMARY KEY (CLUSTERNAME, SERVERLOCATION); COMMIT; /* Note:The job must return a JCL return code of 0. Verify the whole log to certify that SQL Codes returned are 000 or acceptable. It is acceptable to get a SQLCODE = -204 error when trying to drop an undefined table.
IWNDBELM job
3. The IWNDBELM job (see Example 3-7) creates the element catalog table.
Example 3-7 IWNDBELM job
//IWNDBELM JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M, // MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1) /*JOBPARM L=999,SYSAFF=SC70 //DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20 //STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR //SYSTSPRT DD SYSOUT=* //SYSPRINT DD SYSOUT=* //SYSUDUMP DD SYSOUT=* //SYSTSIN DD * DSN SYSTEM(D9D4) RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD') //**************************************************************** //****CREATE CSM DATABASE AND CSMTS TABLESPACE ******************* //**************************************************************** //SYSIN DD * DROP TABLE EC.ELEMENTS; COMMIT; CREATE TABLE EC.ELEMENTS
Chapter 3. Tivoli Storage Productivity Center for Replication install on z/OS
(INTERNALNAME VARCHAR(250) NOT NULL, ELEMENTID VARCHAR(250) NOT NULL, PARENTID VARCHAR(250), NICKNAME VARCHAR(250), BASETYPE VARCHAR(250), SPECIFICTYPE1 VARCHAR(250), CAPACITY VARCHAR(250), CAPACITYTYPE VARCHAR(250), ISALIVE SMALLINT NOT NULL, ISPROTECTED SMALLINT NOT NULL DEFAULT 0, PROTECTIONBY VARCHAR(32) NOT NULL DEFAULT 'nobody', SITELOCATIONID SMALLINT NOT NULL DEFAULT 0, ISSPACEEFFICIENT SMALLINT NOT NULL DEFAULT 0, SUPERPARENTID VARCHAR(250), ELEMENTTYPE VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX EC.ELEMENTS1 ON EC.ELEMENTS (ELEMENTID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE EC.ELEMENTS ADD CONSTRAINT PK_ELEMENTS PRIMARY KEY (ELEMENTID); DROP TABLE EC.SITELOCATION; COMMIT; CREATE TABLE EC.SITELOCATION (SITELOCATIONID SMALLINT NOT NULL, SITENAME VARCHAR (250) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX EC.SITELOCATION1 ON EC.SITELOCATION (SITELOCATIONID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; CREATE UNIQUE INDEX EC.SITELOCATION2 ON EC.SITELOCATION (SITENAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; ALTER TABLE EC.SITELOCATION ADD CONSTRAINT PK_SITELOCATION PRIMARY KEY
(SITELOCATIONID); ALTER TABLE EC.SITELOCATION ADD CONSTRAINT UN_SITELOCATION UNIQUE (SITENAME); COMMIT; /*
Note: The job must return a JCL return code of 0. Review the entire job log to verify that the SQL codes returned are 000 or otherwise acceptable. An SQLCODE = -204 error is acceptable when trying to drop an undefined table.
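If you want to confirm the result of this job before continuing, the element catalog tables can be listed with a catalog query such as the following. This query is an illustration only (it is not part of the documented procedure) and relies on the standard SYSIBM.SYSTABLES catalog table:

SELECT NAME, DBNAME, TSNAME
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = 'EC'
   AND DBNAME  = 'TPCR510';

You should see EC.ELEMENTS and EC.SITELOCATION listed in the TPCR510 database.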
IWNDBHWL job
4. The IWNDBHWL job (see Example 3-8) creates the hardware layer table.
Example 3-8 IWNDBHWL job
//IWNDBHWL JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M, // MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1) /*JOBPARM L=999,SYSAFF=SC70 //DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20 //STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR //SYSTSPRT DD SYSOUT=* //SYSPRINT DD SYSOUT=* //SYSUDUMP DD SYSOUT=* //SYSTSIN DD * DSN SYSTEM(D9D4) RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD') //**************************************************************** //****CREATE CSM DATABASE AND CSMTS TABLESPACE ******************* //**************************************************************** //SYSIN DD * DROP TABLE ESSHWL.ESSSERVERMAPPING; COMMIT; CREATE TABLE ESSHWL.ESSSERVERMAPPING (ESSNAME VARCHAR(250) NOT NULL, SERVERLOCATION VARCHAR(250) NOT NULL, ESSINFORMATION BLOB(1K) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX ESSHWL.ESSSERVERMAPPING1 ON ESSHWL.ESSSERVERMAPPING (SERVERLOCATION ASC, ESSNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE ESSHWL.ESSSERVERMAPPING ADD CONSTRAINT PK_ESSSERVERMAPPIN PRIMARY KEY (SERVERLOCATION, ESSNAME); CREATE LOB TABLESPACE ESSINFTS
IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE ESSHWL.ESSSERVERMAPPING_ESSINFORMATION_TAB IN TPCR510.ESSINFTS STORES ESSHWL.ESSSERVERMAPPING COLUMN ESSINFORMATION; CREATE UNIQUE INDEX ESSHWL.ESSSERVERMAPPING_ESSINFORMATION_TAB1 ON ESSHWL.ESSSERVERMAPPING_ESSINFORMATION_TAB USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; DROP TABLE ESSHWL.SERVERLOCATIONMAPPING; COMMIT; DROP TABLE ESSHWL.SERVERLOCATIONMAPPING; COMMIT; CREATE TABLE ESSHWL.SERVERLOCATIONMAPPING (SERVERLOCATION VARCHAR(250) NOT NULL, USERNAME VARCHAR(250), PASSWORD VARCHAR(250), UNIQUENAME VARCHAR(250), NOTIFICATIONPORT INTEGER NOT NULL, SERVERTYPE SMALLINT NOT NULL, HOSTNAMECLUSTER0 VARCHAR(250), PORTNUMBERCLUSTER0 INTEGER NOT NULL, USERNAMECLUSTER0 VARCHAR(250), PASSWORDCLUSTER0 VARCHAR(250), HOSTNAMECLUSTER1 VARCHAR(250), PORTNUMBERCLUSTER1 INTEGER NOT NULL, USERNAMECLUSTER1 VARCHAR(250), PASSWORDCLUSTER1 VARCHAR(250), SITELOCATIONID SMALLINT NOT NULL DEFAULT 0) IN TPCR510.CSMTS; CREATE UNIQUE INDEX ESSHWL.SERVERLOCATIONMAPPING1 ON ESSHWL.SERVERLOCATIONMAPPING (SERVERLOCATION ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE ESSHWL.SERVERLOCATIONMAPPING ADD CONSTRAINT PK_SERVERLOCATION PRIMARY KEY (SERVERLOCATION); DROP TABLE ESSHWL.ESSPATH; COMMIT; CREATE TABLE ESSHWL.ESSPATH (PATHID VARCHAR(250) NOT NULL, PATHTYPE SMALLINT NOT NULL, SERVERLOCATION VARCHAR(250) NOT NULL,
PATHSTATE SMALLINT NOT NULL, SOURCEPORTS BLOB(1K) NOT NULL, TARGETPORTS BLOB(1K) NOT NULL, AUTOGENERATED BLOB(1K) NOT NULL, SOURCEWWNN VARCHAR(250) NOT NULL, TARGETWWNN VARCHAR(250) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX ESSHWL.ESSPATH1 ON ESSHWL.ESSPATH (PATHTYPE ASC, PATHID ASC, SERVERLOCATION ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE ESSHWL.ESSPATH ADD CONSTRAINT PK_ESSPATH PRIMARY KEY (PATHTYPE, PATHID, SERVERLOCATION); DROP TABLESPACE TPCR510.SRCPORTS; COMMIT; CREATE LOB TABLESPACE SRCPORTS IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE ESSHWL.ESSPATH_SOURCEPORTS_TAB IN TPCR510.SRCPORTS STORES ESSHWL.ESSPATH COLUMN SOURCEPORTS; CREATE UNIQUE INDEX ESSHWL.ESSPATH_SOURCEPORTS_TAB1 ON ESSHWL.ESSPATH_SOURCEPORTS_TAB USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; DROP TABLESPACE TPCR510.TGTPORTS; COMMIT; CREATE LOB TABLESPACE TGTPORTS IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE ESSHWL.ESSPATH_TARGETPORTS_TAB IN TPCR510.TGTPORTS STORES ESSHWL.ESSPATH COLUMN TARGETPORTS; CREATE UNIQUE INDEX ESSHWL.ESSPATH_TARGETPORTS_TAB1 ON ESSHWL.ESSPATH_TARGETPORTS_TAB USING STOGROUP SYSDEFLT PRIQTY 3200
SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; DROP TABLESPACE TPCR510.AUTOGENE; COMMIT; CREATE LOB TABLESPACE AUTOGENE IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE ESSHWL.ESSPATH_AUTOGENERATED_TAB IN TPCR510.AUTOGENE STORES ESSHWL.ESSPATH COLUMN AUTOGENERATED; CREATE UNIQUE INDEX ESSHWL.ESSPATH_AUTOGENERATED_TAB1 ON ESSHWL.ESSPATH_AUTOGENERATED_TAB USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; DROP TABLE ESSHWL.ASYNCMASTERQUERYMONITOR; COMMIT; CREATE TABLE ESSHWL.ASYNCMASTERQUERYMONITOR (ESSNAME VARCHAR(250) NOT NULL, SESSIONNUMBER INTEGER NOT NULL, LSSNUMBER INTEGER NOT NULL, SSID INTEGER NOT NULL, QUERYISRUNNING SMALLINT NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX ESSHWL.ASYNCMASTERQUERYMONITOR1 ON ESSHWL.ASYNCMASTERQUERYMONITOR (ESSNAME ASC, SESSIONNUMBER ASC, LSSNUMBER ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE ESSHWL.ASYNCMASTERQUERYMONITOR ADD CONSTRAINT PK_ASYNCMASTERQUER PRIMARY KEY (ESSNAME, SESSIONNUMBER, LSSNUMBER); COMMIT; /*
Note: The job must return a JCL return code of 0. Review the job log to verify that the SQL codes returned are 000 or otherwise acceptable. An SQLCODE = -204 error is acceptable when trying to drop an undefined table.
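The IWNDBHWL job pairs each BLOB column with a LOB table space and an auxiliary table. As an optional illustration (not part of the documented procedure), the relationships can be checked with the standard SYSIBM.SYSAUXRELS catalog table:

SELECT TBNAME, AUXTBNAME
  FROM SYSIBM.SYSAUXRELS
 WHERE TBOWNER = 'ESSHWL';

Each BLOB column of ESSHWL.ESSSERVERMAPPING and ESSHWL.ESSPATH should map to one of the _TAB auxiliary tables created by the job.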
IWNDBREP job
5. The IWNDBREP job (see Example 3-9) creates the Tivoli Storage Productivity Center for Replication table.
Example 3-9 IWNDBREP job
//IWNDBREP JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M, // MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1) /*JOBPARM L=999,SYSAFF=SC70 //DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20 //STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR //SYSTSPRT DD SYSOUT=* //SYSPRINT DD SYSOUT=* //SYSUDUMP DD SYSOUT=* //SYSTSIN DD * DSN SYSTEM(D9D4) RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD') //**************************************************************** //****CREATE CSM DATABASE AND CSMTS TABLESPACE ******************* //**************************************************************** //SYSIN DD * DROP TABLE REPMGR.CSMUSER; COMMIT; CREATE TABLE REPMGR.CSMUSER ("NAME" VARCHAR(255) NOT NULL , "TYPE" INTEGER NOT NULL , "PERMISSION" VARCHAR(255) NOT NULL , "RESOURCE" VARCHAR(255) NOT NULL , "ACTIONS" VARCHAR(255) NOT NULL ) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.CSMUSER1 ON REPMGR.CSMUSER (NAME ASC, TYPE ASC, PERMISSION ASC, RESOURCE ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.CSMUSER ADD CONSTRAINT PK_CSMUSER PRIMARY KEY (NAME, TYPE, PERMISSION, RESOURCE); DROP TABLE REPMGR.CSMPROPERTIES; COMMIT; CREATE TABLE REPMGR.CSMPROPERTIES (PROP_NAME VARCHAR(250) NOT NULL, PROP_VALUE VARCHAR(250) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.CSMPROPERTIES1 ON REPMGR.CSMPROPERTIES (PROP_NAME ASC) USING STOGROUP SYSDEFLT
PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.CSMPROPERTIES ADD CONSTRAINT PK_CSMPROPERTIES PRIMARY KEY (PROP_NAME); DROP TABLE REPMGR.DRIVER; COMMIT; CREATE TABLE REPMGR.DRIVER (DRIVERID VARCHAR(250) NOT NULL, TYPE1 VARCHAR(250), ISSHADOWING SMALLINT NOT NULL, CONTROLCLASS VARCHAR(250), NUMBEROFPAIRS INTEGER NOT NULL, CGINFO BLOB(13000), SEQUENCENAME VARCHAR(250), SESSIONNAME VARCHAR(250) NOT NULL, ISDEFINED SMALLINT NOT NULL, TIMESTAMP1 VARCHAR(250), INUSE SMALLINT NOT NULL, HWCONSISTENCYGROUPNAME VARCHAR(250), ISNEW SMALLINT NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.DRIVER1 ON REPMGR.DRIVER (DRIVERID ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.DRIVER ADD CONSTRAINT PK_DRIVER PRIMARY KEY (DRIVERID, SESSIONNAME); DROP TABLESPACE TPCR510.CGINFOTS; COMMIT; CREATE LOB TABLESPACE CGINFOTS IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.DRIVER_CGINFO_TAB IN TPCR510.CGINFOTS STORES REPMGR.DRIVER COLUMN CGINFO; CREATE UNIQUE INDEX REPMGR.DRIVER_CGINFO_TAB1 ON REPMGR.DRIVER_CGINFO_TAB USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO;
DROP TABLE REPMGR.PAIR; COMMIT; CREATE TABLE REPMGR.PAIR (SOURCEID VARCHAR(250) NOT NULL, TARGETID VARCHAR(250) NOT NULL, STATE1 VARCHAR(250), DRIVERID VARCHAR(250), PENDINGSTATE VARCHAR(250), SESSIONNAME VARCHAR(250) NOT NULL, SEQUENCENAME VARCHAR(250) NOT NULL, ISNEW SMALLINT NOT NULL, TIMESTAMP1 VARCHAR(250), ISSHADOWING SMALLINT NOT NULL, ISCONSISTENT SMALLINT NOT NULL, LASTRESULTTYPE INTEGER NOT NULL, LASTRESULTMSGKEY VARCHAR(10), LASTRESULTINSERTS BLOB(11000), LASTRESULTTIME TIMESTAMP NOT NULL, PENDINGREASONCODE INTEGER NOT NULL, ISINHWCONSISTENCYGROUP SMALLINT NOT NULL, DIRECTION SMALLINT NOT NULL, COPYSETID VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.PAIR1 ON REPMGR.PAIR (SOURCEID ASC, TARGETID ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.PAIR ADD CONSTRAINT PK_PAIR PRIMARY KEY (SOURCEID, TARGETID, SESSIONNAME); DROP TABLESPACE TPCR510.LRESINTS; COMMIT; CREATE LOB TABLESPACE LRESINTS IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.PAIR_LASTRESULTINSERTS_TAB IN TPCR510.LRESINTS STORES REPMGR.PAIR COLUMN LASTRESULTINSERTS; CREATE UNIQUE INDEX REPMGR.PAIR_LASTRESULTINSERTS_TAB1 ON REPMGR.PAIR_LASTRESULTINSERTS_TAB USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO;
DROP TABLE REPMGR.SESSION1; COMMIT; CREATE TABLE REPMGR.SESSION1 (SESSIONNAME VARCHAR(250) NOT NULL, COPYRULES BLOB(65000), CMDBEINGPROCESSED VARCHAR(250), STATE VARCHAR(250), DESCRIPTION VARCHAR(250), PRODUCTIONROLE VARCHAR(250), NUMACTIVEDRIVERS INTEGER NOT NULL, INUSE SMALLINT NOT NULL, INGENERATE VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SESSION11 ON REPMGR.SESSION1 (SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.SESSION1 ADD CONSTRAINT PK_SESSION1 PRIMARY KEY (SESSIONNAME); DROP TABLESPACE TPCR510.CPYRULTS; COMMIT; CREATE LOB TABLESPACE CPYRULTS IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.SESSION1_COPYRULES_TAB IN TPCR510.CPYRULTS STORES REPMGR.SESSION1 COLUMN COPYRULES; CREATE UNIQUE INDEX REPMGR.SESSION1_COPYRULES_TAB1 ON REPMGR.SESSION1_COPYRULES_TAB USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; DROP TABLE REPMGR.COPYSET; COMMIT; CREATE TABLE REPMGR.COPYSET (COPYSETID VARCHAR(250) NOT NULL, CONSISTENCYLEVEL VARCHAR(250), SESSIONNAME VARCHAR(250) NOT NULL, INGAME SMALLINT NOT NULL, ISVALID SMALLINT NOT NULL, ISVERIFIED SMALLINT NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.COPYSET1
ON REPMGR.COPYSET (COPYSETID ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.COPYSET ADD CONSTRAINT PK_COPYSET PRIMARY KEY (COPYSETID, SESSIONNAME); DROP TABLE REPMGR.POINT; COMMIT; CREATE TABLE REPMGR.POINT (POINTID VARCHAR(250) NOT NULL, POINTNUMBER INTEGER NOT NULL, SESSIONID VARCHAR(250) NOT NULL, COPYSETID VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.POINT1 ON REPMGR.POINT (POINTID ASC, SESSIONID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.POINT ADD CONSTRAINT PK_POINT PRIMARY KEY (POINTID, SESSIONID); DROP TABLE REPMGR.SEQUENCE; COMMIT; CREATE TABLE REPMGR.SEQUENCE (SEQUENCENAME VARCHAR(250) NOT NULL, SESSIONNAME VARCHAR(250) NOT NULL, DIRECTION SMALLINT NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SEQUENCE1 ON REPMGR.SEQUENCE (SEQUENCENAME ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.SEQUENCE ADD CONSTRAINT PK_SEQUENCE PRIMARY KEY (SEQUENCENAME, SESSIONNAME); DROP TABLE REPMGR.HA_STATUS; COMMIT; CREATE TABLE REPMGR.HA_STATUS (HOSTNAME VARCHAR(250) NOT NULL, IPADDRESS VARCHAR(36) NOT NULL, ROLE SMALLINT NOT NULL DEFAULT 0, STATE SMALLINT NOT NULL DEFAULT 0,
TIME_STAMP TIMESTAMP NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.HA_STATUS1 ON REPMGR.HA_STATUS (HOSTNAME ASC, IPADDRESS ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.HA_STATUS ADD CONSTRAINT PK_HA_STATUS PRIMARY KEY (HOSTNAME, IPADDRESS); DROP TABLE REPMGR.SNMP_MGRS; COMMIT; CREATE TABLE REPMGR.SNMP_MGRS (MGRNAME VARCHAR (250) NOT NULL, PORT INTEGER NOT NULL WITH DEFAULT 162) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SNMP_MGRS1 ON REPMGR.SNMP_MGRS (MGRNAME ASC, PORT ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.SNMP_MGRS ADD CONSTRAINT PK_SNMP_MGRS PRIMARY KEY ( MGRNAME, PORT); CREATE TABLE REPMGR.SITELOCATIONS (SITELOCATIONID SMALLINT NOT NULL, SESSIONNAME VARCHAR (250) NOT NULL, SITENAME VARCHAR(20) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SITELOCATIONS1 ON REPMGR.SITELOCATIONS (SESSIONNAME ASC, SITENAME ASC, SITELOCATIONID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.SITELOCATIONS ADD CONSTRAINT PK_SITELOCATIONS PRIMARY KEY (SESSIONNAME, SITENAME, SITELOCATIONID); COMMIT; /*
Note: The job must return a JCL return code of 0. Review the entire job log to verify that the SQL codes returned are 000 or otherwise acceptable. An SQLCODE = -204 error is acceptable when trying to drop an undefined table.
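As a further optional check, again only an illustration that is not part of the documented procedure, the replication manager tables created so far can be listed from the DB2 catalog:

SELECT NAME, TYPE
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = 'REPMGR'
   AND DBNAME  = 'TPCR510'
 ORDER BY NAME;

Base tables are returned with TYPE = 'T', and the auxiliary tables for the LOB columns with TYPE = 'X'.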
IWNDBHAE job
6. The IWNDBHAE job (see Example 3-10) creates the high availability backup tables for the element catalog.
Example 3-10 IWNDBHAE job
//IWNDBHAE JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M, // MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1) /*JOBPARM L=999,SYSAFF=SC70 //DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20 //STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR //SYSTSPRT DD SYSOUT=* //SYSPRINT DD SYSOUT=* //SYSUDUMP DD SYSOUT=* //SYSTSIN DD * DSN SYSTEM(D9D4) RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD') //****************************************************************** //*** CREATE CSM DATABASE AND CSMTS TABLESPACE *********************/ //****************************************************************** //SYSIN DD * CREATE TABLE EC.ELEMENTS_bak (INTERNALNAME VARCHAR(250) NOT NULL, ELEMENTID VARCHAR(250) NOT NULL, PARENTID VARCHAR(250), NICKNAME VARCHAR(250), BASETYPE VARCHAR(250), SPECIFICTYPE1 VARCHAR(250), CAPACITY VARCHAR(250), CAPACITYTYPE VARCHAR(250), ISALIVE SMALLINT NOT NULL, ISPROTECTED SMALLINT NOT NULL DEFAULT 0, PROTECTIONBY VARCHAR(32) NOT NULL DEFAULT 'nobody', SITELOCATIONID SMALLINT NOT NULL DEFAULT 0, ISSPACEEFFICIENT SMALLINT NOT NULL DEFAULT 0, SUPERPARENTID VARCHAR(250), ELEMENTTYPE VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX EC.ELEMENTS_bak1 ON EC.ELEMENTS_bak (ELEMENTID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO
BUFFERPOOL BP0 CLOSE NO; ALTER TABLE EC.ELEMENTS_bak ADD CONSTRAINT PK_ELEMENTS_bak PRIMARY KEY (ELEMENTID); CREATE TABLE EC.SITELOCATION_bak (SITELOCATIONID SMALLINT NOT NULL, SITENAME VARCHAR (250) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX EC.SITELOCATION_bak1 ON EC.SITELOCATION_bak (SITELOCATIONID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; CREATE UNIQUE INDEX EC.SITELOCATION_bak2 ON EC.SITELOCATION_bak (SITENAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE EC.SITELOCATION_bak ADD CONSTRAINT PK_SITELOCA_bak PRIMARY KEY (SITELOCATIONID) ADD CONSTRAINT UN_SITELOCA_bak UNIQUE (SITENAME); COMMIT; /*
Note: The job must return a JCL return code of 0. Review the job log to verify that the SQL codes returned are 000 or otherwise acceptable.
IWNDBHAH job
7. The IWNDBHAH job (see Example 3-11) provides high availability backup for the hardware layer.
Example 3-11 IWNDBHAH job
//IWNDBHAH JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M, // MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1) /*JOBPARM L=999,SYSAFF=SC70 //DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20 //STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=* //SYSPRINT DD SYSOUT=* //SYSUDUMP DD SYSOUT=* //SYSTSIN DD * DSN SYSTEM(D9D4) RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD') //****************************************************************** //*** CREATE CSM DATABASE AND CSMTS TABLESPACE *********************/ //****************************************************************** //SYSIN DD * CREATE TABLE ESSHWL.ESSSERVERMAPPING_bak (ESSNAME VARCHAR(250) NOT NULL, SERVERLOCATION VARCHAR(250) NOT NULL, ESSINFORMATION BLOB(1K) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX ESSHWL.ESSSERVERMAPPING_bak1 ON ESSHWL.ESSSERVERMAPPING_bak (SERVERLOCATION ASC, ESSNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE ESSHWL.ESSSERVERMAPPING_bak ADD CONSTRAINT PK_ESSSERVERMA_bak PRIMARY KEY (SERVERLOCATION, ESSNAME); CREATE LOB TABLESPACE ESSINFTB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE ESSHWL.ESSSERVERMAPPING_ESSINFORMATION_TAB_bak IN TPCR510.ESSINFTB STORES ESSHWL.ESSSERVERMAPPING_bak COLUMN ESSINFORMATION; CREATE UNIQUE INDEX ESSHWL.ESSSERVERMAPPING_ESSINFORMATION_TAB_bak1 ON ESSHWL.ESSSERVERMAPPING_ESSINFORMATION_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; CREATE TABLE ESSHWL.SERVERLOCATIONMAPPING_bak
(SERVERLOCATION VARCHAR(250) NOT NULL, USERNAME VARCHAR(250), PASSWORD VARCHAR(250), UNIQUENAME VARCHAR(250), NOTIFICATIONPORT INTEGER NOT NULL, SERVERTYPE SMALLINT NOT NULL, HOSTNAMECLUSTER0 VARCHAR(250), PORTNUMBERCLUSTER0 INTEGER NOT NULL, USERNAMECLUSTER0 VARCHAR(250), PASSWORDCLUSTER0 VARCHAR(250), HOSTNAMECLUSTER1 VARCHAR(250), PORTNUMBERCLUSTER1 INTEGER NOT NULL, USERNAMECLUSTER1 VARCHAR(250), PASSWORDCLUSTER1 VARCHAR(250), SITELOCATIONID SMALLINT NOT NULL DEFAULT 0) IN TPCR510.CSMTS; CREATE UNIQUE INDEX ESSHWL.SERVERLOCATIONMAPPING_bak1 ON ESSHWL.SERVERLOCATIONMAPPING_bak (SERVERLOCATION ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE ESSHWL.SERVERLOCATIONMAPPING_bak ADD CONSTRAINT PK_SERVERLOCAT_bak PRIMARY KEY (SERVERLOCATION); CREATE TABLE ESSHWL.ESSPATH_bak (PATHID VARCHAR(250) NOT NULL, PATHTYPE SMALLINT NOT NULL, SERVERLOCATION VARCHAR(250) NOT NULL, PATHSTATE SMALLINT NOT NULL, SOURCEPORTS BLOB(1K) NOT NULL, TARGETPORTS BLOB(1K) NOT NULL, AUTOGENERATED BLOB(1K) NOT NULL, SOURCEWWNN VARCHAR(250) NOT NULL, TARGETWWNN VARCHAR(250) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX ESSHWL.ESSPATH_bak1 ON ESSHWL.ESSPATH_bak (PATHTYPE ASC, PATHID ASC, SERVERLOCATION ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE ESSHWL.ESSPATH_bak ADD CONSTRAINT PK_ESSPATH_bak PRIMARY KEY (PATHTYPE, PATHID, SERVERLOCATION);
CREATE LOB TABLESPACE SRCPORTB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE ESSHWL.ESSPATH_SOURCEPORTS_TAB_bak IN TPCR510.SRCPORTB STORES ESSHWL.ESSPATH_bak COLUMN SOURCEPORTS; CREATE UNIQUE INDEX ESSHWL.ESSPATH_SOURCEPORTS_TAB_bak1 ON ESSHWL.ESSPATH_SOURCEPORTS_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; CREATE LOB TABLESPACE TGTPORTB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE ESSHWL.ESSPATH_TARGETPORTS_TAB_bak IN TPCR510.TGTPORTB STORES ESSHWL.ESSPATH_bak COLUMN TARGETPORTS; CREATE UNIQUE INDEX ESSHWL.ESSPATH_TARGETPORTS_TAB_bak1 ON ESSHWL.ESSPATH_TARGETPORTS_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO;
CREATE LOB TABLESPACE AUTOGENB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE ESSHWL.ESSPATH_AUTOGENERATED_TAB_bak IN TPCR510.AUTOGENB STORES ESSHWL.ESSPATH_bak
COLUMN AUTOGENERATED; CREATE UNIQUE INDEX ESSHWL.ESSPATH_AUTOGENERATED_TAB_bak1 ON ESSHWL.ESSPATH_AUTOGENERATED_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; CREATE TABLE ESSHWL.ASYNCMASTERQUERYMONITOR_bak (ESSNAME VARCHAR(250) NOT NULL, SESSIONNUMBER INTEGER NOT NULL, LSSNUMBER INTEGER NOT NULL, SSID INTEGER NOT NULL, QUERYISRUNNING SMALLINT NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX ESSHWL.ASYNCMASTERQUERYMONITOR_bak1 ON ESSHWL.ASYNCMASTERQUERYMONITOR_bak (ESSNAME ASC, SESSIONNUMBER ASC, LSSNUMBER ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE ESSHWL.ASYNCMASTERQUERYMONITOR_bak ADD CONSTRAINT PK_ASYNCMASTER_bak PRIMARY KEY (ESSNAME, SESSIONNUMBER, LSSNUMBER); COMMIT; /*
Note: The job must return a JCL return code of 0. Review the job log to verify that SQL Codes returned are 000 or acceptable.
IWNDBHAR job
8. The IWNDBHAR job (see Example 3-12) provides high availability backup for the TPC for Replication table.
Example 3-12 IWNDBHAR job
//IWNDBHAR JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M,
// MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1)
/*JOBPARM L=999,SYSAFF=SC70
//DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=* //SYSTSIN DD * DSN SYSTEM(D9D4) RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD') //****************************************************************** //*** CREATE CSM DATABASE AND CSMTS TABLESPACE *********************/ //****************************************************************** //SYSIN DD * CREATE TABLE REPMGR.CSMUSER_bak ("NAME" VARCHAR(255) NOT NULL , "TYPE" INTEGER NOT NULL , "PERMISSION" VARCHAR(255) NOT NULL , "RESOURCE" VARCHAR(255) NOT NULL , "ACTIONS" VARCHAR(255) NOT NULL ) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.CSMUSER_bak1 ON REPMGR.CSMUSER_bak (NAME ASC, TYPE ASC, PERMISSION ASC, RESOURCE ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.CSMUSER_bak ADD CONSTRAINT PK_CSMUSER_bak PRIMARY KEY (NAME, TYPE, PERMISSION, RESOURCE);
CREATE TABLE REPMGR.CSMPROPERTIES_bak (PROP_NAME VARCHAR(250) NOT NULL, PROP_VALUE VARCHAR(250) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.CSMPROPERTIES_bak1 ON REPMGR.CSMPROPERTIES_bak (PROP_NAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.CSMPROPERTIES_bak ADD CONSTRAINT PK_CSMPROPER_bak PRIMARY KEY (PROP_NAME);
CREATE TABLE REPMGR.DRIVER_bak
(DRIVERID VARCHAR(250) NOT NULL, TYPE1 VARCHAR(250), ISSHADOWING SMALLINT NOT NULL, CONTROLCLASS VARCHAR(250), NUMBEROFPAIRS INTEGER NOT NULL, CGINFO BLOB(13000), SEQUENCENAME VARCHAR(250), SESSIONNAME VARCHAR(250) NOT NULL, ISDEFINED SMALLINT NOT NULL, TIMESTAMP1 VARCHAR(250), INUSE SMALLINT NOT NULL, HWCONSISTENCYGROUPNAME VARCHAR(250), ISNEW SMALLINT NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.DRIVER1_bak ON REPMGR.DRIVER_bak (DRIVERID ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.DRIVER_bak ADD CONSTRAINT PK_DRIVER_bak PRIMARY KEY (DRIVERID, SESSIONNAME); CREATE LOB TABLESPACE CGINFOTB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.DRIVER_CGINFO_TAB_bak IN TPCR510.CGINFOTB STORES REPMGR.DRIVER_bak COLUMN CGINFO; CREATE UNIQUE INDEX REPMGR.DRIVER_CGINFO_TAB_bak1 ON REPMGR.DRIVER_CGINFO_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO;
CREATE TABLE REPMGR.PAIR_bak (SOURCEID VARCHAR(250) NOT NULL, TARGETID VARCHAR(250) NOT NULL, STATE1 VARCHAR(250), DRIVERID VARCHAR(250),
PENDINGSTATE VARCHAR(250), SESSIONNAME VARCHAR(250) NOT NULL, SEQUENCENAME VARCHAR(250) NOT NULL, ISNEW SMALLINT NOT NULL, TIMESTAMP1 VARCHAR(250), ISSHADOWING SMALLINT NOT NULL, ISCONSISTENT SMALLINT NOT NULL, LASTRESULTTYPE INTEGER NOT NULL, LASTRESULTMSGKEY VARCHAR(10), LASTRESULTINSERTS BLOB(11000), LASTRESULTTIME TIMESTAMP NOT NULL, PENDINGREASONCODE INTEGER NOT NULL, ISINHWCONSISTENCYGROUP SMALLINT NOT NULL, DIRECTION SMALLINT NOT NULL, COPYSETID VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.PAIR_bak1 ON REPMGR.PAIR_bak (SOURCEID ASC, TARGETID ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.PAIR_bak ADD CONSTRAINT PK_PAIR_bak PRIMARY KEY (SOURCEID, TARGETID, SESSIONNAME); CREATE LOB TABLESPACE LRESINTB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.PAIR_LASTRESULTINSERTS_TAB_bak IN TPCR510.LRESINTB STORES REPMGR.PAIR_bak COLUMN LASTRESULTINSERTS; CREATE UNIQUE INDEX REPMGR.PAIR_LASTRESULTINSERTS_TAB_bak1 ON REPMGR.PAIR_LASTRESULTINSERTS_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO;
CREATE TABLE REPMGR.SESSION1_bak (SESSIONNAME VARCHAR(250) NOT NULL,
COPYRULES BLOB(65000), CMDBEINGPROCESSED VARCHAR(250), STATE VARCHAR(250), DESCRIPTION VARCHAR(250), PRODUCTIONROLE VARCHAR(250), NUMACTIVEDRIVERS INTEGER NOT NULL, INUSE SMALLINT NOT NULL, INGENERATE VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SESSION1_bak1 ON REPMGR.SESSION1_bak (SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.SESSION1_bak ADD CONSTRAINT PK_SESSION1_bak PRIMARY KEY (SESSIONNAME); CREATE LOB TABLESPACE CPYRULTB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.SESSION1_COPYRULES_TAB_bak IN TPCR510.CPYRULTB STORES REPMGR.SESSION1_bak COLUMN COPYRULES; CREATE UNIQUE INDEX REPMGR.SESSION1_COPYRULES_TAB_bak1 ON REPMGR.SESSION1_COPYRULES_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO;
CREATE TABLE REPMGR.COPYSET_bak (COPYSETID VARCHAR(250) NOT NULL, CONSISTENCYLEVEL VARCHAR(250), SESSIONNAME VARCHAR(250) NOT NULL, INGAME SMALLINT NOT NULL, ISVALID SMALLINT NOT NULL, ISVERIFIED SMALLINT NOT NULL) IN TPCR510.CSMTS;
CREATE UNIQUE INDEX REPMGR.COPYSET_bak1 ON REPMGR.COPYSET_bak (COPYSETID ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.COPYSET_bak ADD CONSTRAINT PK_COPYSET_bak PRIMARY KEY (COPYSETID, SESSIONNAME);
CREATE TABLE REPMGR.POINT_bak (POINTID VARCHAR(250) NOT NULL, POINTNUMBER INTEGER NOT NULL, SESSIONID VARCHAR(250) NOT NULL, COPYSETID VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.POINT1_bak ON REPMGR.POINT_bak (POINTID ASC, SESSIONID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.POINT_bak ADD CONSTRAINT PK_POINT_bak PRIMARY KEY (POINTID, SESSIONID); CREATE TABLE REPMGR.SEQUENCE_bak (SEQUENCENAME VARCHAR(250) NOT NULL, SESSIONNAME VARCHAR(250) NOT NULL, DIRECTION SMALLINT NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SEQUENCE1_bak ON REPMGR.SEQUENCE_bak (SEQUENCENAME ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.SEQUENCE_bak ADD CONSTRAINT PK_SEQUENC_bak PRIMARY KEY (SEQUENCENAME, SESSIONNAME); CREATE TABLE REPMGR.HA_STATUS_bak
(HOSTNAME VARCHAR(250) NOT NULL, IPADDRESS VARCHAR(36) NOT NULL, ROLE SMALLINT NOT NULL DEFAULT 0, STATE SMALLINT NOT NULL DEFAULT 0, TIME_STAMP TIMESTAMP NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.HA_STATUS1_bak ON REPMGR.HA_STATUS_bak (HOSTNAME ASC, IPADDRESS ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.HA_STATUS_bak ADD CONSTRAINT PK_HA_STATU_bak PRIMARY KEY (HOSTNAME, IPADDRESS);
CREATE TABLE REPMGR.SNMP_MGRS_bak (MGRNAME VARCHAR (250) NOT NULL, PORT INTEGER NOT NULL WITH DEFAULT 162) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SNMP_MGRS1_bak ON REPMGR.SNMP_MGRS_bak (MGRNAME ASC, PORT ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.SNMP_MGRS_bak ADD CONSTRAINT PK_SNMP_MGR_bak PRIMARY KEY ( MGRNAME, PORT);
CREATE TABLE REPMGR.SITELOCATIONS_bak (SITELOCATIONID SMALLINT NOT NULL, SESSIONNAME VARCHAR (250) NOT NULL, SITENAME VARCHAR(20) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SITELOCATIONS1_bak ON REPMGR.SITELOCATIONS_bak (SESSIONNAME ASC, SITENAME ASC, SITELOCATIONID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0
CLOSE NO; ALTER TABLE REPMGR.SITELOCATIONS_bak ADD CONSTRAINT PK_SITELOCATIONS PRIMARY KEY (SESSIONNAME, SITENAME, SITELOCATIONID); COMMIT; /*
Note: The job must return a JCL return code of 0. Review the job log to verify that SQL Codes returned are 000 or acceptable.
IWNDBHAS job
9. The IWNDBHAS job (see Example 3-13) provides high availability backup for the SVC hardware layer.
Example 3-13 IWNDBHAS job
//IWNDBHAS JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M, // MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1) /*JOBPARM L=999,SYSAFF=SC70 //DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20 //STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR //SYSTSPRT DD SYSOUT=* //SYSPRINT DD SYSOUT=* //SYSUDUMP DD SYSOUT=* //SYSTSIN DD * DSN SYSTEM(D9D4) RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD') //****************************************************************** //*** CREATE CSM DATABASE AND CSMTS TABLESPACE *********************/ //****************************************************************** //SYSIN DD * DROP TABLE SVCHWL.LSSERVERLOCATIONMAPPING_bak; COMMIT; CREATE TABLE SVCHWL.LSSERVERLOCATIONMAPPING_bak (SERVERLOCATION VARCHAR(250) NOT NULL, USERNAME VARCHAR(250), PASSWORD VARCHAR(250), UNIQUENAME VARCHAR(250), NOTIFICATIONPORT INTEGER, SITELOCATIONID SMALLINT NOT NULL DEFAULT 0) IN TPCR510.CSMTS; CREATE UNIQUE INDEX SVCHWL.LSSERVERLOCATIONMAPPING_bak1 ON SVCHWL.LSSERVERLOCATIONMAPPING_bak (SERVERLOCATION ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200
ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE SVCHWL.LSSERVERLOCATIONMAPPING_bak ADD CONSTRAINT PK_LSSRVRLOCBAK1 PRIMARY KEY (SERVERLOCATION); DROP TABLE SVCHWL.CLUSTERSERVERMAPPING_bak; COMMIT; CREATE TABLE SVCHWL.CLUSTERSERVERMAPPING_bak (CLUSTERNAME VARCHAR(250) NOT NULL, SERVERLOCATION VARCHAR(250) NOT NULL) IN TPCR510.CSMTS; CREATE UNIQUE INDEX SVCHWL.CLUSTERSERVERMAPPING_bak1 ON SVCHWL.CLUSTERSERVERMAPPING_bak (CLUSTERNAME ASC, SERVERLOCATION ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE SVCHWL.CLUSTERSERVERMAPPING_bak ADD CONSTRAINT PK_CLSTSRVBAK1 PRIMARY KEY (CLUSTERNAME, SERVERLOCATION); COMMIT; /*
Note: The job must return a JCL return code of 0. Review the entire job log to verify that the SQL codes returned are 000 or otherwise acceptable. An SQLCODE = -204 error is acceptable when trying to drop an undefined table.
IWNDBMIG job
10. The IWNDBMIG job (see Example 3-14) updates any table changes that have occurred from release to release of TPC for Replication.
Example 3-14 IWNDBMIG job
//IWNDBMIG JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M, // MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1) /*JOBPARM L=999,SYSAFF=SC70 //**************************************************************** //DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20 //STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR //SYSTSPRT DD SYSOUT=* //SYSPRINT DD SYSOUT=* //SYSUDUMP DD SYSOUT=* //SYSTSIN DD * DSN SYSTEM(D9D4) RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARM('RC0') LIB('DB9DU.RUNLIB.LOAD') //**** MIGRATION CONFIGURATION ASSISTANCE ************************
//**** FOR CSM DATABASE V.R.M.L to V.R.M.L ***********************
//****************************************************************
//SYSIN DD *
ALTER TABLE REPMGR.CSMUSER ADD COLUMN "NAME" VARCHAR (255) NOT NULL WITH DEFAULT 'nobody';
ALTER TABLE REPMGR.CSMUSER ADD COLUMN "TYPE" INTEGER NOT NULL WITH DEFAULT 0;
ALTER TABLE REPMGR.CSMUSER ADD COLUMN "PERMISSION" VARCHAR(255) NOT NULL WITH DEFAULT 'com.ibm.csm.common.permission.LogonPermission';
ALTER TABLE REPMGR.CSMUSER ADD COLUMN "RESOURCE" VARCHAR(255) NOT NULL WITH DEFAULT ' LOGON ';
ALTER TABLE REPMGR.CSMUSER ADD COLUMN "ACTIONS" VARCHAR(255) NOT NULL WITH DEFAULT 'none';
ALTER TABLE REPMGR.CSMUSER DROP PRIMARY KEY; DROP INDEX REPMGR.CSMUSER1; COMMIT; CREATE UNIQUE INDEX REPMGR.CSMUSER1 ON REPMGR.CSMUSER (NAME ASC, TYPE ASC, PERMISSION ASC, RESOURCE ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.CSMUSER ADD CONSTRAINT PK_CSMUSER PRIMARY KEY (NAME, TYPE, PERMISSION, RESOURCE); CREATE TABLE REPMGR.TMP_SESSION1 (SESSIONNAME VARCHAR(250) NOT NULL, COPYRULES BLOB(100000), CMDBEINGPROCESSED VARCHAR(250), STATE VARCHAR(250), DESCRIPTION VARCHAR(250), PRODUCTIONROLE VARCHAR(250), NUMACTIVEDRIVERS INTEGER NOT NULL, INUSE SMALLINT NOT NULL, INGENERATE VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.TMP_SESSION11 ON REPMGR.TMP_SESSION1 (SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; DROP TABLESPACE TPCR510.TMPCPRL; COMMIT; CREATE LOB TABLESPACE TMPCPRL
IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.TMP_SESSION1_COPYRULES_TAB IN TPCR510.TMPCPRL STORES REPMGR.TMP_SESSION1 COLUMN COPYRULES; CREATE UNIQUE INDEX REPMGR.TMP_SESSION1_COPYRULES_TAB1 ON REPMGR.TMP_SESSION1_COPYRULES_TAB USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; INSERT INTO REPMGR.TMP_SESSION1 SELECT * FROM REPMGR.SESSION1; DROP TABLE REPMGR.SESSION1; COMMIT; CREATE TABLE REPMGR.SESSION1 (SESSIONNAME VARCHAR(250) NOT NULL, COPYRULES BLOB(100000), CMDBEINGPROCESSED VARCHAR(250), STATE VARCHAR(250), DESCRIPTION VARCHAR(250), PRODUCTIONROLE VARCHAR(250), NUMACTIVEDRIVERS INTEGER NOT NULL, INUSE SMALLINT NOT NULL, INGENERATE VARCHAR(250)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SESSION11 ON REPMGR.SESSION1 (SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.SESSION1 ADD CONSTRAINT PK_SESSION1 PRIMARY KEY (SESSIONNAME); DROP TABLESPACE TPCR510.CPYRULTS; COMMIT; CREATE LOB TABLESPACE CPYRULTS IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.SESSION1_COPYRULES_TAB IN TPCR510.CPYRULTS STORES REPMGR.SESSION1 COLUMN COPYRULES;
CREATE UNIQUE INDEX REPMGR.SESSION1_COPYRULES_TAB1 ON REPMGR.SESSION1_COPYRULES_TAB USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; INSERT INTO REPMGR.SESSION1 SELECT * FROM REPMGR.TMP_SESSION1; DROP TABLE REPMGR.TMP_SESSION1; COMMIT; CREATE TABLE REPMGR.TMP_SESSION1_bak (SESSIONNAME VARCHAR(250) NOT NULL, COPYRULES BLOB(100000), CMDBEINGPROCESSED VARCHAR(250), STATE VARCHAR(250), DESCRIPTION VARCHAR(250), PRODUCTIONROLE VARCHAR(250), NUMACTIVEDRIVERS INTEGER NOT NULL, INUSE SMALLINT NOT NULL, INGENERATE VARCHAR(250)) in TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.TMP_SESSION1_bak1 ON REPMGR.TMP_SESSION1_bak (SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; DROP TABLESPACE TPCR510.TMPCPRLB; COMMIT; CREATE LOB TABLESPACE TMPCPRLB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.TMP_SESSION1_COPYRULES_TAB_bak IN TPCR510.TMPCPRLB STORES REPMGR.TMP_SESSION1_bak COLUMN COPYRULES; CREATE UNIQUE INDEX REPMGR.TMP_SESSION1_COPYRULES_TAB1_bak ON REPMGR.TMP_SESSION1_COPYRULES_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; INSERT INTO REPMGR.TMP_SESSION1_bak SELECT * FROM REPMGR.SESSION1_bak;
DROP TABLE REPMGR.SESSION1_bak; COMMIT; CREATE TABLE REPMGR.SESSION1_bak (SESSIONNAME VARCHAR(250) NOT NULL, COPYRULES BLOB(100000), CMDBEINGPROCESSED VARCHAR(250), STATE VARCHAR(250), DESCRIPTION VARCHAR(250), PRODUCTIONROLE VARCHAR(250), NUMACTIVEDRIVERS INTEGER NOT NULL, INUSE SMALLINT NOT NULL, INGENERATE VARCHAR(250)) in TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.SESSION1_bak1 ON REPMGR.SESSION1_bak (SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; DROP TABLESPACE TPCR510.CPYRULTB; COMMIT; CREATE LOB TABLESPACE CPYRULTB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.SESSION1_COPYRULES_TAB_bak IN TPCR510.CPYRULTB STORES REPMGR.SESSION1_bak COLUMN COPYRULES; CREATE UNIQUE INDEX REPMGR.SESSION1_COPYRULES_TAB_bak1 ON REPMGR.SESSION1_COPYRULES_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE REPMGR.SESSION1_bak ADD CONSTRAINT PK_SESSION1_bak PRIMARY KEY (SESSIONNAME); INSERT INTO REPMGR.SESSION1_bak SELECT * FROM REPMGR.TMP_SESSION1_bak; DROP TABLE REPMGR.TMP_SESSION1_bak; COMMIT; ALTER TABLE EC.ELEMENTS ADD COLUMN ISPROTECTED SMALLINT NOT NULL WITH DEFAULT 0; ALTER TABLE EC.ELEMENTS ADD COLUMN PROTECTIONBY VARCHAR (32) NOT NULL WITH DEFAULT 'nobody'; ALTER TABLE EC.ELEMENTS ADD COLUMN SITELOCATIONID SMALLINT NOT NULL WITH DEFAULT 0; ALTER TABLE EC.ELEMENTS
DROP CONSTRAINT PK_ELEMENTS; COMMIT; DROP INDEX EC.ELEMENTS1; COMMIT; CREATE UNIQUE INDEX EC.ELEMENTS1 ON EC.ELEMENTS (ELEMENTID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; CREATE UNIQUE INDEX EC.ELEMENTS2 ON EC.ELEMENTS (INTERNALNAME ASC, PARENTID ASC, BASETYPE ASC, SPECIFICTYPE1 ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; ALTER TABLE EC.ELEMENTS ADD CONSTRAINT PK_ELEMENTS PRIMARY KEY(ELEMENTID); COMMIT; ALTER TABLE EC.ELEMENTS_bak ADD COLUMN ISPROTECTED SMALLINT NOT NULL WITH DEFAULT 0; ALTER TABLE EC.ELEMENTS_bak ADD COLUMN PROTECTIONBY VARCHAR (32) NOT NULL WITH DEFAULT 'nobody'; ALTER TABLE EC.ELEMENTS_bak ADD COLUMN SITELOCATIONID SMALLINT NOT NULL WITH DEFAULT 0; ALTER TABLE EC.ELEMENTS_bak DROP CONSTRAINT PK_ELEMENTS_bak; COMMIT; DROP INDEX EC.ELEMENTS_bak1; COMMIT; CREATE UNIQUE INDEX EC.ELEMENTS_bak1 ON EC.ELEMENTS_bak (ELEMENTID ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; CREATE UNIQUE INDEX EC.ELEMENTS_bak2 ON EC.ELEMENTS_bak (INTERNALNAME ASC, PARENTID ASC, BASETYPE ASC, SPECIFICTYPE1 ASC) USING STOGROUP SYSDEFLT PRIQTY 3200
SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; ALTER TABLE EC.ELEMENTS_bak ADD CONSTRAINT PK_ELEMENTS_bak PRIMARY KEY(ELEMENTID); COMMIT; DROP INDEX ESSHWL.ESSSMIDX; COMMIT; CREATE INDEX ESSHWL.ESSSMIDX ON ESSHWL.ESSSERVERMAPPING (ESSNAME ASC, SERVERLOCATION ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; ALTER TABLE ESSHWL.SERVERLOCATIONMAPPING ADD COLUMN SITELOCATIONID SMALLINT NOT NULL WITH DEFAULT 0; DROP INDEX ESSHWL.SERVLIDX; COMMIT; CREATE INDEX ESSHWL.SERVLIDX ON ESSHWL.SERVERLOCATIONMAPPING (SERVERLOCATION ASC, HOSTNAMECLUSTER0 ASC, HOSTNAMECLUSTER1 ASC, SERVERTYPE ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; DROP INDEX ESSHWL.ASCMQIDX; COMMIT; CREATE INDEX ESSHWL.ASCMQIDX ON ESSHWL.ASYNCMASTERQUERYMONITOR (ESSNAME ASC, SESSIONNUMBER ASC, QUERYISRUNNING ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT;
ALTER TABLE ESSHWL.SERVERLOCATIONMAPPING_bak ADD COLUMN SITELOCATIONID SMALLINT NOT NULL WITH DEFAULT 0;
ALTER TABLE SVCHWL.LSSERVERLOCATIONMAPPING ADD COLUMN SITELOCATIONID SMALLINT NOT NULL WITH DEFAULT 0;
ALTER TABLE SVCHWL.LSSERVERLOCATIONMAPPING_bak ADD COLUMN SITELOCATIONID SMALLINT NOT NULL WITH DEFAULT 0;
COMMIT; ALTER TABLE EC.ELEMENTS ADD COLUMN ISSPACEEFFICIENT SMALLINT NOT NULL WITH DEFAULT 0; ALTER TABLE EC.ELEMENTS_bak ADD COLUMN ISSPACEEFFICIENT SMALLINT NOT NULL WITH DEFAULT 0; COMMIT; DROP TABLE REPMGR.DRIVER_tmp; COMMIT; CREATE TABLE REPMGR.DRIVER_tmp (DRIVERID VARCHAR(250) NOT NULL, TYPE1 VARCHAR(250), ISSHADOWING SMALLINT NOT NULL, CONTROLCLASS VARCHAR(250), NUMBEROFPAIRS INTEGER NOT NULL, CGINFO BLOB(50000), SEQUENCENAME VARCHAR(250) NOT NULL, SESSIONNAME VARCHAR(250) NOT NULL, ISDEFINED SMALLINT NOT NULL, TIMESTAMP1 VARCHAR(250), INUSE SMALLINT NOT NULL, HWCONSISTENCYGROUPNAME VARCHAR(250), ISNEW SMALLINT NOT NULL, PRIMARY KEY (DRIVERID, SESSIONNAME)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.DRIVER1_tmp ON REPMGR.DRIVER_tmp (DRIVERID ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; CREATE LOB TABLESPACE CGINFTMP IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.DRIVER_CGINFO_TAB_tmp IN TPCR510.CGINFTMP STORES REPMGR.DRIVER_tmp COLUMN CGINFO; CREATE UNIQUE INDEX REPMGR.DRIVER_CGINFO_TAB1_tmp ON REPMGR.DRIVER_CGINFO_TAB_tmp USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200
ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; INSERT INTO REPMGR.DRIVER_tmp SELECT * FROM REPMGR.DRIVER; COMMIT; DROP TABLE REPMGR.DRIVER; DROP TABLESPACE TPCR510.CGINFOTS; COMMIT; CREATE TABLE REPMGR.DRIVER (DRIVERID VARCHAR(250) NOT NULL, TYPE1 VARCHAR(250), ISSHADOWING SMALLINT NOT NULL, CONTROLCLASS VARCHAR(250), NUMBEROFPAIRS INTEGER NOT NULL, CGINFO BLOB(50000), SEQUENCENAME VARCHAR(250) NOT NULL, SESSIONNAME VARCHAR(250) NOT NULL, ISDEFINED SMALLINT NOT NULL, TIMESTAMP1 VARCHAR(250), INUSE SMALLINT NOT NULL, HWCONSISTENCYGROUPNAME VARCHAR(250), ISNEW SMALLINT NOT NULL, PRIMARY KEY (DRIVERID, SESSIONNAME, SEQUENCENAME)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.DRIVER1 ON REPMGR.DRIVER (DRIVERID ASC, SESSIONNAME ASC, SEQUENCENAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; CREATE LOB TABLESPACE CGINFOTS IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.DRIVER_CGINFO_TAB IN TPCR510.CGINFOTS STORES REPMGR.DRIVER COLUMN CGINFO; CREATE UNIQUE INDEX REPMGR.DRIVER_CGINFO_TAB1 ON REPMGR.DRIVER_CGINFO_TAB USING STOGROUP SYSDEFLT
PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; INSERT INTO REPMGR.DRIVER SELECT * FROM REPMGR.DRIVER_tmp; COMMIT; DROP TABLE REPMGR.DRIVER_tmp; DROP TABLESPACE TPCR510.CGINFTMP; COMMIT; DROP TABLE REPMGR.DRIVER_bak_tmp; COMMIT; CREATE TABLE REPMGR.DRIVER_bak_tmp (DRIVERID VARCHAR(250) NOT NULL, TYPE1 VARCHAR(250), ISSHADOWING SMALLINT NOT NULL, CONTROLCLASS VARCHAR(250), NUMBEROFPAIRS INTEGER NOT NULL, CGINFO BLOB(50000), SEQUENCENAME VARCHAR(250) NOT NULL, SESSIONNAME VARCHAR(250) NOT NULL, ISDEFINED SMALLINT NOT NULL, TIMESTAMP1 VARCHAR(250), INUSE SMALLINT NOT NULL, HWCONSISTENCYGROUPNAME VARCHAR(250), ISNEW SMALLINT NOT NULL, PRIMARY KEY (DRIVERID, SESSIONNAME)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.DRIVER1_bak_tmp ON REPMGR.DRIVER_bak_tmp (DRIVERID ASC, SESSIONNAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; CREATE LOB TABLESPACE CGIBTMP IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO; CREATE AUXILIARY TABLE REPMGR.DRIVER_CGINFO_TAB_bak_tmp IN TPCR510.CGIBTMP
STORES REPMGR.DRIVER_bak_tmp COLUMN CGINFO; CREATE UNIQUE INDEX REPMGR.DRIVER_CGINFO_TAB_bak1_tmp ON REPMGR.DRIVER_CGINFO_TAB_bak_tmp USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; INSERT INTO REPMGR.DRIVER_bak_tmp SELECT * FROM REPMGR.DRIVER_bak; COMMIT; DROP TABLE REPMGR.DRIVER_bak; DROP TABLESPACE TPCR510.CGINFOTB; COMMIT; CREATE TABLE REPMGR.DRIVER_bak (DRIVERID VARCHAR(250) NOT NULL, TYPE1 VARCHAR(250), ISSHADOWING SMALLINT NOT NULL, CONTROLCLASS VARCHAR(250), NUMBEROFPAIRS INTEGER NOT NULL, CGINFO BLOB(50000), SEQUENCENAME VARCHAR(250) NOT NULL, SESSIONNAME VARCHAR(250) NOT NULL, ISDEFINED SMALLINT NOT NULL, TIMESTAMP1 VARCHAR(250), INUSE SMALLINT NOT NULL, HWCONSISTENCYGROUPNAME VARCHAR(250), ISNEW SMALLINT NOT NULL, PRIMARY KEY (DRIVERID, SESSIONNAME, SEQUENCENAME)) IN TPCR510.CSMTS; CREATE UNIQUE INDEX REPMGR.DRIVER1_bak ON REPMGR.DRIVER_bak (DRIVERID ASC, SESSIONNAME ASC, SEQUENCENAME ASC) USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; CREATE LOB TABLESPACE CGINFOTB IN TPCR510 USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 LOG NO;
CREATE AUXILIARY TABLE REPMGR.DRIVER_CGINFO_TAB_bak IN TPCR510.CGINFOTB STORES REPMGR.DRIVER_bak COLUMN CGINFO; CREATE UNIQUE INDEX REPMGR.DRIVER_CGINFO_TAB_bak1 ON REPMGR.DRIVER_CGINFO_TAB_bak USING STOGROUP SYSDEFLT PRIQTY 3200 SECQTY 3200 ERASE NO BUFFERPOOL BP0 CLOSE NO; COMMIT; INSERT INTO REPMGR.DRIVER_bak SELECT * FROM REPMGR.DRIVER_bak_tmp; COMMIT; DROP TABLE REPMGR.DRIVER_CGINFO_TAB_bak_tmp; DROP TABLE REPMGR.DRIVER_bak_tmp; DROP TABLESPACE TPCR510.CGIBTMP; COMMIT; /*
Note: The job must return a JCL return code of 0. Review the job log to verify that the SQL codes returned are 000 or otherwise acceptable. An SQLCODE = -204 error is acceptable when trying to drop an undefined table.
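If you want to confirm that the migration ALTER statements took effect, the new columns can be checked in the DB2 catalog. This query is only an illustration and is not part of the documented procedure:

SELECT NAME, COLTYPE, LENGTH
  FROM SYSIBM.SYSCOLUMNS
 WHERE TBCREATOR = 'EC'
   AND TBNAME    = 'ELEMENTS';

Columns that the IWNDBMIG job adds, such as ISPROTECTED, PROTECTIONBY, SITELOCATIONID, and ISSPACEEFFICIENT, should appear in the result.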
IWNDB2ZZ job
11. The IWNDB2ZZ job (see Example 3-15) sets the initial user that will have access to TPC for Replication, and also sets the communication default for the server to the TPC-R CLI and GUI. In Example 3-15 you must specify the user ID used for the TPC for Replication installation in the INSERT INTO REPMGR.CSMUSER VALUES SQL statement. In our example, WOEMADM is the WebSphere administrator user ID that is used for the TPC for Replication installation.
Example 3-15 IWNDB2ZZ job
//IWNDB2ZZ JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M, // MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1) /*JOBPARM L=999,SYSAFF=SC70 //DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20 //STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR //SYSTSPRT DD SYSOUT=* //SYSPRINT DD SYSOUT=* //SYSUDUMP DD SYSOUT=* //SYSTSIN DD * DSN SYSTEM(D9D4) RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) -
PARMS('RC0') LIB('DB9DU.RUNLIB.LOAD') //**************************************************************** //****CREATE CSM DATABASE AND CSMTS TABLESPACE ******************* //**************************************************************** //SYSIN DD * DELETE FROM REPMGR.CSMUSER WHERE NAME='*'; COMMIT; INSERT INTO REPMGR.CSMUSER VALUES ('csmgui',2,'com.ibm.csm.common.permission.AdministratorPermission', 'SYSTEM','*'); INSERT INTO REPMGR.CSMUSER VALUES ('Administrators',1,'com.ibm.csm.common.permission.LogonPermission', 'Logon','*'); INSERT INTO REPMGR.CSMUSER VALUES ('WOEMADM',0, 'com.ibm.csm.common.permission.AdministratorPermission', 'SYSTEM','*'); COMMIT;
Note: The job must return a JCL return code of 0. Review the job log to verify that the SQL codes returned are 000 or otherwise acceptable. An SQLCODE = -204 error is acceptable when trying to drop an undefined table. To avoid a return code of 4, you can add the PARMS('RC0') parameter as highlighted in Example 3-15.
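To confirm that the initial user was registered, the REPMGR.CSMUSER rows can be displayed from SPUFI or DSNTEP2. Again, this is only an illustrative check and not part of the documented procedure:

SELECT NAME, TYPE, PERMISSION
  FROM REPMGR.CSMUSER;

The result should include the csmgui and Administrators entries plus the administrator user ID that you inserted (WOEMADM in our example).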
12. Verify that all of the tables were created. We use a batch job that queries the DB2 catalog for the TPC for Replication database, similar to the following:

//MLTPCR22 JOB (999,POK),'TPC-R 340',CLASS=A,MSGLEVEL=(1,1),
// MSGCLASS=T,NOTIFY=&SYSUID,REGION=0M,TIME=NOLIMIT
/*JOBPARM L=999,SYSAFF=SC74
//DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB9D9.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(D9D4)
RUN PROGRAM(DSNTEP2) PLAN(DSNTEP91) -
    LIB('DB9DU.RUNLIB.LOAD')
//SYSIN DD *
SELECT * FROM SYSIBM.SYSTABLES WHERE DBNAME = 'TPCR510';
/*

Check whether all tables were created by comparing the table names listed in the output of this SQL command with the table names used in each job that you submitted (search for the CREATE TABLE SQL statements in each job). If you skipped a job by mistake, run it now, before you proceed with the installation. Table 3-2 lists all of the tables in the TPC for Replication database created previously.
Table 3-2 TPC for Replication Database Tables
TPC-R TABLE NAMES
 1  SVCHWL.LSSERVERLOCATIONMAPPING
 2  SVCHWL.CLUSTERSERVERMAPPING
 3  EC.ELEMENTS
 4  EC.SITELOCATION
 5  ESSHWL.ESSSERVERMAPPING
 6  ESSHWL.ESSSERVERMAPPING_ESSINFORMATION_TAB
 7  ESSHWL.SERVERLOCATIONMAPPING
 8  ESSHWL.ESSPATH
 9  ESSHWL.ESSPATH_SOURCEPORTS_TAB
10  ESSHWL.ESSPATH_TARGETPORTS_TAB
11  ESSHWL.ESSPATH_AUTOGENERATED_TAB
12  ESSHWL.ASYNCMASTERQUERYMONITOR
13  REPMGR.CSMUSER
14  REPMGR.CSMPROPERTIES
15  REPMGR.DRIVER
16  REPMGR.DRIVER_CGINFO_TAB
17  REPMGR.PAIR
18  REPMGR.PAIR_LASTRESULTINSERTS_TAB
19  REPMGR.SESSION1
20  REPMGR.SESSION1_COPYRULES_TAB
21  REPMGR.COPYSET
22  REPMGR.POINT
23  REPMGR.SEQUENCE
24  REPMGR.HA_STATUS
25  REPMGR.SNMP_MGRS
26  REPMGR.SITELOCATIONS
27  EC.ELEMENTS_BAK
28  EC.SITELOCATION_BAK
29  ESSHWL.ESSSERVERMAPPING_BAK
30  ESSHWL.ESSSERVERMAPPING_ESSINFORMATION_TAB_BAK
31  ESSHWL.SERVERLOCATIONMAPPING_BAK
32  ESSHWL.ESSPATH_BAK
33  ESSHWL.ESSPATH_SOURCEPORTS_TAB_BAK
34  ESSHWL.ESSPATH_TARGETPORTS_TAB_BAK
35  ESSHWL.ESSPATH_AUTOGENERATED_TAB_BAK
36  ESSHWL.ASYNCMASTERQUERYMONITOR_BAK
37  REPMGR.CSMUSER_BAK
38  REPMGR.CSMPROPERTIES_BAK
39  REPMGR.DRIVER_BAK
40  REPMGR.DRIVER_CGINFO_TAB_BAK
41  REPMGR.PAIR_BAK
42  REPMGR.PAIR_LASTRESULTINSERTS_TAB_BAK
43  REPMGR.SESSION1_BAK
44  REPMGR.SESSION1_COPYRULES_TAB_BAK
45  REPMGR.COPYSET_BAK
46  REPMGR.POINT_BAK
47  REPMGR.SEQUENCE_BAK
48  REPMGR.HA_STATUS_BAK
49  REPMGR.SNMP_MGRS_BAK
50  REPMGR.SITELOCATIONS_BAK
51  SVCHWL.LSSERVERLOCATIONMAPPING_BAK
52  SVCHWL.CLUSTERSERVERMAPPING_BAK

Note: The jobs also contain table names with a TMP qualifier. These are temporary tables that are created and deleted during job execution and are therefore not included in the list of TPC for Replication tables.
After a successful logon, you receive the WebSphere welcome screen as shown in Figure 3-10.
After you log in to the WebSphere console, click the Environment twisty in the left-hand pane. From the expanded list, click WebSphere Variables as shown in Figure 3-11. The right-hand pane displays a list of the WebSphere variables and their values.
Scroll down the WebSphere Variables list to find the following environment variables that are required to define paths to the jar files that connect WebSphere to DB2 using IBM Java:
 DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH
 DB2UNIVERSAL_JDBC_DRIVER_PATH
 DB2_JDBC_DRIVER_PATH
Click DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH and the window in Figure 3-12 appears.
In the Value field enter the path to the DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH and select OK. The path is similar to the following:
-DB2_install_path/lib
Ensure in USS that the following files are there:
 libdb2jcct2zos.so
 libdb2jcct2zos_64.so
 libdb2jcct2zos4.so
 libdb2jcct2zos4_64.so
Repeat the same procedure for DB2UNIVERSAL_JDBC_DRIVER_PATH and DB2_JDBC_DRIVER_PATH. The path for DB2UNIVERSAL_JDBC_DRIVER_PATH and DB2_JDBC_DRIVER_PATH is similar to the following:
-DB2_install_path/jcc/classes
Make sure that the following files are present in USS:
 db2jcc.jar
 db2jcc_javax.jar
 db2jcc_license_cisuz.jar
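The presence of these driver files can be confirmed quickly from a USS shell. The following is only a minimal sketch; DB2_install_path is a placeholder for your actual DB2 installation path, so substitute the same value you entered in the WebSphere variables.

   # List the native Type 2 driver libraries (DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH)
   ls -l DB2_install_path/lib | grep libdb2jcct2zos

   # List the JDBC driver jar files (DB2UNIVERSAL_JDBC_DRIVER_PATH and DB2_JDBC_DRIVER_PATH)
   ls -l DB2_install_path/jcc/classes | grep db2jcc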
Once all three variables are set, click the Save directly to the master configuration link as shown in Figure 3-13.
The next step is to define a data source provider through the WebSphere Admin console by expanding Resources in the left-hand pane. From the expanded Resources menu, expand JDBC and select JDBC Providers. The screen in Figure 3-14 is displayed. In the Create new JDBC provider pane, select your WebSphere node and server as the scope in the pull-down menu and select New under the Preferences section.
You are about to set up the new JDBC Provider as shown in Figure 3-15.
In this first step make sure you have the correct WebSphere cell, node and server name in the Scope field.
Select the following values for the next three fields:
 Database type pull-down list: DB2
 Provider type pull-down list: DB2 Universal JDBC Driver Provider
 Implementation type pull-down list: XA data source
In the Name text field the default description is DB2 Universal JDBC Driver Provider (XA). However, if you have already defined JDBC providers for different applications, you can change it, for example to CSM Provider (CSM stands for Copy Services Manager, which represents TPC for Replication). Click Next to continue. The step 2 panel is displayed (see Figure 3-16).
In this panel you provide the database class path information. Provide the correct location for the db2jcc.jar and db2jcc_license_cisuz.jar files, as well as the location for DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH. Use the same paths that were saved in the WebSphere Variables panel previously (see Figure 3-12 on page 104). Click Next to proceed to the next screen as shown in Figure 3-17.
Verify that all values are correct and click Finish. In the next panel, save the changes to the master configuration by clicking Save directly to the master configuration in the Messages box (see Figure 3-18).
Figure 3-18 Save the new JDBC Provider to the master configuration
After you have set up a JDBC provider, the next step is to define a data source. Navigate in the left-hand pane to Resources, then JDBC, and click the Data sources link as shown in Figure 3-19. Make sure you have the correct WebSphere cell, node, and server name in the Scope field. Click New in the Data sources pane under the Preferences section to continue.
Follow the wizard instructions to create the data source. The first step is shown in Figure 3-20. Verify that you have the correct WebSphere cell, node, and server name in the Scope field. Enter the following values:
 Data source name field: CSMDS
 JNDI name field: jdbc/CSMDS
The component-managed authentication alias for DB2 will be created later. Select Next to proceed to the following step.
Click the Select an existing JDBC Provider radio button and select the JDBC provider you created earlier from the pull-down menu as shown in Figure 3-21. Click Next to continue.
You need to provide the following information in Step 3 (see Figure 3-22):
 Database name: name of the DB2 subsystem on z/OS where the TPC for Replication database is created
 Server name: TCP/IP address or host name of the z/OS system where the above DB2 subsystem resides
 Port number: TCP/IP port number where the DB2 Distributed Relational Database Architecture (DRDA) server resides
To find this information, go to z/OS SDSF and issue the following command:
-DB8Y display DDF
Where -DB8Y is the DB2 recognition character of our DB2 subsystem. The output of this command is shown in Example 3-17.
Example 3-17 DB2 display DDF command
-DB8Y DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
         STATUS=STARTD
LOCATION           LUNAME            GENERICLU
DB8Y               USIBMSC.SCPDB8Y   -NONE
IPADDR        TCPPORT  RESPORT
9.12.4.70     38270    38271
SQL    DOMAIN=wtsc74.itso.ibm.com
RESYNC DOMAIN=wtsc74.itso.ibm.com
DSNLTDDF DISPLAY DDF REPORT COMPLETE
The alternative is to look at the DB2 master address space log (SDSF DA). The DB2 master address space name is similar to XXXXMSTR. Search for the DDF keyword and you should see similar DDF information, as shown in Example 3-18.
Example 3-18 DDF information from DB2 master address space log
-DB8Y DDF START COMPLETE 918
LOCATION   DB8Y
LU         USIBMSC.SCPDB8Y
GENERICLU  -NONE
DOMAIN     wtsc74.itso.ibm.com
TCPPORT    38270
RESPORT    38271
Note: The Distributed Data Facility (DDF) has to be installed and running. Otherwise you will not be able to complete the installation and use TPC for Replication.

Based on our DB2 subsystem installation setup and the information in Example 3-17 and Example 3-18, we used the following values (see Figure 3-22):
 Database name: DB8Y
 Server name: wtsc74.itso.ibm.com
 Port number: 38270
In addition, select 4 from the Driver type pull-down list and check the Use this data source in Container Managed Persistence (CMP) check box.
Note: Driver Type specifies the JDBC connectivity type of the data source. On the z/OS platform, the Type 4 driver implementation connects to the DB2 subsystem using Distributed Relational Database Architecture (DRDA) over TCP/IP, whereas the Type 2 driver implementation connects to the DB2 subsystem through Resource Recovery Services (RRS) and requires DLLs that are included with the DB2 for z/OS V8 and APAR PQ80841 distribution for the driver. The general recommendation is to use the DB2 Universal JDBC Driver Type 2 implementation for local DB2 connections and the Type 4 implementation for remote DB2 connections. "Local" means that the DB2 subsystem is located in the same LPAR as WAS for z/OS, whereas "remote" refers to a DB2 subsystem located on a different LPAR from WAS for z/OS. In our environment Type 4 was used. Click Next to continue.
The next screen (see Figure 3-23) provides a data source summary. Verify that all values are correct and click Finish to continue.
In the next screen click the Save directly to the master configuration link (see Figure 3-24) to save the Data source changes to the master configuration.
Figure 3-24 Save the new Data source to the master configuration
Once the configuration has been saved, click the CSMDS link for the data source you created (see Figure 3-25) in order to set up J2EE Connector Architecture (J2C) authentication.
On the right-hand side of the Data sources pane, under the heading Related Items, click JAAS - J2C authentication data entries as shown in Figure 3-26.
Figure 3-26 CSMDS Data Source - J2EE Connector (J2C) authentication data entries
Figure 3-27 CSMDS Data Source - New JAAS - J2C authentication data entries
Enter any alias (in our environment we use SYSADM) and your DB2 user ID and password. Use the same user ID and password you defined in Example 3-1 on page 51. We used the wsadmin user ID in our installation. The wsadmin user is the TPC for Replication administrative user and it will be used for the first logon to TPC for Replication.

Attention: Ensure that the TPC for Replication user ID you specify has DB2 SYSADM authority. You can grant DB2 SYSADM authority by issuing the following DB2 SQL command via SPUFI:
GRANT SYSADM TO wsadmin;
Or by using the following JCL:
//DSNTIST EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB DD DSN=DB2Y8.SDSNLOAD,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN DD *
 DSN SYSTEM(DB8Y)
 RUN PROGRAM(DSNTIAD) PLAN(DSNTIA81) PARMS('RC0') LIB('DB9D9U.RUNLIB.LOAD')
//SYSIN DD *
GRANT SYSADM TO wsadmin;
Use the following SQL command to list and ensure that wsadmin has the necessary SYSADM authority:
SELECT * FROM SYSIBM.SYSUSERAUTH;
Click OK to continue.
Figure 3-28 CSMDS Data Source - new JAAS - (J2C) authentication data entries definitions
In the next screen click the Save directly to the master configuration link as shown in Figure 3-29 to save the JAAS - J2C authentication data changes to the master configuration.
Figure 3-29 Save CSMDS Data Source - J2EE Connector (J2C) authentication data entries
Return to the CSMDS data source defined previously by clicking the CSMDS link at the top of the Data sources pane as shown in Figure 3-30.
Figure 3-30 CSMDS Data Source - J2EE Connector (J2C) authentication data entries
Scroll down the Data sources CSMDS pane to find the Component-managed authentication alias pull-down list and select SYSADM (or the alias name that you chose). The same alias has to be selected from the Container-managed authentication pull-down list as shown in Figure 3-31. Click Apply to continue.
Figure 3-31 CSMDS Data Source - J2EE Connector (J2C) authentication data entries - SYSADM
In the next screen select Save directly to the master configuration link as shown in Figure 3-32 to save the changes to the master configuration.
Figure 3-32 Save the CSMDS data source changes to the master configuration
The last step before testing the data source connection is to verify the WebSphere security setup on the WebSphere Admin console. Select Security and then the Secure administration, applications and infrastructure link as shown in Figure 3-33. Check the Enable administrative security and Enable application security check boxes. However, TPC for Replication does not require Java 2 security, so this check box should be left unchecked. Click Apply to continue.
In the next screen, click the Save directly to the master configuration link as shown in Figure 3-34 to save the security changes to the master configuration.

Note: You may receive error messages after applying the changes to the WebSphere security setup as shown in Figure 3-34. You should review these messages to determine whether any action is required. In our environment, the second error message is due to the fact that the restrict access to resource authentication check box under the Java 2 Security section was not checked.
Go back to the CSMDS data source by navigating to Resources, then JDBC, then Data sources in the left-hand pane. Select the CSMDS data source by clicking the check box next to it under the Preferences section and click Test Connection as shown in Figure 3-35.
You should see a message that the connection was successful, as shown in Figure 3-36.
If you get a failure message click Troubleshooting and then Log and Trace from the WebSphere Admin console as shown in Figure 3-37. Select your WebSphere server to view the logs and determine the problem. Review the preceding steps to ensure they were done correctly.
//IWNINSTL JOB (999,POK),'TPC-R 5.1.0',CLASS=A,REGION=0M,
// MSGCLASS=T,NOTIFY=&SYSUID,MSGLEVEL=(1,1)
/*JOBPARM L=999,SYSAFF=SC70
//IWNINSTL EXEC PGM=BPXBATCH
//STDOUT DD PATH='/etc/install_RM510.log',
// PATHOPTS=(OCREAT,OTRUNC,OWRONLY),
// PATHMODE=(SIRWXU),
// PATHDISP=KEEP
//STDERR DD PATH='/etc/install_RM510_err.log',
// PATHOPTS=(OCREAT,OTRUNC,OWRONLY),
// PATHMODE=(SIRWXU),
// PATHDISP=KEEP
//STDPARM DD *
SH /usr/lpp/Tivoli/RM/scripts/installRM.sh
//STDENV DD *
CLASSPATH=/usr/lpp/Tivoli/RM/scripts
WAS_HOME=/zWebSphereOEM/V7R0/tpcr510/AppServer
WAS_USER=WOEMADM
WAS_PASSWD=woemadm
WAS_GROUP=WSSR1
WAS_SERVER=server1
WAS_NODE=TPCNODE
WAS_CELL=TPCBASE
JAVA_HOME=/zWebSphereOEM/V7R0/tpcr510/AppServer/java
TPCR_InstallRoot=/usr/lpp/Tivoli/RM
TPCR_ProductionRoot=/var/Tivoli/RM
DB_TYPE=DB2
/*
//ANTRAC EXEC PGM=IKJEFT01
//SYSLBC DD DSN=SYS1.BRODCAST,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
RDEFINE FACILITY ANT.REPLICATIONMANAGER UACC(NONE)
PERMIT ANT.REPLICATIONMANAGER CLASS(FACILITY)+
  ID(WSSRU1) ACCESS(CONTROL)
SETROPTS RACLIST(FACILITY) REFRESH
/*

This job needs to complete with a return code of 0. You must check the allocation messages to verify that the data sets are allocated and cataloged as expected.
Error logs
Once you submit the job, it should take several minutes to complete. Installation progress can be monitored from USS by browsing the installation logs located at the paths defined in STDOUT and STDERR. In our Example 3-19 above, the following installation logs were created:
 /etc/install_RM510.log
 /etc/install_RM510_err.log
Check these logs in order to ensure successful completion of the installation. There is a message at the end of the install_RM.log in our Example 3-20, indicating that the installation completed successfully and without errors.
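If you prefer to watch the installation as it runs, the logs can be followed from a USS shell. This is a minimal sketch that assumes the STDOUT and STDERR paths from the job above; adjust the paths if you changed them.

   # Follow the installation log while the IWNINSTL job runs
   tail -f /etc/install_RM510.log

   # After completion, scan the error log for problems
   grep -i error /etc/install_RM510_err.log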
DONE CONFIGURING NATIVE LIBRARIES
The following Library objects are configured:
------------------------------------------------------------------
JWLLib(cells/cl6641/nodes/nd6641/servers/ws6641|libraries.xml#Library_1189600722
RMSharedLibraries(cells/cl6641/nodes/nd6641/servers/ws6641|libraries.xml#Library
Done installing applications
Done.
******************************** Bottom of Data ********************************

Note: After the IWNINSTL job has run successfully, you need to:
1. Edit the csmConnections.properties file located in the /zWebSphereOEM/V7R0/config1/AppServer/profiles/default/properties/ directory.
2. Change localhost to the IP address or host name of the machine and the port number. In our case we used:
   server=WTSC70.ITSO.IBM.COM
   port=5110
3. Restart the GUI so that it can connect to the local machine.
4. Configure the CLI configuration file /var/Tivoli/RM/CLI/repcli.properties to point to the IP address or host name instead of the localhost value that it defaults to.
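The following is a minimal USS sketch of what the edited files might look like after these steps. The server name and GUI port are the values used in this example environment; the repcli.properties port stays at whatever value your CLI already uses (only the server entry needs to change from localhost).

   cat /zWebSphereOEM/V7R0/config1/AppServer/profiles/default/properties/csmConnections.properties
   server=WTSC70.ITSO.IBM.COM
   port=5110

   # repcli.properties: replace the default localhost with the same host name;
   # keep the existing port value
   grep -E 'server|port' /var/Tivoli/RM/CLI/repcli.properties
   server=WTSC70.ITSO.IBM.COM
   port=...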
3.7 Upgrading from TPC for Replication Basic Edition for System z to TPC for Replication for System z
The procedure to upgrade TPC for Replication Basic Edition for System z to TPC for Replication for System z is based on an SMP/E APPLY of the code into the same location as your TPC for Replication Basic Edition code. You do not need to rerun the IWNINSTL job; just restart WebSphere.
Chapter 4.
Configuring DS8000 storage system for use with Tivoli Storage Productivity Center for Replication
After you have successfully installed the Tivoli Storage Productivity Center for Replication server, you have to connect your storage subsystems to be able to manage the replication services via Tivoli Storage Productivity Center for Replication. With Tivoli Storage Productivity Center for Replication V5.1 the following storage subsystems are supported:
 IBM DS8000 family of Storage Servers
 IBM DS6000
 IBM Enterprise Storage Server 800
 IBM SAN Volume Controller
 IBM V7000
 IBM XIV
This chapter describes the steps that are needed to prepare an ESS 800, DS6000, or DS8000 storage subsystem for use with Tivoli Storage Productivity Center for Replication for System z and how to connect it to Tivoli Storage Productivity Center for Replication. We also cover the use of the Tivoli Storage Productivity Center for Replication Volume Protection feature and show how to remove a configured DS8000 storage subsystem from Tivoli Storage Productivity Center for Replication.
Figure 4-1 Connectivity for Tivoli Storage Productivity Center for Replication and IBM Storage Servers
Click ESS Specialist to start the StorWatch Specialist. You will be asked to enter a valid user name and password to log on, as shown in Figure 4-3.
Enter the administrator user ID and password and click OK to continue. If you are presented with pop-up windows, accept all of them. The StorWatch Specialist screen will be displayed as shown in Figure 4-4.
To add the Tivoli Storage Productivity Center for Replication user, click Modify Users. In the Modify Users panel, define the new user as shown in Figure 4-6. Note: The Tivoli Storage Productivity Center for Replication user must have Administration rights.
Type in the Name and Password you have chosen for the Tivoli Storage Productivity Center for Replication user and click Add. To save the user definition, click Perform Configuration Update. You will be asked to confirm the changes. This user can now be used to define the storage subsystem in the Tivoli Storage Productivity Center for Replication server.
Select your storage subsystem and in the pull-down menu select Configure network ports as shown in Figure 4-8.
Have the following information available to fill in the next panel as shown in Figure 4-9:
 The IP address that is assigned to the Ethernet card port for each server
 The internal DS8000 gateway IP address and subnet mask
 Optionally, the IP addresses of the primary DNS and secondary DNS
Once you enter the data for both ports, click OK. This completes the GUI configuration of the Ethernet ports.
(Figure: private IP network connecting the DSCLI/GUI, the HMC, and the Ethernet ports of server0 and server1. Note: the server locations are not drawn to scale to their real physical locations.)
Use the following DSCLI command to configure Ethernet ports as shown in Example 4-2.
Example 4-2 setnetworkport command example
dscli> setnetworkport -ipaddr 172.31.30.40 -subnet 255.255.0.0 -gateway 172.31.254.1 I9801
Date/Time: Sobota, 10 maj 2008 2:40:49 CEST IBM DSCLI Version: 5.3.0.1022 DS: IBM.2107-7516381
CMUC00250I setnetworkport: You configured network port I9801 successfully.
dscli> setnetworkport -ipaddr 172.31.30.41 -subnet 255.255.0.0 -gateway 172.31.254.1 I9B01
Date/Time: Sobota, 10 maj 2008 2:41:19 CEST IBM DSCLI Version: 5.3.0.1022 DS: IBM.2107-7516381
CMUC00250I setnetworkport: You configured network port I9B01 successfully.
Example 4-3 shows sample output of the lsnetworkport command, which provides an overview of all available Ethernet ports on the storage facility image.
Example 4-3 Output of lsnetworkport command
dscli> lsnetworkport
Date/Time: Sobota, 10 maj 2008 2:43:53 CEST IBM DSCLI Version: 5.3.0.1022 DS: IBM.2107-7516381
ID    IP Address   Subnet Mask Gateway      Primary DNS Secondary DNS State
=============================================================================
I9801 172.31.30.40 255.255.0.0 172.31.254.1 0.0.0.0     0.0.0.0       Online
I9802 0.0.0.0      0.0.0.0     0.0.0.0      0.0.0.0     0.0.0.0       Offline
I9B01 172.31.30.41 255.255.0.0 172.31.254.1 0.0.0.0     0.0.0.0       Online
I9B02 0.0.0.0      0.0.0.0     0.0.0.0      0.0.0.0     0.0.0.0       Offline
Example 4-4 shows sample output of the shownetworkport command, which provides an overview of all settings for both Ethernet ports.
Example 4-4 Output of shownetworkport command for both Ethernet ports
dscli> shownetworkport I9801
Date/Time: Sobota, 10 maj 2008 2:46:46 CEST IBM DSCLI Version: 5.3.0.1022 DS: IBM.2107-7516381
ID            I9801
IP Address    172.31.30.40
Subnet Mask   255.255.0.0
Gateway       172.31.254.1
Primary DNS   0.0.0.0
Secondary DNS 0.0.0.0
State         Online
Server        00
Speed         1 Gb/sec
Type          Ethernet-Copper
Location      U7879.001.DQD0NH0-P1-C1-T1
dscli> shownetworkport I9B01
Date/Time: Sobota, 10 maj 2008 2:47:11 CEST IBM DSCLI Version: 5.3.0.1022 DS: IBM.2107-7516381
ID            I9B01
IP Address    172.31.30.41
Subnet Mask   255.255.0.0
Gateway       172.31.254.1
Primary DNS   0.0.0.0
Secondary DNS 0.0.0.0
State         Online
Server        01
Speed         1 Gb/sec
Type          Ethernet-Copper
Location      U7879.001.DQD0A2H-P1-C1-T1
This definition process is required only once for each DS8000 with Ethernet adapters, and only when you want to establish a direct connection between Tivoli Storage Productivity Center for Replication and the DS8000 systems.
Storage Productivity Center for Replication configuration. ECKD volumes that are not attached to the Tivoli Storage Productivity Center for Replication z/OS management server are not added to the Tivoli Storage Productivity Center for Replication configuration through the z/OS FICON connection. Ensure that all volumes in the logical storage subsystem (LSS) that you want to manage through a z/OS FICON connection are attached to z/OS. Either the entire LSS must be attached to z/OS, or none of the volumes in the LSS should be attached to z/OS, for Tivoli Storage Productivity Center for Replication to properly manage queries to the hardware. Use the following guidelines to add storage systems through a z/OS FICON connection:
 Use the z/OS connection to manage ECKD volumes that are attached to the Tivoli Storage Productivity Center for Replication management server running on z/OS.
 To manage z/OS attached volumes through a z/OS connection (for example, for HyperSwap), you must explicitly add the z/OS connection for that storage system in addition to a TCP/IP connection (either the direct connection or the HMC connection).
 Create a z/OS connection before all TCP/IP connections if you want to continue to have Tivoli Storage Productivity Center for Replication manage only the attached ECKD volumes.
Tip: It is recommended that you create both TCP/IP and z/OS connections for ECKD volumes to allow for greater storage accessibility.
Adding Tivoli Storage Productivity Center for Replication userid and password into DS8000
Some DS8000 systems (older models, that is, 921 and 931) used to come with a predefined user and password for Tivoli Storage Productivity Center for Replication servers. The default user is tpcruser and the password is the serial number of the storage facility image (SFI). Note that it is not the unit number. For example, if the unit number is 27550, then the SFI is 27551. The SFI number is preceded by the code of the manufacturing site, which is usually 75 for the European manufacturing site or 13 when the DS8000 has been built in San Jose. The corresponding password in this case is then 7527551. With Release 2.4 of the DS8000 microcode you may alter the password without the help of an IBM CE. Release 2.4 provides a new DSCLI command to alter the password; Example 4-5 shows this command.
Example 4-5 DSCLI command to alter the password
dscli> setrmpw -server both -rmpw XXXXXXX
Date/Time: Sobota, 10 maj 2008 3:13:10 CEST IBM DSCLI Version: 5.3.0.1022 DS: IBM.2107-7516381
CMUC00265I setrmpw: You have updated the Replication Manager password successfully.

It is useful to specify both to create the same password for both DS8000 servers. Note that the username remains tpcruser. With newer DS8000 systems (941 and 951 models) with microcode Release 6.x, you need to create the user ID for Tivoli Storage Productivity Center for Replication.
The following procedure describes the steps that are required to define the Tivoli Storage Productivity Center for Replication user ID on a DS8000 with microcode Release 6.2 and above. Log on to the DS8000 storage system and, from the navigation pane on the left, hover over the Access icon and select the Users submenu as shown in Figure 4-11.
In the Users window, select Add user from the Action drop-down menu (see Figure 4-12).
The Add User window appears as shown in Figure 4-13. Create the new Tivoli Storage Productivity Center for Replication user ID and assign a password accordingly. Under the Group Assignment section, select Administrator user permissions.
Figure 4-13 Add Tivoli Storage Productivity Center for Replication userid
4.2 Adding IBM ESS or DS Storage Server to Tivoli Storage Productivity Center for Replication server
Before you can use Tivoli Storage Productivity Center for Replication with an IBM ESS or an IBM DS8000/DS6000 storage system, you have to add it to Tivoli Storage Productivity Center for Replication as a storage subsystem. You can do that by using the Tivoli Storage Productivity Center for Replication GUI or the command line interface. You can connect storage systems over TCP/IP either directly or through a Hardware Management Console (HMC), or by using a FICON IBM z/OS connection. A single storage system can be connected using multiple connections for redundancy. For example, you can connect an IBM System Storage DS8000 storage system using an HMC connection and a z/OS connection. Tivoli Storage Productivity Center for Replication monitors how a storage system has been added to the configuration.
4.2.1 Adding an IBM Storage Server using the Tivoli Storage Productivity Center for Replication GUI
Start your Web browser and sign on to your Tivoli Storage Productivity Center for Replication server. Once you are signed on, select the Storage Systems from the Navigation Menu on the left as shown in Figure 4-14.
Click the Add Storage Connection button in order to invoke the Add Storage System wizard as displayed in Figure 4-15.
Direct Connection should be selected for the ESS 800, DS6000, and selected DS8000 models with dedicated Ethernet adapters. The DS8000 HMC Connection is used for all DS8000 models connected to the Tivoli Storage Productivity Center for Replication server via the DS8000 HMC network. For DS8000 models 941 and 951 this is the only available connection over TCP/IP. The z/OS FICON Connection is limited to storage systems that are connected to a Tivoli Storage Productivity Center for Replication management server running on z/OS. In our example we define a DS8000 HMC Connection. Once you select the appropriate connection, click Next to continue. The connection definition window is displayed as shown in Figure 4-16.
Enter the primary DS8000 HMC IP address, and the same for the secondary DS8000 HMC if one exists. Enter the user name and password dedicated to the Tivoli Storage Productivity Center for Replication server, which you previously defined on the DS8000 storage system as explained in "Adding Tivoli Storage Productivity Center for Replication userid and password into DS8000" on page 139. Click Next to continue. The next window displays the Adding Storage System phase, which takes a few seconds to complete (Figure 4-17).
Once the adding storage system phase is completed, Tivoli Storage Productivity Center for Replication displays the window as in Figure 4-18 indicating the storage system has been successfully defined.
Click Finish to complete the task. The newly defined storage subsystem will appear in the Tivoli Storage Productivity Center for Replication Storage Subsystem Panel as shown in Figure 4-19.
Figure 4-19 Storage System Overview panel with new DS8000 storage system added
As you can see in Figure 4-19, the location description is by default None. In order to give a more meaningful name to the location of the defined storage system, click in the location box and overwrite None with your specific site description as shown in Figure 4-20. You must have Administrator privileges to modify the location of a storage system.
Note: Changing the location of a storage system might have consequences. When a session has a volume role with a location that is linked to the location of the storage system, changing the location of the storage system could change the session's volume role location. For example, if there is one storage system with the location of Site 1 and a session with the location of Site 1 for its H1 role, changing the location of the storage system to a different location, such as Site 2, also changes the session's H1 location to Site 2. However, if there is a second storage system that has the location of Site 1, the session's role location is not changed.
Figure 4-20 Specify the location name for the added storage system
In our example, the heartbeat is disabled, since the Enable Heartbeat button is active.
This will start the Volume Protection Wizard. You can now specify the volumes that Tivoli Storage Productivity Center for Replication will protect. You can either select the volumes by specifying the storage system, logical storage subsystem, and a specific volume, or you can specify the volume name (including wildcards), or use a combination of both. In our example we protect all volumes in LSS 00. Click Next as shown in Figure 4-23.
Tivoli Storage Productivity Center for Replication will now create a list of volumes matching the volume selection input you made in the panel shown in Figure 4-24. In our example this list contains 51 volumes. Click Next to continue.
You will now be presented with the list of volumes matching your volume masking input. You can select single volumes in this list or use the Select All button to confirm selection of all the volumes in the list as shown in Figure 4-25, and then click Next to continue.
Tivoli Storage Productivity Center for Replication will now mark the selected volumes as protected and report if this action has completed successfully.
Note: If your volume selection input includes volumes that are already members of Tivoli Storage Productivity Center for Replication Copy Sets, these volumes appear in the list of matched volumes and you are able to select them for protection. However, the actual protection process fails for those volumes and an error message is produced. An example of such a message is shown in Figure 4-27.
Figure 4-27 Volume Protection Wizard - some volumes already members of Copy Sets
If you want to unprotect volumes that are currently protected by Tivoli Storage Productivity Center for Replication, start the Volume Protection Wizard and enter the appropriate volume masking information as shown in Figure 4-23 on page 147 and Figure 4-25 on page 148. Tivoli Storage Productivity Center for Replication will again show a list with all volumes matching your masking criteria. Volumes that are currently protected by Tivoli Storage Productivity Center for Replication appear with a tick in the associated check box. To unprotect a volume, just deselect it in the list and click Next.
4.5 Removing a storage subsystem from Tivoli Storage Productivity Center for Replication
To remove a storage subsystem from Tivoli Storage Productivity Center for Replication management, log on to Tivoli Storage Productivity Center for Replication and click Storage Systems in the Health Overview panel's Work Area or in the Navigation Area. This takes you to the Storage Systems panel shown in Figure 4-28.
Select the storage subsystem you would like to remove from Tivoli Storage Productivity Center for Replication by marking the associated radio button, choose Remove Storage System and click Go as shown in Figure 4-29.
Next, you will see the confirmation message shown in Figure 4-30, warning you that the removal of a storage subsystem from Tivoli Storage Productivity Center for Replication will result in the removal of all Copy Sets containing volumes from this storage subsystem from their Tivoli Storage Productivity Center for Replication sessions. If you are sure that you would like to remove the storage subsystem, click Yes to continue.
After you have acknowledged the confirmation message shown in Figure 4-30, Tivoli Storage Productivity Center for Replication will remove the storage subsystem, all volumes on this subsystem, and all Copy Sets containing these volumes. You will be taken back to the Storage Subsystems panel and will see a message in the message line indicating that the storage subsystem has been successfully removed. Important: When you remove a storage subsystem, all connections to the subsystem will be withdrawn and access to all volumes within that subsystem will be withdrawn. This causes any Tivoli Storage Productivity Center for Replication Copy Sets containing volumes of this storage subsystem to be removed from their respective Tivoli Storage Productivity Center for Replication sessions. If these sessions are active, this could also lead to relationships being left on the hardware that are no longer managed by Tivoli Storage Productivity Center for Replication.
Select the storage system for which you want to refresh the configuration and select Refresh Configuration from the Actions list, and click Go as in Figure 4-32.
The following screen in Figure 4-33 displays the confirmation message that refreshing the storage configuration has been completed.
Now you should be able to see, from Tivoli Storage Productivity Center for Replication, all volumes newly added to your DS8000 storage systems.
Chapter 5.
Tivoli Storage Productivity Center for Replication general administration and high availability
At this point you have successfully installed the Tivoli Storage Productivity Center for Replication. In this chapter we show you how to perform general administration tasks with your TPC for Replication application. In this chapter, we provide an overview of the graphical user interface (GUI) and the command line interface (CLI), explain the TPC for Replication User Administration and show how to configure some basic settings of TPC for Replication. We also show how to set up TPC for Replication servers for High Availability as well as the process of performing a takeover from the active TPC for Replication server to the standby server. Administration and configuration tasks related to TPC for Replication sessions for specific storage subsystems, Copy Sets, Paths and Storage Subsystems are not covered in this chapter but described in detail in dedicated chapters.
(Figure labels: Health Overview, Navigation Tree, Content area.)
Overall storage subsystem status: indicates the connection status of the storage subsystems.
Overall host systems status: indicates the connection status of host systems. This status does not apply to System z hosts.
Management server status: indicates the status of the standby server if you are logged on to the local server. If you are logged on to the standby server, this status indicates the status of the local server. Management server status is not available if you are using Tivoli Storage Productivity Center for Replication Basic Edition for System z.

TPC for Replication uses color-based status indicators to provide a quick overview of the overall state of specific Tivoli Storage Productivity Center for Replication components. In addition, various icons are used to represent a more detailed status of different objects. These icons are shown in Table 5-1 on page 157.
Green: TPC Copy Services is in normal mode. The session is in Prepared state for all defined volumes and is maintaining a current consistent copy of the data. Or, the session has successfully processed a Recover command and is in Target Available state with all volumes consistent and no exceptions.
Yellow: TPC Copy Services is not maintaining a current consistent copy at this time but is working toward that goal. In other words, sessions may have volumes that are actively being copied or pending to be copied; there are no suspended volumes and copy services is temporarily inconsistent, but actions are in place to come into duplex state. No action is required to make this become Green, as states will change automatically.
Red: TPC Copy Services has one or more exceptions that need to be dealt with immediately.
Table 5-1 TPC symbols

Symbol   Meaning
(icon)   The sessions are in a normal state.
(icon)   At least one storage subsystem cannot communicate with the active servers.
server=WTSC64.itso.ibm.com
port=9560

rmserver.properties
Contains configuration information about logging. It is located in the directory /install_root/AppServer/profiles/default/properties. In our system this directory is /zWebSphereOEM/V7R0/tpcr510/AppServer/profiles/default/properties. The communication.port that you specify in this file must be the same as in the repcli.properties file. Otherwise you get a login failed message when trying to use the CLI.

tpcrcli-auth.properties
Contains authorization information for signing on to the CLI automatically without entering your user name and password.
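As a quick consistency check, you can compare the two port settings from a USS shell. This is only a sketch that uses the directory names from this installation; adjust the paths to match your environment.

   grep -i port /var/Tivoli/RM/CLI/repcli.properties
   grep -i communication.port /zWebSphereOEM/V7R0/tpcr510/AppServer/profiles/default/properties/rmserver.properties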
Place cursor on choice and press enter to Retrieve command => omvs => ishell
Figure 5-2 Using OMVS under ISPF
Now you have to type cd and the path /var/Tivoli/RM/CLI (assuming you took the defaults) and press Enter as shown in Figure 5-3 on page 160.
IBM Licensed Material - Property of IBM
5694-A01 Copyright IBM Corp. 1993, 2011
(C) Copyright Mortice Kern Systems, Inc., 1985, 1996.
(C) Copyright Software Development Group, University of Waterloo, 1989.
All Rights Reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
IBM is a registered trademark of the IBM Corp.
MHLRES4 @ SC64:/u/mhlres4>
===> cd /var/Tivoli/RM/CLI/
Type the shell script name csmcli.sh and press Enter as shown in Figure 5-4 on page 161. Use PF10 to refresh the screen after entering commands whenever needed.
MHLRES4 @ SC64:/u/mhlres4>cd /var/Tivoli/RM/CLI/
MHLRES4 @ SC64:/SC64/var/Tivoli/RM/CLI>
===> csmcli.sh
The CLI automatically authenticates your user ID and password if you have done the customization to log in automatically as described in 5.2.2, Setting up automatic login to the CLI on page 159. Otherwise, you have to enter them manually.
MHLRES4 @ SC64:/SC64/var/Tivoli/RM/CLI>csmcli.sh
Tivoli Storage Productivity Center for Replication Command Line Interface (CLI)
Copyright 2007, 2012 IBM Corporation
Version: 5.1 Build: l20120510-1010
Server: wtsc64.itso.ibm.com Port: 9560
Authentication file: /u/mhlres4/tpcr-cli/tpcrcli-auth.properties
csmcli>
To verify the available CLI commands, type help after successfully logging in to the CLI, as shown in Figure 5-6 on page 163.
addhost       lsauth           lssnapgrpactions   rmmc
addmc         lsavailports     lssnapshots        rmpath
addstorsys    lscpset          lssnmp             rmsess
chauth        lscptypes        lsstorcandidate    rmsnmp
chdevice      lsdevice         lsvol              rmstdby
chhost        lshaservers      mkauth             rmstorsys
chlocation    lshost           mkbackup           setasstdby
chmc          lslocation       mkcpset            setoutput
chsess        lslss            mklogpkg           setparameter
chvol         lsmc             mkpath             setstdby
cmdsess       lspair           mksess             showcpset
cmdsnapgrp    lsparameter      mksnmp             showdevice
csmcli        lspath           quit               showgmdetails
exit          lspool           refreshdevice      showha
exportcsv     lsrolepairs      rmactive           showmc
exportgmdata  lsrolescpset     rmassoc            showsess
hareconnect   lssess           rmauth             ver
hatakeover    lssessactions    rmcpset            whoami
help          lssessdetails    rmdevice
csmcli>
csmcli command name -flag parameter -command parameter

The command name specifies the task that the command-line interface will have to perform. For example, lssess tells the command-line interface to list sessions, and mksess tells the command-line interface to create a session.
Flags modify the command. They provide additional information that directs the command-line interface to perform the command task in a specific way. For example, the -v flag tells the command-line interface to display the command results in verbose mode. Some flags may be used with every command-line interface command; others are specific to a command and are invalid when used with other commands. Flags are preceded by a hyphen (-), and may be followed immediately by a space and a flag parameter.
Flag parameters provide information that is required to implement the command modification that is specified by a flag. If you do not provide a parameter, then a default value is assumed. For example, you can specify -v on, or -v off to turn verbose mode on or off; but if you specify -v only, then the flag parameter is assumed to be on.
The command parameter provides basic information that is necessary to perform the command task. When a command parameter is required, it is always the last component of the command; and it is not preceded by a flag. Some commands permit multiple command parameters with each parameter separated by a blank space and not a comma (unlike flag parameters that allow multiple values). Full details of the CLI can be found in the manual IBM Tivoli Storage Productivity Center for Replication Command-Line Interface Users Guide, SC27-2323.
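To make the anatomy of a command concrete, here is a short sketch of a CLI session. The command names and the -v flag are taken from the list and descriptions above; whether a particular flag applies to a particular command is documented in the CLI User's Guide, so treat this purely as an illustration of the syntax.

   csmcli> lssess -v on        (command name lssess, flag -v, flag parameter on)
   csmcli> help lssess         (command name help, command parameter lssess)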
Administrator
Administrators can perform all actions within TPC for Replication without any restriction. They can manage all replication-related activities, manage storage subsystems, and can also manage access control. Note that Administrators can revoke their own administrative access rights.
Monitor
Monitors can display information within TPC for Replication but are not allowed to modify anything. Monitors can view the following information:
 All storage subsystems
 All path information
 All sessions and session details
 High availability status and standby servers
Session Operator
Session Operators can manage specific sessions. An Administrator has to specify for which sessions a Session Operator is authorized when granting access. A Session Operator is allowed to perform the following tasks:
 Create sessions
 For specified sessions and for sessions created by the Session Operator:
  - Add and remove Copy Sets
  - Modify the session description
  - Set the session options
  - Remove the session
  - Execute an action or command against the session
  - Add PPRC paths
  - Remove paths with no hardware relationship
 View the following information, regardless of the specific session for which the Session Operator is authorized:
  - All storage subsystems
  - All path information
  - All sessions and session details
  - High availability status and standby servers
The Administration panel, which is depicted in Figure 5-8 on page 167, will be displayed. Here you will see a list of all users or user groups that have been granted access to TPC for Replication. If you invoke this panel for the first time after installation, you will see only one user with an Administrator role. That is the user you were asked to provide during the installation process in 3.5, Install TPC for Replication using embedded Derby database.
Grant access
In order to add new access privileges for TPC for Replication, click Add Access in the Administration panel as shown in Figure 5-8 on page 167.
This will start the Add Access Wizard. Because TPC for Replication does not maintain its own list of users, it now has to query either the local operating system (RACF) or the LDAP directory to obtain a list of users and groups that are currently present. You can filter the users and groups that will be returned and can specify the maximum number of results. If you want all users and groups to be displayed, specify an asterisk and click Next to continue. You can also specify a mask as shown in Figure 5-9 on page 168. In that example, we are searching for all the user IDs beginning with mhl.
Figure 5-9 TPC for Replication Add Access Wizard - filter users and groups
Now TPC for Replication will actually query the local operating system or the LDAP directory and retrieve a list of users and groups that match your specifications, up to the limit of maximum results. You will have to select the user or group to which you want to grant access. Note that multiple selections are possible. After having checked your selection(s), click Next to continue as shown in Figure 5-10.
Figure 5-10 TPC for Replication Add Access Wizard - select users and groups
Next, you will be shown a panel where you define the role that will be assigned to the users and groups you selected in the previous panel (Figure 5-10). As described above, you can select Monitor, Administrator, or Session Operator. If you select Session Operator, you will also have to specify the TPC for Replication session(s) for which the user(s) will get authorization.
Assign a role and in case of the Session Operator role specify the Sessions and click Next to continue.
Figure 5-11 TPC for Replication Add Access Wizard - select role and sessions
A summary panel will be displayed to allow you to review your selections and go back, in case any corrections should be necessary. If you are satisfied with your configuration click Next to perform the actual changes to TPC for Replication.
Figure 5-12 TPC for Replication Add Access Wizard - confirm panel
Finally, you will see a panel informing you that the addition of access for the selected users and groups has been completed successfully. Click Finish to exit the Add Access Wizard.
After exiting the Add Access Wizard you will be taken back to the Administration panel. You will now see the entry or entries for the new users and groups you have just granted access, as shown in Figure 5-14 on page 170.
Figure 5-14 TPC for Replication Administration panel with newly added user
Modify access
To view or modify the role and the associated sessions for TPC for Replication Session Operators, select the user or group in the Administration panel, select View/Modify Access, and click Go as shown in Figure 5-15.
Figure 5-15 TPC for Replication Administration panel - select View/Modify Access
Next you see a panel where you can review and change the user's or group's role and TPC for Replication sessions (similar to the Add Access Wizard panel shown in Figure 5-11 on page 169). Review and modify the access level and click OK or Apply.
Figure 5-16 TPC for Replication Administration panel - View / Modify access
You will return to the Administration panel. A message is posted in the message line of the GUI, indicating either success or failure of your modification operation. In our example, you can see in Figure 5-17 that the user MHLRES6 now has an Administrator role assigned.
Figure 5-17 TPC for Replication Administration panel - modified role for MHLRES6
Remove access
To revoke the access of a user or a group to TPC for Replication, select the user or group in the Administration panel, select Remove Access, and click Go as shown in Figure 5-18:
Figure 5-18 TPC for Replication Administration panel - select Remove Access
TPC for Replication will remove all access for the selected user or group without further confirmation. You will see a message line indicating whether the access removal has completed successfully or failed, and the selected user will no longer be listed, as shown in Figure 5-19 on page 173.
Figure 5-19 TPC for Replication Administration panel - access removed successfully
You will be taken to the Advanced Tools panel shown in Figure 5-21.
In order to create a log file package for diagnostic purposes (for example to send it to IBM support in case of a problem) just click Create in the Package Log Files section. You will see a message that the creation of the log file package has been initiated and is currently running. When TPC for Replication finishes creating the package you are notified of the completion and are informed of the location of the diagnostics package. You see a panel similar to Figure 5-22.
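The same diagnostic package can also be produced from the CLI. The mklogpkg command appears in the command list shown earlier; the following is only a sketch, so confirm the exact syntax and the resulting package location in the command's response or in the CLI User's Guide.

   csmcli> help mklogpkg          (displays the command syntax)
   csmcli> mklogpkg               (creates the log package on the server)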
The Advanced Tools panel also has buttons to Enable and Disable the Metro Mirror heartbeat. When it is enabled, TPC for Replication checks the heartbeat of all disk storage subsystems that are managed. It relies on the disk subsystem microcode to be able to check this heartbeat. When a storage subsystem cannot communicate its heartbeat to the TPC for Replication server, the microcode in the storage subsystem sends a FREEZE to all its LSSs. As soon as the heartbeat time-out expires in TPC for Replication, TPC for Replication sends FREEZE commands to all LSSs in other disk subsystems that belong to the same session. Those FREEZE commands cause the Metro Mirror relationships between all affected LSSs to be suspended, so they keep the data on the secondary disks in a consistent state.
in the message lines which appear at the top edge of the Work Area after you have performed specific actions in TPC for Replication.
Figure 5-24 on page 178 shows the TPC for Replication Console panel with various root messages and a pointer to their child messages.
As described above, the console lists the message IDs of the messages as hyperlinks. Clicking these hyperlinks takes you to the associated help panels.
A TPC for Replication standby server does not necessarily need to reside on the same operating system platform as the active TPC for Replication server. TPC for Replication for System z supports a standby server running on another platform. However, if you plan to use Basic HyperSwap sessions, the active server and the standby server must each run in a different z/OS system in the same sysplex. This section guides you through the steps necessary to set up a TPC for Replication server as a standby server and explains how to initiate a takeover process. Figure 5-25 on page 179 depicts an overview of a typical two-site storage infrastructure with a highly available TPC for Replication installation.
As Figure 5-25 shows, both servers have to have IP connectivity to each other as well as to all storage subsystems managed by TPC for Replication. Note: At the time of writing, TPC for Replication supports exactly one standby server for an active server. This is also true for a three-site environment.
 Setting up a different server as the standby server
 Setting up the server you are logged in at as the standby server

Note: A standby server:
 Is not available with the TPC for Replication for System z Basic Edition license.
 Can manage HyperSwap sessions only if the standby server is installed in the same sysplex as the active server.
 Cannot manage sessions running on the active server.
 Has to be configured after you have defined your High Availability plan.
After setting a server as a standby server, you cannot use this server for any purpose other than a takeover.
Figure 5-26 Active TPC for Replication server with defined sessions
Figure 5-27 on page 181 shows the Health Overview panel of our TPC for Replication server just installed in z/OS partition SC70.
To define the TPC for Replication server on SC70 (which in this example has the DNS name wtsc70.itso.ibm.com) as a standby server, select Management Servers in the Navigation tree area of your active TPC for Replication Server. This will take you to the Management Servers panel of your active server as shown in Figure 5-28 on page 182. You can see an entry for your server with its DNS name and the information that it has the active role.
Figure 5-28 Active TPC for Replication server - Management Servers panel
Next, select Define Standby in the drop down menu of the Management Servers panel of your active server and click Go as shown in Figure 5-29.
Figure 5-29 Active TPC for Replication server - select Define Standby
TPC for Replication now shows a panel where you must enter the IP address or the fully qualified DNS name of your designated TPC for Replication standby server (in this example wtsc70.itso.ibm.com). You must also enter a user name and a password for the standby server. This user must be a TPC for Replication Administrator. Click OK after you have entered and verified your input.
Figure 5-30 Active TPC for Replication server - Define address, and user credentials for standby
Important: Defining a TPC for Replication server as a standby server overwrites the complete database of the standby server. There is no way within TPC for Replication to recover that configuration once it is overwritten. When you click OK, TPC for Replication displays a confirmation message that explains this fact and asks whether you are sure that you want to continue. This message is shown in Figure 5-31. If you are sure that you want to continue, click Yes.
TPC for Replication now establishes communication with the designated standby server wtsc70.itso.ibm.com, turns it into standby mode, and starts to synchronize the database of the active server with that of the standby server. The Management Servers status first switches to Connected with a warning, as shown in Figure 5-32, while the database of TPC for Replication on system SC64 is being copied to the database of TPC for Replication on SC70.
After synchronization has finished, the state changes to Synchronized and the Management Servers panel of your active server looks as shown in Figure 5-33 on page 185.
Figure 5-33 Active TPC for Replication server after standby Server synchronized
The TPC for Replication standby server now shows that it is connected to the remote server, as shown in Figure 5-34.
Figure 5-34 Standby TPC for Replication server - Health Overview panel after standby synchronization
Figure 5-35 Set the current server as a TPC for Replication Standby Server
In the next panel, you have to specify the IP address or DNS name of the server for which your current server will act as a standby server. You do not have to supply any user credentials, because this action does not overwrite or otherwise modify the configuration of the active server. Click OK or Apply to define your server as the standby.
Figure 5-36 Enter the name of the active TPC for Replication server
Important: Defining a TPC for Replication server as a standby server will overwrite the complete database of the standby server. There is no way within TPC for Replication to recover the configuration, once overwritten.
When you click OK, TPC for Replication displays a confirmation message that explains this fact and asks whether you are sure that you want to continue. This message is shown in Figure 5-37. If you are sure that you want to continue, click Yes.
TPC for Replication will now establish communication with the designated active server wtsc64.itso.ibm.com, and will start to synchronize the database of the active server with that of the standby server. The Management Servers status switches to Synchronization Pending as shown in Figure 5-38.
After synchronization has finished, the state changes to Synchronized and the Management Servers panel of your active server looks as shown in Figure 5-39.
You can also use the TPC for Replication Command Line Interface to define a standby server for your active server, or to define your current server as a standby server to a different server. To define a standby server for your active server, open a CLI command shell on your active server wtsc64.itso.ibm.com and use the setstdby CLI command, as described in the following procedure:
1. From the OMVS ISPF panel, go to the directory where the csmcli.sh shell is installed. Assuming that you have installed TPC for Replication in the default directories, the command to go to that directory is as follows: cd /var/Tivoli/RM/CLI/
2. Call the csmcli.sh shell by typing csmcli.sh and pressing ENTER. The CLI asks for your user ID and password if automatic login is not customized as described in 5.2.2, Setting up automatic login to the CLI on page 159.
3. Issue the lshaservers command to verify that your active server is not already connected to a standby server.
Example 5-3
csmcli> lshaservers
Server Role Status Port
==========================================
wtsc64.itso.ibm.com ACTIVE No Standby 5120
4. Use the setstdby command to set the standby server. In our example, we assign wtsc70.itso.ibm.com as the standby server for our active server. The CLI tells you that this operation overwrites the contents of the standby server database and asks you to confirm that you want to continue.
5. Issue the lshaservers command again to verify that your active server now has a standby server. You can see a sample of this CLI command sequence in Figure 5-40.
Authentication file: /u/mhlres4/tpcr-cli/tpcrcli-auth.properties
csmcli> lshaservers
Server Role Status Port
==========================================
wtsc64.itso.ibm.com ACTIVE No Standby 5120
csmcli> setstdby -server wtsc70.itso.ibm.com -username mhlres4 -password xxxxxx
IWNR3111W Aug 30, 2012 6:16:48 PM This command will define another management server as a standby for this server. This will overwrite the configuration of the specified standby. Do you want to continue? y/n:Y
IWNR3020I Aug 30, 2012 6:17:04 PM Connection to the active high-availability server at wtsc64.itso.ibm.com making the server wtsc70.itso.ibm.com a standby was successful.
csmcli> lshaservers
Server Role Status Port
=============================================
wtsc64.itso.ibm.com ACTIVE Synchronized 5120
wtsc70.itso.ibm.com STANDBY Synchronized 5120
csmcli>
===> INPUT
ESC= 1=Help 2=SubCmd 3=HlpRetrn 4=Top 5=Bottom 6=TSO 7=BackScr 8=Scroll 9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve
Figure 5-40 Selecting Standby server through CLI
At this point in time, you have successfully defined a standby server for your active TPC for Replication server. Your active server works as before, except that it propagates all database changes to your standby server. Your standby server, however, now has some different properties:
- The configuration of the TPC for Replication standby server in terms of Storage Subsystems, ESS/DS Paths, Sessions, and Copy Sets has been overwritten with the configuration of the active TPC for Replication server.
- The Sessions menu item is disabled, so you cannot view or modify any Session or Copy Set related configurations from the standby server.
- You can view the Storage Subsystem and ESS/DS Paths configuration but cannot make any changes from the standby server.
- You can access the Advanced Tools menu but cannot alter the Heartbeat setting from the standby server.
- You can still access the TPC for Replication Console from the standby server.
Note: User access data is not synchronized between active and standby TPC for Replication servers.
5.6.2 Takeover
In case of a failure of your active TPC for Replication server, or in case of a planned failover, you have to perform a takeover of the active server role on your standby TPC for Replication server. This takeover is a manual process, which can be initiated either through the GUI or through the Command Line Interface on the TPC for Replication standby server. After you have performed the manual takeover on the standby server, the replication of database changes stops and the TPC for Replication standby server becomes an active server with the same configuration as the original active server. This is the case even when the original active server is still up and running. In that situation, you would have two active TPC for Replication servers with identical configurations in your environment. You would then be able to manipulate your copy services configurations from both servers, although changes in the TPC for Replication databases would no longer be synchronized between the two active servers. This could lead to inconsistencies in your overall configuration, which have the potential to damage your environment.
Important: Before attempting a TPC for Replication takeover, we recommend that you always shut down the active TPC for Replication server first.
You can initiate a takeover only from the standby server. In a planned situation, for example for maintenance purposes, you must first shut down TPC for Replication by either:
- Stopping the CSM and CSMGUI applications from the WebSphere console (if you installed TPC for Replication under the WebSphere Application Server product), or
- Stopping the WebSphere Application Server OEM Edition address spaces as described in 5.8.2, Stopping WebSphere Application Server OEM Edition on page 212 (if you installed TPC for Replication under WebSphere Application Server OEM Edition).
After you have shut down your active server, log in to your TPC for Replication standby server (wtsc70.itso.ibm.com in our example). On the Health Overview panel, TPC for Replication reports a Warning condition, Disconnected Consistent, regarding the connection to the remote server, which is now down. See Figure 5-41 on page 192. You also see that all Storage Subsystems are connected; the standby server received the configuration of the Storage Subsystems through the database replication process from the active server. The Sessions hyperlink in the Navigation tree area is greyed out because you are logged in to a TPC for Replication standby server.
Figure 5-41 TPC for Replication standby server Health Overview panel
Click Management Servers. This takes you to the Management Servers panel of your standby server, as shown in Figure 5-42 on page 193. You see a list of our two TPC for Replication servers, wtsc70.itso.ibm.com having the standby role and wtsc64.itso.ibm.com having the active role. The status of both management servers is Disconnected Consistent from the point of view of the standby server. This means that the standby server is not able to communicate with its active server, but it has a consistent database and could take over the role of the active server. In the drop-down menu, select Takeover and click Go to initiate the takeover process.
Figure 5-42 TPC for Replication standby server Management Servers panel
You also have the opportunity to first try a reconnect to your active server before deciding that a takeover is necessary. Because you know that the active server is down, you can omit this step. You first see a confirmation message warning you that, if the original active server were still up, you would have two active TPC for Replication servers with identical configurations in your environment.
Figure 5-43 TPC for Replication standby server Management Servers panel - confirm Takeover
Because you have shut down your original active server, click Yes to continue. TPC for Replication now changes the role of your standby server to active. After a few seconds, you see the Management Servers panel shown in Figure 5-44.
Figure 5-44 TPC for Replication standby server Management Servers panel - Takeover complete
Your server wtsc70.itso.ibm.com is now an active TPC for Replication server. Notice that the Sessions menu item in the Navigation Area has now turned active. You are now able to manipulate your sessions from the activated standby server.
Click on Sessions in the Navigation Area. This will take you to the Sessions Overview panel and present a list of the sessions which were originally configured on the TPC for Replication server wtsc64.ibm.com.
Figure 5-45 TPC for Replication standby server Management Servers panel - sessions
You can also perform the takeover process through the TPC for Replication Command Line Interface with the hatakeover command issued at the standby server, as shown in Figure 5-46.
csmcli> hatakeover
IWNR3114W Aug 31, 2012 6:04:52 PM This command will make this standby management server an active management server. Both management servers will be active with identical configurations. Do you want to continue? y/n:y
IWNR3063I Aug 31, 2012 6:05:08 PM Successfully issued the takeover to the standby server wtsc70.itso.ibm.com with the active HA server wtsc64.itso.ibm.com.
csmcli>
Figure 5-46 Takeover via TPC for Replication CLI
You have now performed a failover from your active server to your standby server. Note that TPC for Replication does not offer a failback function. If you want to switch back to your original active server, you have to perform the following procedure (a CLI sketch of the same sequence follows this list):
1. Start up your recovered original TPC for Replication server.
2. On the original active TPC for Replication GUI, select Management Servers and then select Remove Standby from the Select Action pull-down menu.
3. After removing the standby server, select Set this Server as Standby from the Select Action pull-down menu. The TPC for Replication GUI asks you to specify the name or IP address of the active server (wtsc70.itso.ibm.com in our example).
4. Wait until both servers reach the Synchronized status.
5. Perform a takeover on this TPC for Replication server after synchronization has completed. Note that you would have two active servers during this part of the process.
6. On the original standby server, remove the standby server by selecting Remove Standby from the Management Servers panel.
7. Set your original standby server as a standby server to the original active server.
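For reference, the same failback sequence can be sketched with the CLI commands shown earlier in this chapter. Treat this strictly as a sketch: the rmstdby command is our assumption for the CLI equivalent of the GUI Remove Standby action, and running setstdby on the currently active server stands in for the GUI Set this Server as Standby step; verify which commands your CLI level actually provides (csmcli help) before using them.
On wtsc64 (recovered original server):  csmcli> rmstdby         (clear the old standby definition; assumed command)
On wtsc70 (currently active server):    csmcli> setstdby -server wtsc64.itso.ibm.com -username mhlres4 -password xxxxxx
On either server:                       csmcli> lshaservers     (wait until both servers show Synchronized)
On wtsc64:                              csmcli> hatakeover      (wtsc64 becomes active again)
On wtsc70:                              csmcli> rmstdby         (clear the now stale standby definition; assumed command)
On wtsc64:                              csmcli> setstdby -server wtsc70.itso.ibm.com -username mhlres4 -password xxxxxx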
Select the session that you want to export by clicking the radio button to the left of the session name. Select Export Copy Sets in the drop-down menu and click Go, as shown in Figure 5-48.
The next panel is the Export Copy Sets panel. TPC for Replication creates a CSV file that contains the Copy Sets and provides a link to download it. Click the CSV file name link, as shown in Figure 5-49.
You have the option to open this file or to save (download) it. We recommend that you only open it to view the content; do not edit it at this time. If you want to edit anything, do so after you save the file, using a spreadsheet program as described in Working with CSV files under Microsoft Excel on page 211. If you are using Microsoft Internet Explorer as your web browser, click the Save button. Internet Explorer lets you choose the directory to which you save the CSV file. If you are using Mozilla Firefox as your web browser, click Save File, as shown in Figure 5-50. You can choose the directory where your files are downloaded by clicking Tools and then Options in the menu bar, and specifying the directory in the Downloads area of the Options menu.
Select the Session Type in the drop-down menu and click Next, as shown in Figure 5-53 on page 202. Note: The Session Type must be the same as the Session Type defined in the previously exported CSV file that is to be imported.
Fill in the Session Name, Description, and the available Properties as you require. Figure 5-54 on page 203 shows an example of how to specify this information for an MGM with Basic HyperSwap session. Click Next to continue.
Next, TPC for Replication shows a panel where you specify the Site 1 Location, as shown in Figure 5-55 on page 204. Select the Site Location in the drop-down menu and click Next. Note: The Site Locations must be the same as those defined in the previously exported session.
Repeat this process to specify the other locations that your session needs. A message indicating that the session was successfully created is displayed. Click the Launch Add Copy Set Wizard button at the bottom of the Work Area, as shown in Figure 5-56.
In the Add Copy Set Wizard panel, select the check box on the left next to the CSV file field to import Copy Sets from a CSV file. You can enter a file name by typing the path and file name directly in the box, but we recommend that you use the Browse option to avoid typing errors. Click the Browse button to proceed, as shown in Figure 5-57.
Select a previously exported or created CSV file, click Open, and then click Next, as shown in Figure 5-58 on page 206. Note: It is not possible to choose multiple CSV files in the same operation. The Copy Sets in CSV files have a specific format to address Copy Services functions such as FlashCopy (FC), Metro Mirror (MM), and Metro/Global Mirror (MGM), so it is not possible to import a CSV file with one set of characteristics into a session with different characteristics. To avoid errors, be sure that the Copy Sets in the CSV file have the same Site Locations as defined in the session into which the Copy Sets will be imported. The CSV file is case sensitive.
TPC for Replication checks whether the volumes are defined in another session. It may show you a warning, as shown in Figure 5-60 on page 207. After you click Next, TPC for Replication shows you the reason for the warning. This warning does not prevent you from adding this Copy Set.
After you click Next, TPC for Replication shows the panel to select the Copy Sets. It also shows the reasons for the warning message, if there is one. Verify that the Copy Set to be imported is selected and click Next, as shown in Figure 5-61.
TPC for Replication shows the panel to confirm that you want to add the Copy Sets. Click Next to confirm. Click Finish to conclude the Copy Set import operation for the new session and return to the previous session menu in the Work Area, as shown in Figure 5-62 on page 208.
The steps to add the Copy Sets are the same as described in the previous topic, from Figure 5-57 on page 205 to Figure 5-62 on page 208. When you add Copy Sets to an existing session whose Copy Sets are already active with Status Normal and State Target Available, the Status changes to Warning, as shown in Figure 5-64.
Figure 5-64 Warning when adding Copy Sets to an existing active Session
TPC for Replication automatically starts the copy process. The session status changes back to Normal as soon as the new Copy Sets finish their initial copy process and enter the Prepared state in the MGM configuration. See Figure 5-65.
Where:
- MGM HS Session1 is the exported session name.
- Metro Global Mirror is the session type.
- H1, H2, H3, and J3 are labels describing the Copy Set roles of the disk subsystems and volumes that belong to the exported MGM session. Under those labels are the disk subsystems and volumes.
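For illustration only, the exported CSV for such a session typically has a layout along the following lines. The header line, the storage system serial numbers, and the volume IDs here are invented for the example; the exact format can differ by release, so always use a file exported from your own system as the template:
#MGM HS Session1,Metro Global Mirror
H1,H2,H3,J3
DS8000:2107.AAAAA:VOL:0A00,DS8000:2107.BBBBB:VOL:0A00,DS8000:2107.CCCCC:VOL:0A00,DS8000:2107.CCCCC:VOL:0B00
DS8000:2107.AAAAA:VOL:0A01,DS8000:2107.BBBBB:VOL:0A01,DS8000:2107.CCCCC:VOL:0A01,DS8000:2107.CCCCC:VOL:0B01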
START zControlProcName,JOBNAME=zServerShortName,ENV=zCellShortName.zNodeShortName.zServerShortName
Where:
zControlProcName   Name of the member in your procedure library that you use to start the application server controller region.
zServerShortName   Job name that you assign to your application server control region.
zCellShortName     Name that z/OS facilities such as SAF use to identify the WebSphere cell.
zNodeShortName     Name that z/OS facilities such as SAF use to identify the WebSphere node.
These values are created when you install WebSphere Application Server OEM Edition and run the WASOEM.sh script. They can be found in the default response file, or in the override response file if you chose to use variables that are different from the default variables. For example, if you are using the default response file, you would issue the following command: START BBN7ACR,JOBNAME=BBNS001,ENV=BBNBASE.BBNNODE.BBNS001
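To stop the server again later (for example, before the takeover described in 5.6.2), the usual approach is a normal STOP by job name rather than CANCEL; with the default response file values used above, that would be approximately:
STOP BBNS001
This is a sketch of the common case only; see 5.8.2, Stopping WebSphere Application Server OEM Edition on page 212 for the complete procedure.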
Again, try to avoid using the CANCEL command too frequently, as it does not remove a temporary directory that is created when the daemon is started.
Chapter 6. Basic HyperSwap
z/OS Basic HyperSwap is a single-site z/OS-only solution that extends SYSPLEX / Parallel Sysplex high availability capabilities to data. It is not a Disaster Recovery solution. It is not designed to handle Metro Mirror link failures that can occur in cross site configurations.
Metro Global Mirror with HyperSwap and with Practice - Used when discussing the TPC-R HyperSwap session that supports the three-site solution, which extends the capabilities of Metro Global Mirror with HyperSwap to provide an additional set of volumes that can be used to practice your business resumption procedure while still maintaining full Metro Global Mirror with HyperSwap capabilities. (Introduced with TPC-R for System z V5.1.1, December 2012.) Table 6-1 shows a comparison of GDPS HyperSwap and TPC for Replication for System z Basic HyperSwap.
Table 6-1 GDPS HyperSwap compared to Basic HyperSwap
Feature                          GDPS HyperSwap            TPC-R HyperSwap
Provides fast swapping of UCBs   Yes                       Yes
Transparent to application       Yes                       Yes
Code hardened to avoid hangs
Page faults
Failure management provided by   GDPS                      z/OS
Cost                             GDPS Service Offering     TPC-R Basic Edition - No charge (a)
                                                           TPC-R for System z - Charge
Platforms Supported              z/OS and zLinux           z/OS
Software Required                System Automation         TPC-R v5.1.1
                                 NetView                   WAS OEM v7.0 (included with TPC-R)
a. "Tivoli Storage Productivity Center for Replication Basic Edition for System z" is a no-charge disk availability only solution, while there is a charge for "Tivoli Storage Productivity Center for Replication for System z"
HyperSwap volumes are defined to a TPC-R HyperSwap Session. A HyperSwap Session is a special Metro Mirror Session that is designed to provide high availability in the event of a Disk Storage Subsystem failure on the same data center floor. TPC-R controls all of the activities during the creation and construction of a HyperSwap Session. These activities include defining the HyperSwap Session itself, populating the session with the appropriate Metro Mirror volume pairs (Copy Sets), and starting the session to fully initialize all of the copy pairs. Eventually all of the TPC-R copy sets within the session will reach FULL DUPLEX state (that is, the target volumes are now identical copies of the source volumes). Once in FULL DUPLEX state, control of the session is transferred to HyperSwap, which is part of the I/O Supervisor (IOS) component of z/OS. Figure 6-1 illustrates a HyperSwap operation. On the left, we see applications running in z/OS accessing a single primary volume through a z/OS control block called a UCB (Unit Control Block). Every device in z/OS is represented by a unique UCB. Each UCB in turn points to another control block called a Sub-Channel Information Block (SCHIB) that is used to communicate with the I/O device. The HyperSwap operation basically swaps the UCB pointer from the SCHIB that represents the primary device to the SCHIB that represents the secondary device. On the right, we see the result of a HyperSwap operation. The UCB pointer to the SCHIB representing the primary volume is changed to point to the SCHIB representing the secondary volume. Also, the Metro Mirror direction is reversed.
Before a HyperSwap takes place, all of the I/O requests to and from the primary devices defined in the HyperSwap session are frozen by z/OS. z/OS does this by calling a function to freeze all of the primary logical subsystems (LSS). This function is equivalent to the TSO CGROUP FREEZE command. As soon as a logical subsystem receives a CGROUP FREEZE order, all of the I/Os to its device are placed in an Extended Long Busy (ELB) state. When all of the logical subsystems are in an ELB state, all of the z/OS images in the sysplex perform the HyperSwap operation for all of the devices. The amount of time that a logical subsystem stays in this ELB state is controlled by a hardware specification in the DS or ESS box. We recommend that you use the default value, which is 120 seconds. You can ask your IBM hardware representative to verify and change the ELB value if necessary.
Planned HyperSwap
The Planned HyperSwap function provides the ability to:
- Transparently switch all primary PPRC disk subsystems with the secondary PPRC disk subsystems for a planned reconfiguration.
- Perform disk configuration maintenance and planned site maintenance without requiring any applications to be quiesced.
- Perform periodic testing of the HyperSwap function.
Planned HyperSwaps are usually initiated through TPC-R, although they may also be initiated by the z/OS command SETHS SWAP.
Unplanned HyperSwap
The Unplanned HyperSwap function provides the ability to transparently switch to the secondary PPRC disk subsystems in the event of unplanned outages of the primary PPRC disk subsystems. The unplanned HyperSwap function allows production systems to remain active during a disk subsystem failure. Disk subsystem failures no longer constitute a single point of failure for an entire system, sysplex, or Parallel Sysplex. An unplanned HyperSwap is triggered by:
- Any disk subsystem condition that would cause a permanent I/O error to be returned to the application.
- The channel subsystem in the zSeries processor detecting that it no longer has a path to a primary disk volume.
TPC-R is not involved in triggering unplanned HyperSwaps.
Event triggers
The sequence of events that triggers an unplanned HyperSwap operation is the following:
1. An application sends an I/O request to a disk volume.
2. The disk subsystem or the channel subsystem in the zSeries processor returns a permanent I/O error or a no-path-available status to z/OS.
3. z/OS routines that monitor I/O activity determine that this event is a HyperSwap trigger and send an event notification to the HyperSwap Management address space within the sysplex that is acting as the master, to let it know that a HyperSwap trigger has been detected.
4. At this point, the HyperSwap begins.
The execution of a planned and an unplanned HyperSwap is very similar. You can find more details in HyperSwap phases on page 234. z/OS performs the HyperSwap transparently to the applications. Applications experience a delay during the HyperSwap operation, but they resume running as soon as the HyperSwap completes.
6.3 Customization
In this topic we explain how to set up TPC-R HyperSwap in the z/OS environment. Before using TPC-R HyperSwap, you must be sure that no HyperSwap managed volumes are accessed by systems outside of the sysplex (or monoplex).
6.3.1 SYS1.PARMLIB
For more information about any of the following topics, see the SA22-7592 z/OS MVS Initialization and Tuning Reference publication.
IEASYSxx
To accommodate a larger number of log buffers, you may need to increase the amount of extended CSA (ECSA) storage allowed on your systems. We recommend that you add approximately 150 bytes of additional ECSA for every 1000 additional log buffers. See the
second value specification for the CSA parameter. (The additional log buffers are specified by using the LOGLIM parameter, discussed below.) HyperSwap control blocks occupy extended SQA (ESQA) storage, and you may need to increase the amount of ESQA available on your systems. Each PPRC device pair requires 32 bytes, and each LSS requires 1100 bytes. A configuration of ten thousand PPRC device pairs will require on the order of 320 KB. See the second value specification for the SQA parameter.
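As a quick sizing sketch of the additional ESQA, assume for illustration 10,000 PPRC device pairs spread over 100 LSSs (the LSS count is an assumption; use your own configuration numbers):
10,000 device pairs x 32 bytes    = 320,000 bytes (about 313 KB)
   100 LSSs         x 1,100 bytes = 110,000 bytes (about 107 KB)
Additional ESQA                   ~ 430,000 bytes (roughly 420 KB)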
CONSOLxx
During a HyperSwap, a very large number of messages may be produced. To minimize the possibility that Console buffer shortages will impact operations during the HyperSwap, we recommend that you increase the number of Console buffers that you allow. These buffers now occupy above-the-line storage, so you can specify a large value without impacting your system. See the MLIM parameter on the CONSOLxx INIT statement. The large number of messages produced during a HyperSwap are also written to the system log (SYSLOG and/or OPERLOG). To minimize the possibility that a log buffer shortage will impact operations during the HyperSwap, we recommend that you increase the number of log buffers that you allow. See the LOGLIM parameter on the CONSOLxx INIT statement. Note that log buffers occupy extended CSA (ECSA) and you should increase the amount of CSA that you provide through the IEASYSxx PARMLIB member.
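As a minimal sketch of the CONSOLxx keywords involved (the values are illustrative only and must be sized for your own message rates):
INIT MLIM(6000) LOGLIM(20000)
MLIM raises the limit on WTO message buffers, and LOGLIM raises the number of log buffers; remember to also raise ECSA in IEASYSxx as described above when you increase LOGLIM.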
IECIOSxx
The I/O timing facility can be enabled to trigger a HyperSwap when an I/O timeout occurs for a device that is monitored for HyperSwap. See the IOTHSWAP and IOTTERM parameters.
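A hedged sketch of such an IECIOSxx entry follows. The keyword placement on the MIH IOTIMING statement and the device range are assumptions for illustration, so confirm the exact syntax for your z/OS level in the MVS Initialization and Tuning Reference before using it:
MIH IOTIMING=00:30,IOTHSWAP=YES,DEV=(9600-96FF)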
MPFLSTxx
All disk subsystems keep the Metro Mirror status of all of their devices. Any change in this status, such as a volume going from DUPLEX to SUSPENDED, causes the disk subsystem to send a state change interrupt to all systems that have that volume online. Each z/OS image that receives this interrupt issues the following message to the console:
IEA494I devn,volser,PPRC PAIR SUSPENDED,SSID=ssid,CCA=cc
This message -- and others -- will be issued whenever a HyperSwap occurs. You will see this message for each primary device in the HyperSwap session. If you have thousands of volumes in your HyperSwap session, you will see thousands of these messages on your operator consoles, and this can cause WTO buffer shortages and impact the operation of your systems. You can avoid having large numbers of messages going to your consoles by using the Message Processing Facility (MPF) to keep these messages off of your consoles. The messages will be written to the system log (SYSLOG and/or OPERLOG). If you wish to see the first few instances of messages -- but not all of them -- you may wish to use Message Flood Automation (see the MSGFLDxx topic below) instead of or in addition to the Message Processing Facility. We recommend that the following messages always be suppressed from the consoles using the following MPF specifications:
IOS000I,SUP(ALL),AUTO(NO)
IOS017I,SUP(ALL),AUTO(NO)
IOS109E,SUP(ALL),AUTO(NO)
IOS251I,SUP(ALL),AUTO(NO)
IOS444I,SUP(ALL),AUTO(NO)
MSGFLDxx
You can avoid seeing all of the HyperSwap-related messages on your consoles by implementing Message Flood Automation in your sysplex. Message Flood Automation allows you to establish thresholds for certain messages. When the number of these messages reaches the threshold within a specified time interval, Message Flood Automation can take action, such as preventing the messages from going to the consoles. We recommend that the following message be suppressed from the consoles when it is produced in large quantities in a short period of time:
MSG IEA494I LOG,NOAUTO,NODISPLAY,NORETAIN,NOIGNORE
SCHEDxx
HyperSwap depends on the XCF component of z/OS to communicate with other systems in the sysplex. To ensure that XCF is able to communicate when a HyperSwap is required, see the CRITICALPAGING parameter.
HSIB API address space example:
//HSIBAPI PROC
//HSIBAPI EXEC PGM=IOSHSAPI,TIME=NOLIMIT,REGION=0M
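For completeness, the corresponding HSIB management address space procedure is usually very similar; the program name IOSHMCTL below is our assumption, so verify it against the sample members shipped with your z/OS level:
//HSIB    PROC
//HSIB    EXEC PGM=IOSHMCTL,TIME=NOLIMIT,REGION=0M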
You must start both address spaces in order to use z/OS HyperSwap. The start commands for these procedures are:
S HSIB,SUB=MSTR
S HSIBAPI,SUB=MSTR
Both of these address spaces must be up and running in all images in the sysplex. These started tasks are associated by default with the SYSSTC service class in WLM. You must confirm that they have this service class assigned to them.
6.4 Commands
Once you have started the HyperSwap address spaces, you can use additional commands for gathering information or controlling a HyperSwap Session on z/OS.
DISPLAY HS,STATUS
Displays the status of HyperSwap. This command also displays any reasons why HyperSwap may be disabled, and the current policies for the HyperSwap Session. Below is an example of the D HS,STATUS command output:
D HS,STATUS
IOSHM0303I HyperSwap Status 772
Replication Session: HS test2
New member configuration load failed: Disable
Planned swap recovery: Disable
Unplanned swap recovery: Disable
HyperSwap disabled:
  Unable to verify PPRC status
  One or more members unable to verify PPRC secondary device connectivity
SC75: Member unable to verify PPRC secondary device connectivity
Figure 6-2 D HS,STATUS output
Table 6-2 gives some hints about the meaning of this output.
Table 6-2 D HS,STATUS fields
- Replication Session: The session name as it was defined in TPC-R.
- New member configuration load failed, Planned swap recovery, Unplanned swap recovery: These are the HyperSwap policy options that were specified when the session was created. See Setting up a HyperSwap session in TPC-R on page 231.
- HyperSwap enabled/disabled: The status of HyperSwap. If it is disabled, additional information is displayed explaining the reason for it to be disabled and the system that has problems enabling it.
D HS,CONFIG(DETAIL,ALL)
Displays the detailed configuration for the current Basic HyperSwap session. This lists the volumes and status of all pairs in the Basic HyperSwap configuration. Below is an example of this command output; it shows only one volume pair in the "Basic HyperSwap" session:
IOSHM0304I HyperSwap Configuration 819
Replication Session: Basic HyperSwap
Prim. SSID UA DEV# VOLSER   Sec. SSID UA DEV# Status
       0B  20 DB20 MLDA20          0A 20 DA20
D HS,CONFIG(EXCEPTION,ALL)
Displays all of the volumes in the current Basic HyperSwap session that are experiencing problems, grouped by the problems that they are experiencing.
D IOS,CONFIG(ALL)
Displays IOS-related configuration information, including information about whether a HyperSwap has occurred.
D M=DEV(nnnnn)
Displays device-related configuration information, including whether the device is being monitored by HyperSwap and whether the device has been swapped by HyperSwap.
SETHS ENABLE
Enables HyperSwap. This allows a HyperSwap to be performed, either by command or automatically, after being disabled by the operator using SETHS DISABLE. This command will not enable HyperSwap if it is disabled for other reasons.
SETHS DISABLE
Disables HyperSwap. This command allows the operator to prevent a HyperSwap from being performed, either by command or automatically.
SETHS RESUMEIO
Resumes normal I/O to all disk devices that have been quiesced by HyperSwap because STOP was specified in the session policy as the action to be taken on a PPRC suspend event. Note that there is no guarantee that SETHS RESUMEIO will be successful once I/O is quiesced to HyperSwap managed storage. In that case a re-IPL will be required.
SETHS SWAP
Performs a planned HyperSwap. This can be done instead of issuing the HyperSwap command from Tivoli Storage Productivity Center for Replication.
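As a small usage sketch based only on the commands above, a short test window in which you want to make sure no swap can occur might look like this (the sequence is illustrative; timings and operational procedures are site specific):
SETHS DISABLE      (prevent any planned or unplanned swap during the window)
D HS,STATUS        (confirm that HyperSwap reports it was disabled by the operator)
SETHS ENABLE       (allow HyperSwap again once the window is over)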
Generally, FlashCopy targets should not be part of a HyperSwap session. When a FlashCopy relationship is established, the volumes drop out of full DUPLEX state and become DUPLEX PENDING until all of the FlashCopy data has been synchronized with the PPRC target devices. While these devices are in DUPLEX PENDING mode, if they are in a HyperSwap session, the HyperSwap session will be disabled until all PPRC pairs have returned to the full DUPLEX state. If FlashCopy targets must be part of a HyperSwap session, it is recommended that you use Remote Pair FlashCopy (RPFC). TPC-R does not support HyperSwap with Remote Pair FlashCopy (RPFC) in a single session, so the way to accomplish this is to create three TPC-R sessions: two FlashCopy sessions and a third HyperSwap session.
First define the HyperSwap session. Following the figure below, you define two copy sets, one from the "site 1-FC source" to the "site 2-FC source" and one from the "site 1-FC target" to the "site 2-FC target". Select the Allow FlashCopy target to be Metro Mirror or Global Copy source check box, and select option 3 below. These three options correspond to the Remote Pair FlashCopy options None, Preferred, and Required respectively:
1. Do not attempt to preserve Metro Mirror consistency.
2. Attempt to preserve Metro Mirror consistency but allow FlashCopy even if Metro Mirror target consistency cannot be preserved.
3. Attempt to preserve Metro Mirror consistency but fail FlashCopy if Metro Mirror target consistency cannot be preserved.
Next define the two FlashCopy sessions. For the first FlashCopy session, you add one copy set from the "site 1-FC source" to the "site 1-FC target". So in the HyperSwap session you will have twice as many copy sets as in each FlashCopy session. If you want to continue to FlashCopy from FC source to target once you have moved to site 2 (either via HyperSwap or after re-IPLing at site 2), then you need a second FlashCopy session available. This is because TPC-R knows the volumes by their serial number, not their device number, and after you move to site 2 the devices you want to issue the FlashCopy against are the ones at site 2. Therefore, define a third session: for the second FlashCopy session you add one copy set from the "site 2-FC source" to the "site 2-FC target". For more information, there is a Redpaper available on the Redbooks web site called "IBM System Storage DS8000 - Remote Pair FlashCopy (Preserve Mirror)", REDP-4504-00, that may be of value: http://www.redbooks.ibm.com/abstracts/redp4504.html
Symmetric configuration
We recommend that you maintain a symmetric Metro Mirror configuration. A symmetric configuration means:
- A one-to-one relationship between:
  - Primary and secondary disk subsystems
  - Primary and secondary logical subsystems
  - Primary and secondary volumes
- The same number of FICON links between the hosts and the primary and secondary disk subsystems.
- The same number of Parallel Access Volumes (PAV) defined in the primary and secondary LSS.
- The secondary disk subsystem cache and Non-Volatile Storage sizes are the same as those in its primary disk subsystem partner.
Symmetric configurations are easier to maintain. If you have a symmetric configuration and good naming conventions for your disk volumes, you can easily determine which primary is paired with each secondary. Another good reason for maintaining a symmetric configuration is that after a HyperSwap you will continue to have the same performance level that you had when you were using the primary disks.
Determining if a Device is JES3 Managed
If you issue the *I D D=dddd command and it returns an answer, as in the following examples, then the device is JES3 managed.
*I D D=9607
IAT8572 9607 (AV )          SC64
IAT8572 9607 (AV )          SC70
IAT8572 9607 (OFF) NOT OPR  SC65
IAT8500 INQUIRY ON DEVICES COMPLETE

*I D D=9E07
IAT8572 9E07 (AV )
IAT8572 9E07 (AV )
IAT8572 9E07 (AV ) MOUNTED
IAT8500 INQUIRY ON DEVICES COMPLETE
If you issue the *I D D=dddd command and it says device not found, as in the following example, then the device is not JES3 managed.
LOADxx
The LOADPARM positions that are relevant here are: LOADxx suffix (positions 5-6), IMSI (position 7), Alt Nuc (position 8).
In LOADxx, on the IODF statement, you must specify the subchannel set you wish to use. The fields of the IODF statement are, from left to right: IODF suffix, IODF high-level qualifier (HLQ), OS Config, and Subchannel Set.
When you IPL specifying a subchannel set of 1, this says that the system is to use the devices in subchannel set 1 if they exist. For devices that are defined only in subchannel set 0 (for example, devices that are not mirrored), the device defined in subchannel set 0 will be used. So by specifying the correct IPL volume and the correct LOADPARM, you can avoid duplicate volser messages and accidentally using the wrong volumes. Starting with z196 (GA2), you can specify a five-digit IPL volume (that is, the subchannel set and device number). In this case you have the option of setting the subchannel set number in LOADxx to a value of "*", which means that the subchannel set to use is inferred from the current IPL volume (that is, if you IPLed from device 04500, z/OS will use subchannel set 0, and if you IPLed from device 14500, subchannel set 1 will be used). Valid values for the subchannel set are 0, 1, 2, "*", and blank. If the subchannel set is blank, you will be prompted to enter the subchannel set you wish to use.
Follow the rules in Table 6-3 in order to create two I/O configurations in the HCD.
Table 6-3 HCD configuration
                                HyperSwap volumes   CDS volumes (except the Logger)   non-HyperSwap volumes
IPL using primary addresses
  primary disks                 ONLINE              ONLINE                            as required
  secondary disks               OFFLINE             ONLINE                            as required
IPL using secondary addresses
  primary disks                 OFFLINE             ONLINE                            as required
  secondary disks               ONLINE              ONLINE                            as required
At least 1 spare CDS on primary disk and at least 1 spare CDS on secondary disk. For more information on system logger data set placement, see Appendix A. Using the system logger in a Tivoli Storage Productivity Center for Replication for System z environment in IBM Tivoli Storage Productivity Center for Replication for System z Version 5.1.1 - User's Guide (SC27-4054-01).
Move RESERVE
Although RESERVEs are not allowed, once the HyperSwap session has been enabled it will manage the RESERVEs issued to primary devices by attempting to move those reserves from the primary devices to their secondary device partners. However, if the event that triggered the HyperSwap was the loss of all paths from the system holding the RESERVE to the disk, the disk subsystem will clear the RESERVE, which can potentially allow other systems to gain the RESERVE before HyperSwap can move it. In addition, the scope of a HyperSwap is a single sysplex (or monoplex). HyperSwap ensures that all systems accessing HyperSwap managed devices have stopped using the old primaries before swapping over to the old secondaries (new primaries). So if RESERVEs are actually being used to serialize devices between two or more sysplexes, there is no guarantee that both sysplexes will swap at exactly the same time. For these reasons, RESERVEs are not supported on HyperSwap managed devices.
If you need to share disk volumes with systems outside the sysplex, you must exclude these volumes from the Basic HyperSwap session.
We recommend that you eliminate any exploitation of cache fast write if you plan to use HyperSwap.
Both TDMF and FDR/PAS support HyperSwap by temporarily preventing a HyperSwap from occurring while they perform their swap. Customers should ensure that they are running a current version of these products that supports HyperSwap. z/OS support for TDMF and FDR/PAS was provided in APAR OA26509 for z/OS Release 9 and above. TDMF V5.2 or above is required, and FDRPAS V5.4/76 or above is required.
6.5.12 Automation
z/OS HyperSwap provides built-in active and passive monitoring of the devices participating in the Basic HyperSwap session. z/OS HyperSwap automatically coordinates the swapping of primary and secondary devices across all of the z/OS images in the sysplex. z/OS HyperSwap only handles disk subsystem failures; it does not provide automation to handle z/OS image failures.
Partition the system(s) out of the sysplex: any system that cannot load the new configuration will be reset. It enters a 0B5 wait state and is partitioned out of the sysplex. You will have to re-IPL it.
Disable HyperSwap: this option disables HyperSwap when a new HyperSwap configuration cannot be loaded on any system in the sysplex.
The default for this option is Partition the system(s) out of the sysplex. However, we recommend that you specify Disable HyperSwap when you are doing your tests, in order to avoid unnecessary outages due to systems being reset because of a configuration error. As soon as you are skilled with this process of loading new configurations, you can change it back to Partition the system(s) out of the sysplex.
On Planned HyperSwap Error
These options specify what should be done when a planned HyperSwap cannot be performed on any system in the sysplex. This may occur, for example, if one of the systems does not have access to all of the secondary devices or if one of the two address spaces is not running:
- The first button specifies that if a system fails to complete a planned HyperSwap, it will be partitioned out of the sysplex. The other images will have their UCBs swapped by the HyperSwap operation.
- The second button specifies that the HyperSwap operation will be backed out, and HyperSwap will be disabled.
We recommend that you choose the Disable HyperSwap option for planned HyperSwap errors.
On Unplanned HyperSwap Error
These options specify how the sysplex should behave in case of an unplanned HyperSwap. The choices are the same as for a planned HyperSwap:
- Partition the system that cannot complete the HyperSwap operation out of the sysplex. That system enters a 0B5 wait state and is partitioned out of the sysplex. You will have to re-IPL this system.
- Back out the HyperSwap operation and disable it.
We recommend that you select the option to partition the system(s) out of the sysplex. The other systems in the sysplex will survive the unplanned HyperSwap and will continue to run even if an entire disk subsystem box crashes. It should be noted that at a certain point during HyperSwap processing, even though the policy might be to back out, HyperSwap can no longer do so and still assure data integrity. We call this "the point of no return". For both planned and unplanned HyperSwap, the "point of no return" is when HyperSwap begins swapping UCBs. At this point, if one system is unable to confirm that it has switched to the target devices, it is too late to go back to the primary, and any system not confirming that it has successfully swapped must be partitioned out of the sysplex. Similarly, once the swap is complete, the policy has no effect on how HyperSwap responds to any errors detected. In this case HyperSwap knows that all systems have swapped to the correct devices, and that HyperSwap is now disabled. Processing continues even if errors are detected or systems are no longer responsive, relying on the appropriate error recovery. This error recovery may include, for example, an application terminating due to a permanent I/O error, or a system being partitioned out of the sysplex because it is unresponsive.
Fail MM/GC if target is online
We recommend that you check this option in order to avoid starting a HyperSwap session using secondary volumes that may be online to any system. Unchecking this option allows you to start the session even if there is a secondary volume online to another system, but the execution of a HyperSwap in such conditions can lead to unpredictable results. If you are unable to start a HyperSwap session because of a secondary volume that is online, check the following:
1. Check all systems in the sysplex to see if the device is online and, if so, vary it offline (see the example commands after this list).
2. Often this is a result of a volume being shared outside the sysplex. Check to see if any systems outside the sysplex have the device online and, if so, vary it offline.
3. The online state is determined by the storage system, not z/OS. The storage system treats a volume as being online if there are paths established to the device. Therefore, if the device is in the OFFLINE and Allocated to a System Component state, the storage system will view it as online.
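For example, assuming the secondary device in question is 9E07 (a hypothetical device number), you could check its status and vary it offline with standard z/OS commands, routed to every system in the sysplex:
RO *ALL,D U,,,9E07,1        (display the device status on all systems)
RO *ALL,V 9E07,OFFLINE      (vary the device offline wherever it is online)
Systems outside the sysplex have to be checked individually with the same commands.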
Establishing paths
Although TPC-R can establish the paths between the logical subsystems automatically when you start a session, we recommend that you define the paths manually, especially if your disk subsystem box is shared between sysplexes. You probably would like to dedicate more links to production sysplexes than for other sysplexes. You should establish the paths manually in the reverse direction, from secondary to primary. After a HyperSwap, your secondary volumes will be the new active primary volumes and your primary volumes are now the active secondary volumes. You will need to start a session from your secondary volumes to your primary volumes to allow you to HyperSwap again.
SETHS SWAP
IOSHM0400I 13:05:12.12 HyperSwap requested
IOSHM0424I Master status = 00000000 00000000 0000001600000000
IOSHM0401I 13:05:12.12 Planned HyperSwap started - Operator
IOSHM0424I Master status = 00000000 00000000 0000001601000000
IOSHM0402I 13:05:12.24 HyperSwap phase - Validation of I/O connectivity starting
IOSHM0424I Master status = 00000000 80000000 0000001602000000
IOSHM0403I 13:05:12.28 HyperSwap phase - Validation of I/O connectivity completed
IOSHM0404I 13:05:12.28 HyperSwap phase - Freeze and quiesce DASD I/O starting
IOSHM0424I Master status = 00000000 80000000 0000001603000000
IOSHM0405I 13:05:12.34 HyperSwap phase - Freeze and quiesce DASD I/O completed
IOSHM0406I 13:05:12.34 HyperSwap phase - Failover PPRC volumes starting
IOSHM0424I Master status = 00000000 80000000 0000001604000000
IOSHM0407I 13:05:12.42 HyperSwap phase - Failover PPRC volumes completed
IOSHM0408I 13:05:12.42 HyperSwap phase - Swap UCBs starting
IOSHM0424I Master status = 00000000 80000000 0000001605000000
IOSHM0409I 13:05:12.47 HyperSwap phase - Swap UCBs completed
IOSHM0410I 13:05:12.47 HyperSwap phase - Resume DASD I/O starting
IEA494I DD21,MLD821,PPRC PAIR SUSPENDED,SSID=89ED,CCA=21
IOSHM0424I Master status = 00000000 80000000 0000001606000000
IOSHM0411I 13:05:12.61 HyperSwap phase - Resume DASD I/O completed
IOSHM0429I 13:05:12.62 HyperSwap processing issued an UnFreeze
IOSHM0412I 13:05:12.62 HyperSwap phase - Cleanup starting
IOSHM0424I Master status = 00000000 80000000 0000001607000000
IOSHM0413I 13:05:12.80 HyperSwap phase - Cleanup completed
IOSHM0414I 13:05:12.80 Planned HyperSwap completed
These are the z/OS HyperSwap phases:
1. Validation of I/O connectivity
In this phase, z/OS checks connectivity to the secondary volumes. If all the images have connectivity to their secondary volumes, then HyperSwap proceeds. If one or more z/OS images fail this connectivity checking, the action that HyperSwap takes depends on the HyperSwap options that you specified when you created the HyperSwap session in TPC-R. See Example 6-1 on page 234. If you had chosen to Partition the system(s) out of the sysplex, every system failing connectivity validation enters a 0B5 wait state code and is partitioned out of the sysplex. HyperSwap proceeds on the remaining z/OS images. If you had chosen to disable HyperSwap, the HyperSwap operation is backed out. Depending on the type of disk failure, this can lead to a sysplex outage.
2. Freeze and quiesce DASD I/O
z/OS internal routines send freeze commands to all of the logical subsystem pairs defined in the HyperSwap session. All I/O to these logical subsystems receives an Extended Long Busy status. Also, z/OS stops issuing I/Os to any primary disk volume.
3. Failover PPRC volumes
z/OS sends Failover commands to the volume pairs defined in the HyperSwap session. The PPRC status of the former secondary volumes changes from DUPLEX to PRIMARY SUSPENDED.
4. Swap UCBs
All z/OS images in the sysplex swap certain contents of their UCB pairs.
5. Resume DASD I/O
z/OS resumes I/O requests to the new primary disk volumes (that is, the old secondary disk volumes). When this step is complete, the system and all applications resume normal I/O activity.
6. Cleanup
In this phase, z/OS tries to release RESERVEs outstanding on the old primaries and performs other cleanup tasks.
Figure 6-4 Query session through the Session hyperlink in Health Overview
Select the Basic HyperSwap session from the pull-down menu as shown in Figure 6-5. On the right hand side a pictograph symbolizes the involved sites and their volume types. H1
represents Site 1 volumes and H2 represents Site 2 volumes. When you define Copy Sets this pictograph helps you to orient and understand replication direction. Click Next to continue.
This leads us to the session Properties panel, as Figure 6-6 shows. The Properties panel is important because it requires that you specify a name for the session that is about to be created. Adding an optional Description is recommended, because the session name alone may not be sufficient to understand the purpose of the session. Figure 6-6 shows our session definition. You can choose the following options in a Basic HyperSwap session:
- Disable HyperSwap (default: unchecked)
- On Configuration Error (for example, when a new member fails to join the sysplex configuration):
  - Partition the system(s) out of the sysplex (default)
  - Disable HyperSwap
- On Planned HyperSwap Error:
  - Partition out the failing system(s) and continue swap processing on the remaining system(s)
  - Disable HyperSwap after attempting backout (default)
- On Unplanned HyperSwap Error:
  - Partition out the failing system(s) and continue swap processing on the remaining system(s) (default)
  - Disable HyperSwap after attempting backout
- Fail MM/GC if the target is online (CKD only): This option fails any session commands for a copy set if the target volume in the copy set is online to a host (default: checked).
We recommend that, for the Basic HyperSwap option On Configuration Error, you initially select Disable HyperSwap. Do not select Partition the system(s) out of the sysplex until after you have tested your HyperSwap configuration; when partitioned out of the sysplex, the system(s) will be reset. After selecting the Basic HyperSwap options as shown in Figure 6-6, click Next to define the location sites.
From the pull-down Site 1 Location menu (see Figure 6-7) select the location of your H1 storage subsystem previously defined in Adding IBM ESS or DS Storage Server to Tivoli Storage Productivity Center for Replication server on page 141 and click Next to continue.
From the pull-down Site 2 Location menu (see Figure 6-8) select the location of your H2 storage subsystem previously defined in Adding IBM ESS or DS Storage Server to Tivoli Storage Productivity Center for Replication server on page 141 and click Next to continue. Note: In our example we are going to create a Basic HyperSwap session inside one DS8000 storage system. That is the reason why the Site 1 and Site 2 location logical names are the same. In a production environment you would have one or more pairs of storage systems. Your Site 1 location would contain your primary storage systems and your Site 2 location would contain your target storage systems. In a HyperSwap environment both locations may actually reside on the same site. In this case you still want to treat them as two different locations, so you will want to name the locations appropriately, for example Raised Floor - North and Raised Floor - South if on the same raised floor, or Building 300 and Building 500 if both are on the same campus.
Figure 6-9 displays the message that the Basic HyperSwap session was successfully created. At this stage you can click on Finish to exit the Create Session wizard.
Alternatively, you have the option to add Copy Sets: click Launch Add Copy Sets Wizard, as in our example, and follow the instructions described in 6.6.4, Add Copy Sets to a Basic HyperSwap session.
Figure 6-10 Add Copy Sets to a Basic HyperSwap session- Choose Host 1
If you would like to define all volumes within a certain LSS for this session, you can select All Volumes from the Host1 volume list, as shown in Figure 6-11. Click Next to continue.
Figure 6-11 Add Copy Sets to a Basic HyperSwap session - Choose Host 1 and All Volumes option
The next step is to define the Site 2 storage subsystems and volumes as shown in Figure 6-12. Select the desired Host 2 storage subsystem from the pull-down menu and wait for the Host 2 logical storage subsystem list. Select the LSS where your H2 volumes reside. If All Volumes for a given LSS was selected in the previous step while defining the Host 1 volumes, you do not have an option to select any volume from the Host 2 volumes list. TPC for Replication will automatically match all volumes from the selected LSS in the Host 1 storage subsystem with all volumes from the selected LSS in the Host 2 storage subsystem. In our example we selected All Volumes in the Choose Host1 panel. Click Next to continue.
Figure 6-12 Add Copy Sets to a Basic HyperSwap session - Choose Host 2 and All Volumes option
The following screen shown in Figure 6-13 is a confirmation that Copy Set matches were successfully created.
Figure 6-13 Add Copy Sets to a Basic HyperSwap session - Matching Results
The next screen in Figure 6-14 displays a message regarding the matching results. In our example, the message indicates that all copy set matches were successful. However, you may see warning messages for one of the following reasons:
- the number of volumes at the Host 1 storage subsystem LSS and the Host 2 storage subsystem LSS is not the same
- volumes at the Host 1 storage subsystem LSS are a different size than the Host 2 storage subsystem LSS volumes
- the volumes are not the same type (for example, ECKD versus SCSI)
- the Host 1 or Host 2 storage subsystems and volumes are already defined in some other copy services session
A warning message does not mean that the Copy Sets creation failed. Click Next to see the list of available Copy Sets.
Figure 6-14 Add Copy Sets to a Basic HyperSwap session - Matching Results
All copy set volumes that met the matching criteria are automatically selected. You still have a chance to modify the current selection and deselect any of the volume pairs included in the list. The Show hyperlink next to each matching volume pair provides Copy Set information. We selected only one copy set, as shown in Figure 6-15. Click Next to continue.
Figure 6-15 Add Copy Sets to a Basic HyperSwap session - Select Copy Sets
The next screen displays the number of copy sets which are going to be created as well as the number of unresolved matches (or not selected) as shown in Figure 6-16. Click Next to continue.
TPC for Replication internally adds that copy set to its database; you can monitor this via the progress panel, which reports the number of copy sets added to the TPC for Replication inventory database. Note that this does not establish HyperSwap copy pairs; it is just a TPC for Replication internal process to add the copy set to the TPC for Replication inventory database. After a few seconds the progress reaches 100%, the Adding Copy Sets panel closes, and the wizard progresses to the next panel shown in Figure 6-17. Click Finish to exit the Add Copy Sets wizard.
Figure 6-17 All Copy Sets are successfully added to the TPC-R database
Once the copy set is successfully added to the "Basic HyperSwap" session, the session status is still Inactive and the session is in the Defined state, as shown in Figure 6-18.
This concludes the session definition procedure via the GUI, and you are ready to start the session as explained in the following section, 6.6.5, Start H1->H2 in a Basic HyperSwap session.
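The same definitions can also be listed and driven from the TPC for Replication command-line interface (csmcli), which is useful for scripting. The following is a minimal, hedged sketch; the exact parameters and action keywords depend on your TPC for Replication level, so verify them with the csmcli help and with lssessactions before relying on them:
csmcli> lssess
csmcli> lscpset "Basic HyperSwap"
csmcli> lssessactions "Basic HyperSwap"
csmcli> cmdsess -action <action_from_lssessactions> "Basic HyperSwap"
The first two commands list the defined sessions and the copy sets in the "Basic HyperSwap" session; lssessactions shows the action keywords that are currently valid for the session, and cmdsess issues one of them (replace the <action_from_lssessactions> placeholder with an actual keyword from that list).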
The next message shown in Figure 6-20 is a warning that you are about to initiate a Basic HyperSwap session, which will copy over the H2 volumes. This action establishes the Metro Mirror relationships between the H1 and H2 volumes. Once the relationships are established, copying of data from Site 1 to Site 2 will overwrite all data on the H2 volumes. Click Yes to continue.
The message at the top of the screen in Figure 6-21 confirms that the start of the Basic HyperSwap session is complete. The session is in Preparing state and Warning status. Click
on the session name hyperlink ("Basic HyperSwap" in our example) to find out more details on this session.
As shown in Figure 6-22, the Basic HyperSwap session status has changed to Normal and there are no errors. The copy progress between H1 and H2 volumes has reached 100%.
Once the Basic HyperSwap session is started and it has Normal status and is in Prepared state, the following options are available:
HyperSwap - causes a site switch, equivalent to a suspend and recover for a Metro Mirror session with Failover/Failback. Because of this, individual suspend and recover actions are not available for a HyperSwap session.
Start H1->H2 - starts the Metro Mirror relationship (synchronous full copy) between the H1 and H2 volumes.
Stop - stops copying, leaving inconsistent target volumes.
The next message shown in our example in Figure 6-24 is a warning that you are about to move application I/O from H1 volumes to H2 volumes since the current data replication direction is from H1 to H2 volumes. In the case that the HyperSwap action is invoked on a Copy Set with the data copying from H2 to H1 volumes, the warning message will indicate that you are about to move application I/O from H2 volumes to H1 volumes. Click Yes to continue.
Our Basic HyperSwap session still has Normal status, but the state is Target Available, as indicated in Figure 6-25. Both H1 and H2 volumes are suspended (data is not copied between the H1 and H2 volumes). H2 is available to the host (online), while H1 is not available to the host (offline).
Click on the session name hyperlink ("Basic HyperSwap" in our example) to find out more details on this session, as shown in Figure 6-26. There is a timestamp for the H1-H2 pair indicating when the session was swapped. It can be used as a reference.
Once the Basic HyperSwap session has Target Available state for the H2 volumes, the following option is available: Start H2->H1 - restarts copying from H2 to H1 volumes in a Metro Mirror session.
The next message shown in Figure 6-28 is a warning that you are about to initiate Metro Mirror between the H2 and H1 volumes. Click Yes to continue. When the HyperSwap was initiated, the H2 volumes started keeping track of the changed tracks, so when copying resumes only the changed tracks will be copied. There will be no need to perform full volume copies.
The message at the top of the screen in Figure 6-29 confirms that the Start H2->H1 action is completed. The session is in Prepared state and Normal status. Click on the session name hyperlink ("Basic HyperSwap" in our example) to find out more details on this session.
As shown in Figure 6-30, the Metro Mirror data copying progress from H2 to H1 volumes has reached 100%. Click the Sessions hyperlink in the Health Overview section at the bottom left-hand side of the screen to go back to the Sessions screen.
At this stage the following options are available:
HyperSwap - causes a site switch, equivalent to a suspend and recover for a Metro Mirror session with Failover/Failback. Because of this, individual suspend and recover actions are not available for a HyperSwap session.
Start H2->H1 - starts the Metro Mirror relationship between the H2 and H1 volumes.
Stop - stops copying, leaving inconsistent secondary volumes.
Suspend - stops copying, but does so with consistent secondary volumes, which can then be used to recover from. This command is not available in the Basic HyperSwap session type, but is available for all other session types where HyperSwap is enabled (for example, Metro Mirror with HyperSwap).
The next message shown in our example in Figure 6-32 is a warning that you are about to move application I/O from H2 volumes to H1 volumes. Click Yes to continue.
Our HyperSwap session still has Normal status, but the state is Target Available, as indicated in Figure 6-33. Both H1 and H2 volumes are suspended (data is not copied between the H1 and H2 volumes), and H1 is available to the host (online), while H2 is not available to the host (offline).
Click on the session name hyperlink ("Basic HyperSwap" in our example) to find out more details on this session, as shown in Figure 6-34. There is a timestamp for the H1-H2 pair indicating when the session was swapped. It can be used as a reference.
Once the Basic HyperSwap session has Target Available state for the H1 volumes, the following option is available: Start H1->H2 - restarts copying from H1 to H2 volumes in a Metro Mirror session.
The next message shown in Figure 6-36 is a warning that you are about to stop data copying between H1 and H2 volumes. Click Yes to continue.
There is a message at the top of the screen in Figure 6-37 indicating that the Stop action has been successfully completed. The status of our HyperSwap session is Severe and the state is Suspended. The Recoverable column in Figure 6-37 has the value No, indicating that the H2 storage subsystem volumes are not consistent.
Click on the session name hyperlink ("Basic HyperSwap" in our example) to find out more details on this session, as shown in Figure 6-38. Click the Sessions hyperlink in the Health Overview section at the bottom left-hand side of the screen to go back to the Sessions screen.
At this stage the following option is available: Start H1->H2 (or H2->H1, depending on the copy direction) - this will restart copying from H1 to H2 (or H2 to H1) volumes in a Metro Mirror session
The next message shown in Figure 6-40 is a warning that you are about to terminate the Metro Mirror relationship between the H1 and H2 volumes. Note that if you need to start the very same HyperSwap session again, a full copy from the H1 to the H2 volumes will be required. If you want to stop mirroring but restart it later with incremental copying, see the Stop command above. Click Yes to continue.
There is a message at the top of the screen in Figure 6-41 indicating that Terminate action has been successfully completed. The status of our HyperSwap session is now Inactive and the state is Defined.
Once the Basic HyperSwap session is terminated the following option is available: Start H1->H2 - this will restart copying from H1 to H2 volumes in a Metro Mirror session
6.6.11 Testing
We strongly recommend that you establish a small test environment and thoroughly test your HyperSwap setup before setting up HyperSwap on your production systems. A test environment will allow you to exercise all of your parameters and operational procedures and allow you to become familiar with the behavior of HyperSwap. Once you have set up HyperSwap on your production systems, we strongly recommend that you test HyperSwap in your production environment to make sure that it behaves the same way on your production systems as it does in your test environment. Test environments usually do not have the size and complexity of production environments, and sometimes there are important attributes of the production environment that have not been exposed in the test environment. You should test HyperSwap in your production environment so that you can find and fix any problems before you need to do a real HyperSwap. We recommend that you perform HyperSwaps on your production environment on a regular schedule to find any new problems (perhaps due to configuration changes) and to exercise your operations staff. It is important to have your operations staff familiar with HyperSwap procedures so that they react properly when a real HyperSwap occurs.
You can issue a VARY OFFLINE FORCE command against a PPRC primary device to trigger an unplanned HyperSwap. (This will create a boxed condition on the old primary device that you must remove later.) If you have a dedicated test disk subsystem, you can also test no paths to the device or even loss of the device. You can trigger a no paths condition by configuring the last channel path offline to MVS with the force option, by disconnecting the fiber, or by blocking a port on the switch. You can trigger a loss of device condition by powering off the device.
Once TPC-R has loaded the configuration into HyperSwap, which is part of the START processing, and the session has reached the Prepared state, a HyperSwap will still occur even if TPC-R is not currently active. One way to test this is to take TPC-R down and then issue the SETHS SWAP MVS operator command, which will initiate a planned HyperSwap. You can also test this by triggering an unplanned HyperSwap, such as by boxing a HyperSwap-managed volume. To restart HyperSwap in the reverse direction, you will need to restart TPC-R.
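As a concrete illustration of the triggers described above, the following operator commands could be used from a test system. The device number DD21 is taken from the planned HyperSwap example earlier in this chapter, and CHPID 40 is a placeholder; substitute a device and channel path from your own test configuration:
V DD21,OFFLINE,FORCE
CF CHP(40),OFFLINE,FORCE
SETHS SWAP
The first command boxes the PPRC primary device DD21, which triggers an unplanned HyperSwap; the second configures channel path 40 offline with the force option to create a no-paths condition; the third initiates a planned HyperSwap from the z/OS console.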
for any device that has become boxed. On the system(s) where the message appears, you can issue the z/OS D M=DEV(devnum) command to determine the channel paths used to access the boxed device. You can cause the boxed device to go OFFLINE by issuing a z/OS VARY PATH,ONLINE command for the device and one of the associated channel paths.
If a failure occurs part way through a HyperSwap operation, further actions depend on when and where the failure occurred. The following situations can occur:
One or more systems in the sysplex cannot swap. If the swap policy is set to disable, which is the default for planned HyperSwaps, the HyperSwap operation is backed out. If the swap policy is set to partition, which is the default for unplanned HyperSwaps, the systems that cannot swap are partitioned out of the sysplex, and the swap continues on the systems in the sysplex that are able to proceed. An IPL has to be performed on the systems that were partitioned out of the sysplex.
TPC-R may have a standby server. The standby server can be running on another system within the sysplex or outside of the sysplex. If it is running outside of the sysplex, it may be running on z/OS, Windows, AIX, or Linux. If it is running outside of the sysplex, you will not be able to perform HyperSwap commands, such as restarting HyperSwap after a swap. That must be done from a TPC-R that is running on a system within the sysplex.
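A minimal sketch of the boxed-device cleanup sequence described at the start of this section, again using the placeholder device DD21 and a placeholder channel path 40 (take the actual CHPID from the D M=DEV output):
D M=DEV(DD21)
V PATH(DD21,40),ONLINE
The display command shows the channel paths and their status for the boxed device; the VARY PATH command clears the boxed condition and leaves the device OFFLINE, after which you can vary it back online when appropriate.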
Chapter 7. Using Tivoli Storage Productivity Center for Replication for DS8000
In our example we have three DS8000 disk subsystem boxes. There are many paths already defined. Click on a storage subsystem hyperlink to find more details about each path. Select Manage Path to create a new logical path.
From the drop-down boxes in the Path Management wizard (see Figure 7-3), select the source storage system, source logical storage system (LSS), target storage system, and target logical storage system (LSS). In our example we selected DS8000 as a source and ESS storage system as a target. Click Next to continue.
From the drop-down boxes in the Select Paths panel, select the source and target FCP port and click Add as shown in Figure 7-4.
You can add multiple FCP ports used for copy services between the disk subsystems, or just one at a time. If you want to create the same logical path over more FCP links, select the desired source and target FCP ports and click Add. After you add all the required paths, click Next, as shown in Figure 7-5.
The next screen (see Figure 7-6) includes information about logical path(s) previously selected. Click Next in order to confirm logical path(s) creation.
Figure 7-7 displays the message that logical paths were successfully created. Click Finish to exit the Path Management wizard.
Perform these steps to add IBM TotalStorage Enterprise Storage Server, IBM System Storage DS8000, and IBM System Storage DS6000 logical paths using a CSV file: 1. Create a CSV file named portpairings.csv in the WAS_HOME/profiles/default/properties directory. In our system, this directory is /zWebSphereOEM/V7R0/tpcr510/AppServer/profiles/default/properties/. 2. Figure 7-8 shows sample content for this file, as you can see by editing it through ISHELL. Each line in this file represents a storage subsystem pairing. All values are separated by commas. The first field in each line identifies a pair of storage subsystems: 2107.L3331 is our primary DS8000 box; 2107.L3001 is our secondary DS8000 box.
The remaining fields are the port pairs whose physical links are to be used for replicating data between the storage subsystems: 0x0202:0x0202 tells TPC for Replication to use port 0x0202 in DS8000 box 2107.L3331 to replicate data to port 0x0202 in DS8000 box 2107.L3001. The same applies to ports 0x0302:0x0302. A further example with more than one subsystem pairing is shown after step 3.
2107.L3331:2107.L3001,0x0202:0x0202,0x0302:0x0302
Figure 7-8 Sample content of the portpairings.csv file
3. To enable the changes in the file, perform a task that requires new paths to be established. For example, suspend a session, remove the logical paths, and then start a Metro Mirror or Global Mirror session so that TPC for Replication uses the port pairings in the CSV file.
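The file can contain one line per storage subsystem pairing. The following sketch adds a second, purely hypothetical pairing; the serial numbers and port IDs in the second line are placeholders, not values from our configuration:
2107.L3331:2107.L3001,0x0202:0x0202,0x0302:0x0302
2107.AB123:2107.CD456,0x0100:0x0143
Each line follows the same layout as the sample in Figure 7-8: the storage subsystem pair first, followed by one or more source:target port pairs.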
Use the check boxes to select the paths that you want to remove as shown in Figure 7-10 and from the drop-down menu select Remove. Click GO to continue.
The message at the top of the screen in Figure 7-11 confirms that the path has been removed successfully.
In the first pull down menu that is shown, you specify the type of storage subsystem that you are going to use in this session. In our example, we selected DS8000, DS6000, ESS800. Next you select the FlashCopy session from the pull-down menu as shown in Figure 7-13 on page 274. On the right hand side a pictograph symbolizes the involved sites and their volume types. H1 represents Host 1 volumes and T1 represents FlashCopy target volumes. When you define Copy Sets this pictograph helps you to orient and understand the copy direction. Click Next to continue.
This leads us to the session Properties panel, as Figure 7-14 shows. The Properties panel is also important because it requires that you specify at least a name for the session which is about to be created. An optional Description is recommended to understand the purpose of the session because the session name may not reveal what this session is intended for. There are three FlashCopy options available:
Incremental - This option sets up the FlashCopy relationship for change recording. Any subsequent FlashCopy operation for that session only copies the tracks that have changed since the last flash. Incremental always assumes persistence.
Persistent - This option tells the hardware to leave the relationship established after all source tracks are copied to the target. Persistent does not assume incremental.
No Copy - This option tells the hardware not to write the background copy until the source track is written to.
Allow FlashCopy target to be Metro Mirror source - Allows you to use a Metro Mirror source volume as a FlashCopy target. If this option is cleared, a FlashCopy to a Metro Mirror source volume fails.
Note: All settings in the Properties panel can be changed dynamically at a later time, once the session is defined and it has Normal status. For example, if you want to change from Incremental to No Copy, select your FlashCopy session, select View/Modify Properties (under the Modify submenu) from the Select Action pull-down menu, and click Go to change the properties.
When you select the Allow FlashCopy target to be a Metro Mirror source option, TPC for Replication shows you a warning in a pop-up window asking you to reply with Yes or No. See Figure 7-15 on page 276. This is because a Metro Mirror pair whose source volume is also the target of a FlashCopy relationship enters a Duplex Pending state while the FlashCopy background copy is taking place. If a Metro Mirror pair enters a Duplex Pending state, its target is inconsistent with all other MM volumes until it reaches Full Duplex again. If we lose the primary volume, we might not be able to use the secondary volume of the MM pair.
After you select Yes, TPC for Replication allows you to choose additional options to use the traditional FlashCopy or the Remote Pair FlashCopy architecture. See Figure 7-16 on page 277.
Do not attempt to preserve Metro Mirror consistency - This option uses the traditional FlashCopy architecture. The source FlashCopy volume is copied to its target FlashCopy volume, and replicated across the Metro Mirror links to the secondary Metro Mirror volume.
Attempt to preserve Metro Mirror consistency, but allow FlashCopy even if Metro Mirror target consistency cannot be preserved - This option uses the Remote Pair FlashCopy architecture. It requires both the source and target of a FlashCopy relationship to also be primary devices of Metro Mirror relationships. When you start a FlashCopy session in the primary DS8000 disk subsystem where the Metro Mirror primary volumes are, the DS8000 sends FlashCopy orders over the Metro Mirror links, to be executed in parallel in the secondary DS8000 where the Metro Mirror secondaries are. If it cannot do the Remote Pair FlashCopy, a full copy is made from the primary to the secondary MM volume.
Attempt to preserve Metro Mirror consistency, but fail FlashCopy if Metro Mirror target consistency cannot be preserved - This option also uses the Remote Pair FlashCopy architecture, like the previous option. However, it will fail the FlashCopy session if it cannot send the orders for the FlashCopy to be done at the secondary DS8000.
Click Next.
From the pull-down Site 1 Location menu (see Figure 7-17) select the location of your H1 storage subsystem previously defined in Adding an IBM Storage Server using the Tivoli Storage Productivity Center for Replication GUI on page 141. Click Next to continue.
Figure 7-18 displays the message that the FC1 session was successfully created. Click Finish to exit the Create Session wizard. Alternatively, you have an option to add Copy Sets and click on Launch Add Copy Sets Wizard and follow the instructions described in 7.2.2, Add Copy Sets to a FlashCopy session on page 278.
Figure 7-20 on page 280 displays the panel which provides details on the source FlashCopy volumes, which are called Host 1 volumes. Select the desired Host 1 storage subsystem from the pull-down menu and wait a few seconds to get the Host 1 logical storage subsystem list. Select the LSS where your H1 volume resides. Once the LSS has been selected, choose the appropriate volume from the Host 1 volume pull-down list. An alternative way to add a large number of volumes to this session is to create a CSV file as explained in Using CSV files for importing and exporting sessions on page 196. If you have a CSV file ready, select the Use a CSV file to import copy sets check box and provide the path to your CSV file. In our example we selected the DS8000 disk subsystem and the appropriate volume as shown in Figure 7-20. Click Next to continue.
If you would like to define all volumes within a certain LSS in the FlashCopy session, there is an option to select All Volumes from the Host1 volume list. The next step is to define the Site 2 storage subsystems and volumes as shown in Figure 7-21 on page 281. Select the desired Target 1 storage subsystem from the pull-down menu and wait a few seconds to get the Target 1 logical storage subsystem list. Select the LSS where your T1 target FlashCopy volume resides. If All Volumes for a given LSS was selected in the previous step while defining the Host 1 source FlashCopy volumes, you do not have an option to select a volume from the Target 1 volume list. TPC for Replication will automatically match all volumes from the selected LSS in the Host 1 storage subsystem with all volumes from the selected LSS in the Target 1 storage subsystem. In our example we selected LSS 0C VOL 08, which has the ML9C08 volser. Click Next to continue.
The next screen in Figure 7-22 on page 282 displays a message related to the matching results. If there is a warning, as in our example, you can click Show and TPC for Replication shows you all the Copy Sets in the session and the reason for the warning. In our example, the warning is due to the fact that the FlashCopy target volume is already defined in session MGM HS new. This is only a warning; it will not prevent you from creating this new session. Click Next to proceed with the creation of the FC1 session.
TPC for Replication shows you a confirmation panel, telling you the number of Copy Sets that are going to be created, as seen in Figure 7-23. Click Next to continue.
TPC for Replication may take some seconds to add all the Copy Sets, depending on how many of them you are adding. After that, it shows you the Results panel, as seen in Figure 7-24. The Copy Set creation completed successfully with one warning. You can click View individual results in this panel if you want to get additional information. Otherwise, click Finish to exit the Add Copy Sets wizard.
Figure 7-24 All Copy Sets are successfully added to the TPC-R database
Figure 7-25 confirms that all Copy Sets are successfully added to the FC1 session and the database is successfully updated. The session status is still Inactive.
This concludes the steps through the GUI when you add Copy Sets to a FlashCopy session.
TPC for Replication will pop up a window as shown in Figure 7-27 on page 285, showing a warning that you are about to initiate a FlashCopy session. It will cause a point-in-time relationship to be established from the source H1 volumes to the target T1 volumes, thus overwriting any data on the target T1 volumes. Click Yes to continue.
The message at the top of the screen in Figure 7-28 confirms that the FlashCopy session is started. The session has a Normal status and the state is Target Available. Click on the session name hyperlink (FC1 in our example) to find out more details on this session.
Once the Flash has been initiated against the FlashCopy session and it has Normal status and Target Available state, the following options are available:
Initiate Background Copy - copies all tracks from the source H1 volume to the target T1 volume immediately, instead of waiting until the source track is written to.
Flash - performs the FlashCopy operation using the option specified while defining the FlashCopy session (Incremental, Persistent or No Copy).
Terminate - terminates the session (under the Cleanup submenu).
The next message shown in Figure 7-30 is a warning that you are about to initiate background copy from the source H1 to the target T1 volume. Click Yes to continue.
There is a message at the top of the screen in Figure 7-31 indicating that Initiate Background Copy action has been successfully completed. The status of our FlashCopy session is Normal and the State is Target Available. Click on FlashCopy session name hyperlink (FC1 in our example) to find out more details and copy progress.
As shown in Figure 7-32, the copy progress is 100%, indicating that all data from the H1 source volumes has been copied to the T1 target FlashCopy volumes.
The next message shown in Figure 7-34 is a warning that you are about to terminate the FlashCopy relationship between H1 source and T1 target FlashCopy volumes. Click Yes to continue.
There is a message at the top of the screen in Figure 7-35 indicating that Terminate action has been successfully completed. The status of our FlashCopy session is now Inactive and the state is Defined.
Once the FlashCopy session is terminated the following option is available: Flash - performs the FlashCopy operation using the option specified while defining FlashCopy session (Incremental, Persistent or No Copy).
Select the Metro Mirror Single Direction session from the pull-down menu as shown in Figure 7-37 on page 292. On the right hand side a pictograph symbolizes the involved sites and their volume types. H1 represents Site 1 volumes and H2 represents Site 2 volumes. When you define Copy Sets this pictograph helps you to orient and understand the replication direction. Click Next to continue.
This leads us to the session Properties panel, as Figure 7-38 on page 292 shows. The Properties panel is also important because it requires that you specify at least a name for the session which is about to be created. We chose MM SD as the name for our session. An optional Description is recommended to understand the purpose of the session because the session name may not reveal what this session is intended for. The comments shown in Figure 7-38 on page 292 are about the storage servers involved in this session. You may add the location of each storage server and the date when the session was created or changed.
The Properties panel also gives you the options to define the new session. Here we give some details about what you can specify for a Metro Mirror Single Direction session.
Fail MM/GC if the target is online (CKD only)
This option ensures that all target (secondary) volumes in a Metro Mirror session are offline and not visible to any host; otherwise the start of the session task will fail. This applies to CKD volumes only. We recommend using this option, since target (secondary) volumes in a Metro Mirror session should be offline to all hosts.
Enable Hardened Freeze
Use this option to let the z/OS Input/Output Supervisor (IOS) manage freeze operations for the volumes in the session, which prevents Tivoli Storage Productivity Center for Replication from freezing the volumes and possibly freezing itself. We recommend using this option if you put system volumes, such as SYSRES and page data sets, into the Copy Sets of a Metro Mirror session. You need the following prerequisites to implement this function: z/OS at the 1.13 level, with APAR OA37632 installed; the z/OS address spaces Basic HyperSwap Management (HSIB) and Basic HyperSwap API (HSIBAPI) must be active, even if you are not going to exploit Basic HyperSwap.
Metro Mirror Suspend Policy
TPC for Replication, or z/OS IOS if you are using the Enable Hardened Freeze option, issues Freeze commands to all the LSSs defined in a MM session as soon as a replication error or certain primary disk errors occur. All the write I/Os are held at the primary volumes (H1), and then the replication between the H1 and H2 volumes is suspended. Those Freeze commands thus guarantee that the data on the H2 volumes is consistent. The action that TPC for Replication, or z/OS, takes subsequent to the Freeze is specified by the Metro Mirror Suspend Policy. You basically have two options available:
Hold I/O after Suspend - known as the Freeze and Stop policy. After a freeze, new writes are not allowed to the H1 volumes, thus stopping your production systems.
Release I/O after Suspend - known as the Freeze and Go policy. After a freeze, you can make new writes to the H1 volumes, but no replication will occur to the H2 volumes. This is the default setting for all new sessions.
Which option you select is really a business decision rather than an IT decision. If your Recovery Point Objective (RPO) is zero (that is, you cannot tolerate any data loss in case of a production site disaster), you must select Hold I/O after Suspend. This option will hold I/O at the production site on all volumes defined in the session. The volumes that belong to a session defined with Hold I/O after Suspend present an Extended Long Busy (ELB) state to all the systems that try to update those volumes. The Extended Long Busy timer is defined in the DS8000 or ESS box, at the LSS level. The default is 120 seconds. What happens when the timer expires depends on whether you are using Enable Hardened Freeze or not:
If you defined the session without Enable Hardened Freeze, I/O to the H1 (primary) volumes is automatically resumed after the timer expires. You can, however, allow I/O operations to these primary volumes before the timer expires by performing a Release I/O action in TPC for Replication.
If you defined the session with Enable Hardened Freeze, z/OS holds the I/O operations until you explicitly release them by issuing the z/OS command SETHS RESUMEIO. If you issue this command before the ELB expires, I/O operations continue
on hold until either the ELB expires or you perform a Release I/O action from TPC for Replication.
On the other hand, if the event that caused TPC for Replication to take the Hold I/O after Suspend action was a transient event (for example, a temporary link loss between sites) rather than a real disaster, you will have brought all production systems down unnecessarily.
If your RPO is higher than zero, you may decide to let the production systems continue operation once the secondary volumes have been protected. This is known as the Freeze and Go policy, and you need to select the Release I/O after Suspend option. In this case, if the trigger was only a transient event, you have avoided an unnecessary outage. On the other hand, if the trigger was the first sign of an actual disaster, you could continue operating for some amount of time before all systems actually fail (a so-called rolling disaster). Any updates made to the H1 volumes during this time will not have been remote copied, and therefore are lost. In our example we use the Release I/O after Suspend option, as shown previously in Figure 7-38 on page 292.
Reset Secondary Reserves
This option is not shown in the Properties panel when you create a new session, but it can be accessed later using the View/Modify Properties panel. Select this option to remove any persistent reserves that might be set on the target volumes of the copy sets being started when a Start command is issued for the session. Before enabling the Reset Secondary Reserves option, be aware that this action causes the session to overwrite all data on the target volume.
Click Next to define location sites. From the pull-down Site 1 Location menu (see Figure 7-39) select the storage subsystem where your primary volumes reside. Click Next to continue.
From the pull-down Site 2 Location menu (see Figure 7-40) select your H2 storage subsystem previously defined and click Next to continue.
Figure 7-41 displays the message that the MM SD session was successfully created. Click Finish to exit the Create Session wizard. Alternatively, you have an option to add Copy Sets and click on Launch Add Copy Sets Wizard and follow the instructions described in Add Copy Sets to a Metro Mirror session on page 296.
Go back to the TPC for Replication home page and select Sessions hyperlink to check on the recently created Metro Mirror session. Figure 7-42 now displays the MM SD session which we successfully created.
Figure 7-43 Add Copy Sets to Metro Mirror Single Direction session
Note that over time various terminology has been introduced for the same thing. Peer-to-Peer Remote Copy or PPRC (known today as Metro Mirror) started out with primary volumes and secondary volumes. This terminology was fine for a two-site solution. With the arrival of switching sites from an application viewpoint as well as for the storage subsystems used, TPC for Replication introduced a different terminology for these PPRC volumes and their association with a certain site. Host 1 or Site 1 refers to the primary volumes as a starting point. This may also be the local site or application site. It may be considered the customer's primary site, but it has the potential to change. A customer may want to switch application sites from Host 1 to Host 2 in Site 2, and also switch at the same time the Copy Services roles of the associated storage subsystems. This led to the Host 1, Site 1 volumes and Host 2, Site 2 volumes terminology being used in TPC for Replication.
Figure 7-44 on page 298 displays the panel which provides details on the primary volumes or local volumes, which are called Host 1 volumes because these volumes reside in Site 1 (also referred to as the application site or local site). These are all synonyms and refer to the same environment. Select the desired Host 1 storage subsystem from the pull-down menu and wait a few seconds to get the Host 1 logical storage subsystem list. Select the LSS where your H1 volume resides. Once the LSS has been selected, choose the appropriate volume from the Host 1 volume pull-down list. An alternative way to add a large number of volumes to this session is to create a CSV file as explained in Importing Copy Sets into an existing session on page 208. If you have an existing CSV file, select the Use a CSV file to import copy sets check box and provide the path to your CSV file. In our scenario we selected the DS8000 disk subsystem that we defined in our Host 1 location and the appropriate volumes, as shown in Figure 7-44 on page 298. Click Next to continue.
Figure 7-44 Add Copy Sets to Metro Mirror Single Direction Session - Choose Host 1
Choosing one volume at a time can be very tedious if you need to specify a large number of volumes. In this case, it is worth selecting the All Volumes option from the Host1 volume list as shown in Figure 7-45. You have the option to deselect the ones that you don't want in the session at a later time. Click Next to continue.
Figure 7-45 Add Copy Sets to Metro Mirror Single Direction session - Host 1 and All Volumes option
The next step is to define the Site 2 storage subsystems and volumes as shown in Figure 7-46. Select the desired Host 2 storage subsystem from the pull-down menu and wait a few seconds to get the Host 2 logical storage subsystem list. Select the LSS where your H2 volume resides. If All Volumes for a given LSS was selected in the previous step while defining the Host 1 volumes, you do not have an option to select any volume from the Host 2 volumes list. TPC for Replication will automatically match all volumes from the selected LSS in the Host 1 storage subsystem with all volumes from the selected LSS in the Host 2 storage subsystem. In our scenario we selected All Volumes in the Choose Host1 step. Click Next to continue.
Figure 7-46 Add Copy Sets to Metro Mirror Single Direction session - Choose Host 2 and All Volumes option
The next screen in Figure 7-47 displays a message with regard to the matching results. This warning message does not mean that the Copy Sets creation failed. Click Next to see the list of available Copy Sets and the reasons for the warning message.
Figure 7-47 Add Copy Sets to Metro Mirror Single Direction Session - Matching Results
In our example one volume pair has a warning, and if you click the Show hyperlink next to it, the message description appears as shown in Figure 7-48 on page 300. This particular H1 volume is already defined in the FC1 session that we created previously, as described in 7.2, FlashCopy session using GUI on page 272. In our case, we can ignore this warning because we are going to keep the FC1 session inactive. The rest of the volumes met the matching criteria and are automatically selected. You still have a chance to modify the current selection and deselect any of the volume pairs included in the list. The Show hyperlink next to each matching volume pair provides Copy Set
information. In our example, we clicked on the Deselect All button, then we selected some Copy Sets that were available. Click Next to continue.
Figure 7-48 Add Copy Sets to Metro Mirror Single Direction Session - Select Copy Sets
The next screen displays the number of Copy Sets which are going to be created, as well as the number of unresolved matches, as shown in Figure 7-49 on page 301. These unresolved matches are a result of our deselecting 47 volume pairs in the panel shown in Figure 7-48. Click Next to continue.
Figure 7-49 Add Copy Sets to Metro Mirror Single Direction session - Confirm
TPC for Replication internally adds these three Copy Sets to its database. Note that this does not start to establish Metro Mirror copy pairs; it is just a TPC for Replication internal process to add these Copy Sets to the TPC for Replication inventory database. It takes a few seconds for TPC for Replication to add the Copy Sets to this session. After that, it shows a panel with the results, as shown in Figure 7-50. The warning is still present, because we used a volume in one Copy Set in this MM SD session that was also used in another Copy Set for session FC1. Click Finish to exit the Add Copy Sets wizard.
Figure 7-51 on page 302 confirms that all Copy Sets (3) are successfully added to the session MM SD and the database is successfully updated. The session status is still Inactive.
This concludes the steps through the GUI when you add Copy Sets to a Metro Mirror session.
The display above shows that the device address 948A is offline to the JES3 address spaces on systems SC64, SC70, and SC65.
The next message shown in Figure 7-53 is a warning that you are about to initiate a Metro Mirror session. It will start copying data from the Host 1 volume to the Host 2 volume defined previously by adding the Copy Set, thus overwriting any data on the Host 2 volume. Click Yes to continue.
The message at the top of the screen in Figure 7-54 on page 304 confirms that the Metro Mirror session is started. The session is in Preparing state and Warning status. Click on the session name hyperlink (MM SD in our example) to find out more details on this session.
As shown in Figure 7-55 the Metro Mirror session has Normal status and there are no errors. The copy progress is 100% and the session has changed to Prepared state. Click the Sessions hyperlink in the Navigation tree area.
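If you want to cross-check the pair states directly on the DS8000, outside of TPC for Replication, the DS command-line interface (DSCLI) can be used. The following is a minimal sketch; the storage image ID and volume range are placeholders that you must replace with your own values:
dscli> lspprc -dev IBM.2107-75XXXXX 0C00-0C07
This lists the PPRC (Metro Mirror) pairs for the given source volume range; the State column should show Full Duplex for each pair once the session is in the Prepared state.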
Figure 7-56 on page 305 shows the health overview of our Metro Mirror session.
If the session was created with the Enable Hardened Freeze option, you see the following set of messages when all Metro Mirror pairs reach the Full Duplex state (Prepared):
IOSHM0200I MetroMirror Configuration Load complete
IOSHM0808I HyperSwap Configuration Monitoring started, time interval = 5 minutes
Although we are not using Basic HyperSwap, z/OS activates the HyperSwap Configuration Monitoring in order to watch for all the freeze triggers that can occur in any system in the sysplex.
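Because Enable Hardened Freeze relies on the Basic HyperSwap address spaces, it may be useful to confirm that they are running on each system before starting the session. A minimal sketch, assuming the address spaces use the default names HSIB and HSIBAPI (your installation's procedure names may differ):
D A,HSIB
D A,HSIBAPI
D HS,STATUS
The first two commands display the Basic HyperSwap Management and API address spaces if they are active; D HS,STATUS summarizes the HyperSwap (and Hardened Freeze) configuration status.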
Once the Metro Mirror session is started and it has Normal status and is in Prepared state, the following actions are available:
StartGC - switches from Metro Mirror to Global Copy between the H1 and H2 volumes.
Suspend - suspends the replication. The primary disk subsystems keep a bit map of changed tracks, to be used when you start the replication again.
Stop - stops replication with inconsistent H2 volumes.
Start - reestablishes replication.
Terminate - terminates the session (under the Cleanup submenu).
When you switch from synchronous replication to asynchronous replication, the data on the H2 volumes is no longer consistent. Because of this, TPC for Replication shows a warning as shown in Figure 7-58. Click Yes to confirm that you really want to convert the session to Global Copy mode.
Figure 7-59 on page 307 shows the Sessions panel. Session MM SD now has a Warning status and is in Preparing state. It shows the Preparing state as long as it stays in Global Copy mode.
You can switch this session back to synchronous replication (Metro Mirror) by selecting the session MM SD radio button, then Start from the Select Action pull down menu and clicking Go. Session MM SD status goes back to Normal, and its state changes back to Prepared.
The next message shown in Figure 7-61 is a warning that you are about to Suspend a Metro Mirror session. Note: Once the Suspend is initiated, all host I/O is temporarily halted if Release I/O after Suspend was selected during the Metro Mirror session creation step (see 7.3.1, Create Metro Mirror Single Direction session on page 291). If the Hold I/O after Suspend policy was selected, all systems that can update the H1 volumes are held until the Extended Long Busy (ELB) for CKD volumes ends (the default is 120 seconds). Click Yes to continue.
The status of our Metro Mirror session has changed from Normal to Severe, indicating that data is no longer replicated between the Host 1 and Host 2 volumes. The state of the session is Suspended, as shown in Figure 7-62.
If you suspend a Metro Mirror session that was created with Hold I/O after Suspend and Enable Hardened Freeze options, z/OS will block write operations against the H1 volumes (your application volumes) until you issue the SETHS RESUMEIO command from z/OS. You see a message similar to what is shown in Figure 7-63 on page 310 when you suspend this
session. Although HyperSwap is not enabled (it cannot be enabled in a Metro Mirror Single Direction session), the HyperSwap Monitoring was in effect. This HyperSwap Monitoring watches for all possible freeze triggers that can occur in the systems in the sysplex.
IOSHM0200I MetroMirror Configuration Purge complete
IOSHM0809I HyperSwap Configuration Monitoring stopped
IOSHM0308I PPRC suspension requested for replication session MM SD 324
FreezeAll and Stop completed. Reason: 0000
Figure 7-63 Metro Mirror session, with Enabled Hardened Freeze, suspended
Figure 7-64 shows a sample of the messages that you see when you issue the SETHS RESUMEIO command from any system in the sysplex. Message IOSHM0309I shows you the names of the z/OS images in the sysplex, the total number of devices processed for each system, and, eventually, the number of devices for which z/OS could not resume I/O operations.
SETHS RESUMEIO
IOSHM0309I SETHS RESUMEIO has ...
Member: SC63 Total ... 0
Member: SC64 Total ... 0
Member: SC70 Total ... 0
Member: SC65 Total ... 0
Figure 7-64 SETHS RESUMEIO
If you issue the SETHS RESUMEIO command before the Extended Long Busy (ELB) timer expires in a disk subsystem such as DS8000 or ESS, the write operations continue to be held by the disk subsystem. You can perform the Release I/O action in order to release all write I/Os to the application volumes before the ELB timer expires. If you suspend a Metro Mirror session which has been created with the Hold I/O after Suspend policy and without Hardened Freeze enabled (see 7.3.1, Create Metro Mirror Single Direction session on page 291), then all systems that can update the H1 volumes are held until the Extended Long Busy for CKD volumes ends (the default is 120 seconds). If you do not want to stop your production for 120 seconds, you have an option to release I/O immediately after the Metro Mirror session has been suspended. Select Release I/O from the Select Action pull-down menu and click Go to continue (see Figure 7-65).
Note: If you suspend a Metro Mirror session that was previously defined with the Hold I/O After Suspend policy, all the paths between the LSSs that are defined in this session are also removed. The next message shown in Figure 7-66 on page 312 is a warning that you are about to allow writes to continue to H1 volumes. Click Yes to continue.
Once the Metro Mirror session is suspended, the following options are available:
Start - reestablishes copying.
Recover - makes the H2 volumes available for host access and stops the copy process.
Release I/O - releases I/O to the H1 volumes after a Suspend event. This option is available only if the Metro Mirror session was created with the Hold I/O after Suspend policy.
Terminate - terminates the session (under the Cleanup submenu).
The next message shown in Figure 7-68 is a warning that you will make the H2 volumes available to your hosts. Click Yes to continue.
There is a message at the top of the screen in Figure 7-69 indicating that the Recover action has been successfully completed. The status of our Metro Mirror session is Normal and the state is Target Available, indicating that the H2 volumes are available to your host.
Figure 7-69 Metro Mirror session - H2 storage subsystem volume available to host
After the Metro Mirror session is recovered and in Target Available state, the following options are available:
Start - reestablishes the replication in synchronous mode (Metro Mirror).
StartGC - reestablishes the replication in Global Copy mode.
Release I/O - releases I/O to the H1 volumes after a Suspend event. This option is available only if the Metro Mirror session was created with the Hold I/O after Suspend policy.
Terminate - terminates the session (under the Cleanup submenu).
The next message shown in Figure 7-71 is a warning that you will stop data replication from the H1 to the H2 volumes and that the H2 volumes will not be consistent. Click Yes to continue.
There is a message at the top of the screen in Figure 7-72 on page 316 indicating that the Stop action has been successfully completed. The status of our Metro Mirror session is Severe and the state is Suspended. The Recoverable column in Figure 7-72 on page 316 has the value No, indicating that the H2 storage subsystem volumes are not consistent.
Once the Metro Mirror session is stopped, the following options are available:
Start - reestablishes Metro Mirror replication.
StartGC - reestablishes the replication in Global Copy mode.
Recover - makes the H2 volumes available for host access.
Release I/O - releases I/O to the H1 volumes after a Suspend event. This option is available only if the Metro Mirror session was created with the Hold I/O after Suspend policy.
Terminate - terminates the session (under the Cleanup submenu).
The next message shown in Figure 7-74 is a warning that you are about to terminate the Metro Mirror relationship between the H1 and H2 volumes. Note that if you need to start the very same Metro Mirror session again, a full copy from the H1 to the H2 volumes will be required. Click Yes to continue.
There is a message at the top of the screen in Figure 7-75 indicating that Terminate action has been successfully completed. The status of our Metro Mirror session is now Inactive and the state is Defined.
Once the Metro Mirror session is terminated, the following options are available:
Start - reestablishes replication in synchronous mode (Metro Mirror).
StartGC - reestablishes replication in Global Copy mode.
Select the hardware type of the storage subsystems that you are going to use for this session. In our example, we plan to implement a Global Mirror session between two DS8000 boxes. Select the Global Mirror Single Direction session from the pull-down menu as shown in Figure 7-77. On the right hand side a pictograph symbolizes the involved sites and their volume types. H1 represents Site 1 volumes, H2 represents Site 2 volumes, and J2 represents the journal volumes used to restore data to the last consistency point. When you define Copy Sets this pictograph helps you to orient and understand the replication direction. Click Next to continue.
This leads us to the session Properties panel, as Figure 7-78 shows. The Properties panel is also important because it requires that you specify at least a name for the session which is about to be created. An optional Description is recommended to understand the purpose of the session because the session name may not reveal what this session is intended for. The comments in Figure 7-78 are about the storage subsystems involved in this session. You may add the location of each storage server and the date when the session was created or changed. The Properties panel allows you to specify some parameters to monitor and control the Global Mirror session. These parameters are described in the following paragraphs:
Consistency Group interval time (sec)
This value specifies how long to wait between the formation of consistency groups. This is specified in seconds, and the default is zero (0) seconds. Zero seconds means that consistency group formation happens constantly: as soon as a consistency group is successfully created, the process to create a new consistency group starts again immediately.
Recovery Point Objective Alert
Specifies the length of time that you want to set for the recovery point objective (RPO) thresholds. The values determine whether a Warning or Severe alert is generated when the RPO threshold is exceeded for a role pair. The RPO represents the length of time in seconds of data exposure that is acceptable in the event of a disaster. The thresholds are specified in the following way:
Warning level threshold (seconds): when the RPO is greater than this value, a Warning alert is generated.
Severe level threshold (seconds): when the RPO is greater than this value, an alert is also generated, and the session status changes to Severe.
Fail MM/GC if the target is online (CKD only)
This option ensures that all target (secondary) volumes in a Global Mirror session are offline and not visible to any host; otherwise the create session task will fail. This applies to CKD volumes only. We recommend using this option, since target (secondary) volumes in a Global Mirror session should be offline to all hosts.
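To make the RPO thresholds concrete, here is a hypothetical setting; the values are illustrative only and are not a recommendation from this book:
Consistency Group interval time (sec): 0
RPO Warning level threshold (seconds): 60
RPO Severe level threshold (seconds): 300
With these values, consistency groups are formed continuously; if the measured RPO for a role pair rises above 60 seconds, a Warning alert is generated, and if it rises above 300 seconds, another alert is generated and the session status changes to Severe.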
From the pull-down Site 1 Location menu (see Figure 7-79), select the storage subsystem at your Site 1 location. Click Next to continue.
From the pull-down Site 2 Location menu (see Figure 7-80), select the storage subsystem at your Site 2 location. Click Next to continue.
Figure 7-81 on page 322 displays the message that the GM SD session was successfully created. Click Finish to exit the Create Session wizard. Alternatively, you can add Copy Sets by clicking Launch Add Copy Sets Wizard and following the instructions described in 7.4.2, Add Copy Sets to a Global Mirror session on page 323.
Go back to the TPC for Replication home page and select the Sessions hyperlink to check on the recently created Global Mirror session. Figure 7-82 now displays the Global Mirror Single Direction session which we successfully created.
Again, note that the session and its name are just a token that represents a Global Mirror Copy Services type. At this stage, no volumes are associated with this GM SD session.
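As an aside, the same check can be made without the GUI. TPC for Replication ships a command-line interface, csmcli, and its lssess command lists the defined sessions with their status and state. This is only a sketch; the available options and the output columns depend on your TPC for Replication level, so use the built-in help if in doubt.

csmcli> help lssess
csmcli> lssess

At this point the new GM SD session should be reported as Inactive and Defined, with no copy sets yet.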
Figure 7-83 Add Copy Sets to Global Mirror Single Direction session
Figure 7-84 on page 325 displays the panel which provides details on the primary volumes, or local volumes, which are called Host 1 volumes because these volumes reside in Site 1, the application site or local site. These are all synonyms and refer to the same environment. Select the desired Host 1 storage subsystem from the pull-down menu and wait a few seconds to get the Host 1 logical storage subsystem list. Select the LSS where your H1 volumes reside. Once the LSS has been selected, choose the appropriate volume from the Host 1 volume pull-down list, or select All Volumes. An alternative way to add a large number of volumes to this session is to create a CSV file, as explained in Using CSV files for importing and exporting sessions on page 196. If you have a CSV file ready, select the Use a CSV file to import copy sets check box and provide the path to your CSV file. Click Next to continue.
Figure 7-84 Add Copy Sets to Global Mirror Single Direction session - Choose Host 1
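For orientation, if you decide to use the CSV import option mentioned above, the file is a plain comma-separated text file with one column per volume role of the session (H1, H2, and J2 for a Global Mirror Single Direction session). The fragment below is only an illustrative sketch: the storage system serial numbers and volume IDs are hypothetical, and the exact layout rules are described in Using CSV files for importing and exporting sessions on page 196.

#Session GM SD, Global Mirror Single Direction
H1,H2,J2
DS8000:2107.AB121:VOL:0C00,DS8000:2107.CD341:VOL:0C00,DS8000:2107.CD341:VOL:0D00
DS8000:2107.AB121:VOL:0C01,DS8000:2107.CD341:VOL:0C01,DS8000:2107.CD341:VOL:0D01

Each data row defines one copy set; the volume identifiers must match the element names that TPC for Replication reports for your storage subsystems.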
The next step is to define the Site 2 storage subsystems and volumes, as shown in Figure 7-85 on page 325. Select the desired Host 2 storage subsystem from the pull-down menu and wait a few seconds to get the Host 2 logical storage subsystem list. Select the LSS where your H2 volume resides. If you selected All Volumes for a given LSS in the previous step while defining the Host 1 volumes, you do not have an option to select any volume from the Host 2 volume list. TPC for Replication automatically matches all volumes from the selected LSS in the Host 1 storage subsystem with all volumes from the selected LSS in the Host 2 storage subsystem. In our example we selected All Volumes in the Choose Host 1 step. Click Next to continue.
Figure 7-85 Add Copy Sets to Global Mirror Single Direction session - Choose Host 2 volumes
To complete the Global Mirror Copy Set definition, we need to define the J2 journal volume, as shown in Figure 7-86. Select the desired Journal 2 storage subsystem from the pull-down menu and wait a few seconds to get the Journal 2 logical storage subsystem list. Select the LSS where your J2 volume resides. If you selected All Volumes for a given LSS in the previous steps while defining the Host 1 and Host 2 volumes, you do not have an option to select any volume from the Journal 2 volume list. TPC for Replication automatically matches all volumes from the selected LSS in the Host 2 storage subsystem with all volumes from the selected LSS in the Journal 2 storage subsystem. Click Next to continue.
Figure 7-86 Add Copy Sets to Global Mirror Single Direction session - Choose Journal 2 volumes
The next screen in Figure 7-87 on page 327 displays the matching results. You can click the Add More button to continue adding Copy Sets to this session. Click Next after you have added all the Copy Sets you need.
Figure 7-87 Add Copy Sets to Global Mirror Single Direction session - Matching Results
The next screen displays the number of Copy Sets which are going to be created, as shown in Figure 7-88. Click Next to continue.
Figure 7-88 Add Copy Sets to Global Mirror Single Direction session - Confirm
TPC for Replication internally adds the Copy Sets to its database. Note that this does not start the Global Mirror relationship between the H1, H2, and J2 volumes; it is just a TPC for Replication internal process that adds these Copy Sets to the TPC for Replication inventory database. After a few seconds, TPC for Replication displays the Results pane, as shown in Figure 7-89 on page 328. Click Finish to exit the Add Copy Sets wizard.
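If you want to double-check from the command line that the copy sets were added to the inventory database, the csmcli lscpset command lists the copy sets that are stored for a session. Treat the invocation below as a sketch; check help lscpset for the exact parameters at your level, and quote the session name if it contains blanks.

csmcli> help lscpset
csmcli> lscpset "GM SD"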
Figure 7-90 confirms that all Copy Sets were successfully added to the GM SD session and that the TPC for Replication repository database was successfully updated. The session status is still Inactive and the state is Defined.
This concludes the steps through the GUI when you add Copy Sets to a Global Mirror session.
The next message shown in Figure 7-92 on page 330 is a warning that you are about to initiate a Global Mirror session. It will start the copying of data from the Host 1 to the Host 2 volumes defined previously by adding Copy Sets, thus overwriting any data on the Host 2 volumes. Furthermore, it will initiate a FlashCopy to the J2 journal volumes. Click Yes to continue.
The message at the top of the screen in Figure 7-93 confirms that the Global Mirror session is started. The session is in the Preparing state and Warning status. Click on the session name hyperlink (GM SD in our example) to find more details on this session.
As shown in Figure 7-94 on page 331 the Global Mirror session is still in Preparing state since the initial copy of data between H1 and H2 volumes is still in progress, and consistency groups are not yet being formed.
Wait until the session status changes to Normal, as shown in Figure 7-95 on page 331. This means that the initial copy has finished and the first consistency group has been created.
Once the Global Mirror session is started and has Normal status and is in the Prepared state, the following options are available:
Suspend - stops replication to the H2 volumes, while the Site 1 storage subsystem keeps track of changes being applied to tracks on the H1 volumes
Start - reestablishes replication
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-97 is a warning that you are about to suspend the Global Mirror session. Click Yes to continue.
The status of our Global Mirror session has changed from Normal to Severe, indicating that data is no longer replicated between the Host 1 and Host 2 volumes. The state of the session is Suspended, as indicated in Figure 7-98.
The next message shown in Figure 7-100 is a warning that you are about to make the H2 volumes available to your hosts. Click Yes to continue.
There is a message at the top of the screen in Figure 7-101 indicating that the Recover action has completed successfully. The status of our Global Mirror session is Normal and the state is Target Available, indicating that the H2 volumes are available to your hosts.
After the Global Mirror session is recovered and in the Target Available state, the following options are available:
Start - reestablishes replication
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-103 on page 336 is a warning that you are about to terminate the Global Mirror relationship between the H1, H2, and J2 volumes. Note that if you need to start the same Global Mirror session again, a full copy from the H1 to the H2 volumes will be required. Click Yes to continue.
There is a message at the top of the screen in Figure 7-104 indicating that the Terminate action has completed successfully. The status of our Global Mirror session is now Inactive and the state is Defined.
Select the hardware type (DS8000 in our example) and select the Metro Mirror Failover/Failback session from the pull-down menu, as shown in Figure 7-106 on page 338. On the right-hand side, a pictograph symbolizes the involved sites and their volume types. H1 represents the Site 1 volumes and H2 represents the Site 2 volumes. When you define Copy Sets, this pictograph helps you orient yourself and understand the replication direction. Click Next to continue.
This leads us to the session Properties panel, as Figure 7-107 on page 341 shows. The Properties panel is important because it requires that you specify at least a name for the session that is about to be created. An optional Description is recommended so that the purpose of the session is clear, because the session name alone may not reveal what the session is intended for. The Properties panel also gives you the options to define the new session. TPC for Replication allows you to specify additional options to work with HyperSwap when defining a Metro Mirror Failover/Failback session.
Fail MM/GC if the target is online (CKD only) - This option ensures that all target (secondary) volumes in a Metro Mirror session are offline and not visible to any host; otherwise, the start of the session task fails. This applies to CKD volumes only. We recommend using this option because target (secondary) volumes in a Metro Mirror session should be offline to all hosts.
Enable Hardened Freeze - Use this option to let the z/OS Input/Output Supervisor (IOS) manage freeze operations for the volumes in the session, which prevents Tivoli Storage Productivity Center for Replication from freezing the volumes and possibly freezing itself. We recommend using this option if you put system volumes, such as SYSRES, into the Copy Sets of a Metro Mirror session. The following prerequisites are needed to implement this function: z/OS at the 1.13 level, with APAR OA37632 installed; the z/OS address spaces Basic HyperSwap Management and Basic HyperSwap API must be active, even if you are not going to exploit Basic HyperSwap.
Metro Mirror Suspend Policy
TPC for Replication, or z/OS IOS if you are using the Enable Hardened Freeze option, issues Freeze commands to all the LSSs defined in a Metro Mirror session as soon as a replication error or certain primary disk errors occur. All write I/Os are held at the primary volumes (H1), and then the replication between the H1 and H2 volumes is suspended. These Freeze commands thus guarantee that the data on the H2 volumes is consistent. The action that TPC for Replication, or z/OS, takes after the Freeze is specified by the Metro Mirror Suspend Policy. You have two options available:
Hold I/O after Suspend - known as the Freeze and Stop policy. After a freeze, new writes are not allowed to the H1 volumes, thus stopping your production systems.
Release I/O after Suspend - known as the Freeze and Go policy. After a freeze, you can make new writes to the H1 volumes, but no replication occurs to the H2 volumes. This is the default setting for all new sessions.
Which option you select is really a business decision rather than an IT decision. If your Recovery Point Objective (RPO) is zero (that is, you cannot tolerate any data loss in case of a production site disaster), you must select Hold I/O after Suspend. This option holds I/O at the production site on all volumes defined in the session. Because all systems that can update the production site volumes are held until the Extended Long Busy (ELB) condition for CKD volumes ends (the default is 120 seconds), you are sure that no updates are made to the production volumes that are not mirrored to the volumes at the disaster recovery (DR) site. If you do not take any action against the session within this ELB time, I/O is released to the production volumes when the ELB timer expires. On the other hand, if the event that caused TPC for Replication to take the Hold I/O after Suspend action was a transient event (for example, links temporarily lost between sites) rather than a real disaster, you will have brought all production systems down unnecessarily. If your RPO is higher than zero, you may decide to let the production systems continue operation once the secondary volumes have been protected. This is known as the Freeze and Go policy, and you need to select the Release I/O after Suspend option. In this case, if the trigger was only a transient event, you have avoided an unnecessary outage. On the other hand, if the trigger was the first sign of an actual disaster, you could continue operating for some time before all systems actually fail (a so-called rolling disaster). Any updates made to the H1 volumes during this time will not have been remote copied, and are therefore lost. In our example, we use the Release I/O after Suspend option, as shown in Figure 7-107 on page 341.
Reset Secondary Reserves - This option is not shown in the Properties panel when you create a new session, but it can be accessed later using the View/Modify Properties panel. Select this option to remove any persistent reserves that might be set on the target volumes of the copy sets being started when a Start command is issued for the session. Before enabling the Reset Secondary Reserves option, be aware that this action causes the session to overwrite all data on the target volumes.
Manage H1-H2 with HyperSwap - Select this option to trigger a HyperSwap, redirecting application I/O to the secondary volumes (H2), either when an error occurs on one of the primary volumes (H1) or as part of a planned procedure. The following options control the behavior of z/OS regarding the HyperSwap configuration:
Disable HyperSwap - You can use this option to start the Basic HyperSwap session with HyperSwap disabled, for example for test purposes. You can enable HyperSwap later by modifying this option in the Session properties panel, or by issuing the SETHS ENABLE console command on z/OS.
On Configuration Error - These options tell your sysplex what to do when a member of the sysplex fails to load a new HyperSwap configuration:
Partition the system(s) out of the sysplex: the system that cannot load the new configuration enters a 0B5 wait state and is partitioned out of the sysplex. You have to re-IPL it.
Disable HyperSwap: HyperSwap is disabled when a new HyperSwap configuration cannot be loaded on any system in the sysplex.
The default for this option is Partition the system(s) out of the sysplex. However, we recommend that you specify Disable HyperSwap while you are testing, to avoid unnecessary outages caused by systems being taken out of the sysplex because of a configuration error. As soon as you become skilled with the process of loading new configurations, you can change it back to Partition the system(s) out of the sysplex.
On planned HyperSwap error - These options specify what is done if a planned HyperSwap fails on any system in the sysplex:
The first radio button specifies that if a system fails to complete a planned HyperSwap, it is partitioned out of the sysplex. The other images have their UCBs swapped by the HyperSwap operation.
The second radio button specifies that the HyperSwap operation is backed out and HyperSwap is disabled.
We recommend that you select the Disable HyperSwap option for planned HyperSwap errors. It allows other partitions to complete the HyperSwap and resume their workloads sooner.
On unplanned HyperSwap error - These options dictate how the sysplex behaves in case of an unplanned HyperSwap. The choices are the same as for a planned HyperSwap:
Partition the system that cannot complete the HyperSwap operation out of the sysplex. That system enters a 0B5 wait state and is partitioned out of the sysplex. You have to re-IPL this system.
Back out the HyperSwap operation and disable HyperSwap.
We recommend that you use the option to partition the system(s) out of the sysplex. The other systems in the sysplex will survive the HyperSwap and continue to run even if an entire disk subsystem box fails.
Manage H1-H2 with Open HyperSwap - If volumes are attached to an IBM AIX host, TPC for Replication can manage the swapping and the flow of replication of the H1 and H2 volumes in a Metro Mirror Failover/Failback session by using Open HyperSwap. If this option is selected, a failure on the host-accessible volumes triggers a swap, which redirects application I/O to the secondary volumes.
Disable Open HyperSwap - Select this option to prevent a swap from being triggered by a command or event, while keeping the configuration on the host system and all primary and secondary volumes coupled. After you choose all the properties that are relevant to your environment (as shown in Figure 7-107), click Next to define the location sites.
From the pull-down Site 1 Location menu (see Figure 7-108 on page 342) select the location of your H1 storage subsystem previously defined in Adding IBM ESS or DS Storage Server to Tivoli Storage Productivity Center for Replication server on page 141 and click Next to continue.
From the pull-down Site 2 Location menu (see Figure 7-109) select the location of your H2 storage subsystem previously defined and click Next to continue.
Figure 7-110 on page 343 displays the message that the MM FO-FB session was successfully created. Click Finish to exit the Create Session wizard. Alternatively, you can add Copy Sets by clicking Launch Add Copy Sets Wizard and following the
instructions described in 7.5.2, Add Copy Sets to a Metro Mirror Failover/Failback session on page 344.
Go back to the TPC for Replication home page and select the Sessions hyperlink to check on the recently created Metro Mirror Failover/Failback session. Figure 7-111 now displays the Metro Mirror Failover/Failback session which we successfully created.
Again, note that the session and its name are just a token that represents a Metro Mirror Failover/Failback Copy Services type. At this stage, no storage server or volumes are associated with this MM FO-FB session. Refer to 7.3.3, JES3 considerations on page 302 if you intend to exploit the Enable Hardened Freeze or the HyperSwap options in a JES3 environment.
Note that over time various terminology has been introduced for the same thing. Peer-to-Peer Remote Copy, or PPRC (known today as Metro Mirror or Global Mirror), started out with primary volumes and secondary volumes. This terminology was fine for a two-site solution. With the arrival of site switching, from an application viewpoint as well as for the storage subsystems used, TPC for Replication introduced a different terminology for these PPRC volumes and their association with a certain site. Host 1 or Site 1 refers to the primary volumes as a starting point. This may be the local site or application site, and it is considered to be the primary site, but it has the potential to change. You may want to switch application sites from Host 1 in Site 1 to Host 2 in Site 2 and, at the same time, switch the Copy Services roles of the associated storage subsystems. This led to the Host 1, Site 1 volumes and Host 2, Site 2 volumes terminology used in TPC for Replication.
Figure 7-113 on page 345 displays the panel which provides details on the primary volumes, or local volumes, which are called Host 1 volumes because these volumes reside in Site 1, the application site or local site. These are all synonyms and refer to the same environment. Select the desired Host 1 storage subsystem from the pull-down menu and wait a few seconds to get the Host 1 logical storage subsystem list. Select the LSS where your H1 volume resides. Once the LSS has been selected, choose the appropriate volume from the Host 1 volume pull-down list. An alternative way to add a large number of volumes to this session is to create a CSV file, as explained in Using CSV files for importing and exporting sessions on page 196. If you have a CSV file ready, select the Use a CSV file to import copy sets check box and provide the path to your CSV file. In our example we selected DS8000 disk subsystem serial number L3331, LSS 0C, and volume 07, as shown in Figure 7-113. Click Next to continue.
Figure 7-113 Add Copy Sets to Metro Mirror FO/FB session- Choose Host 1
If you want to define all volumes within a certain LSS in the Metro Mirror session, you can select All Volumes from the Host 1 volume list, as shown in Figure 7-114. Click Next to continue.
Figure 7-114 Add Copy Sets to Metro FO/FB session - Choose Host 1 and All Volumes option
The next step is to define the Site 2 storage subsystems and volumes, as shown in Figure 7-115. Select the desired Host 2 storage subsystem from the pull-down menu and wait a few seconds to get the Host 2 logical storage subsystem list. Select the LSS where your H2 volumes reside. If All Volumes was selected for a given LSS in the previous step while defining the Host 1 volumes, you do not have an option to select any volume from the Host 2 volume list. TPC for Replication automatically matches all volumes from the selected LSS in the Host 1 storage subsystem with all volumes from the selected LSS in the Host 2 storage subsystem. In our example we selected All Volumes in the Choose Host 1 panel. Click Next to continue.
Figure 7-115 Add Copy Sets to Metro Mirror FO/FB session - Choose Host 2 volumes
The next screen in Figure 7-116 on page 348 displays a warning with regard to the matching results. By clicking the Show hyperlink, we can see more details about this Copy Set, including the reason for the warning. In our example, we are adding a Copy Set that is already defined in a session called BH_Robert. We can ignore this warning as long as the BH_Robert session remains Inactive. You have the option to add more Copy Sets by clicking the Add More button. When you have defined all the Copy Sets that you need, click Next to continue.
Figure 7-116 Add Copy Sets to Metro Mirror FO/FB session - Matching Results
The next screen displays the number of Copy Sets which are going to be created, as well as the number of unresolved matches (copy sets that were not selected), as shown in Figure 7-117. Click Next to continue.
Figure 7-117 Add Copy Sets to Metro Mirror FO/FB session - Confirm panel
TPC for Replication shows you a progress panel while it adds all the Copy Sets to its database. After a few seconds the progress panel reaches 100%, leaves the Adding Copy Sets panel, and proceeds to the next panel shown in Figure 7-118. Click Finish to exit the Add Copy Sets wizard. You have the opportunity to review the results and take appropriate action, if you need to.
Figure 7-118 All Copy Sets are added to the TPC for Replication database
Figure 7-119 confirms that all Copy Sets are successfully added to the MM FO-FB session and the database is successfully updated. The session status is still Inactive.
This concludes the steps through the GUI when you add Copy Sets to a Metro Mirror Failover/Failback session.
The next message shown in Figure 7-121 is a warning that you are about to initiate a Metro Mirror session. It will start the copying of data from Host 1 to Host 2 volumes defined previously by adding Copy Sets, thus overwriting any data on Host 2 volumes. Click Yes to continue.
The message at the top of the screen in Figure 7-122 confirms that the start of the Metro Mirror session is completed. The session is in the Preparing state and Warning status. You can click on the session name hyperlink (MM FO-FB in our example) to find more details on this session and track the progress of the initial copy.
Figure 7-122 Start of Metro Mirror session is reported as Completed
When all Copy Sets in the session reach Full Duplex (Prepared state), the state of the session changes to Prepared and its status to Normal, as shown in Figure 7-123.
Once the Metro Mirror session is started and has Normal status and is in the Prepared state, the following options are available:
HyperSwap - performs a Basic HyperSwap, directing application I/Os to the H2 volumes
Start H1->H2 - restarts replication from the H1 to the H2 volumes
Stop - stops replication with inconsistent H2 volumes
Suspend - stops replication with consistent H2 volumes
Terminate - terminates the session (under the Cleanup submenu)
7.5.4 Hyperswap
You can perform a planned HyperSwap from the TPC for Replication GUI. This can be useful when you need to do maintenance at the site where your applications are currently running and you need to keep them active during the maintenance period. From the Sessions panel, select the radio button of the Metro Mirror session you are working on (MM FO-FB in our example), select HyperSwap from the Select Action pull-down list, and click Go, as shown in Figure 7-124.
TPC for Replication shows you a warning. This warning tells you that your application I/Os are going to be moved from Site1 volumes to Site2 volumes. Click Yes to confirm that you want this swap to happen, as shown in Figure 7-125 on page 353.
After the HyperSwap completes successfully, the session is in the Target Available state with Active Host H2, meaning that your application I/Os are now directed to the H2 volumes in Site 2, as shown in Figure 7-126 on page 354.
The Metro Mirror replication is now suspended. The disk subsystems (ESS or DS8000) in Site 2 create bitmaps to track all updates that your application systems make to the Site 2 volumes and that are not yet replicated to the Site 1 volumes. HyperSwap is disabled at this time.
Return to Site1
After the maintenance on Site 1 is finished, you will probably want to move your application I/Os back to the H1 (Site 1) volumes. The first step to go home is to establish replication from the H2 to the H1 volumes. You can start the replication in two ways:
Start H2->H1 - starts Metro Mirror replication from the H2 to the H1 volumes. If you start the session by using the Start H2->H1 action, the session enters the Prepared state and Normal status as soon as all updated tracks from the H2 volumes are copied to the H1 volumes and the replication enters synchronous mode for all the Copy Sets in the session.
StartGC H2->H1 - starts Global Copy replication from the H2 to the H1 volumes. The replication stays asynchronous until you perform a Start H2->H1 action.
In our example, we restart our session by performing the StartGC H2->H1 action. From the Sessions panel you can start the replication in Global Copy mode by choosing StartGC H2->H1 from the Select Action pull-down menu and clicking Go, as shown in Figure 7-127.
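The same session actions can also be issued from csmcli with the cmdsess command, which is useful for scripted operations. The action keywords below are assumptions based on the GUI action names (StartGC H2->H1 and Start H2->H1); the exact spelling of the -action values differs by release, so list them with help cmdsess on your system before using them.

csmcli> help cmdsess
csmcli> cmdsess -action startgc_h2:h1 "MM FO-FB"
csmcli> cmdsess -action start_h2:h1 "MM FO-FB"

The first command corresponds to the StartGC H2->H1 action, and the second to Start H2->H1, issued once the Global Copy pass is nearly complete.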
TPC for Replication asks you to confirm this operation. It shows you a warning that data will be overwritten on inactive Copy Sets. This is just a warning for Copy Sets that might have been added to this session after the execution of the HyperSwap. TPC for Replication also warns you that it will try to establish at least one path for each LSS pair defined in the session. Click Yes to continue. See Figure 7-128 on page 355.
The session enters a Warning Status and Preparing State. You can monitor the progress of the copying of tracks between the H2 and H1 volumes by clicking on the session name hyperlink in the Sessions panel. You see details for the session as shown in Figure 7-129.
When the progress bar gets near 100%, you can switch the replication mode from Global Copy to Metro Mirror by selecting Start H2->H1 from the Select Action pull-down list, as seen in Figure 7-130.
Again, TPC for Replication shows you a warning saying that it is going to overwrite any data that might exist on H1 volumes that were added recently, after the HyperSwap, and asks for your confirmation. Click Yes to continue.
The session reaches Normal status and the Prepared state as soon as all Copy Sets are in synchronous mode (Full Duplex). Also, HyperSwap is enabled again, as seen in Figure 7-131.
To proceed with the go-home activities, you need to perform a HyperSwap again so that your application I/Os are directed to the H1 volumes again. You can perform the HyperSwap action from the Session Details panel by selecting HyperSwap from the Select Action pull-down menu and clicking Go, as seen in Figure 7-132.
TPC for Replication warns you that this HyperSwap will move application I/Os to Site1 as shown in Figure 7-133. Click Yes to continue.
After the HyperSwap completes successfully, you see the session in Normal status and the Target Available state, as shown in Figure 7-134. Your application I/Os are directed to the H1 volumes, and the replication between the H1 and H2 volumes is suspended. HyperSwap is disabled at this time.
It is now time to reestablish the replication between the H1 and H2 volumes by using either Start H1->H2 or StartGC H1->H2. We choose Start H1->H2, as shown in Figure 7-135.
TPC for Replication shows you a warning telling you that this operation can overwrite any data on Site 2 disks that are in Copy Sets that are inactive at the moment. It also warns you that it is going to establish at least one path between each LSS pair in this session. Click Yes to continue.
After all the Copy Sets are in synchronous replication, the session status is Normal, and its state is Prepared again as seen in Figure 7-137. HyperSwap is also enabled again.
The next message shown in Figure 7-139 is a warning that you are about to suspend the Metro Mirror session. Note: Once the Suspend is initiated, all host I/O is temporarily halted if Release I/O after Suspend was selected during the Metro Mirror session creation step (see Figure 7-107 on page 341). If the Hold I/O after Suspend policy was selected, all systems that can update the H1 volumes are held until the Extended Long Busy (ELB) for CKD volumes ends (the default is 120 seconds). Click Yes to continue.
The status of our Metro Mirror session has changed from Normal to Severe, indicating that data is no longer replicated between the Host 1 and Host 2 volumes. The state of the session is Suspended, as indicated in the Session Details panel shown in Figure 7-140.
If you suspend a Metro Mirror session that was created with the Hold I/O after Suspend policy and the Enable Hardened Freeze or Manage H1-H2 with HyperSwap options, z/OS blocks write operations against the H1 volumes (your application volumes) until you issue the SETHS RESUMEIO command from a z/OS console.
You can find more details about the SETHS RESUMEIO command in 7.3.6, Suspend Metro Mirror session on page 308. If you suspend a Metro Mirror session that was created with the Hold I/O after Suspend policy and with neither the Enable Hardened Freeze nor the Manage H1-H2 with HyperSwap option, all systems that can update the H1 volumes are held during the Extended Long Busy (ELB) condition that the disk subsystems (ESS or DS8000) present. The default ELB is 120 seconds. If you do not want to stop your production systems for 120 seconds, you can release I/O immediately after the Metro Mirror session has been suspended. Select Release I/O from the Select Action pull-down menu and click Go to continue (see Figure 7-141).
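For quick reference, the z/OS console commands that are relevant here are summarized below. SETHS RESUMEIO and SETHS ENABLE are the commands referred to in this section; D HS,STATUS is the Basic HyperSwap display command that we assume is available at your z/OS level to check the current state. Verify the exact syntax in the z/OS MVS System Commands reference for your release.

D HS,STATUS         displays the current Basic HyperSwap status
SETHS RESUMEIO      releases write I/O that is held at the H1 volumes after a freeze
SETHS ENABLE        re-enables HyperSwap after it has been disabled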
The next message shown in Figure 7-142 on page 364 is a warning that you are about to allow writes to continue to H1 volumes. Click Yes to continue.
Once the Metro Mirror session is suspended, the following options are available:
Start H1->H2 - restarts Metro Mirror replication from the H1 to the H2 volumes
StartGC H1->H2 - restarts Global Copy replication from the H1 to the H2 volumes
Recover - makes the H2 volumes available for application systems
Release I/O - releases I/O to the H1 volumes after a Suspend event. This option is available only if the Metro Mirror session is created with the Hold I/O after Suspend policy.
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-144 is a warning that you are about to make the H2 volumes available to your host. Click Yes to continue.
There is a message at the top of the screen in Figure 7-145 indicating that the Recover action has completed successfully. The status of our Metro Mirror session is Normal and the state is Target Available, indicating that the H2 volume is available to your host.
Figure 7-145 Metro Mirror FO/FB session - H2 storage subsystem volume available to host
After the Metro Mirror session is recovered and in the Target Available state, the following options are available:
Start H1->H2 - restarts the replication from the H1 to the H2 volumes in Metro Mirror mode
StartGC H1->H2 - restarts the replication from the H1 to the H2 volumes in Global Copy mode
Release I/O - releases I/O to the H1 volumes after a Suspend event. This option is available only if the Metro Mirror session is created with the Hold I/O after Suspend policy.
Enable Copy to Site 1 - before reversing the direction of copying in a failover and failback session, you must run this command and confirm that you want to reverse the direction of replication
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-147 is a warning that you are about to enable the command which initiates copying data from H2 to H1 volumes. This command is disabled to protect against accidentally copying over production data. Ensure that all of the volumes in this session located at Site 1 are not being used by any application prior to enabling the command that allows copying data to Site 1. Click Yes to continue.
The message at the top of the screen in Figure 7-148 confirms that Enable Copy to Site 1 command is completed. The status of our Metro Mirror session is the same as it was after Recover command: Normal status and Target Available state, indicating that H2 volume is available to your host.
The following options are available at this stage:
Start H2->H1 - starts replication from the H2 to the H1 volumes in Metro Mirror mode
StartGC H2->H1 - starts replication from the H2 to the H1 volumes in Global Copy mode
Release I/O - releases I/O to the H1 volumes after a Suspend event. This option is available only if the Metro Mirror session is created with the Hold I/O after Suspend policy.
Enable Copy to Site 2 - enables the Start H1->H2 and StartGC H1->H2 actions
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-150 is a warning that you are about to start the Metro Mirror session. It will start copying data from the Host 2 to the Host 1 volumes, thus overwriting any data on the Host 1 volumes. Click Yes to continue.
After all H2-H1 pairs are synchronized again, the session status changes to Normal and its state changes to Prepared, as seen in the Session Details panel shown in Figure 7-151.
Once the Metro Mirror session is started and has Normal status and is in the Prepared state, the following options are available:
Start H2->H1 - restarts replication from the H2 to the H1 volumes in Metro Mirror mode
StartGC H2->H1 - converts the replication from the H2 to the H1 volumes to Global Copy mode
Stop - stops replication with inconsistent H1 volumes
Suspend - stops replication with consistent H1 volumes
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-153 is a warning that you are about to enable the command which initiates copying data from H1 to H2 volumes. This command is disabled to protect against accidentally copying over production data. Ensure that all of the volumes in this session located at Site 2 are not being used by any application prior to enabling the command that allows copying data to Site 2. Click Yes to continue.
The status of our Metro Mirror session is the same as it was after Recover command: Normal status and Target Available state, as seen in the Session Details panel shown in Figure 7-154, and the H1 volumes are available to your system applications.
The following options are available at this stage:
Start H1->H2 - restarts replication from the H1 to the H2 volumes in Metro Mirror mode
StartGC H1->H2 - restarts replication from the H1 to the H2 volumes in Global Copy mode
Release I/O - releases I/O to the H1 volumes after a Suspend event. This option is available only if the Metro Mirror session is created with the Hold I/O after Suspend policy.
Enable Copy to Site 1 - enables the Start H2->H1 and StartGC H2->H1 actions
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-156 is a warning that you will stop data replication from the H1 to the H2 volumes and that the H2 volumes will not be consistent. Click Yes to continue.
There is a message at the top of the Sessions panel, or the Session Details panel as shown in Figure 7-157 on page 374, indicating that the Stop action has completed successfully. The status of our Metro Mirror session is Severe and the state is Suspended.
The Recoverable column in Figure 7-157 has the value No, indicating that the H2 storage subsystem volumes are not consistent.
Once the Metro Mirror session is stopped, the following options are available:
Start H1->H2 - (if the stop was issued with H1 active volumes) restarts replication from the H1 to the H2 volumes in Metro Mirror mode
StartGC H1->H2 - (if the stop was issued with H1 active volumes) restarts replication from the H1 to the H2 volumes in Global Copy mode
Start H2->H1 - (if the stop was issued with H2 active volumes) restarts replication from the H2 to the H1 volumes in Metro Mirror mode
StartGC H2->H1 - (if the stop was issued with H2 active volumes) restarts replication from the H2 to the H1 volumes in Global Copy mode
Recover - makes the target (secondary) volumes available for host access. Attention: the target (secondary) volumes may not be consistent after a Stop action.
Release I/O - releases I/O to the H1 or H2 volumes after a Suspend event. This option is available only if the Metro Mirror session is created with the Hold I/O after Suspend policy.
Terminate - terminates the session (under the Cleanup submenu)
Once the Metro Mirror session has been terminated, starting it again requires a full copy from the H1 to the H2 volumes, or from the H2 to the H1 volumes, depending on the copy direction. Select the Metro Mirror session radio button (MM FO-FB in our example, shown in Figure 7-158) and the Terminate action from the Select Action pull-down menu. Click Go to continue. In our example, we terminated the session when the copy direction was from the H1 volumes to the H2 volumes.
The next message shown in Figure 7-159 on page 376 is a warning that you are about to terminate the Metro Mirror relationship between the H1 and H2 volumes. Note that if you need to start the same Metro Mirror session again, a full copy between the volumes in the Metro Mirror session will be required. Click Yes to continue.
There is a message at the top of the screen, as shown in the Session Details panel in Figure 7-160, indicating that the Terminate action has completed successfully. The status of our Metro Mirror session is now Inactive and the state is Defined.
Once the Metro Mirror session is terminated, the following options are available:
Start H1->H2 - starts replication from the H1 to the H2 volumes in Metro Mirror mode
StartGC H1->H2 - starts replication from the H1 to the H2 volumes in Global Copy mode
Choose the hardware type and choose the Metro Mirror Failover/Failback w/ Practice session from the pull-down menus, as shown in Figure 7-162 on page 378. On the right-hand side, a pictograph symbolizes the involved sites and their volume types. H1 represents the Site 1 volumes, H2 represents the Site 2 volumes, and I2 represents the Site 2 Intermediate volumes, which are actually used as the copy target. When you define Copy Sets, this pictograph helps you orient yourself and understand the replication direction. Click Next to continue.
This leads us to the session Properties panel as Figure 7-163 shows. The Properties panel for a Metro Mirror Failover/Failback session with Practice is similar to the Properties panel of a Metro Mirror Single Direction session. Refer to 7.3.1, Create Metro Mirror Single Direction session on page 291 for a description of the options in this panel. In addition, it allows you to specify a persistent FlashCopy relationship between I2 and H2 volumes.
From the pull-down Site 1 Location menu (see Figure 7-164 on page 379) select the location of your H1 storage subsystem previously defined and click Next to continue.
Figure 7-164 Define Metro Mirror FO/FB w/Practice session Site 1 Location
From the pull-down Site 2 Location menu (see Figure 7-165) select the location of your H2 storage subsystem previously defined and click Next to continue.
Figure 7-165 Define Metro Mirror FO/FB w/ Practice session Site 2 Location
Figure 7-166 on page 380 displays the message that the MM Practice session was successfully created. Click Finish to exit the Create Session wizard. Alternatively, you can add Copy Sets by clicking Launch Add Copy Sets Wizard and following the instructions described in 7.6.2, Add Copy Sets to a Metro Mirror Failover/Failback w/ Practice session on page 380.
Going back to the Sessions panel you can check the status of the recently created Metro Mirror Failover/Failback w/ Practice session. Figure 7-167 now displays the Metro Mirror Failover/Failback w/ Practice session which we successfully created.
Figure 7-168 shows the Sessions panel with the session MM Practice which we defined previously. The next logical action which you perform on session MM Practice is to add volumes or Copy Sets to it. Select your Metro Mirror session name radio button (MM Practice in our example) and choose Add Copy Sets from the Select Action pull-down menu. Click Go to invoke the Add Copy Sets wizard.
Figure 7-168 Add Copy Sets to Metro Mirror Failover/Failback with Practice
Figure 7-169 on page 382 displays the panel which provides details on the primary volumes, or local volumes, which are called Host 1 volumes because these volumes reside in Site 1, the application site or local site. These are all synonyms and refer to the same environment. Select the desired Host 1 storage subsystem from the pull-down menu and wait a few seconds to get the Host 1 logical storage subsystem list. Select the LSS where your H1 volume resides. Once the LSS has been selected, choose the appropriate volume from the Host 1 volume pull-down list. An alternative way to add a large number of volumes to this session is to create a CSV file, as explained in Using CSV files for importing and exporting sessions on page 196. If you have a CSV file ready, select the Use a CSV file to import copy sets check box and provide the path to your CSV file. In our example we selected the DS8000 disk subsystem and the appropriate volume, as shown in Figure 7-169 on page 382. If you want to define all volumes within a certain LSS in the Metro Mirror session, you can select All Volumes from the Host 1 volume list. Click Next to continue.
Figure 7-169 Add Copy Sets to Metro Mirror FO/FB w/ Practice session - Choose Host 1
The next step is to define the Site 2 storage subsystems and volumes, as shown in Figure 7-170. Select the desired Host 2 storage subsystem from the pull-down menu and wait a few seconds to get the Host 2 logical storage subsystem list. Select the LSS where your H2 volumes reside. If All Volumes was selected for a given LSS in the previous step while defining the Host 1 volumes, you do not have an option to select any volume from the Host 2 volume list. TPC for Replication automatically matches all volumes from the selected LSS in the Host 1 storage subsystem with all volumes from the selected LSS in the Host 2 storage subsystem. Click Next to continue.
Figure 7-170 Add Copy Sets to Metro Mirror FO/FB w/ Practice session - Choose Host 2
The next step is to define Site 2 storage subsystems and Intermediate volumes as shown in Figure 7-171 on page 383. Select the desired Intermediate 2 storage subsystem from the
pull-down menu and wait a few seconds to get the Intermediate 2 logical storage subsystem list. Select the LSS where your I2 volumes reside. If All Volumes was selected for a given LSS in the previous step while defining the Host 1 volumes, you do not have an option to select any volume from the Intermediate 2 volume list. TPC for Replication automatically matches all volumes from the selected LSS in the Host 1 storage subsystem with all volumes from the selected LSS in the Intermediate 2 storage subsystem. Click Next to continue.
Figure 7-171 Add Copy Sets to Metro Mirror FO/FB w/ Practice session - Choose Intermediate 2
The next panel, shown in Figure 7-172, allows you to select, deselect, and add more Copy Sets to this session. Click Next after you finish selecting your Copy Sets.
Figure 7-172 Add Copy Sets to Metro Mirror FO/FB w/ Practice session - Select Copy Sets
The next screen displays the number of Copy Sets which are going to be created as shown in Figure 7-173 on page 384. Click Next to continue.
Figure 7-173 Add Copy Sets to Metro Mirror FO/FB w/ Practice session - Confirm
TPC for Replication internally adds these Copy Sets to its database. After a few seconds, TPC for Replication shows the results of adding the Copy Sets, as shown in Figure 7-174. Click Finish to exit the Add Copy Sets wizard.
Figure 7-174 All Copy Sets are successfully added to the TPC-R database
Figure 7-175 on page 385 confirms that all Copy Sets are successfully added to the session MM Practice and the database is successfully updated. The session status is still Inactive.
This concludes the steps through the GUI when you add Copy Sets to a Metro Mirror session.
The next message shown in Figure 7-177 is a warning that you are about to initiate a Metro Mirror session. It will start copying data from the Host 1 to the Intermediate 2 volumes defined previously by adding Copy Sets, thus overwriting any data on the Intermediate 2 volumes. At this stage, the data on the Host 2 volumes is not yet overwritten. Click Yes to continue.
Once the copy progress reaches 100%, the session goes into Normal status. As shown in the Session Details panel in Figure 7-178 on page 387, the Metro Mirror session has Normal
status and there are no errors. The copy progress is 100% and the session has changed to the Prepared state. Metro Mirror replication is now running from the H1 to the I2 volumes.
Once the Metro Mirror session is started and has Normal status and is in the Prepared state, the following options are available:
Flash - creates a consistent point-in-time copy on the H2 volumes, continuing replication from the H1 to the I2 volumes
Start H1->H2 - restarts replication from the H1 to the I2 volumes
StartGC H1->H2 - starts replication in Global Copy mode
Stop - stops replication with inconsistent H2 volumes
Suspend - stops replication with consistent H2 volumes
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-180 is a warning that you are about to Flash a Metro Mirror session. Click Yes to continue.
The status of our Metro Mirror session briefly changed to Warning while the initial point-in-time copy was established. After the copy is established, the session status returns to Normal. In Figure 7-181 you can see the session details after executing the Flash action.
In the line for the H2-I2 role pair you can see the timestamp when the point-in-time copy was created. This can be used as a reference. After the Flash action completes, you can start using the Host 2 volumes. The point-in-time copy is created with the background copy option, which means that it is copied completely after creation. You can use the Flash action at any time in the life span of the session. Once the Metro Mirror session is flashed, the following options are available:
Flash - creates another consistent point-in-time copy on the H2 volumes, continuing copying from the H1 to the I2 volumes
Start H1->H2 - restarts copying from the H1 to the I2 volumes
StartGC H1->H2 - converts the replication from Site 1 to Site 2 to Global Copy mode
Stop - stops copying with inconsistent H2 volumes
Suspend - stops copying with consistent H2 volumes
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-183 on page 391 is a warning that you are about to suspend the Metro Mirror session. Click Yes to continue.
The status of our Metro Mirror session has changed from Normal to Severe status indicating that data is no longer replicated between Host 1 and Host 2 volumes. The State of the session is Suspended as indicated in Figure 7-184.
If you suspend a Metro Mirror session that was created with the Hold I/O after Suspend policy, all systems that can update the H1 volumes are held until
the Extended Long Busy (ELB) expires; the ELB defaults to 120 seconds. If you do not want to stop your production systems for 120 seconds, you can perform the Release I/O action by selecting it from the actions pull-down menu. Also, if the session has the Enable Hardened Freeze option defined, you must issue the SETHS RESUMEIO command from any z/OS console in the sysplex. Once the Metro Mirror session is suspended, the following options are available:
Recover - makes the H2 volumes available for application systems
Start H1->H2 - restarts copying from the H1 to the I2 volumes in Metro Mirror mode
StartGC H1->H2 - restarts replication from the H1 to the I2 volumes in Global Copy mode
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-186 on page 393 is a warning that you are about to make the target (secondary) volumes available to your hosts. In our example, we initiated the Recover command while the data replication direction was from the H1 to the H2 volumes. Therefore, the H2 volumes, as the target (secondary) volumes in the Metro Mirror relationship, will be made available to the host. Click Yes to continue.
There is a message at the top of the screen in Figure 7-187 indicating that the Recover action has completed successfully. The status of our Metro Mirror session is Normal and the state is Target Available, indicating that the H2 volume is available to your host.
Figure 7-187 Metro Mirror session - H2 storage subsystem volume available to host
After the Metro Mirror session is recovered and in the Target Available state, the following options are available:
Flash - creates a consistent point-in-time copy on the H2 volumes from the I2 volumes. With this you can always return to the state after the suspend.
Release I/O - releases I/O to the H1 volumes after a Suspend event. This option is available only if the Metro Mirror session is created with the Hold I/O after Suspend policy.
Start H1->H2 - restarts copying from the H1 to the I2 volumes in Metro Mirror mode
StartGC H1->H2 - restarts copying from the H1 to the I2 volumes in Global Copy mode
Enable Copy to Site 1 - before reversing the direction of copying in a failover and failback session, you must run this command and confirm that you want to reverse the direction of replication. It enables the Start H2->H1 and StartGC H2->H1 actions.
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-189 on page 395 is a warning that you are about to enable the command which initiates copying data from H2 to H1 volumes. This command is disabled to protect against accidentally copying over production data. Ensure that all of the
volumes in this session located at Site 1 are offline to all systems prior to enabling the command that allows copying data to Site 1. Click Yes to continue.
The message at the top of the screen in Figure 7-190 confirms that Enable Copy to Site 1 command is completed. The status of our Metro Mirror session is the same as it was after Recover command: Normal status and Target Available state, indicating that H2 volume is available to your host.
The following actions are available at this stage:
Flash - creates a consistent point-in-time copy on the H2 volumes from the I2 volumes. With this you can always return to the state after the suspend.
Release I/O - releases I/O to the H1 volumes after a Suspend event. This option is available only if the Metro Mirror session is created with the Hold I/O after Suspend policy.
Start H2->H1 - starts replication from the H2 to the H1 volumes in Metro Mirror mode
StartGC H2->H1 - starts replication from the H2 to the H1 volumes in Global Copy mode
Enable Copy to Site 2 - re-enables replication from the H1 to the I2 volumes
Terminate - terminates the session (under the Cleanup submenu)
The next message shown in Figure 7-192 on page 397 is a warning that you are about to start the Metro Mirror session. It will start copying data from the Host 2 to the Host 1 volumes, thus overwriting any data on the Host 1 volumes. At this stage, the data on the Intermediate 2 volumes is not changed. Click Yes to continue.
After all the updated tracks on the H2 volumes are copied to the H1 volumes, the replication between the H2 and H1 volumes becomes synchronous. At this stage the Metro Mirror session has Normal status and Prepared state. Figure 7-193 shows the Session Details panel. As you can see, when the copy direction is from Host 2 volumes to Host 1 volumes, the Intermediate volumes are not used.
Once the Metro Mirror session is started and has Normal status and Prepared state, the following options are available:
Start H2->H1: restarts copying from H2 to H1 volumes in Metro Mirror mode.
StartGC H2->H1: restarts copying from H2 to H1 volumes in Global Copy mode.
Stop: stops copying with inconsistent H1 volumes.
Suspend: stops copying with consistent H1 volumes.
Terminate: terminates the session (under the Cleanup submenu).
The next message, shown in Figure 7-195 on page 399, is a warning that you are about to enable the command that initiates copying data from H1 to H2 volumes. This command is disabled to protect against accidentally copying over production data. Ensure that all of the volumes in this session that are located at Site 2 are not being used by any application before you enable the command that allows copying data to Site 2. Click Yes to continue.
The message at the top of the screen in Figure 7-196 confirms that the Enable Copy to Site 2 command is completed. The status of our Metro Mirror session is the same as it was after Recover command: Normal status and Target Available state, indicating that H1 volume is available to your host.
The following options are available at this stage:
Release I/O: releases I/O to H2 volumes after a Suspend event. This option is available only if the Metro Mirror session is created with the Hold I/O after Suspend policy.
Start H1->H2: restarts copying from H1 to I2 volumes in Metro Mirror mode.
StartGC H1->H2: restarts copying from H1 to I2 volumes in Global Copy mode.
Re-enable Copy to Site 1: re-enables replication from Site 2 to Site 1.
Terminate: terminates the session (under the Cleanup submenu).
The next message, shown in Figure 7-198 on page 401, is a warning that you are about to stop data replication from the H1 to the H2 volumes and that the H2 volumes are not consistent. Click Yes to continue.
There is a message at the top of the screen in Figure 7-199 indicating that the Stop action has completed successfully. The status of our Metro Mirror session is Severe and the state is Suspended. The Recoverable column in Figure 7-199 has the value No, indicating that the I2 and H2 volumes are not consistent.
Once the Metro Mirror session is stopped the following options are available:
Recover: makes the target (secondary) volumes available for host access.
Release I/O: releases I/O to the primary Metro Mirror volumes after a Suspend event. This option is available only if the Metro Mirror session is created with the Hold I/O after Suspend policy.
Start H1->H2: restarts copying from H1 to I2 volumes (if the Stop was issued with active H1 volumes) in Metro Mirror mode.
StartGC H1->H2: restarts copying from H1 to I2 volumes (if the Stop was issued with active H1 volumes) in Global Copy mode.
Start H2->H1: restarts copying from H2 to H1 volumes (if the Stop was issued with active H2 volumes) in Metro Mirror mode.
StartGC H2->H1: restarts copying from H2 to H1 volumes (if the Stop was issued with active H2 volumes) in Global Copy mode.
Terminate: terminates the session.
The next message, shown in Figure 7-201, is a warning that you are about to terminate the Metro Mirror relationship between the H1 and H2 volumes. Note that if you need to start the very same Metro Mirror session again, a full copy from H1 to H2 volumes will be required. Click Yes to continue.
There is a message at the top of the screen in Figure 7-202 on page 404 indicating that Terminate action has been successfully completed. The status of our Metro Mirror session is now Inactive and the state is Defined.
Once the Metro Mirror session is terminated, the following options are available:
Start H1->H2: starts copying from H1 to I2 volumes in Metro Mirror mode.
StartGC H1->H2: starts copying from H1 to I2 volumes in Global Copy mode.
Select the Hardware Type for the storage subsystems from the Choose Hardware Type pull-down menu. Select the Global Mirror Failover/Failback session type from the Choose Session Type pull-down menu, as shown in Figure 7-204 on page 405. On the right-hand side a pictograph symbolizes the involved sites and their volume types: H1 represents the Site 1 volumes, H2 represents the Site 2 volumes, and J2 represents the journal volumes that are used to restore data to the last consistency point. When you define Copy Sets, this pictograph helps you orient yourself and understand the replication direction. Click Next to continue.
This leads us to the session Properties panel, as Figure 7-205 on page 407 shows. The Properties panel is important because it requires that you specify at least a name for the session that is about to be created. We chose the name GM FO-FB for our session. An optional Description is recommended so that the purpose of the session is clear, because the session name may not reveal what the session is intended for. Figure 7-205 on page 407 comments on the storage subsystems involved in this session. You may add the location of each storage server and the date when the session is created or changed. The Properties panel also allows you to specify parameters to monitor and control the Global Mirror session. These parameters are described in the following paragraphs:
Consistency Group interval time (sec): This value specifies how long to wait between the formation of consistency groups. It is specified in seconds, and the default is zero (0) seconds. Zero seconds means that consistency group formation happens constantly: as soon as a consistency group is successfully created, the process to create a new consistency group starts again immediately.
Recovery Point Objective Alert: Specifies the length of time that you want to set for the recovery point objective (RPO) thresholds. The values determine whether a Warning or Severe alert is generated when the RPO threshold is exceeded for a role pair. The RPO represents the length of time, in seconds, of data exposure that is acceptable in the event of a disaster. The thresholds are specified in the following way:
Warning level threshold (seconds): when the RPO is greater than this value, a Warning alert is generated.
Severe level threshold (seconds): when the RPO is greater than this value, an alert is also generated and the session status changes to Severe.
Fail MM/GC if the target is online (CKD only): This option ensures that all target (secondary) volumes in a Global Mirror session are offline and not visible to any host; otherwise the create session task will fail. This applies to CKD volumes only. We recommend using this option, because target (secondary) volumes in a Global Mirror session should be offline to all hosts.
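To make the RPO threshold behavior concrete, the following short Python sketch (illustrative only; the threshold values are hypothetical and nothing here is part of TPC for Replication itself) classifies an observed RPO the way the panel describes:

    def classify_rpo(rpo_seconds, warning_threshold, severe_threshold):
        # Mirrors the panel semantics: exceeding the Warning value raises a
        # Warning alert; exceeding the Severe value raises an alert and the
        # session status changes to Severe. Assumes warning <= severe.
        if rpo_seconds > severe_threshold:
            return "Severe"
        if rpo_seconds > warning_threshold:
            return "Warning"
        return "Normal"

    for observed_rpo in (30, 90, 240):                # hypothetical RPO samples
        print(observed_rpo, classify_rpo(observed_rpo, 60, 180))

For example, with a Warning threshold of 60 seconds and a Severe threshold of 180 seconds, an RPO of 90 seconds produces a Warning alert, while an RPO of 240 seconds produces a Severe alert and the session status changes to Severe.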
From the pull-down Site 1 Location menu (see Figure 7-206 on page 407) select the location of your H1 storage subsystem and click Next to continue.
From the pull-down Site 2 Location menu (see Figure 7-207) select the location of your H2 storage subsystem and click Next to continue.
Figure 7-208 displays the message that the GM FO-FB session was successfully created. Click Finish to exit the Create Session wizard. Alternatively, you have an option to add Copy Sets and click on Launch Add Copy Sets Wizard and follow the instructions described in 7.7.2, Add Copy Sets to a Global Mirror Failover/Failback session on page 409.
Note that over time various terms have been introduced for the same thing. Peer-to-Peer Remote Copy, or PPRC (today known as Metro Mirror and Global Mirror), started out with primary volumes and secondary volumes. This terminology was fine for a two-site solution. With the arrival of site switching, from an application viewpoint as well as for the storage subsystems used, TPC for Replication introduced a different terminology for these PPRC volumes and their association with a certain site. Host 1, or Site 1, refers to the primary volumes as a starting point. This may also be the local site or application site, and it may always be considered the customer's primary site, but it has the potential to change. A customer may want to switch application sites from Host 1 in Site 1 to Host 2 in Site 2 and, at the same time, switch the Copy Services roles of the associated storage subsystems. This led to the Host 1, Site 1 volumes and Host 2, Site 2 volumes terminology used in TPC for Replication. In a Global Mirror session there are also journal volumes (J2), located at Site 2, which are used to restore data to the last consistency point. Figure 7-210 on page 410 displays the panel that provides details on the primary volumes, or local volumes, which are called Host 1 volumes because these volumes reside in Site 1 (also referred to as the application site or local site); these are all synonyms and refer to the same environment. Select the desired Host 1 storage subsystem from the pull-down menu and wait a few seconds to get the Host 1 logical storage subsystem list. Select the LSS
where your H1 volumes reside. Once the LSS has been selected, choose the appropriate volume from the Host 1 volume pull-down list or select All Volumes. An alternative way to add a large number of volumes to this session is to create a CSV file, as explained in "Using CSV files for importing and exporting sessions" on page 196. If you have a CSV file ready, select the Use a CSV file to import copy sets check box and provide the path to your CSV file. Click Next to continue.
Figure 7-210 Add Copy Sets to Global Mirror FO/FB session - Choose Host 1
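If you use the CSV import option mentioned above, you can also generate the file with a small script instead of typing every row. The following Python sketch is only an illustration: the column headings and the volume identifiers (storage system, serial numbers, and volume numbers) are placeholders, and the exact element-ID syntax and header layout must follow the format described in "Using CSV files for importing and exporting sessions" on page 196.

    import csv

    # Placeholder element IDs; replace them with the identifiers that your
    # TPC for Replication installation reports for the H1, H2, and J2 roles.
    h1_volumes = ["DS8000:BOX:H1SERIAL:VOL:%04X" % v for v in range(0x1000, 0x1004)]
    h2_volumes = ["DS8000:BOX:H2SERIAL:VOL:%04X" % v for v in range(0x2000, 0x2004)]
    j2_volumes = ["DS8000:BOX:J2SERIAL:VOL:%04X" % v for v in range(0x2100, 0x2104)]

    with open("gm_fofb_copysets.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["H1", "H2", "J2"])        # one column per role in the session
        for copy_set in zip(h1_volumes, h2_volumes, j2_volumes):
            writer.writerow(copy_set)              # one copy set per line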
The next step is to define Site 2 storage subsystems and volumes as shown in Figure 7-211 on page 411. Select the desired Host 2 storage subsystem from the pull-down menu and wait for a few seconds to get the Host 2 logical storage subsystem list. Select the LSS where your H2 volume resides. If you selected All volumes for a given LSS in the previous step while defining Host 1 volumes, you do not have an option to select any volume from Host 2 volume list. TPC for Replication will automatically match all volumes from selected LSS in Host 1 storage subsystem with all volumes from selected LSS in Host 2 storage subsystem. Click Next to continue.
Figure 7-211 Add Copy Sets to Global Mirror FO/FB session - Choose Host 2 volumes
In order to complete Global Mirror Copy Set definition we need to define J2 journal volume as shown in Figure 7-212 on page 411. Select the desired Journal 2 storage subsystem from the pull-down menu and wait for a few seconds to get the Journal 2 logical storage subsystem list. Select the LSS where your J2 volume resides. If you selected All volumes for a given LSS in the previous step while defining Host 1 and Host 2 volumes, you do not have an option to select any volume from Journal 2 volume list. TPC for Replication will automatically match all volumes from selected LSS in Host 2 storage subsystem with all volumes from selected LSS in Journal 2 storage subsystem. Click Next to continue.
Figure 7-212 Add Copy Sets to Global Mirror FO/FB session - Choose Journal 2 volumes
The next screen in Figure 7-213 on page 412 shows the Copy Set that you are about to add. If you need to add more Copy Sets to this session, just click on Add More.
After you have chosen all Copy Sets that you need, click Next.
The next screen displays the number of Copy Sets which are going to be created. Click Next to continue.
Figure 7-214 Add Copy Sets to Global Mirror FO/FB session - Confirm
TPC for Replication internally adds the Copy Sets to its database. Note that this does not start a Global Mirror relationship between the H1, H2, and J2 volumes. It is just an internal TPC for Replication process that adds these Copy Sets to the TPC for Replication inventory database.
After a few seconds TPC for Replication shows the Results panel, as shown in Figure 7-215. Click Finish to exit the Add Copy Sets wizard.
Figure 7-215 All Copy Sets are successfully added to the TPC-R database
This concludes the steps through the GUI when you add Copy Sets to a Global Mirror session.
You can choose the StartGC H1->H2 action if you plan to start the replication between the H1 and H2 volumes while a lot of write activity is going on against the H1 volumes (during the batch window, for example) and you have limited network connectivity between your storage boxes in Site 1 and Site 2. After the copying between the H1 and H2 volumes reaches 100%, you can perform the Start H1->H2 action so that your session begins forming consistency groups. Figure 7-216 on page 414 displays the GM FO-FB session as we defined it previously. From the Select Action pull-down menu, select Start H1->H2 and click Go.
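As a rough illustration of why the Global Copy first approach helps on a constrained link, the following Python sketch (the data size and link speed are hypothetical, not measured values) estimates how long the initial copy takes; until that copy reaches 100%, consistency group formation would only add overhead:

    def initial_copy_hours(data_to_copy_gb, usable_link_mb_per_sec):
        # Time to push the initial copy across the replication link.
        seconds = (data_to_copy_gb * 1024.0) / usable_link_mb_per_sec
        return seconds / 3600.0

    # Hypothetical example: 2 TB of H1 data over a link that sustains 80 MB/s.
    print("%.1f hours" % initial_copy_hours(2048, 80))   # about 7.3 hours

During that window, StartGC H1->H2 keeps the transfer in Global Copy mode; you switch to Start H1->H2 only when the copy reaches 100% and consistency groups can be formed.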
The next message, shown in Figure 7-217, is a warning that you are about to start the Global Mirror session. It will start copying data from the Host 1 to the Host 2 volumes that were defined previously by adding the Copy Sets, thus overwriting any data on the Host 2 volumes. Furthermore, it will initiate a FlashCopy to the J2 journal volumes. Click Yes to continue.
The message at the top of the screen in Figure 7-218 confirms that Global Mirror session is started. The session is in Preparing state and Warning status. Click on the session name hyperlink (GM FO-FB in our example) to find more details on this session.
As shown in Figure 7-219 on page 416 the Global Mirror session has a Warning status. The session is still in Preparing state since the copying data between H1 and H2 volumes is still in progress.
Wait until the session changes to Normal status as shown in Figure 7-220 on page 417. It means that the initial copy has finished and the first consistency group has been created.
Once the Global Mirror Failover/Failback session is started and has Normal status and Prepared state, the following options are available:
Start H1->H2: restarts the Global Mirror session.
Suspend: suspends replication between H1 and H2 volumes.
Terminate: terminates the session (under the Cleanup submenu).
The next message shown in Figure 7-222 is a warning that you are about to Suspend Global Mirror session. Click Yes to continue.
The status of our Global Mirror session has changed from Normal to Severe, indicating that data is no longer replicated between the Host 1 and Host 2 volumes. The state of the session is Suspended, as indicated in Figure 7-223 on page 419.
Once the Global Mirror session is suspended, the following options are available:
Start H1->H2: restarts the session.
StartGC H1->H2: starts Global Copy between H1 and H2 volumes.
Recover: makes H2 volumes available for host access.
Terminate: terminates the session (under the Cleanup submenu).
The next message shown in Figure 7-225 on page 421 is a warning that you will allow H2 volumes to be available to your host. Click Yes to continue. Note: Before executing the Recover action, H1 volumes must be offline for your application systems.
There is a message at the top of the screen in Figure 7-226 on page 422 indicating that the Recover action has been successfully completed. The status of our Global Mirror session is Normal and the State is Target Available indicating that H2 volume is available to your systems.
Figure 7-226 Global Mirror session - H2 storage subsystem volume available to systems
After the Global Mirror session is recovered and in Target Available state, the following options are available:
Start H1->H2: restarts the Global Mirror session.
StartGC H1->H2: starts Global Copy between H1 and H2 volumes.
Enable Copy to Site 1: before reversing the direction of copying in a failover and failback session, you must run this command and confirm that you want to reverse the direction of replication. It enables the Start H2->H1 command.
Terminate: terminates the session (under the Cleanup submenu).
The next message, shown in Figure 7-228, is a warning that you are about to enable the command that initiates copying data from H2 to H1 volumes. This command is disabled to protect against accidentally copying over production data. Ensure that all of the volumes in this session that are located at Site 1 are not being used by any application before you enable the command that allows copying data to Site 1. Click Yes to continue.
The message at the top of the screen in Figure 7-229 confirms that Enable Copy to Site 1 command is completed. The status of our Global Mirror session is the same as it was after Recover command: Normal status and Target Available state, indicating that H2 volume is available to your host.
The following options are available at this stage:
Start H2->H1: starts replication from H2 to H1 volumes.
Re-enable Copy to Site 2: re-enables replication from H1 to H2 volumes.
Terminate: terminates the session (under the Cleanup submenu).
7.7.7 Start H2->H1
After the Recover action and the Enable Copy to Site 1 command, the Host 2 volumes are active. Because Host 2 is now the active site, you can initiate copying from the Host 2 to the Host 1 volumes. To achieve this, from the Select Action pull-down menu select Start H2->H1 and click Go, as shown in Figure 7-230 on page 425.
The next message shown in Figure 7-231 is a warning that you are about to initiate starting of Global Mirror session. It will start copying of data from Host 2 to Host 1 volumes, thus overwriting any data on Host 1 volumes. Click Yes to continue.
The message at the top of the screen in Figure 7-232 confirms that start of Global Mirror session is completed. The session is in Preparing state and Warning status. Click on the session name hyperlink (GM FO-FB in our example) to find out more details on this session.
Note: After starting the session in the H2->H1 direction, the session will stay in Preparing state indefinitely. The reason for this is that there is no journal volume at Site 1 where a consistent copy could be formed. If you want to switch back to the H1 site, you need to stop I/O on the H2 volumes and suspend the session. This causes all data that is not yet copied from the H2 to the H1 volumes to be copied. Once the session is suspended, you can recover it back to the H1 volumes.
After the start of the session in the H2->H1 direction is completed, the following options are available:
Suspend: suspends replication from H2 to H1 volumes.
Start H2->H1: restarts replication from H2 to H1 volumes.
Terminate: terminates the session (under the Cleanup submenu).
The next message shown in Figure 7-234 is a warning that you are about to suspend the replication between H2 and H1 volumes. Click Yes to continue.
The status of our Global Mirror session has changed to Severe, indicating that data is no longer replicated between the Host 2 and Host 1 volumes. The state of the session is Suspended, as indicated in Figure 7-235.
Once the Global Mirror session is suspended, the following actions are available:
Recover: makes H1 volumes available for host access.
Start H2->H1: restarts copying from H2 to H1 volumes.
Terminate: terminates the session (under the Cleanup submenu).
The next message, shown in Figure 7-237, is a warning that you are about to make the H1 volumes available to your host. Click Yes to continue. Note: Before executing the Recover action, the H2 volumes must be offline to your application systems.
There is a message at the top of the screen in Figure 7-238 indicating that Recover action has been successfully completed. The status of our Global Mirror session is Normal and the State is Target Available indicating that H1 volumes are available to your host.
Figure 7-238 Global Mirror session - H1 storage subsystem volume available to host
After the Global Mirror session is recovered and in Target Available state, the following options are available:
Start H2->H1: restarts Global Copy replication from H2 to H1 volumes.
Enable Copy to Site 2: before reversing the direction of copying in a failover and failback session, you must run this command and confirm that you want to reverse the direction of replication. It enables the Start H1->H2 command.
Terminate: terminates the session.
The next message, shown in Figure 7-240, is a warning that you are about to enable copying from Site 1 to Site 2, because the command that initiates copying data from H1 to H2 volumes is disabled to protect against accidentally copying over production data. Ensure that all of the volumes in this session that are located at Site 2 are not being used by any application before you enable the command that allows copying data to Site 2. Click Yes to continue.
The message at the top of the screen in Figure 7-241 confirms that Enable Copy to Site 2 command is completed. The status of our Global Mirror session is the same as it was after Recover command: Normal and the State is Target Available indicating that H1 volume is available to your host.
The following options are available at this stage:
Start H1->H2: restarts the Global Mirror session, with replication from H1 to H2 volumes.
StartGC H1->H2: starts Global Copy replication from H1 to H2 volumes.
Re-enable Copy to Site 1: re-enables H2 to H1 replication.
Terminate: terminates the session (under the Cleanup submenu).
The next message, shown in Figure 7-243, is a warning that you are about to terminate the Global Mirror relationship between the H1, H2, and J2 volumes. Note that if you need to start the very same Global Mirror session again, a full copy from H1 to H2 volumes will be required. Click Yes to continue.
There is a message at the top of the screen in Figure 7-244 indicating that Terminate action has been successfully completed. The status of our Global Mirror session is now Inactive and the state is Defined.
Once the Global Mirror Failover/Failback session is terminated, the following options are available:
Start H1->H2: restarts the Global Mirror session.
StartGC H1->H2: starts Global Copy from H1 to H2 volumes.
Select the Hardware Type of the storage subsystems from the Choose Hardware Type pull-down menu. Select the Global Mirror Failover/Failback with Practice session type from the Choose Session Type pull-down menu, as shown in Figure 7-246. On the right-hand side a pictograph symbolizes the involved sites and their volume types: H1 represents the Site 1 volumes, H2 represents the Site 2 volumes, I2 represents the Site 2 Intermediate volumes, which are actually used as the copy target, and J2 represents the journal volumes that are used to restore data to the last consistency point. When you define Copy Sets, this pictograph helps you orient yourself and understand the replication direction. Click Next to continue.
This leads us to the session Properties panel, as Figure 7-247 on page 437 shows. The Properties panel is important because it requires that you specify at least a name for the session that is about to be created. An optional Description is recommended so that the purpose of the session is clear, because the session name may not reveal what the session is intended for. The Properties panel also allows you to specify parameters to monitor and control the Global Mirror session. These parameters are described in the following paragraphs:
Consistency Group interval time (sec): This value specifies how long to wait between the formation of consistency groups. It is specified in seconds, and the default is zero (0) seconds. Zero seconds means that consistency group formation happens constantly: as soon as a consistency group is successfully created, the process to create a new consistency group starts again immediately.
Recovery Point Objective Alert: Specifies the length of time that you want to set for the recovery point objective (RPO) thresholds. The values determine whether a Warning or Severe alert is generated when the RPO threshold is exceeded for a role pair. The RPO represents the length of time, in seconds, of data exposure that is acceptable in the event of a disaster. The thresholds are specified in the following way:
Warning level threshold (seconds): when the RPO is greater than this value, a Warning alert is generated.
Severe level threshold (seconds): when the RPO is greater than this value, an alert is also generated and the session status changes to Severe.
Fail MM/GC if the target is online (CKD only): This option ensures that all target (secondary) volumes in a Global Mirror session are offline and not visible to any host; otherwise the create session task will fail. This applies to CKD volumes only. We recommend using this option, because target (secondary) volumes in a Global Mirror session should be offline to all hosts.
DS FlashCopy Options for Role pair H2-I2: The following options are available only for DS8000 version 4.2 or later:
Persistent: Select this option to keep FlashCopy pairs persistent on the hardware.
No Copy: Select this option if you do not want the hardware to write the background copy until the source track is written to. Data is not copied to the H2 volume until the blocks or tracks of the I2 volume are modified. The point-in-time volume image is composed of the unmodified data on the I2 volume and the data that was copied to the H2 volume. If you want a complete point-in-time copy of the I2 volume to be created on the H2 volume, do not use the No Copy option; in that case the data is asynchronously copied from the I2 volume to the H2 volume.
DS FlashCopy Options for Role pair I2-J2: This option is available only for System Storage DS8000 version 4.2 or later:
Reflash After Recover:
Select this option if you want to create a FlashCopy replication between the I2 and J2 volumes after the recovery of a Global Mirror session. If you do not select this option, a FlashCopy replication is created only between the I2 and H2 volumes.
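The No Copy option described above is easier to reason about with a small model. The following Python sketch is a simplified illustration (not DS8000 microcode): only the I2 tracks that are later overwritten are physically copied, yet the point-in-time image remains readable because it is assembled from the copied tracks plus the still-unmodified tracks on I2.

    # Simplified copy-on-write model of a FlashCopy relationship with No Copy.
    i2 = {"track0": "A", "track1": "B", "track2": "C"}   # source contents at flash time
    h2_copied = {}                                        # nothing copied up front

    def write_i2(track, new_data):
        # Before the source track is overwritten, preserve its old contents.
        if track not in h2_copied:
            h2_copied[track] = i2[track]
        i2[track] = new_data

    def read_point_in_time(track):
        # The image is the copied tracks plus whatever is still unmodified on I2.
        return h2_copied.get(track, i2[track])

    write_i2("track1", "B-new")                # production keeps updating I2
    print(read_point_in_time("track1"))        # "B": the point-in-time data is preserved
    print(read_point_in_time("track2"))        # "C": still served from the unmodified I2 track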
From the pull-down Site 1 Location menu (see Figure 7-248) select the location of your H1 storage subsystem previously defined and click Next to continue.
Figure 7-248 Define Global Mirror FO/FB w/ Practice session Site 1 Location
From the pull-down Site 2 Location menu (see Figure 7-249) select the location of your H2 storage subsystem previously defined and click Next to continue.
Figure 7-249 Define Global Mirror FO/FB w/ Practice session Site 2 Location
Figure 7-250 displays the message that the GM Practice session was successfully created. Click Finish to exit the Create Session wizard. Alternatively, you have an option to add Copy Sets and click on Launch Add Copy Sets Wizard and follow the instructions described in Add Copy Sets to a Global Mirror session on page 438.
Figure 7-251 shows the session overview panel with the GM Practice session which we defined previously. The next logical action which you perform on GM Practice session is to add Copy Sets to it. Select your Global Mirror session name radio button (GM Practice in our example) and choose Add Copy Sets from the Select Action pull-down menu. Click Go to invoke the Add Copy Sets wizard.
Figure 7-251 Add Copy Sets to Global Mirror FO/FB w/ Practice session
Note that over time various terms have been introduced for the same thing. Peer-to-Peer Remote Copy, or PPRC (today known as Metro Mirror and Global Mirror), started out with primary volumes and secondary volumes. This terminology was fine for a two-site solution. With the arrival of site switching, from an application viewpoint as well as for the storage subsystems used, TPC for Replication introduced a different terminology for these PPRC volumes and their association with a certain site. Host 1, or Site 1, refers to the primary volumes as a starting point. This may also be the local site or application site, and it may always be considered the primary site, but it has the potential to change. You may want to switch application sites from Host 1 in Site 1 to Host 2 in Site 2 and, at the same time, switch the Copy Services roles of the associated storage subsystems. This led to the Host 1, Site 1 volumes and Host 2, Site 2 volumes terminology used in TPC for Replication. In a Global Mirror Failover/Failback with Practice session there are journal volumes (J2), located at Site 2, which are used to restore data to the last consistency point. In a session with practice we also introduce Intermediate volumes (I2) at Site 2; in this type of session, replication is established from the H1 to the I2 volumes. Figure 7-252 on page 440 displays the panel that provides details on the primary volumes, or local volumes, which are called Host 1 volumes because these volumes reside in Site 1 (also referred to as the application site or local site); these are all synonyms and refer to the same environment. Select the desired Host 1 storage subsystem from the pull-down menu and wait a few seconds to get the Host 1 logical storage subsystem list. Select the LSS where your H1 volumes reside. Once the LSS has been selected, choose the appropriate volume from the Host 1 volume pull-down list or select All Volumes.
The alternative way to add a large number of volumes to this session is to create CSV file as explained in Using CSV files for importing and exporting sessions on page 196. If you have a CSV file ready select Use a CSV file to import copy sets check box and provide a path to your CSV file. Click Next to continue.
Figure 7-252 Add Copy Sets to Global Mirror FO/FB w/ Practice session - Choose Host 1
The next step is to define the Site 2 storage subsystem and volumes, as shown in Figure 7-253 on page 441. Select the desired Host 2 storage subsystem from the pull-down menu and wait a few seconds to get the Host 2 logical storage subsystem list. Select the LSS where your H2 volumes reside. If you selected All Volumes for a given LSS in the previous step while defining the Host 1 volumes, you do not have the option to select individual volumes from the Host 2 volume list. TPC for Replication automatically matches all volumes from the selected LSS in the Host 1 storage subsystem with all volumes from the selected LSS in the Host 2 storage subsystem. In our example we selected All Volumes in the Choose Host 1 step. Click Next to continue.
Figure 7-253 Add Copy Sets to Global Mirror FO/FB w/ Practice session - Choose Host 2 volumes
The next step is to define Site 2 storage subsystem and Journal volumes as shown in Figure 7-254. Select the desired Journal 2 storage subsystem from the pull-down menu and wait for a few seconds to get the Journal 2 logical storage subsystem list. Select the LSS where your J2 volume resides. If you selected All volumes for a given LSS in the previous step while defining Host 1 and Host 2 volumes, you do not have an option to select any volume from Journal 2 volume list. TPC for Replication will automatically match all volumes from selected LSS in Host 2 storage subsystem with all volumes from selected LSS in Journal 2 storage subsystem. Click Next to continue.
Figure 7-254 Add Copy Sets to Global Mirror FO/FB w/ Practice session - Choose Journal 2 volumes
The next step is to define the Site 2 storage subsystem and Intermediate volumes, as shown in Figure 7-255 on page 442. Select the desired Intermediate 2 storage subsystem from the pull-down menu and wait a few seconds to get the Intermediate 2 logical storage subsystem list. Select the LSS where your I2 volumes reside. If All Volumes for a given LSS
were selected in the step while defining Host 1 volumes, you do not have an option to select any volume from Intermediate 2 volumes list. TPC for Replication will automatically match all volumes from selected LSS in Host 1 storage subsystem with all volumes from selected LSS in Intermediate 2 storage subsystem. Click Next to continue.
Figure 7-255 Add Copy Sets to Global Mirror FO/FB w/ Practice session - Choose I2 volumes
The next screen in Figure 7-256 displays the Select Copy Sets panel, where you have the chance to Deselect, Select or Add More Copy Sets.
Figure 7-256 Add Copy Sets to Global Mirror FO/FB w/ Practice session - Select Copy Sets
After you have added all the Copy Sets that you need, click Next to continue, as shown in Figure 7-257 on page 443.
Figure 7-257 Add Copy Sets to Global Mirror FO/FB w/ Practice session - Copy Sets
The next screen displays the number of Copy Sets which are going to be created, as shown in Figure 7-258. Click Next to continue.
Figure 7-258 Add Copy Sets to Global Mirror FO/FB w/ Practice session - Confirm
After a few seconds TPC for Replication displays the Results of the Add Copy Sets operation, as shown in Figure 7-259 on page 444. Click Finish to exit the Add Copy Sets wizard.
Figure 7-259 All Copy Sets are successfully added to the TPC-R database
Figure 7-260 confirms that all Copy Sets are successfully added to the GM Practice session and the database is successfully updated. The session status is still Inactive and in Defined state.
This concludes the steps through the GUI when you add Copy Sets to a Global Mirror session.
You can choose the StartGC H1->H2 action if you plan to start the replication between the H1 and I2 volumes while a lot of write activity is going on against the H1 volumes (during the batch window, for example) and you have limited network connectivity between your storage boxes in Site 1 and Site 2. After the copying between the H1 and I2 volumes reaches 100%, you can perform the Start H1->H2 action so that your session begins forming consistency groups. Figure 7-261 displays the GM Practice session as we defined it previously. We defined only one Copy Set in our example. From the Select Action pull-down menu, select Start H1->H2 and click Go.
The next message shown in Figure 7-262 on page 446 is a warning that you are about to initiate Global Mirror session. It will start copying data from Host 1 to Intermediate 2 volumes defined previously by adding the copy set, thus overwriting any data on Intermediate 2 volumes. Furthermore, it will initiate a FlashCopy to J2 journal volumes. At this stage data on Host 2 volumes is not yet overwritten. Click Yes to continue.
The message at the top of the screen in Figure 7-263 confirms that the Global Mirror session is started. The session is in Preparing state and Warning status. Click the session name hyperlink (GM Practice in our example) to find more details about this session.
As shown in Figure 7-264 on page 447 the Global Mirror session is in Warning status. The session is still in Preparing state since the copying data between H1 and H2 volumes is still in progress.
Wait until session changes to Normal status as shown in Figure 7-265 on page 448. It means that the initial copy has finished and the first consistency group has been created.
Once the Global Mirror session is started and has Normal status and Prepared state, the following options are available:
Flash: flashes data from I2 volumes to H2 volumes.
Initiate Background Copy: copies all tracks from I2 volumes to H2 volumes.
Start H1->H2: restarts the session.
Suspend: stops copying with consistent I2 volumes.
Terminate: terminates the session (under the Cleanup submenu).
From the Sessions panel select your Global Mirror session radio button and Flash from the Select Action pull-down list and click Go as shown in Figure 7-266.
The next message shown in Figure 7-267 on page 450 is a warning that you are about to Flash a Global Mirror session. Note: Flash copy for H2 volumes is taken from last consistency copy of J2 volumes. Click Yes to continue.
The status of our Global Mirror session briefly changed to Flashing while the initial point-in-time copy was being established. After the copy is established, the session status returns to Normal. In Figure 7-268 on page 451 you can see the session details after executing the Flash action.
In the line for the H2-I2 pair you can see the timestamp when the point-in-time copy was created. This can be used as a reference. After the Flash action completes, you can start using the Host 2 volumes. The point-in-time copy is created with the background copy option, which means it is copied completely after creation. Once it is copied, it no longer affects the performance of the source volumes. You can use the Flash action at any time in the life span of the session.
TPC for Replication shows you a warning that it is about to initiate background copy of data that was flashed with No Copy option as shown in Figure 7-270. Click Yes to initiate the background copy.
TPC for Replication shows you a message while the background copy is going on, as seen in Figure 7-271 on page 453.
The FlashCopy relationship between I2 and H2 volumes ends as soon as all tracks are copied from I2 to H2 volumes.
The next message shown in Figure 7-273 is a warning that you are about to Suspend Global Mirror session. Click Yes to continue.
The status of our Global Mirror session has changed from Normal to Severe status indicating that data is no longer being replicated between Host 1 and Intermediate 2 volumes. State of the session is Suspended as indicated in Figure 7-274 on page 455.
In Figure 7-275 on page 456 you can see the session details after executing the Suspend action. In the line for the H1-J2 pair you can see the timestamp when the session was suspended. This can be used as a reference.
Once the Global Mirror session is suspended, the following options are available:
Initiate Background Copy: copies all tracks from I2 volumes to H2 volumes.
Recover: makes the target (secondary) volumes available for host access.
Start H1->H2: restarts the Global Mirror session.
StartGC H1->H2: restarts replication between H1 and I2 volumes in Global Copy mode.
Terminate: terminates the session (under the Cleanup submenu).
The next message, shown in Figure 7-277, is a warning that you are about to make the H2 volumes available to your host. Click Yes to continue.
There is a message at the top of the screen in Figure 7-278 on page 458 indicating that Recover action has been successfully completed. The status of our Global Mirror session is Normal and the State is Target Available indicating that H2 volumes are available to your host.
Figure 7-278 Global Mirror session - H2 storage subsystem volume available to host
After the Global Mirror session is recovered and in Target Available state, the following options are available:
Flash: creates a consistent point-in-time copy on H2 volumes.
Initiate Background Copy: copies all tracks from I2 volumes to H2 volumes.
Start H1->H2: restarts the Global Mirror session.
StartGC H1->H2: restarts the replication between H1 and I2 volumes in Global Copy mode.
Enable Copy to Site 1: before reversing the direction of copying in a failover and failback session, you must run this command and confirm that you want to reverse the direction of replication. It enables the Start H2->H1 command.
Terminate: terminates the session (under the Cleanup submenu).
The next message, shown in Figure 7-280, is a warning that you are about to enable the command that initiates copying data from H2 to H1 volumes. This command is disabled to protect against accidentally copying over production data. Ensure that all of the volumes in this session that are located at Site 1 are not being used by any application before you enable the command that allows copying data to Site 1. Click Yes to continue.
The message at the top of the screen in Figure 7-281 confirms that Enable Copy to Site 1 command is completed. The status of our Global Mirror session is the same as it was after Recover command: Normal status and Target Available state, indicating that H2 volume is available to your host.
The following options are available at this stage:
Flash: creates a consistent point-in-time copy on H2 volumes.
Initiate Background Copy: copies all tracks from I2 volumes to H2 volumes.
Start H2->H1: restarts copying from H2 to H1 volumes.
Re-enable Copy to Site 2: re-enables replication from Site 1 to Site 2.
Terminate: terminates the session (under the Cleanup submenu).
7.8.9 Start H2->H1
After the Recover action and the Enable Copy to Site 1 command, the Host 2 volumes are active. Because Host 2 is now the active site, you can initiate copying from the Host 2 to the Host 1 volumes. To achieve this, from the Select Action pull-down menu select Start H2->H1 and click Go, as shown in Figure 7-282 on page 461.
The next message shown in Figure 7-283 is a warning that you are about to initiate the start of the Global Mirror session. It will start copying of data from Host 2 to Host 1 volumes, thus overwriting any data on Host 1 volumes. At this stage, data on Intermediate 2 volumes is not changed. Click Yes to continue.
The message at the top of the screen in Figure 7-284 confirms that start of Global Mirror session is completed. The session is in Preparing state and Warning status. Click on the session name hyperlink (GM Practice in our example) for more details on this session.
Note: After starting the session in the H2->H1 direction, the session will stay in Preparing state indefinitely. The reason for this is that there is no journal volume at Site 1 where a consistent copy could be formed. If you want to switch back to the H1 site, you need to stop I/O on the H2 volumes and suspend the session. This causes all data that is not yet copied from the H2 to the H1 volumes to be copied. Once the session is suspended, you can recover it back to the H1 volumes.
After the start of the session in the H2->H1 direction is completed, the following options are available:
Initiate Background Copy: copies all tracks from I2 volumes to H2 volumes.
Start H2->H1: restarts copying from H2 to H1 volumes.
Suspend: stops copying with consistent H1 volumes.
Terminate: terminates the session (under the Cleanup submenu).
The next message shown in Figure 7-286 is a warning that you are about to Suspend Global Mirror session. Click Yes to continue.
The status of our Global Mirror session has changed from Normal to Severe, indicating that data is no longer replicated between the Host 2 and Host 1 volumes. Before the session reached
the Suspended state, all H2 and H1 volumes were synchronized. The state of the session is Suspended, as indicated in Figure 7-287.
In Figure 7-288 on page 465 you can see the session details after executing Suspend action. In the line of H1-H2 pair you can see the timestamp when session was suspended. This can be used as a reference.
Once the Global Mirror session is suspended, the following options are available:
Initiate Background Copy: copies all tracks from I2 volumes to H2 volumes.
Start H2->H1: restarts copying from H2 to H1 volumes.
Terminate: terminates the session.
Recover: makes H1 volumes available for application systems.
The next message, shown in Figure 7-290, is a warning that you are about to make the H1 volumes available to your host. Click Yes to continue. Note: Before executing the Recover action, the H2 volumes must be offline to your application systems.
There is a message at the top of the screen in Figure 7-291 indicating that Recover action has been successfully completed. The status of our Global Mirror session is Normal and the State is Target Available indicating that H1 volumes are available to your host.
After the Global Mirror session is recovered and in Target Available state, the following options are available:
Initiate Background Copy: copies all tracks from I2 volumes to H2 volumes.
Start H2->H1: restarts copying from H2 to H1 volumes.
Enable Copy to Site 2: before reversing the direction of copying in a failover and failback session, you must run this command and confirm that you want to reverse the direction of replication. It enables replication to Site 2.
Terminate: terminates the session (under the Cleanup submenu).
The next message, shown in Figure 7-292, is a warning that you are about to enable the command that initiates copying data from H1 to H2 volumes. This command is disabled to protect against accidentally copying over production data. Ensure that all of the volumes in this session that are located at Site 2 are not being used by any application before you enable the command that allows copying data to Site 2. Click Yes to continue.
The message at the top of the screen in Figure 7-293 confirms that the Enable Copy to Site 2 command has completed. The status of our Global Mirror session is the same as it was after the Recover command: Normal status and Target Available state, indicating that the H1 volumes are available to your host.
The following options are available at this stage:
Initiate Background Copy: copies all tracks from I2 volumes to H2 volumes.
Start H1->H2: restarts the Global Mirror session.
StartGC H1->H2: restarts the replication between H1 and I2 volumes in Global Copy mode.
Re-enable Copy to Site 1: re-enables the Start H2->H1 command.
Terminate: terminates the session (under the Cleanup submenu).
The next message shown in Figure 7-296 is a warning that you are about to terminate Global Mirror relationship between H1, H2, I2 and J2 volumes. Note that if you need to start the very same Global Mirror session again, a full copy from H1 to H2 volumes will be required. Click Yes to continue.
There is a message at the top of the screen in Figure 7-297 indicating that Terminate action has been successfully completed. The status of our Global Mirror session is now Inactive and the state is Defined.
Once the Global Mirror session is terminated, the following options are available:
Start H1->H2: restarts the Global Mirror session.
StartGC H1->H2: restarts the replication between H1 and I2 volumes in Global Copy mode.
Select the Metro Global Mirror session from the pull-down menu as shown in Figure 7-299. On the right hand side a pictograph symbolizes the involved sites and their volume types. H1 represents Site 1 volumes, H2 represents Site 2 volumes, H3 represents Site 3 and J3 represents journal volumes in a Global Mirror session used to restore data to the last consistency point. When you define Copy Sets this pictograph helps you to orient and understand replication direction. Click Next to continue.
The Properties panel is also important because it requires that you specify a name for the session that is about to be created. An optional Description is recommended so that the purpose of the session is clear, because the session name may not reveal what the session is intended for. In Figure 7-300 we show our session definition. You may add the location of each storage server and the date when the session is created or changed. Metro Global Mirror is a method of three-site continuous data replication: it combines Metro Mirror synchronous copy and Global Mirror asynchronous copy into one session. In this combined session, the Metro Mirror target is the Global Mirror source. Furthermore, if there is a storage failure (or disaster) at Site 2, or even simply a loss of connectivity between Site 1 and Site 2, data can no longer be cascaded to the remote Site 3; however, we could still copy data from Site 1 to Site 3 in a Global Mirror session. Therefore we need to set up the Consistency Group interval time (sec) for the Site 2 to Site 3 Global Mirror session (H2-J3) and for the Site 1 to Site 3 Global Mirror session (H1-J3), as shown in Figure 7-300. The Consistency Group interval time (sec) value specifies how long to wait between the formation of consistency groups. It is specified in seconds, and the default is zero (0) seconds. Zero seconds means that consistency group formation happens constantly: as soon as a consistency group is successfully created, the process to create a new consistency group starts again immediately. The maximum value for the Consistency Group interval time is 18 hours. The Fail MM/GC if the target is online (CKD only) option ensures that all target (secondary) volumes in a Metro Mirror session are offline and not visible to any host; otherwise the create session task will fail. This applies to CKD volumes only. We recommend that you use this option, because target (secondary) volumes in a Metro Mirror session should be offline to all hosts.
TPC for Replication issues a Freeze command in a Metro Mirror session as soon as the Copy Set volumes are suspended due to an error (for example, a disk or link failure). TPC for Replication always does a Freeze to create a consistent set of H2 volumes. The action that TPC for Replication takes after the Freeze is specified by the Metro Mirror Suspend Policy. You basically have two options available:
1. Hold I/O after Suspend - known as the Freeze and Stop policy. After a freeze, new writes are not allowed to the H1 volumes, thus stopping your production systems.
2. Release I/O after Suspend - known as the Freeze and Go policy. After a freeze, you can make new writes to the H1 volumes, but no replication occurs to the secondary volumes. This is the default setting for all new sessions.
Which option you select is really a business, rather than an IT, decision. If your Recovery Point Objective (RPO) is zero (that is, you cannot tolerate any data loss in case of a production site disaster), you must select Hold I/O after Suspend. This option will hold I/O at the production site on all volumes defined in the session. Because all systems that can update the production site volumes are on hold before the Extended Long Busy timer for CKD volumes or the SCSI queue full timer for FB volumes expires (the default is 120 seconds), you are sure that no updates are made to the production volumes that are not mirrored to the volumes at the DR site. You need to take some action against this session within 120 seconds, otherwise I/O is released to the production volumes.
On the other hand, if the event that caused TPC for Replication to take Hold I/O after Suspend action was a transient event (for example, temporary link lost between sites) rather than a real disaster, you will have brought all production systems down unnecessarily. If your RPO is higher than zero, you may decide to let the production systems continue operation once the H2 volumes have been protected. This is known as Freeze and Go policy
and you need to select the Release I/O after Suspend option. In this case, if the trigger was only a transient event, you have avoided an unnecessary outage. On the other hand, if the trigger was the first sign of an actual disaster, you could continue operating for some amount of time before all systems actually fail (a so-called rolling disaster). Any updates made to the primary volumes during this time will not have been remote copied, and therefore are lost. In our example we use the Release I/O after Suspend option, as shown in Figure 7-300. Click Next to define the location sites.
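The policy choice described above can be summarized as a simple rule of thumb. The following Python sketch is only an illustration of that reasoning (the policy itself is selected in the session properties, not set programmatically): a zero RPO requirement points to Hold I/O after Suspend, anything else to Release I/O after Suspend, and with Hold I/O an operator action is needed before the Extended Long Busy window expires.

    ELB_SECONDS = 120   # default Extended Long Busy / SCSI queue-full window

    def suggest_suspend_policy(rpo_requirement_seconds):
        # Map a business RPO requirement onto the two Metro Mirror suspend policies.
        if rpo_requirement_seconds == 0:
            # Freeze and Stop: no data loss, but production writes are held until
            # you act on the session or the ELB window expires.
            return ("Hold I/O after Suspend",
                    "act on the session within %d seconds" % ELB_SECONDS)
        # Freeze and Go: production continues, but updates made after the freeze
        # are not mirrored and would be lost in a real disaster.
        return ("Release I/O after Suspend", "production I/O continues")

    print(suggest_suspend_policy(0))
    print(suggest_suspend_policy(300))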
From the pull-down Site 1 Location menu (see Figure 7-301) select the location of your H1 storage subsystem previously defined and click Next to continue.
From the pull-down Site 2 Location menu (see Figure 7-302) select the location of your H2 storage subsystem previously defined and click Next to continue. Note: In our scenario we are going to create Metro Mirror session inside one DS8000 storage system. That is the reason why Site 1 and Site 2 location logical names are the same.
From the pull-down Site 3 Location menu (see Figure 7-303) select the location of your H3 storage subsystem previously defined and click Next to continue.
Figure 7-304 displays the message that the MGM1 session was successfully created. Click Finish to exit the Create Session wizard. Alternatively, you have an option to add Copy Sets and click on Launch Add Copy Sets Wizard and follow the instructions described in 7.9.2, Add Copy Sets to a Metro Global Mirror session on page 477
Go back to the TPC for Replication home page and select the Sessions hyperlink to check on the recently created session. Figure 7-305 now displays the Metro Global Mirror session that we successfully created.
Again, note that this session and its name are just a token that represents a Metro Global Mirror Copy Services type. At this stage there is neither a storage server nor any volumes associated with this MGM1 session.
Note that over time various terms have been introduced for the same thing. Peer-to-Peer Remote Copy, or PPRC (known today as Metro Mirror and Global Mirror), started out with primary volumes and secondary volumes. This terminology was fine for a two-site solution. With the arrival of site switching, from an application viewpoint as well as for the storage subsystems used, TPC for Replication introduced a different terminology for these PPRC volumes and their association with a certain site. Host 1, or Site 1, refers to the primary volumes as a starting point. This may also be the local site or application site, and it may always be considered the primary site, but it has the potential to change. You may want to switch application sites from Host 1 in Site 1 to Host 2 in Site 2 and, at the same time, switch the Copy Services roles of the associated storage subsystems. This led to the Host 1, Site 1 volumes and Host 2, Site 2 volumes terminology used in TPC for Replication. In a Metro Global Mirror three-site configuration, the primary site is referred to as Site 1, the intermediate site as Site 2, and the remote site as Site 3. Figure 7-307 displays the panel that provides details on the primary volumes, or local volumes, which are called Host 1 volumes because these volumes reside in Site 1 (also referred to as the application site or local site); these are all synonyms and refer to the same environment. Select the desired Host 1 storage subsystem from the pull-down menu and wait a few seconds to get the Host 1 logical storage subsystem list. Select the LSS where your H1 volume resides. Once the LSS has been selected, choose the appropriate volume from the Host 1 volume pull-down list. An alternative way to add a large number of volumes to this session is to create a CSV file, as explained in "Using CSV files for importing and exporting sessions" on page 196. If you have a CSV file ready, select the Use a CSV file to import copy sets check box and provide the path to your CSV file. In our example we selected the DS8000 disk subsystem and the appropriate volume, as shown in Figure 7-307. Click Next to continue.
Figure 7-307 Add Copy Sets to Metro Global Mirror session- Choose Host 1
If you want to define all volumes within a certain LSS for this session, you can select All Volumes from the Host 1 volume list, as shown in Figure 7-308. Click Next to continue.
Figure 7-308 Add Copy Sets to Metro Global Mirror session - Choose Host 1 and All Volumes option
The next step is to define the Site 2 storage subsystem and volumes, as shown in Figure 7-309. Select the desired Host 2 storage subsystem from the pull-down menu and wait a few seconds for the Host 2 logical storage subsystem list. Select the LSS where your H2
volumes reside. If All Volumes was selected for the LSS in the previous step while defining the Host 1 volumes, you cannot select individual volumes from the Host 2 volume list; TPC for Replication automatically matches all volumes in the selected LSS of the Host 1 storage subsystem with all volumes in the selected LSS of the Host 2 storage subsystem. In our example we selected All Volumes in the Choose Host 1 step. Click Next to continue.
Figure 7-309 Add Copy Sets to Metro Global Mirror session - Choose Host 2 and All Volumes option
The next step is to define the Site 3 storage subsystem and volumes, as shown in Figure 7-310. Select the desired Host 3 storage subsystem from the pull-down menu and wait a few seconds for the Host 3 logical storage subsystem list. Select the LSS where your H3 volumes reside. If All Volumes was selected for the LSSs in the previous steps while defining the Host 1 and Host 2 volumes, you cannot select individual volumes from the Host 3 volume list; TPC for Replication automatically matches all volumes in the selected LSSs of the Host 1 and Host 2 storage subsystems with all volumes in the selected LSS of the Host 3 storage subsystem. In our example we selected All Volumes in the previous steps. Click Next to continue.
Figure 7-310 Add Copy Sets to Metro Global Mirror session - Choose Host 3 and All Volumes option
The Journal 3 volume is the last volume required for a Metro Global Mirror Copy Set definition. Select the desired Journal 3 storage subsystem from the pull-down menu and wait a few seconds for the Journal 3 logical storage subsystem list, as shown in Figure 7-311. Select the LSS where your J3 volumes reside. If All Volumes was selected for the LSSs in the previous steps while defining the Host 1, Host 2, and Host 3 volumes, you cannot select individual volumes from the Journal 3 volume list; TPC for Replication automatically matches all volumes in the selected LSSs of the Host 1, Host 2, and Host 3 storage subsystems with all volumes in the selected LSS of the Journal 3 storage subsystem. In our scenario we selected All Volumes in the previous steps. Click Next to continue.
Figure 7-311 Add Copy Sets to Metro Global Mirror session - Choose Journal 3 and All Volumes option
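The same copy set can also be defined with the TPC for Replication command-line interface instead of the wizard. The lines below are a minimal sketch using csmcli; the serial number and volume IDs are hypothetical, and the role parameters accepted for a Metro Global Mirror session should be confirmed with the command help (mkcpset -help) at your level.

csmcli> mkcpset -h1 DS8000:2107.XXXX1:VOL:0A00 -h2 DS8000:2107.XXXX1:VOL:0B00 -h3 DS8000:2107.XXXX1:VOL:0C00 -j3 DS8000:2107.XXXX1:VOL:0D00 MGM1
csmcli> lscpset MGM1

The lscpset command lists the copy sets that are now defined in the MGM1 session, which is a quick way to verify the result of the wizard or of a CSV import.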
The next screen, in Figure 7-312, displays a message with the matching results. In our example we received a warning message, which can be caused by one of the following reasons:
The number of volumes in the Host 1, Host 2, Host 3, and Journal 3 storage subsystem LSSs is not the same.
The volumes in the Host 2 and Host 3 storage subsystem LSSs are smaller than the volumes in the Host 1 storage subsystem LSS.
The Host 1, Host 2, Host 3, or Journal 3 volumes are already defined in another Copy Services session.
However, this warning message does not mean that the Copy Sets creation failed. Click Next to see the list of available Copy Sets.
Figure 7-312 Add Copy Sets to Metro Global Mirror session - Matching Results
In our example there are Copy Sets in error; if you click the Error hyperlink next to a Copy Set, the message description appears as shown in Figure 7-313. We are not able to create a Copy Set for each H1 volume because the capacity of the volumes in the Host 1 storage subsystem LSS and in the Host 2, Host 3, and Journal 3 storage subsystem LSSs is not the same. As a consequence, a few Copy Sets cannot be selected. The rest of the volumes met the matching criteria and are selected automatically. You still have a chance to modify the current selection and deselect any of the Copy Sets included in the list. The Show hyperlink next to each Copy Set provides additional information. We selected one Copy Set. Click Next to continue.
Figure 7-313 Add Copy Sets to Metro Global Mirror session - Select Copy Sets
The next screen displays the number of Copy Sets that will be created, as well as the number of unresolved (not selected) matches, as shown in Figure 7-314. Click Next to continue.
Figure 7-314 Add Copy Sets to Metro Global Mirror session - Confirm
TPC for Replication internally adds the Copy Set to its database; you can monitor this through the progress panel, which reports the number of Copy Sets added to the TPC for Replication inventory database. Note that this does not establish Metro Global Mirror copy pairs. It
is just a TPC for Replication internal process that adds the Copy Set to the TPC for Replication inventory database. After a few seconds the progress panel reaches 100%, leaves the Adding Copy Sets panel, and proceeds to the next panel, shown in Figure 7-315. Click Finish to exit the Add Copy Sets wizard.
Figure 7-315 All Copy Sets are successfully added to the TPC-R database
Figure 7-316 confirms that all Copy Sets were successfully added to the MGM1 session and that the database was successfully updated. The session status is still Inactive and the session state is Defined.
This concludes the steps through the GUI when you add Copy Sets to a Metro Global Mirror session.
The next message, shown in Figure 7-318, is a warning that you are about to start a Metro Global Mirror session. This action creates Metro Mirror relationships between the H1 and H2 volumes and Global Mirror relationships between the H2 and H3 volumes. In addition, it creates a FlashCopy relationship between the H3 volumes and the J3 journal volumes as part of the Global Mirror configuration. Once all relationships are established, copying data from Site 1 to Site 2 and Site 3 overwrites all data on the H2, H3, and J3 volumes. Click Yes to continue.
The message at the top of the screen in Figure 7-319 confirms that the start of the Metro Global Mirror session is complete. The session is in Preparing state and Warning status. Click the session name hyperlink (MGM1 in our scenario) for more details about this session.
As shown in Figure 7-320, the session has Warning status and there are no errors. The copy progress between the H1 and H2 volumes has reached 100%, but data is still being copied between the H2 and H3 volumes and a consistency group has not yet been formed. Therefore the session is still in Preparing state.
Once the data has been copied from the H2 to the H3 volumes and the first consistency group has been created on the J3 volumes, the session changes to Prepared state and Normal status, as shown in Figure 7-321. Click the Sessions hyperlink in the Health Overview section at the bottom left side of the screen to go back to the main screen.
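The same status, state, and copy-progress information can also be checked from the command-line interface, which is convenient for automated monitoring. The sketch below assumes the csmcli environment described earlier in this book; the output columns vary by release, so verify them against the command help on your system.

csmcli> lssess -l MGM1
csmcli> lscpset MGM1
csmcli> showgmdetails MGM1

lssess -l reports the session status and state (for example, Preparing or Prepared), lscpset lists the copy sets in the session, and showgmdetails reports Global Mirror details such as consistency group formation.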
Once the Metro Global Mirror session is started and has Normal status and Prepared state, the following options are available (these actions can also be issued from the command-line interface, as sketched after the list):
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Start H1 H3 - this will start Global Mirror session between H1 and H3 volumes
Suspend - this will suspend Metro Mirror session between H1 and H2 volumes and it will stop copying with consistent H2 volumes
SuspendH2H3 - this will suspend Global Mirror session between H2 and H3 volumes (it causes Global Mirror to stop forming consistency groups)
Terminate - this will terminate the session (under Cleanup submenu)
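Every session action in the list above can also be issued from csmcli with the cmdsess command, which is what you would use in automation scripts. Because the accepted action keywords depend on the session type and its current state, list them first rather than relying on a remembered spelling; the pattern below is a sketch only.

csmcli> lssessactions MGM1
csmcli> cmdsess -action <one_of_the_keywords_listed_above> MGM1

For example, the Suspend and SuspendH2H3 actions shown in the GUI each map to one cmdsess invocation with the corresponding keyword reported by lssessactions.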
The next message, shown in Figure 7-323, is a warning that you are about to suspend the Global Mirror leg of the Metro Global Mirror session. It will stop the copying of data between the H2 and H3 volumes but will not affect the copying of data from the H1 to the H2 volumes in the Metro Mirror session. Click Yes to continue.
The status of our Metro Global Mirror session has changed from Normal to Severe, indicating that data is no longer replicated between the Host 2 and Host 3 volumes. The state of the session is SuspendedH2H3, as indicated in Figure 7-324.
Click on the session name hyperlink (MGM1 in our example) to find out more details on this session as shown in Figure 7-325. There is a timestamp for H2-J3 pair indicating when the session was suspended. It can be used as a reference.
Once the Metro Global Mirror session has SuspendedH2H3 state the following options are available:
RecoverH3 - this will make H3 volumes available for host access
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Suspend - this will suspend Metro Mirror session between H1 and H2 volumes and it will stop copying with consistent H2 volumes
Terminate - this will terminate the session (under Cleanup submenu)
The next message, shown in Figure 7-327, is a warning that you are about to make the H3 volumes available to your host. Click Yes to continue.
There is a message at the top of the screen in Figure 7-328 indicating that the RecoverH3 action completed successfully. The status of our Metro Global Mirror session is Normal and the state is Prepared. The H3 volumes are now available to your host.
Figure 7-328 Metro Global Mirror session - H3 storage subsystem volume available to host
Click on the session name hyperlink (MGM1 in our example) to find out more details on this session as shown in Figure 7-329. As you can see, there is only one volume pair set left in this Metro Global Mirror Copy Set since we recovered H3 volumes and thus stopped data copying from H2 to H3 volumes.
After the RecoverH3 action the following options are available:
Start H1 H2 - this will restart copying from H1 to H2 volumes in a Metro Mirror session
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Suspend - this will suspend Metro Mirror session between H1 and H2 volumes and it will stop copying with consistent H2 volumes
Terminate - this will terminate the session (under Cleanup submenu)
The next message, shown in Figure 7-331, is a warning that you are about to suspend the Metro Mirror leg of the Metro Global Mirror session. Click Yes to continue.
Note: Once the Suspend is initiated, host I/O to the H1 volumes is temporarily halted. If the Release I/O after Suspend policy was selected during the Metro Mirror session creation step (see Figure 7-300 on page 474), I/O is released automatically. If the Hold I/O after Suspend policy was selected, all systems that can update the H1 volumes remain held until the Extended Long Busy timer (for CKD volumes) or the SCSI Queue Full timer (for FB volumes) expires (the default is 120 seconds) or until you issue the Release I/O command.
There is a message indicating that the suspend was successful. The status of our Metro Global Mirror session has changed from Normal to Severe because data is no longer replicated between the Host 1 and Host 2 volumes. The state of the session is Suspended, as indicated in Figure 7-332.
Click on the session name hyperlink (MGM1 in our example) to find out more details on this session as shown in Figure 7-333. There is a timestamp for H1-H2 pair indicating when the Metro Mirror session was suspended. It can be used as a reference.
Once the Metro Global Mirror session has Suspended state the following options are available:
RecoverH2 - this will make H2 volumes available for host access
RecoverH3 - this will make H3 volumes available for host access
Release I/O - this will release I/O to H1 volumes after Suspend event. This option is available only if the Metro Global Mirror session is created with Hold I/O after Suspend policy.
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Start H1 H3 - this will restart copying from H1 to H3 volumes in a Global Mirror session
Terminate - this will terminate the session (under Cleanup submenu)
The next message shown in Figure 7-335 is a warning that you are about to allow writes to continue to H1 volumes. Click Yes to continue.
There is a message at the top of the screen indicating that Release I/O action has been successfully completed as shown in Figure 7-336.
The following options are available after the Release I/O action:
RecoverH2 - this will make H2 volumes available for host access
RecoverH3 - this will make H3 volumes available for host access
Release I/O - this will release I/O to H1 volumes after Suspend event. This option is available only if the Metro Global Mirror session is created with Hold I/O after Suspend policy.
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Start H1 H3 - this will restart copying from H1 to H3 volumes in a Global Mirror session
Terminate - this will terminate the session (under Cleanup submenu)
The next message, shown in Figure 7-338, is a warning that you are about to make the H2 volumes available to your host. Click Yes to continue.
There is a message at the top of the screen in Figure 7-339 indicating that the RecoverH2 action completed successfully. The status of our Metro Global Mirror session is Normal and the state is Target Available, indicating that the H2 volumes are available to your host.
Figure 7-339 Metro Mirror session - H2 storage subsystem volume available to host
Click on the Session name hyperlink (MGM1 in our example) to find out more details on this session as shown in Figure 7-340. There is a timestamp for H1-H2 pair indicating when the Metro Mirror session was suspended. It can be used as a reference.
After the Metro Global Mirror session is recovered and in Target Available state, the following options are available (a command-line note on reversing the copy direction follows the list):
Release I/O - this will release I/O to H1 volumes after Suspend event. This option is available only if the Metro Mirror session is created with Hold I/O after Suspend policy.
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Start H2 H3 - this will restart copying from H2 to H3 volumes in a Global Mirror session
Enable Copy to Site 1 - before reversing the direction of copying in a failover and failback session, you must run this command, and confirm that you want to reverse the direction of replication. It will enable the Start H2 H1 H3 command.
Terminate - this will terminate the session (under Cleanup submenu)
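If you script this direction reversal with csmcli, the same two-step pattern applies: the session must first accept the enable action before the reversed start action becomes available. The action keywords shown below are assumptions, so confirm them with lssessactions MGM1 while the session is in this state.

csmcli> lssessactions MGM1
csmcli> cmdsess -action enable_copy_to_site_1 MGM1
csmcli> cmdsess -action start_h2:h1:h3 MGM1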
The next message, shown in Figure 7-342, is a warning that you are about to enable the command that initiates copying data from the H2 to the H1 volumes. This command is disabled to protect against accidentally copying over production data. Ensure that none of the volumes in this session that are located at Site 1 are being used by any application before you enable the command that allows copying data to Site 1. Click Yes to continue.
The message at the top of the screen in Figure 7-343 confirms that the Enable Copy to Site 1 command completed. The status of our Metro Global Mirror session is the same as it was after the RecoverH2 command: Normal status and Target Available state, indicating that the H2 volumes are available to your host.
The following options are available at this stage:
Release I/O - this will release I/O to H1 volumes after Suspend event. This option is available only if the Metro Mirror session is created with Hold I/O after Suspend policy.
Start H2 H1 H3 - this will restart copying from H2 to H1 volumes in a Metro Mirror session and H1 to H3 volumes in a Global Mirror session
Start H2 H3 - this will restart copying from H2 to H3 volumes in a Global Mirror session
Re-enable Copy to Site 2 - it will enable the Start H1 H2 H3 command
Terminate - this will terminate the session (under Cleanup submenu)
The next message shown in Figure 7-345 is a warning that you are about to initiate Global Mirror between H2 and H3 volumes. Click Yes to continue.
The message at the top of the screen in Figure 7-346 confirms that the start of Global Mirror between H2 and H3 is complete. The session is in Prepared state and Normal status. Click the session name hyperlink (MGM1 in our example) for details about this session.
As shown in Figure 7-347 Global Mirror data copying progress reached 100%. Click the Sessions hyperlink in the Health Overview section at the bottom left hand side of the screen to go back to the Sessions screen.
At this stage the following options are available:
Start H2 H1 H3 - this will create Metro Mirror session between H2 to H1 volumes and Global Mirror session between H1 and H3 volumes
Start H2 H3 - this will restart copying from H2 to H3 volumes in a Global Mirror session
Suspend - this will stop copying with consistent H1 volumes
Terminate - this will terminate the session (under Cleanup submenu)
The next message shown in Figure 7-349 is a warning that you are about to initiate Metro Mirror between H2 and H1 volumes as well as Global Mirror between H1 and H3 volumes. Click Yes to continue.
The message at the top of the screen in Figure 7-350 confirms that the start of Metro Global Mirror between H2, H1, and H3 is complete. The session is initially in Preparing state and Warning status because the data between H2 and H1 is not yet 100% synchronized. Once the H1 and H2 volumes are fully synchronized and the first data consistency is created between the H3
and J3 volumes, the session goes to Prepared state and Normal status. Click the session name hyperlink (MGM1 in our example) for details about this session.
As shown in Figure 7-351 Metro Mirror data copying progress reached 100% and data consistency has been created on J3 volumes. Click the Sessions hyperlink in the Health Overview section at the bottom left hand side of the screen to go back to the Sessions screen.
At this stage the following options are available:
Start H2 H1 H3 - this will create Metro Mirror session between H2 to H1 volumes and Global Mirror session between H1 and H3 volumes
Suspend - this will stop copying with consistent H1 volumes
Suspend H1 H3 - this will suspend Global Mirror session between H1 and H3 volumes (it causes Global Mirror to stop forming consistency groups)
Start H2 H3 - this will restart copying from H2 to H3 volumes in a Global Mirror session
Terminate - this will terminate the session (under Cleanup submenu)
The next message shown in Figure 7-353 is a warning that you are about to suspend Metro Mirror between H2 and H1 volumes. Click Yes to continue.
The status of our Metro Global Mirror session has changed from Normal to Severe, indicating that data is not being replicated between the Host 2 and Host 1 volumes. The state of the session is Suspended, as indicated in Figure 7-354.
Click on the Session name hyperlink (MGM1 in our example) to find out more details on this session as shown in Figure 7-355. There is a timestamp for H1-H2 pair indicating when the Metro Mirror session was suspended. It can be used as a reference.
The following options are available at this stage:
Recover H1 - this will make H1 volumes available for host access
Recover H3 - this will make H3 volumes available for host access
Release I/O - this will release I/O to H1 volumes after Suspend event. This option is available only if the Metro Mirror session is created with Hold I/O after Suspend policy.
Start H2 H1 H3 - this will create Metro Mirror session between H2 to H1 volumes and Global Mirror session between H1 and H3 volumes
Start H2 H3 - this will restart copying from H2 to H3 volumes in a Global Mirror session
Terminate - this will terminate the session (under Cleanup submenu)
7.9.13 RecoverH1 in a Metro Global Mirror session (after suspending H2 H1 H3 Metro Global Mirror session)
Once the Metro Global Mirror session and its associated Copy Sets have been suspended in the H2 H1 H3 direction, we can initiate the RecoverH1 action by selecting the Metro Global Mirror session radio button (MGM1 in our example, shown in Figure 7-356) and the RecoverH1 action from the Select Action pull-down menu. Click Go to continue.
The next message, shown in Figure 7-357, is a warning that you are about to make the H1 volumes available to your host. Click Yes to continue.
There is a message at the top of the screen in Figure 7-358 indicating that the RecoverH1 action completed successfully. The status of our Metro Global Mirror session is Normal and the state is Target Available, indicating that the H1 volumes are available to your host.
Figure 7-358 Metro Mirror session - H1 storage subsystem volume available to host
Click on the Session name hyperlink (MGM1 in our example) to find out more details on this session as shown in Figure 7-359. There is a timestamp for H1-H2 pair indicating when the Metro Mirror session was suspended. It can be used as a reference.
After the Metro Mirror session is recovered in Target Available state the following options are available:
Start H1 H3 - this will restart copying from H1 to H3 volumes in a Global Mirror session
Start H2 H1 H3 - this will restart copying from H2 to H1 volumes in a Metro Mirror session and H1 to H3 volumes in a Global Mirror session
Enable Copy to Site 2 - before reversing the direction of copying in a failover and failback session, you must run this command, and confirm that you want to reverse the direction of replication. It will enable the Start H1 H2 H3 command.
Terminate - this will terminate the session (under Cleanup submenu)
The next message, shown in Figure 7-361, is a warning that you are about to enable the command that initiates copying data from the H1 to the H2 volumes. This command is disabled to
protect against accidentally copying over production data. Ensure that none of the volumes in this session that are located at Site 2 are being used by any application before you enable the command that allows copying data to Site 2. Click Yes to continue.
The message at the top of the screen in Figure 7-361 confirms that the Enable Copy to Site 2 command completed. The status of our Metro Global Mirror session is the same as it was after the Recover command: Normal status and Target Available state, indicating that the H1 volumes are available to your host.
The following options are available at this stage:
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Re-enable Copy to Site 1 - it will enable the Start H2 H1 H3 command
Terminate - this will terminate the session (under Cleanup submenu)
The next message, shown in Figure 7-364, is a warning that you will stop data replication from the H1 to the H2 volumes and establish a Global Mirror session between the H1 and H3 volumes. Click Yes to continue.
There is a message at the top of the screen in Figure 7-365 indicating that the Start H1 H3 action has been successfully completed. The status of our Metro Global Mirror session is Normal and the state is Prepared. Click on the Session name hyperlink (MGM1 in our scenario) for more details on this session.
The following options are available at this stage:
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Start H1 H3 - this will start Global Mirror session between H1 and H3 volumes
Suspend - this will suspend Metro Mirror session between H1 and H2 volumes and it will stop copying with consistent H2 volumes
Terminate - this will terminate the session (under Cleanup submenu)
The next message shown in Figure 7-367 is a warning that you are about to suspend Global Mirror between H1 and H3 volumes. Click Yes to continue.
The status of our Metro Global Mirror session has changed from Normal to Severe, indicating that data is no longer replicated between the Host 1 and Host 3 volumes. The state of the session is Suspended, as indicated in Figure 7-368.
Click on the Session name hyperlink (MGM1 in our example) for more details on this session, as shown in Figure 7-369. There is a timestamp for the H1-J3 pair indicating when the session was suspended. It can be used as a reference.
The following options are available at this stage:
Recover - this will make target volumes available for host access
Start H1 H3 - this will start Global Mirror session between H1 and H3 volumes
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Terminate - this will terminate the session (under Cleanup submenu)
7.9.17 Recover Metro Global Mirror session (after suspending H1 H3 Global Mirror session)
Once the Metro Global Mirror session and its associated Copy Sets have been suspended between the H1 and H3 volumes, we can initiate the Recover action by selecting the Metro Global Mirror session radio button (MGM1 in our example, shown in Figure 7-370) and the Recover action from the Select Action pull-down menu. It will make the H3 volumes available to your host. Click Go to continue.
The next message, shown in Figure 7-371, is a warning that you are about to make the H3 volumes available to your host. Click Yes to continue.
There is a message at the top of the screen in Figure 7-372 indicating that Recover action has been successfully completed. The status of our Metro Global Mirror session is Normal and the State is Target Available. H3 volumes are now available to your host.
Click on the Session name hyperlink (MGM1 in our example) for more details on this session as shown in Figure 7-373.
The following actions are available at this stage:
Start H1 H3 - this will start Global Mirror session between H1 and H3 volumes
Enable Copy to Site 1 - before reversing the direction of copying in a failover and failback session, you must run this command, and confirm that you want to reverse the direction of replication. It will enable the Start H3 H1 H2 command.
Terminate - this will terminate the session
The next message, shown in Figure 7-375, is a warning that you are about to enable the command that initiates copying data from the H3 to the H1 volumes. This command is disabled to protect against accidentally copying over production data. Ensure that none of the volumes in this session that are located at Site 1 are being used by any application before you enable the command that allows copying data to Site 1. Click Yes to continue.
The message at the top of the screen in Figure 7-376 confirms that the Enable Copy to Site 1 command completed. The status of our Metro Global Mirror session is the same as it was after the Recover command: Normal status and Target Available state, indicating that the H3 volumes are available to your host.
The following options are available at this stage:
Start H3 H1 H2 - this will restart copying from H3 to H1 and then to H2 volumes but without data consistency due to a Global Copy relationship between H3 and H1 volumes
Re-enable Copy to Site 3 - before reversing the direction of copying in a failover and failback session, you must run this command, and confirm that you want to reverse the direction of replication. It will enable the Start H1 H3 command.
Terminate - this will terminate the session (under Cleanup submenu)
7.9.19 Start H3 H1 H2 in a Metro Global Mirror session (after recovering H3 volumes in a H1 H3 Global Mirror session)
Once the Recover action against the H3 volumes in the Global Mirror session between H1 and H3 has been executed (see 7.9.17, Recover Metro Global Mirror session (after suspending H1 H3 Global Mirror session) on page 520) and Enable Copy to Site 1 has been issued, it is possible to re-establish data copying from H3 to H1 and then to the H2 volumes. Note that the relationship between H3 and H1 is Global Copy only, so data consistency is not guaranteed during this process. To initiate this action, select Start H3 H1 H2 from the Select Action pull-down menu and click Go, as shown in Figure 7-377.
The next message shown in Figure 7-378 is a warning that you are about to initiate data replication between H3 and H1 volumes in Global Copy mode (without data consistency) and between H1 and H2 volumes in a Metro Mirror session. Click Yes to continue.
The message at the top of the screen in Figure 7-379 confirms that the Start H3 H1 H2 action is complete. The session is in Preparing state and Warning status because there is no data consistency between the H3 and H1 volumes. Click on the Session name hyperlink (MGM1 in our example) to find out more details on this session.
As shown in Figure 7-380 data copying is in progress between H3 and H1 and between H1 and H2. Click the Sessions hyperlink in the Health Overview section at the bottom left hand side of the screen to go back to the Sessions screen.
Note: After starting the session in the H3 H1 H2 direction, the session stays in Preparing state indefinitely. The reason is that there is no journal volume at Site 1 where a consistent copy could be formed. If you want to switch back to the H1 site, you need to stop I/O on the H3 volumes and suspend the session. This causes all data that has not yet been copied from the H3 to the H1 volumes to be copied. Once the session is suspended, you can recover it back to the H1 volumes.
At this stage the following options are available:
Start H3 H1 H2 - this will restart copying from H3 to H1 and then to H2 volumes but without data consistency due to a Global Copy relationship between H3 and H1 volumes
Suspend - this will stop copying with consistent H1 volumes
Terminate - this will terminate the session (under Cleanup submenu)
The next message shown in Figure 7-382 is a warning that you are about to stop data replication between H1 and H3 volumes. Click Yes to continue.
The status of our Metro Global Mirror session has changed from Warning to Severe, indicating that data is no longer replicated between the Host 3 and Host 1 volumes. The state of the session is Suspended, as indicated in Figure 7-383.
Click on the Session name hyperlink (MGM1 in our example) for more details on this session, as shown in Figure 7-384. There is a timestamp for the H1-H3 pair indicating when the session was suspended. It can be used as a reference.
The following options are available at this stage:
Recover - this will make target volumes available for host access
Start H3 H1 H2 - this will restart copying from H3 to H1 and then to H2 volumes but without data consistency due to a Global Copy relationship between H3 and H1 volumes
Terminate - this will terminate the session (under Cleanup submenu)
7.9.21 Recover Metro Global Mirror (after suspending H3 H1 H2 Metro Global Mirror session)
Once the H3 H1 H2 Metro Global Mirror session and its associated Copy Sets have been suspended, we can initiate the Recover action by selecting the Metro Global Mirror session radio button (MGM1 in our example, shown in Figure 7-385) and the Recover action from the Select Action pull-down menu. It will make the H1 volumes available to your host. Click Go to continue.
The next message, shown in Figure 7-386, is a warning that you are about to make the H1 volumes available to your host. Click Yes to continue.
There is a message at the top of the screen in Figure 7-387 indicating that the Recover action completed successfully. The status of our Metro Global Mirror session is Normal and the state is Target Available. The H1 volumes are now available to your host.
Figure 7-387 Metro Global Mirror session - H1 storage subsystem volume available to host
Click on the Session name hyperlink (MGM1 in our example) for more details on this session as shown in Figure 7-388.
After the Recover action the following options are available:
Start H3 H1 H2 - this will restart copying from H3 to H1 and then to H2 volumes but without data consistency due to a Global Copy relationship between H3 and H1 volumes
Enable Copy to Site 2 - before reversing the direction of copying in a failover and failback session, you must run this command, and confirm that you want to reverse the direction of replication. It will enable the Start H1 H2 H3 command.
Terminate - this will terminate the session (under Cleanup submenu)
If you want to get back to the original Metro Global Mirror configuration (H1 H2 H3), you need to issue the Enable Copy to Site 2 command. It will allow you to issue the Start H1 H2 H3 command.
The next message, shown in Figure 7-390, is a warning that you are about to terminate the Metro Global Mirror relationships between the H1, H2, and H3 volumes. Note that if you have to start the very same Metro Global Mirror session again, a full copy from H1 to the H2 and H3 volumes will be required. Click Yes to continue.
There is a message at the top of the screen in Figure 7-391 indicating that Terminate action has been successfully completed. The status of our Metro Global Mirror session is now Inactive and the state is Defined.
Click on the session name hyperlink (MGM1 in our example) to find out more details on this session. As shown in Figure 7-392, the Copy Set volume pairs are inactive and there is no data
copying in progress, but the Metro Mirror and Global Mirror logical paths between the H1, H2, and H3 volumes are still there.
Once the Metro Global Mirror session is terminated, the following option is available:
Start H1 H2 H3 - this will restart copying from H1 to H2 volumes in a Metro Mirror session and from H2 to H3 volumes in a Global Mirror session
Chapter 8. DS8000 recovery scenarios
Note: When returning from the H2 to the H1 volumes, the H1 volumes will be fully resynchronized. This means that the data on them will be overwritten. It is recommended that you create a copy of the H1 volumes before performing this action. This can be achieved by creating a FlashCopy session in which the H1 volumes are the source volumes. This guarantees that you have a consistent copy of the data in case of a failure at the H2 site while resynchronizing back to the H1 site.
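A command-line sketch of such a protective copy follows. It assumes a FlashCopy session named H1_BACKUP that has already been created (through the GUI or with mksess), the volume IDs are hypothetical, and the flash action keyword should be confirmed with lssessactions before you script it.

csmcli> mkcpset -h1 DS8000:2107.XXXX1:VOL:0A00 -t1 DS8000:2107.XXXX1:VOL:0E00 H1_BACKUP
csmcli> lssessactions H1_BACKUP
csmcli> cmdsess -action flash H1_BACKUP

Taking this point-in-time copy immediately before resynchronizing back to H1 leaves you with a restartable image of the H1 data if the H2 site fails during the resynchronization.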
Note: The Start action after failure of the H2 site will perform a full copy from H1 volumes to H2 volumes.
1. Verify that all sessions are in Normal state.
2. Execute the Suspend action as described in 7.3.6, Suspend Metro Mirror session on page 308.
3. Perform the actions on the H2 site.
4. Execute the Start H1 H2 action to restart the Metro Mirror session as described in 7.3.4, Start Metro Mirror session on page 303.
Note: The Start H1 H2 action after a suspend only copies the changes from the H1 volumes to the H2 volumes.
A command-line sketch of this sequence follows.
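The listing below is an illustrative csmcli equivalent of that sequence for a Metro Mirror session. The session name MM1 and the action keywords are assumptions; list the keywords that your session accepts with lssessactions MM1 before scripting them.

csmcli> lssess -l MM1
csmcli> cmdsess -action suspend MM1
(perform the maintenance actions on the H2 site)
csmcli> cmdsess -action start_h1:h2 MM1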
1. Execute the Start H1 H2 action as described in 7.4.3, Start Global Mirror session on page 329.
Note: The Start H1 H2 action after a failure of the H2 site will perform a full copy from the H1 volumes to the H2 volumes.
6. Perform the actions on the H1 site.
To return to site H1, perform the following steps:
1. Execute the Start H2 H1 action as described in 7.6.8, Start H2 H1 Metro Mirror session on page 396.
Note: The Start H2 H1 action after a suspend only copies the changes from the H2 volumes to the H1 volumes. It is recommended that you create a copy of the H1 volumes before performing this action. This can be achieved by creating a FlashCopy session where the H1 volumes are the source volumes. This guarantees that you have a consistent copy of the data in case of a failure at the H2 site while resynchronizing back to the H1 site.
2. Wait for the volumes to synchronize. Verify that all sessions are in Normal state.
3. Stop I/O on the servers accessing volumes on the H2 site.
4. Execute the Suspend action as described in 7.6.5, Suspend Metro Mirror session on page 390.
5. Execute the Recover action as described in 7.6.6, Recover Metro Mirror session on page 392.
6. Execute the Start H1 H2 action as described in 7.6.8, Start H2 H1 Metro Mirror session on page 396.
7. Start I/O on the servers accessing volumes on the H1 site.
1. Execute the Recover action as described in 7.6.6, Recover Metro Mirror session on page 392.
2. Start I/O on the servers accessing volumes on the H2 site.
To move production back to the H1 site once it has been re-established, perform the following steps:
1. Execute the Start H2 H1 action as described in 7.6.8, Start H2 H1 Metro Mirror session on page 396.
Note: The Start H2 H1 action after a failure of the H1 site performs a full copy from the H2 volumes to the H1 volumes.
2. Wait for the volumes to synchronize. Verify that all sessions are in Normal state.
3. Stop I/O on the servers accessing volumes on the H2 site.
4. Execute the Suspend action as described in 7.6.5, Suspend Metro Mirror session on page 390.
5. Execute the Recover action as described in 7.6.6, Recover Metro Mirror session on page 392.
6. Execute the Start H1 H2 action as described in 7.6.3, Starting the session on page 385.
7. Start I/O on the servers accessing volumes on the H1 site.
4. Execute the Start H1 H2 action as described in 7.8.3, Start H1 H2 Global Mirror session on page 445.
5. Perform the practice actions on the H2 site.
8.15.1 Planned outage of local H1 site, with a production move to intermediate H2 site and return
In this scenario, the goal is to take down the local H1 site, move production to the intermediate H2 site, and then return to the original configuration. Perform the following steps to accomplish that:
1. Execute the Start H1 H2 H3 action as described in 7.9.3, Start H1 H2 H3 Metro Global Mirror session on page 486. Issue this action to begin Metro Global Mirror on your system when the production I/O is running on the H1 site. Wait for the session to go to the Prepared state.
Note: Stop production I/O on H1 before continuing to the next step.
2. Execute the Suspend action as described in 7.9.6, Suspend Metro Global Mirror session on page 494. This command creates the planned outage by suspending the Metro Mirror relationship between H1 and H2. When this step is complete, the session state should be Suspended and recoverable.
3. Execute the RecoverH2 action as described in 7.9.8, RecoverH2 in a Metro Global Mirror session on page 498. This command makes H2 target-available. When the session state is Target Available at H2, run production I/O to H2.
4. Execute the Start H2 H3 action as described in 7.9.10, Start H2 H3 Metro Global Mirror session (after RecoverH2) on page 503. This command starts a Global Mirror session during the outage of H1. Production I/O can continue on H2. Wait for the session to go to the Prepared state.
Note: A suspend is not necessary before going to the next step.
5. Execute the Start H2 H1 H3 action as described in 7.9.11, Start H2 H1 H3 Metro Global Mirror session (after RecoverH2) on page 505. Issue this command when H1 is available and ready to be brought back into the configuration. This command starts a Metro Global Mirror session with the H1 and H2 roles reversed from the original configuration. Production I/O can continue on H2. Wait for the session to go to the Prepared state.
Note: The following steps are optional and return the configuration to the original one. Stop production I/O on H2 before continuing to the next step, when you are ready to return to the H1 location.
6. Execute the Suspend action as described in 7.9.6, Suspend Metro Global Mirror session on page 494. This command suspends the Metro Mirror relationship between H2 and H1. When this step is complete, the session state should be Suspended and recoverable.
7. Execute the RecoverH1 action as described in 7.9.13, RecoverH1 in a Metro Global Mirror session (after suspending H2 H1 H3 Metro Global Mirror session) on page 510. This command makes H1 target-available. When the session state is Target Available at H1, run production I/O to H1.
8. Execute the Start H1 H2 H3 action as described in 7.9.3, Start H1 H2 H3 Metro Global Mirror session on page 486. This command restores the original configuration. A command-line sketch of the complete sequence follows.
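The listing below maps the sequence above onto illustrative csmcli calls for our MGM1 session. The action keywords are assumptions; list the keywords that the session actually accepts in each state with lssessactions MGM1 before using them in a script.

csmcli> cmdsess -action start_h1:h2:h3 MGM1
(stop production I/O on H1)
csmcli> cmdsess -action suspend MGM1
csmcli> cmdsess -action recoverh2 MGM1
(run production I/O on H2)
csmcli> cmdsess -action start_h2:h3 MGM1
csmcli> cmdsess -action start_h2:h1:h3 MGM1
(to return: stop I/O on H2, then suspend, recover H1, and start H1 H2 H3 again)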
8.15.3 Planned outage of local H1 site and intermediate H2 site in normal configuration with production move to remote H3 site and return
In this scenario, we cover planned actions on the H1 site and the H2 site while in the normal configuration: we move production to the remote H3 site and then restore the original configuration. Perform the following actions to accomplish that:
1. Execute the Start H1 H2 H3 action as described in 7.9.3, Start H1 H2 H3 Metro Global Mirror session on page 486. Issue this action to begin Metro Global Mirror on your system when the production I/O is running on H1. Wait for the session to go to the Prepared state.
Note: Stop production I/O on H1 before continuing to the next step.
2. Execute the Suspend action as described in 7.9.16, Suspend Metro Global Mirror session (after Start H1 H3) on page 517. This action creates the planned outage by suspending the Metro Mirror relationship between the H1 site and the H2 site. When this step is complete, the session state should be Suspended and recoverable.
3. Execute the RecoverH3 action as described in 7.9.17, Recover Metro Global Mirror session (after suspending H1 H3 Global Mirror session) on page 520. This action makes H3 target-available. When the session state is Target Available at H3, run production I/O to H3.
Note: Continue running production I/O on H3 before continuing to the next step.
4. Execute the Start H3 H1 H2 action as described in 7.9.19, Start H3 H1 H2 in a Metro Global Mirror session (after recovering H3 volumes in a H1 H3 Global Mirror session) on page 524. Issue this action when H1 and H2 are available and ready to be brought back into the configuration. Changes from production at the remote H3 site flow to the local H1 site and on to the intermediate H2 site. The H3-H1 and H1-H2 pairs are Global Copy. The session stays in the Preparing state.
Note: When you are ready to return to the H1 site, stop production I/O on H3 before continuing to the next step.
5. Execute the Suspend action as described in 7.9.20, Suspend Metro Mirror session (after Start H3 H1 H2) on page 527. This action suspends the Global Copy relationships between H3 and H1 and between H1 and H2. When this step is complete, the session state should be Suspended and recoverable.
6. Execute the RecoverH1 action as described in 7.9.13, RecoverH1 in a Metro Global Mirror session (after suspending H2 H1 H3 Metro Global Mirror session) on page 510. This action makes H1 target-available. When the session state is Target Available at H1, run production I/O to H1.
7. Execute the Start H1 H2 H3 action as described in 7.9.3, Start H1 H2 H3 Metro Global Mirror session on page 486. Issue this action when H2 is available and ready to be brought back into the configuration. This action restores the original configuration (a csmcli sketch of the whole flow follows).
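As with the previous scenario, the same flow can be driven from csmcli. The listing below is illustrative only, and the action keywords must be confirmed with lssessactions MGM1 in each state.

csmcli> cmdsess -action suspend MGM1
csmcli> cmdsess -action recoverh3 MGM1
(run production I/O on H3)
csmcli> cmdsess -action start_h3:h1:h2 MGM1
(stop production I/O on H3)
csmcli> cmdsess -action suspend MGM1
csmcli> cmdsess -action recoverh1 MGM1
csmcli> cmdsess -action start_h1:h2:h3 MGM1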
2. Execute the SuspendH2H3 action as described in 7.9.4, SuspendH2H3 Metro Global Mirror session on page 489. Issue this action to suspend only the Global Mirror portion of the session. The Metro Mirror continues between H1 and H2. This causes the session state to be suspended.
3. Once the H3 site is available again, execute the Start H1 H2 H3 action as described in 7.9.3, Start H1 H2 H3 Metro Global Mirror session on page 486. This command restores the original configuration by starting the Global Mirror portion.
8.16.1 Unplanned outage of local H1 site, production move to intermediate H2 site and return
In this scenario, we describe the actions for an unplanned outage of the local site (H1): we move production to the intermediate site (H2) and then return to the original configuration.
Note: Make sure that the heartbeat function is enabled.
1. Execute the Start H1 H2 H3 action as described in 7.9.3, Start H1 H2 H3 Metro Global Mirror session on page 486. Issue this command to begin Metro Global Mirror on your system when the production I/O is running on H1. Assume that the session state goes to Prepared in this example.
2. An unplanned outage of H1 causes the session to go Suspended. The session is recoverable if it was in a Prepared state when the outage occurred; if the session state was Preparing, the session is not recoverable. The outage also causes your production I/O on H1 to fail.
3. Execute the RecoverH2 action as described in 7.9.8, RecoverH2 in a Metro Global Mirror session on page 498. This command makes H2 target-available. When the session state is Target Available at H2, run production I/O to H2.
4. Execute the Start H2 H3 action as described in 7.9.10, Start H2 H3 Metro Global Mirror session (after RecoverH2) on page 503. This command starts a Global Mirror session during the outage of H1. Production I/O can continue on H2. Wait for the session to go to the Prepared state.
Note: A Suspend action is not necessary before going to the next step.
5. Execute the Start H2 H1 H3 action as described in 7.9.11, Start H2 H1 H3 Metro Global Mirror session (after RecoverH2) on page 505. Issue this command when H1 is available and ready to be brought back into the configuration. This command starts a Metro Global Mirror session with the H1 and H2 roles reversed from the original configuration. Production I/O can continue on H2. Wait for the session to go to the Prepared state.
Note: The following steps are used to return to the original configuration once the H1 site is available. Stop production I/O on H2 before continuing to the next step.
6. Execute the Suspend action as described in 7.9.16, Suspend Metro Global Mirror session (after Start H1 H3) on page 517. This command suspends the Metro Mirror relationship between H2 and H1. When this step is complete, the session state should be Suspended and recoverable.
7. Execute the RecoverH1 action as described in 7.9.17, Recover Metro Global Mirror session (after suspending H1 H3 Global Mirror session) on page 520. This command makes H1 target-available. When the session state is Target Available at H1, run production I/O to H1.
8. Execute the Start H1 H2 H3 action as described in 7.9.3, Start H1 H2 H3 Metro Global Mirror session on page 486. This command restores the original configuration.
8.16.3 Unplanned outage of local H1 site and intermediate H2 site, with production move to remote H3 site and return
In this scenario, we cover the actions for an unplanned outage of both the H1 site and the H2 site: we move production to the H3 site and then return to the original configuration. Perform the following actions after the outage:
Note: Make sure that the heartbeat function is enabled.
1. Execute the RecoverH3 action as described in 7.9.17, Recover Metro Global Mirror session (after suspending H1 H3 Global Mirror session) on page 520. This action makes H3 target-available. The session will be Severe/Target Available. When the session state is Target Available at H3, run production I/O to H3.
Note: Continue running production I/O on H3 before continuing to the next step.
2. To move production back to the H1 site once site H1 and site H2 have been restored, execute the Start H3 H1 H2 action as described in 7.9.19, Start H3 H1 H2 in a Metro Global Mirror session (after recovering H3 volumes in a H1 H3 Global Mirror session) on page 524. Issue this action when H1 and H2 are available and ready to be brought back into the configuration. Changes from production at the remote H3 site flow to the local H1 site and on to the intermediate H2 site. The H3-H1 and H1-H2 pairs are Global Copy. The session stays in the Preparing state.
Note: Stop production I/O on H3 before continuing to the next step.
3. Execute the Suspend action as described in 7.9.20, Suspend Metro Mirror session (after Start H3 H1 H2) on page 527. This command suspends the Global Copy relationships between H3 and H1 and between H1 and H2. When this step is complete, the session state should be Suspended and recoverable.
4. Execute the RecoverH1 action as described in 7.9.21, Recover Metro Global Mirror (after suspending H3 H1 H2 Metro Global Mirror session) on page 529. This action makes H1 target-available. When the session state is Target Available at H1, run production I/O to H1.
5. Execute the Start H1 H2 H3 action as described in 7.9.3, Start H1 H2 H3 Metro Global Mirror session on page 486. This action restores the original configuration.
The RecoverH3 action acts differently than other recover actions. In this case, the action only makes the remote H3 site target-available and does not switch the application site to H3. The application site remains the original host site, and the session acts as it does in a normal Metro Mirror Failover/Failback session. While the session is in this mode, it remains capable of disaster recovery and freezes all pairs to maintain consistency during an error. You can also suspend the session by action and recover to the intermediate site if necessary. However, if the intermediate site is H2, it is not possible to restart a full three-site solution directly. To get three sites back up and running, you need to copy back to H1 by using the Start H2 H1 action, and then suspend and recover to H1. Once H1 is the application host again, you can issue the Start H1 H2 H3 action to restart three-site support. If this is done, all data written to H3 during the practice is overwritten with the data that is on H1.
In addition, you can restart the front end after a normal recover to H3 while in an H1 H2 H3 configuration. For example, after issuing a RecoverH3 action, you can issue a Start H1 H2 action to set up disaster recovery. The difference here is that the RecoverH3 action in this scenario requires you to issue the Start H1 H2 action to set up disaster recovery while practicing on H3.
3. Execute the RecoverH3 action as described in 7.9.5, RecoverH3 Metro Global Mirror session on page 491.
4. Execute the Start H1 H2 action.
After you have executed these actions, keep running the application on H1 and set up a practice environment on H3. The difference between the second method and the first method is that a Suspend is issued instead of a SuspendH2H3. When the session is in Suspended mode (versus SuspendedH2H3), the session does not automatically go into practice mode: the application site switches to H3 and you must issue a Start H1 H2 to enable the practice mode.
8.17.2 Practice scenario 1: while practicing, disaster occurs at H1 site, recover to H2 site
This section describes an H3 site practice scenario where a disaster occurs at the H1 site, followed by a recovery to the H2 site. If a disaster occurs at the H1 site, the session immediately suspends itself. You then need to complete the following steps:
1. Execute the RecoverH2 action as described in 7.9.8, RecoverH2 in a Metro Global Mirror session on page 498.
2. Move the application I/O to the H2 site.
3. Execute the Start H2 H1 action to make the session capable of disaster recovery again.
8.17.3 Practice scenario 2: while practicing, planned outage at H1 site, recover to H2 site
This section describes an H3 site practice scenario with a planned outage at the H1 site, followed by a recovery to the H2 site. While practicing, if you want a planned outage with a switch to the H2 site, do the following steps:
1. Execute the Suspend action as described in 7.8.6, Suspend Global Mirror session on page 453.
2. Execute the RecoverH2 action as described in 7.9.8, RecoverH2 in a Metro Global Mirror session on page 498.
3. Move the application I/O to the H2 site.
4. Execute the Start H2 H1 action as described in 7.8.9, Start H2 H1 on page 460, to make the session capable of disaster recovery again.
8.17.4 Practice scenario 3: while practicing with production at H2 site, move production back to H1 site
This section describes an H3 practice scenario with production running at the H2 site, followed by a move of production back to the H1 site. While practicing, if you moved production to the H2 site, you need to move it back to the H1 site before you can start three-site support. Assuming the session is running H2 H1, do the following steps to move production back to the H1 site:
1. Execute the Suspend action as described in 7.9.6, Suspend Metro Global Mirror session on page 494.
2. Issue the RecoverH1 action.
3. Move the application I/O to the H1 site.
4. Execute the Start H1 H2 action to make the session capable of disaster recovery again and move the application I/O to the H3 site.
5. Execute the Start H1 H2 H3 action as described in 7.9.3, Start H1 H2 H3 Metro Global Mirror session on page 486 when the H3 site is ready to receive a copy of the data at the H1 site. When this action completes, all writes that were done to the H3 site are overwritten by the data on the H1 site.
3. Perform the actions on the H2 site storage subsystem.
When the Site 2 storage subsystem is ready, perform the following step to restart data copying from the H1 to the H2 volumes:
1. Execute the Start H1 H2 action as described in 6.6.5, Start H1->H2 in a Basic HyperSwap session on page 247.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this document. Note that some publications referenced in this list might be available in softcopy only.
- Tivoli Storage Productivity Center V5.1 Technical Guide, SG24-8053
- Tivoli Storage Productivity Center V5.2 Release Guide, SG24-8204
- Configuring Secure Communication between HyperSwap and Tivoli Storage Productivity Center for Replication using TLS and AT-TLS, REDP-5061
You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
- Tivoli Common Reporting Users Guide, SC23-8737
- Tivoli Storage Productivity Center Users Guide, SC27-4048
Online resources
These websites are also relevant as further information sources:
- Tivoli Storage Productivity Center developerWorks site:
  http://www.ibm.com/developerworks/servicemanagement/sm/tpc/index.html
- Tivoli Storage Productivity Center Support site:
  http://www-947.ibm.com/support/entry/portal/product/tivoli/tivoli_storage_productivity_center?productContext=1039251977
Index
Symbols
. 293 consistency group 10 Consistency Group interval time 320, 406, 473 Consistency Group interval time value 473 Copy Services Manager 8 Copy Services volume pairs 296 copy sets 9 adding to database 246, 327, 384, 412, 484 copy sets definition 240, 278, 323, 344, 380, 409, 477 copy-on-write technique 286 Create Session wizard 277 CSM Provider 107 CSMCLI session create 323 CSV file 279, 324 copy set export 345
A
active server 40 AOPBATCH utility 55 application design 8
B
basic functions 2 Basic HyperSwap 556 cache fast write 230 commands 221 Concurrent Copy 230 considerations 223 Copy Set 240 customization 219 Disable HyperSwap 231 disk subsystems 223 Event triggers 219 hardware reserves 229 JES2 considerations 226 JES3 considerations 225 MFA implementation 221 overview 216 Planned HyperSwap function 218 session 217 session options 236 session status 246 sharing volumes 230 Sysplex CDS 228 Unplanned HyperSwap 219 Basic HyperSwap session 235 Basic HyperSwap establishing paths 233
D
data source provider 105 data source test connection 119 DB2 buffer pool 57 DB2 Data source 112 DB2 database tables 99 DB2 Driver type 111 DB2 repository install 54 administrative ID 55 creating repository database 57 DB2 instance bind 55 IWNDBALO job 59 IWNDBELM job 61 IWNDBHAE job 73 IWNDBHAH job 74 IWNDBHAR job 78 IWNDBHAS job 85 IWNDBHWL job 63, 67, 73 IWNDBMIG job 86 IWNDBREP job 67 IWNDBSHL job 60 tables 99 DB2 storage group 54 DB2 SYSADM authority 116 DB2M display DDF command 111 DDF keyword 111 Disable HyperSwap 231 DS6000 SMC 45 TPC for Replication 45 DS6000 communication 44 DS6000 username password file 132 DS6800 configuration /persost/etc/fccwUsers 132 DS8000 first slot 46 recovery scenarios 535 DS8000 connectivity 46 DS8000 Username Password 139
C
CCW interface 43 CLASSPATH update 55 Cleanup submenu 432 commands $CKPTDEF 226 D HS,STATUS 222 lsnetworkport -l 136 SETHS DISABLE 223 SETHS ENABLE 222, 231 SETHS SWAP 223 setnetworkport 136 shownetworkport 136 TSO CGROUP FREEZE 218 communication paths 8 Concurrent Copy 230
E
Enable Copy to Site 2 command 398, 513 Enable copy to Site 2 command 532 error code 201 57 ESS 800 Storwatch Specialist 130 ESS 800 communication 43 Ethernet card ports 133 Ethernet ports 46 DSCLI configuration 136 GUI configuration 133 Extended Long Busy 293, 339, 557 Extended Long Busy status 218
F
Fail MM/GC if the target is online 320, 406 FCP link 266 FCP ports 268 firewall considerations 42 Flash 286 Flash action 448 Flash Copy initiate 284 FlashCopy Inactive session status 283 incremental 274 Initiate Background Copy 286 overview 3 session details 287 T-zero copy 3 FlashCopy options changing 274 No Copy 274 persistent 274 FlashCopy relationship Terminate 289 FlashCopy session confirmation 285 create 272 Flash 286 FlashCopy volumes 279 pictograph 273 Terminate 286 FlashCopy session options 286 FlashCopy target volumes 273 Flashing status Global Mirror session Flashing status 450 Freeze and Go policy 294, 339 Freeze command 293, 339
Journal volumes 439 Recover action 421 Global Mirror Failover/Failback Create session 404 planned outages 541 Global Mirror Failover/Failback practice unplanned outages 547 Global Mirror Failover/Failback w/ Practice practice 545 unplanned outages 547 Global Mirror FOFB consistency group 447 Copy Set matching errors 441 Enable Copy action 458 Enable Copy options 460 Enable Copy to Site 2 469 Intermediate volumes 435 Journal 2 storage subsystem 441 journal volumes 435 Global Mirror FOFB session Enable Copy to Site 2 options 469 reverse copy direction 467 Global Mirror session copy sets definition 438 Flash action 448 initiate copying 424 options 332 Peer-to-Peer Remote Copy 439 Recover action 333, 419, 428, 456, 465 Recover command 430 Recover options 467 Severe status 463 Start session 329 Suspend 332 Suspend action 417, 453, 462 Suspend action options 465 Suspend command 430 Suspend options 456 Suspend status 333 Suspend session 426 Target Available state options 458 Terminate 335 Terminate action 432, 434, 469 Terminate options 471 Global Mirror Single Failover/Failback w/ Practice 434 Global Mirror Single Direction session 319 unplanned outages 538 Global Mirror Single direction Journal 2 storage subsystem 326
H
H1 H2 unplanned outage 552 H2 site planned outage 549 H2 unplanned outage 552 H3 practice scenario 555 H3 site practice scenario 555 hardware requirements 41
G
GDPS HyperSwap 216 Global Mirror 4 copy set add 323
heartbeat function 551 high availability capability 7 High Availability environment 42 HMC Ethernet ports 47 Hold I/O after Suspend policy 310, 494 Host 1 logical storage subsystem 297 Host 1 primary volumes 409 Host 1 storage subsystem 297 HostSite1 10 HostSite2 10 HSIB API address space 221 HSIB Management address space 221 HyperSwap command 556 HyperSwap unplanned outages 557
L
logical path 266 removing 270 logical paths creation 269
M
Metro Global Mirror definition 473 H1 H2 unplanned outage 552 H2 unplanned outage 552 H2H3 is suspended 553 H3 practice scenarios 553 H3 practice site 555 H3 site remote outage 550 local H1 site outage 548 outage H1 site 549 planned H2 site outage 549 planned outages 548 RecoverH1 549 RecoverH3 command 553 SuspendedH2H3 553 SuspendH2H3 551 unplanned outage 551 Metro Global Mirror session Recover action 520 RecoverH3 action 491 Copy Set definition 481 Enable Copy to Site 1 501 Enable Copy to Site 1 command 522 Enable Copy to Site 1 options 502 Consistency Group interval time 473 Properties panel 473 Recover action 529 Recover action options 531 RecoverH1 action 510 RecoverH2 action 498, 505 RecoverH3 action options 493 Start action 486 Start H1 H3 action 515 Start H2 H3 503 Start H3 H1 H2 action 524 Start options 488 Suspend action 494, 508, 517 Suspended state options 496 SuspendedH2H3 state 490 SuspendedH2H3 state options 491 SuspendH2H3 action 489 Target Available state options 500 Terminate action 532 Terminate action options 534 Metro Mirror 3 CKD volumes 293, 338 copy sets definition 296 Failover/Failback w/ Practice session 394 session GUI create 291 session options 306 session Properties panel 292
I
ICAT server 43 install jobs IWNDB2ZZ 59 IWNDBALO 58 IWNDBELM 58 IWNDBHAE 58 IWNDBHAH 58 IWNDBHAR 58 IWNDBHAS 58 IWNDBHWL 58 IWNDBMIG 59 IWNDBREP 58 IWNDBSHL 58 install_RM log 52, 124 IWNDBELM job 61 IWNDBHWL job 63 IWNE9407E message 299 IWNE9422W message 326 IWNR1021I message 295 IWNR1026I message 290, 318, 351 IWNR1800W message 304, 351, 369, 425, 461 IWNR1801W message 373 IWNR1803W message 427 IWNR1805W message 335, 375 IWNR1806W message 334, 457 IWNR1811W message 289 IWNR1828W message 311, 363 IWNR1834W message 423 IWNR1934W message 394 IWRN1026I message 365, 368 IWRN1834W message 399
J
J2EE Connector Architecture 114 JDBC bind 55 JDBC bind utility 55 JDBC Provider Data sources 109 JDBC Providers 105 JournalSite2 10
Single Direction session 291 single direction session 291 Suspend action 360 Suspend status 309 Terminate 316 Metro Mirror copy sets adding to database 301 Metro Mirror Failover/Failback copy set 344 create 337 unplanned outages 540 Metro Mirror Failover/Failback w/ Practice practice 543 Metro Mirror session Add Copysets 381 CSV file 381 define all volumes 298 Enable Copy to Site 1 395 Enable Copy to Site 2 398 Failover/Failback 350 health overview 352 options 394, 398 Recover action 312, 364, 393 start 303 started options 352 Stop 400 Stop action 314, 372 stop data replication 400 Suspend command 370 suspend options 312 Suspended State 362 Suspend 308 Terminate action 374 Metro Mirror Single Direction planned outages 536 unplanned outage 536 Metro Mirror Suspend Policy options 473 Metro Mirror Freeze command 293, 339 Hold I/O after Suspend policy 310 target online 293, 338 Metro Global Mirror H3 site practice scenario 555 MFA implementation 221
R
Recover H1 action 549 RecoverH1 action 556 RecoverH1 command 552 RecoverH2 action 498, 503, 505, 548, 555 RecoverH2 command 551 RecoverH3 action 491, 553-555 Recovery Point Objective 293, 339 recovery scenarios 535 Redbooks website 559 Contact us xiv Release I/O action options 498 Release I/O after Suspend option 294, 339, 474 Release I/O Metro Mirror session Release I/O 310 removing logical paths 270 Rolling disaster 294, 339
S
SCSI Queue Full timer 310 SCSI queue full timer 293, 339 Secure Sockets Layer 42 Security Settings panel 42 Security tab 42 session 9, 272 session commands 13 Session create Properties panel 292 session GUI create 235, 272, 318, 404, 471 Failover/Failback 337 session Properties panel 338 session states 11 session types 12 setnetworkport 136 Severe status 309 SFI number 139 shownetworkport 136 snetworkport -l 136 Source 10 SSL protocol 42 standby servers 40 Start action 548 Storwatch Specialist user 129 Suspend Metro Mirror session 308 SuspendedH2H3 553 SuspendH2H3 action 489, 551 SYSADM authority 56 SYSSTC service class 221 system requirements 40
N
Initiate Flash action 284
P
Parallel Access Volumes 284, 302, 328, 413, 444 PAV 284, 302, 328, 413, 444 Peer-to-Peer Remote Copy 240, 297, 344, 409, 478 physical planning 42 planning hardware requirements 41 ports 43 practice session 543
T
Target 10
TCP/IP port 43 TCP/IP port 2433 9 Terminate action Metro Mirror session 402 TPC for Replication 43 IP communication 42 repository database 98 z/OS Basic Edition 40 TPC for Replication administrative user 116 Two Site Business Continuity 6 T-zero copy 3
U
unplanned action on H2 site 540
V
verify WebSphere security setup 119
W
Web browser requirements 41 WebSphere master configuration 108 WebSphere Admin console 101 Log and Trace 122 WebSphere Admin console URL 101 WebSphere install JDBC provider 110 Websphere install CSM Provider 107 wsowner user ID 116
Z
z/OS install parameters 59 SYSDEFLT storage group 54
Back cover