
PowerMax and VMAX All Flash

Configuration and Business Continuity Administration


Welcome to the PowerMax and VMAX All Flash Configuration and Business Continuity Administration course.

Copyright © 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Dell, EMC, and other trademarks are
trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective
owners. Published in the USA.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” DELL EMC MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any DELL EMC software described in this publication requires an applicable software license. The trademarks, logos,
and service marks (collectively "Trademarks") appearing in this publication are the property of DELL EMC Corporation and other parties. Nothing
contained in this publication should be construed as granting any license or right to use any Trademark without the prior written permission of the party
that owns the Trademark.

AccessAnywhere, Access Logix, AdvantEdge, AlphaStor, AppSync, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems,
Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Aveksa, Bus-Tech, Captiva, Catalog Solution, C-Clip, Celerra, Celerra
Replicator, Centera, CenterStage, CentraStar, EMC CertTracker, CIO Connect, ClaimPack, ClaimsEditor, Claralert, CLARiiON, ClientPak,
CloudArray, Codebook Correlation Technology, Common Information Model, Compuset, Compute Anywhere, Configuration Intelligence, Configuresoft,
Connectrix, Constellation Computing, CoprHD, EMC ControlCenter, CopyCross, CopyPoint, CX, DataBridge, Data Protection Suite, Data Protection
Advisor, DBClassify, DD Boost, Dantz, DatabaseXtender, Data Domain, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, DLS
ECO, Document Sciences, Documentum, DR Anywhere, DSSD, ECS, eInput, E-Lab, Elastic Cloud Storage, EmailXaminer, EmailXtender, EMC
Centera, EMC ControlCenter, EMC LifeLine, EMCTV, Enginuity, EPFM, eRoom, Event Explorer, FAST, FarPoint, FirstPass, FLARE, FormWare,
Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, Illuminator, InfoArchive, InfoMover, Infoscape,
Infra, InputAccel, InputAccel Express, Invista, Ionix, Isilon, ISIS, Kazeon, EMC LifeLine, Mainframe Appliance for Storage, Mainframe Data Library, Max
Retriever, MCx, MediaStor, Metro, MetroPoint, MirrorView, Mozy, Multi-Band Deduplication, Navisphere, Netstorage, NetWitness, NetWorker, EMC
OnCourse, OnRack, OpenScale, Petrocloud, PixTools, Powerlink, PowerPath, PowerSnap, ProSphere, ProtectEverywhere, EMC Proven, EMC Proven
Professional, QuickScan, RAPIDPath, EMC RecoverPoint, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine,
SAN Advisor, SAN Copy, SAN Manager, ScaleIO, Smarts, Silver Trail, EMC Snap, SnapImage, SnapSure, SnapView, SourceOne, SRDF, EMC
Storage Administrator, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder,
TwinStrata, UltraFlex, UltraPoint, UltraScale, Unisphere, Universal Data Consistency, Vblock, VCE, Velocity, Viewlets, ViPR, Virtual Matrix, Virtual
Matrix Architecture, Virtual Provisioning, Virtualize Everything, Compromise Nothing, Virtuent, VMAX, VMAXe, VNX, VNXe, Voyence, VPLEX, VSAM-
Assist, VSAM I/O PLUS, VSET, VSPEX, Watch4net, WebXtender, xPression, xPresso, Xtrem, XtremCache, XtremSF, XtremSW, XtremIO, YottaYotta,
Zero-Friction Enterprise Storage.

Revision Date: 01/2022

Revision Number: ES112STG00370.POWERMAXOS 5978.2.0

Course Overview

Description

This Specialist level course provides participants with an in-depth understanding of configuration tasks on
PowerMax and VMAX All Flash arrays. It also provides the knowledge required to deploy and manage PowerMax
and VMAX All Flash array-based local and remote replication solutions for business continuity needs. Key
features and functions of the arrays are covered in detail. Topics include storage provisioning concepts, virtual
provisioning, device creation and port management, and service level-based storage allocation to hosts.

Operational details and implementation considerations for Dell EMC TimeFinder SnapVX and Symmetrix Remote
Data Facility (SRDF) are covered. Participants will use Unisphere for PowerMax and Solutions Enabler (SYMCLI)
to manage configuration changes on the arrays.

Hands-on lab exercises using Symmetrix Command Line Interface (SYMCLI) and Unisphere for PowerMax will be
performed on ESXi hosts attached to PowerMax and VMAX All Flash arrays.

Audience

This course is intended for Dell EMC customers, partners, and employees responsible for configuration and
administration of PowerMax and VMAX All Flash arrays.

Objectives

Upon completion of this course, you should be able to:
• Provide an overview of PowerMax and VMAX All Flash configurations
• Discuss storage provisioning concepts
• Manage ports and port characteristics
• Perform Service Level based provisioning to hosts
• Provide an overview of storage management in a virtualized environment
• Use Unisphere for PowerMax for Compliance Monitoring and Workload Planning
• Provide details on local and remote replication offerings in PowerMax and VMAX All Flash arrays
Agenda – Days 1 and 2

Day 1

Module/Lessons:
• Configuration Administration Overview
  – PowerMax and VMAX All Flash Overview
  – Storage Provisioning Overview
• Virtual Provisioning Concepts
  – Virtual Provisioning Overview
• Device and Port Management
  – Device Management
  – Port Management
  – Port Management with Unisphere and SYMCLI

Supporting Activities:
• Lab: Explore Lab Environment with Unisphere for PowerMax and SYMCLI
• Lab: Port Management with Unisphere and SYMCLI

Day 2

Module/Lessons:
• Storage Allocation using Auto-provisioning Groups
  – Auto-provisioning Groups Overview
  – Host Considerations – Storage Allocation
  – Service Level Based Provisioning with Unisphere
  – Service Level Based Provisioning with SYMCLI
• Management in a Virtualized Environment
  – Virtual Server Management – Unisphere for PowerMax
  – EMC VSI for VMware vSphere Web Client

Supporting Activities:
• Lab: SL Based Provisioning with Unisphere
• Lab: Service Level Based Provisioning with SYMCLI
• Lab: Cascaded Storage Groups and SL Type Modifications
• Lab: Managing Host I/O Limits

The agenda lists the modules, lessons, and labs covered in the PowerMax and VMAX Family
Configuration and Business Continuity Administration course. This is the recommended agenda for days
one and two.

Agenda – Day 3

Module/Lessons:
• Monitoring and Workload Planning with Unisphere for PowerMax
  – Monitor SRP
  – Monitor Storage Group Compliance
  – Monitor Compression
  – Workload Planning
• Introduction to Business Continuity
  – TimeFinder
  – SRDF
  – Integrated Solutions
• TimeFinder SnapVX Operations
  – TimeFinder SnapVX Concepts
  – TimeFinder SnapVX Operations using Unisphere for PowerMax

Supporting Activities:
• Lab: Monitoring SRP and SL Compliance with Unisphere for PowerMax
• Lab: Workload Planning with Unisphere for PowerMax
• Lab: TimeFinder SnapVX Operations

This is the recommended agenda for day three.

Agenda – Days 4 and 5

Day 4

Module/Lessons:
• SRDF/Synchronous Operations
  – SRDF Initial Setup
  – SRDF Disaster Recovery Operations
  – SRDF Decision Support Operations
  – SRDF/S Operations – Unisphere for PowerMax
• SRDF/Asynchronous Operations
  – SRDF/A Concepts and Operations
  – SRDF/A Resiliency Features
  – SRDF/A Multi-session Consistency (MSC)

Supporting Activities:
• Lab: SRDF/Synchronous Operations
• Lab: SRDF/Asynchronous Operations

Day 5

Module/Lessons:
• SRDF/Metro Operations
  – SRDF/Metro Overview
  – SRDF/Metro Setup
  – SRDF/Metro Monitoring
  – SRDF/Metro Failure Scenarios and Operations
  – SRDF/Metro Device Pair Online Expansion

Supporting Activities:
• Lab: SRDF/Metro

This is the recommended agenda for days four and five.

Module: Configuration Administration Overview

Upon completion of this module, you should be able to:

• Describe the PowerMax and VMAX All Flash arrays

• Provide an overview of key features

• Identify the available tools for managing these arrays

• Articulate Storage Provisioning concepts


This module describes the PowerMax and VMAX All Flash arrays and provides an overview of key
features. It also covers tools for management of the arrays and storage provisioning concepts.

Lesson: Overview
This lesson covers the following topics:

• PowerMax and VMAX All Flash model comparison

• Key features

• Management tools


This lesson describes the PowerMax and VMAX All Flash arrays. It provides an overview of
key features and tools for management of the arrays.

PowerMax and VMAX All Flash

Scalability:
• PowerMax 2000 – 1 to 2 Bricks, up to 96 drives
• PowerMax 8000 – 1 to 8 Bricks, up to 288 drives
• VMAX 250F/FX – 1 to 2 V-Bricks, up to 100 drives
• VMAX 450F/FX – 1 to 4 V-Bricks, up to 960 drives
• VMAX 850F/FX and VMAX 950F/FX – 1 to 8 V-Bricks, up to 1920 drives

PowerMax models consist of the PowerMax 2000 and the PowerMax 8000. The VMAX All Flash
models include the VMAX 250, the VMAX 450, the VMAX 850, and the VMAX 950. PowerMax and VMAX
All Flash arrays provide appliance-like packaging. Engines and drives are packaged in set sizes, and
software is included. In the PowerMax, this appliance-like packaging is known as a P-Brick. In the VMAX
All Flash, they are called V-Bricks. Additional capacity packs, also in set sizes, can be added to the arrays
to increase the usable storage. The PowerMax and VMAX All Flash arrays are 100% virtually provisioned
and preconfigured in the factory. The arrays are built for management simplicity, extreme performance,
and massive scalability in a small footprint.

PowerMax – Model Comparison

                            PowerMax 2000                          PowerMax 8000
Number of Bricks            1-2                                    1-8
Cache per Brick             512 GB, 1 TB, 2 TB                     1 TB, 2 TB
Engine Type                 2.5 GHz, 48-core                       2.8 GHz, 72-core
Max 2.5” Drives per Array   96                                     288
Max Usable Capacity         1 PBe                                  4 PBe
Max Front-End Ports         64                                     256
InfiniBand Fabric           None (direct InfiniBand connections)   Dual 18-Port switches
NVMe Drives (2.5”)          1.92 TB, 3.84 TB, 7.68 TB, 15.36 TB    1.92 TB, 3.84 TB, 7.68 TB, 15.36 TB
SCM Drives (2.5”)           750 GB, 1.5 TB                         750 GB, 1.5 TB

Essentials Software Package (both models): PowerMaxOS, eManagement, TimeFinder SnapVX, Compression and
Deduplication, Non-Disruptive Migration (NDM), and AppSync Starter Package

Pro Software Package (both models): All from Essentials Package, plus Data at Rest Encryption (D@RE), SRDF/S,
SRDF/A, SRDF 3-site and 4-site, SRDF/Metro, Embedded NAS (eNAS), Unisphere 360, PowerPath (75 Hosts), Storage
Resource Management (SRM), and AppSync Full Suite

This table shows a comparison of the PowerMax models.

The PowerMax 2000 is configured with one to two bricks. When fully configured with two bricks, the
PowerMax 2000 supports up to 96 2.5” drives, providing up to 1 Petabyte of usable capacity, and up to 64
front-end ports. There are no switches in the PowerMax 2000, as the two engines in the bricks are directly
connected to each other for data and communications.

The PowerMax 8000 is configured with one to eight bricks. With the maximum eight-brick configuration,
the PowerMax 8000 supports up to 288 2.5” drives, providing up to 4 Petabytes of usable capacity. When
fully configured, the 8000 provides up to 256 front-end ports for host connectivity. The internal fabric
interconnect uses dual InfiniBand 18-port switches for redundancy and availability.

Two software offerings are available with the PowerMax arrays. The Essentials Package, which is a
starter package, and the Pro Package, which has additional software that is included with the system.

PowerMax 2000 Bay Configurations
[Rack elevation diagrams: a one-system configuration (single-engine or dual-engine) occupies the lower half of
the rack; a two-system configuration stacks a second single- or dual-engine system in the upper half. Each
engine is shown with its SPS modules and DAEs.]

The PowerMax 2000 array provides flexible racking options. The PowerMax 2000 system, whether a
single-brick or dual-brick is located in the bottom half of the rack, leaving the upper 20U for an additional
PowerMax system.

Two systems can be configured in a single rack, and flexible options include two single-brick systems, two
dual-brick systems, and a mix of the two. Certain restrictions and configuration rules apply. See
www.dellemc.com for details.

Onsite connectivity to the arrays, as with the PowerMax 8000 system, requires a service laptop, which is
not included.

PowerMax 8000 Bay Configurations

[Rack elevation diagram of a dual-bay PowerMax 8000: System Bay 1 holds Engines 1-4 with Ethernet switches,
InfiniBand switches (IB Switch A and B), a service tray, DAEs, and SPS modules; System Bay 2 holds Engines 5-8
with an Ethernet switch, service tray, DAEs, and SPS modules.]

The PowerMax 8000 systems are available in quad-engine bay configurations only. Up to four Bricks and
supporting Standby Power Supplies (SPSs) are installed per bay in the PowerMax 8000 systems. Support
for fewer than four Bricks is available with the PowerMax 8000, installed in a quad-engine bay. Ethernet
switches and, in multi-engine systems, InfiniBand switches are installed in System Bay 1 only. Service
trays are available in both PowerMax 8000 bays for connecting a service laptop, which is not included, for
onsite access to the array. No KVM is included with the PowerMax 8000 system.

VMAX All Flash – Model Comparison

                      VMAX 250                VMAX 450                VMAX 850                VMAX 950
Number of V-Bricks    1-2                     1-4                     1-8                     1-8
Cache per V-Brick     512 GB, 1 TB, or 2 TB   1 TB (2 TB available    1 TB or 2 TB            1 TB or 2 TB
                                              as upgrade)
Engine Type           2.2 GHz, 48-core        2.6 GHz, 32-core        2.7 GHz, 48-core        2.3 GHz, 72-core
Max 2.5” Drives       100                     960                     1920                    1920
Max Usable Capacity   1 PB                    1 PB (2 PB with 2 TB    4 PB                    4 PB
                                              engine upgrade)
Max Front-End Ports   64                      96                      192                     192 (OS), 256 (MF)
InfiniBand Fabric     None                    Dual 12-Port switches   Dual 18-Port switches   Dual 18-Port switches

F Package (all models): HYPERMAX OS, Thin Provisioning, Inline Compression, Non-Disruptive Migration, Virtual
Volumes, QOS: Host I/O Limits, Embedded Management, TimeFinder SnapVX, AppSync iCDM Starter Bundle

FX Package (all models): All from F Package, plus Data at Rest Encryption (D@RE), SRDF/S, SRDF/A, SRDF 3-site
and 4-site, SRDF/Metro, Embedded NAS, Unisphere 360, PowerPath (75 Hosts), CloudArray Enabler*, ViPR SRM, and
AppSync Advanced

This table shows a comparison of the VMAX All Flash models.

The VMAX 250 is configured with one to two engines. When fully configured with two engines, the VMAX
250 supports up to 100 2.5” drives, providing up to 1 Petabyte of usable capacity, and up to 64 front-end
ports. There are no switches in the VMAX 250, as the two engines are directly connected to each other for
data and communications.

The VMAX 450 is configured with one to four engines. With the maximum four-engine configuration, the
VMAX 450 supports up to 960 2.5” drives. This configuration provides up to 2 Petabytes of usable
capacity, when all engines are upgraded with 2 Terabytes of cache. When fully configured, the 450
provides up to 96 front-end ports for host connectivity. The internal fabric interconnect uses dual
InfiniBand 12-port switches for redundancy and availability.

The VMAX 850 is configured with one to eight engines. With the maximum eight-engine configuration, the
VMAX 850 supports up to 1920 2.5” drives, providing up to 4 Petabytes of usable capacity. When fully
configured, the 850 provides up to 192 front-end ports for host connectivity. The internal fabric
interconnect uses dual InfiniBand 18-port switches for redundancy and availability.

The VMAX 950 is also configured with one to eight engines. With the maximum eight-engine configuration,
the VMAX 950 supports up to 1920 2.5” drives, providing up to 4 Petabytes of usable capacity. When fully
configured, the 950 provides up to 256 front-end ports for host connectivity. The internal fabric
interconnect uses dual InfiniBand 18-port switches for redundancy and availability.

Two software offerings are available with the VMAX All Flash arrays:

• F Package, which is a starter package

• FX Package, which has additional software that is included with the system

VMAX 250 Bay Configurations

[Rack elevation diagrams: a one-system configuration (single-engine or dual-engine) occupies the lower half of
the rack; a two-system configuration stacks a second system in the upper half. Each engine is shown with its
SPS modules and direct-attach DAEs.]

*Restrictions apply – see dell.com/support for details

Like the PowerMax 2000, flexible racking options with the VMAX 250 models include upgrade capabilities
in a single system. Notice that the system, whether a single-engine or dual-engine configuration is located
at the bottom half of the rack. This configuration leaves the upper 20U for an additional VMAX 250 system,
or foreign components such as customer-provided hosts and switches.

Two systems can be configured in a single rack, and flexible options include two single-engine systems,
two dual-engine systems, and a mix of the two. Certain restrictions and configuration rules apply. See
www.dellemc.com for details.

Onsite connectivity to VMAX 250 arrays, as with the PowerMax systems, requires a service laptop, which
is not included.

VMAX All Flash 450, 850, 950 Bay Configurations

Dual-Engine – VMAX 450F/FX, 850F/FX, 950F/FX

[Rack elevation diagrams: System Bay 1 contains Engines 1 and 2, Ethernet switches, and the KVM; System Bays
2-4 contain odd and even engine pairs with Ethernet switches. Each engine has direct-attach A and B DAEs.]

VMAX All Flash 450, 850 and 950 models use dual-engine bays. The dual-engine bay configuration
contains up to two engines per bay, a supporting power subsystem, and up to 4 DAEs. All 4 DAEs in the
bay are direct-attach, two to each engine. There is no daisy chaining in the dual-engine bays.

In dual-engine systems, there are unique components only present in System Bay 1. These components
include the Keyboard, Video, Mouse (KVM), a pair of Ethernet switches for internal communications, and
dual InfiniBand switches used for the fabric interconnect between engines. The dual InfiniBand switches
are present in multi-engine systems only. In system bays 2 through 8, a work tray is located in place of the
KVM and Ethernet switches. The work tray provides the option to connect a service laptop, which is not
included, for remote access to scripts, diagrams, and other service processor functionality.

Dynamic Virtual Matrix

[Diagram: the Dynamic Virtual Matrix connects application access to storage access, pooling multi-processor
scale-out resources and dynamic resource allocation across data services such as data integrity, QoS, virtual
provisioning and allocation, TimeFinder, SRDF, and FAST.]

PowerMax and VMAX All Flash arrays feature the world’s first and only Dynamic Virtual Matrix. It enables
hundreds of CPU cores to be pooled and allocated on-demand to meet the performance requirements for
dynamic mixed workloads. It is architected for agility and efficiency at scale.

Resources are dynamically apportioned to host applications, data services, and storage pools to meet
application service levels. These resources enable the system to automatically respond to changing
workloads and to optimize itself to deliver the best performance available from the current hardware.

The Dynamic Virtual Matrix provides:

• Fully redundant architecture along with fully shared resources within a dual controller node and across
multiple controllers

• A dynamic load distribution architecture

The Dynamic Virtual Matrix is essentially the BIOS of the array operating software. It provides a truly
scalable multi-controller architecture that scales and manages from two fully redundant storage controllers
up to 16 fully redundant storage controllers. All controllers share common I/O, processing, and cache
resources.

Multi-Core Technology
PowerMax and VMAX All Flash

[Diagram: under the default (balanced) setting, HYPERMAX OS dedicates a front-end core pool to the front-end
(FA) ports and a back-end core pool to the back-end (DA) ports. When one front-end port carries 100% of the
load, all cores in the front-end pool can be applied to that port.]

The PowerMax and VMAX All Flash systems can focus hardware resources, namely cores, as needed by
storage data services.

The PowerMax and VMAX All Flash architecture provides a CPU pooling concept, and further, it provides
a set of threads on a pool of cores. The pools provide a service for FE access, BE access, or a data
service such as replication. As displayed here, the default configuration has the services balanced across
FE ports, BE ports, and data services.

A unique feature enables the system to provide the best performance possible even when the workload is not
well distributed across the various ports/drives and central data services, as the example shows when there is
100% load on a port pair. In this specific use case, all the FE cores can be applied to the heavily used,
active dual port for a period of time.

There are three core allocation policies: balanced, front-end, and back-end. Dell EMC Services can shift the
bias of the pools between balanced, front-end (for example, lots of small host I/O and high cache hits), and
back-end (for example, write-heavy workloads). This shifting becomes dynamic and automated over time. This
change cannot be made with management software.

Key Features

• 100% Virtually Provisioned – arrays shipped preconfigured with Data Devices, Data Pools, Storage Resource
  Pool (SRP), and Service Levels (SLs)
• Service Level Provisioning – classify applications at the Storage Group level
• Local and Remote Replication – TimeFinder SnapVX and SRDF
• MMCS – integrated service processor in System Bay 1 (Management Module Control Station)
• eManagement – manage the array without software installed on a host

Here is a brief overview of some of the features of PowerMax and VMAX All Flash arrays. The arrays are
preconfigured at the factory, and are all virtually provisioned. The preconfiguration creates all required
Data Pools and RAID protection levels.

Service Level provisioning provides a simpler way to provision storage, enabling classification of
applications at the Storage Group level.

For local and remote replication, TimeFinder and SRDF are available on the arrays. TimeFinder SnapVX
does not require a target volume, providing space-efficient point-in-time local copies of data. Symmetrix
Remote Data Facility, or SRDF, offers multiple remote replication options, including synchronous and
asynchronous replication.

PowerMax and VMAX All Flash arrays have Management Module Control Stations (MMCSs) installed in
System Bay 1. The MMCS is an integrated service processor which provides environmental monitoring of,
support functionality for, and accessibility to the arrays.

eManagement is a capability that enables customers to run Dell EMC array management software
components inside the array. eManagement provides a tightly integrated management solution for
customers interested in managing a single PowerMax or VMAX All Flash array. Dell EMC Solutions
Enabler (SE) and Unisphere for PowerMax provide array management and control of the arrays.

Additional Features – Data Reduction

Dell EMC Array           Data Reduction Technique               Description
VMAX All Flash Family    Inline Compression                     • Compresses data as it is written to flash drives
                                                                • Storage Group (SG) level
                                                                • Open systems FBA data only
PowerMax Family          Inline Compression and Deduplication   • Further improves efficiency
                                                                • Reduces the number of copies of identical tracks
                                                                  that are stored on drives
                                                                • Open systems FBA data only

Data reduction technologies supported in PowerMax and VMAX All Flash arrays include compression and
deduplication.

VMAX All Flash arrays support inline compression. Inline compression compresses data as it is written to
flash drives and is a feature of Storage Groups. Compression is enabled by default, and new I/O to an SG
is compressed when written to disk. If there is existing data on the SG, it starts to compress in the
background. After compression is disabled, new I/O is no longer compressed, and existing data remains
compressed until it is written again, at which time it decompresses. Compression is available on open
systems (FBA) only, which includes eNAS data.

PowerMax arrays feature deduplication with inline compression. Deduplication works hand-in-hand with
inline compression. Deduplication reduces the number of copies of identical tracks that are stored on
back-end drives. Enabling deduplication also enables compression. That is, deduplication cannot operate
independently of compression. Both must be active. In addition, deduplication operates across an entire
system. It is not possible to use compression only on some Storage Groups and compression with
deduplication on others. As with inline compression, deduplication is available on open systems (FBA)
data only.

Additional Features – eNAS

File Services: NFS, SMB

Embedded Network-Attached Storage (eNAS) extends file-level storage capability to PowerMax and
VMAX All Flash arrays. The storage hypervisor on the array manages and protects embedded services by
extending high availability to these services that traditionally would have run outside the array. It provides
direct access to hardware resources to maximize performance. Virtual instances of Data Movers and
Control Stations running on the PowerMax and VMAX All Flash arrays provide the NAS services.

Additional Features – PowerPath

[Diagram: a host with multiple HBAs connects through the SAN to the array; PowerPath load balances I/O across
the paths (for up to 75 hosts) and fails over when a path fails.]

The PowerMax Pro and VMAX All Flash FX software packages include Dell EMC PowerPath and
PowerPath/Virtual Edition (PowerPath/VE) for up to 75 hosts. PowerPath dynamically routes I/O to the
most efficient paths using patented algorithms to balance workloads. Testing of paths includes health and
performance checks, and improves performance and availability compared to traditional host-based
multipathing tools. With PowerPath, users gain the automated data path management and load balancing
for all heterogeneous servers, networks, and storage deployed in their environment. This streamlined data
path enables more predictable and consistent application availability while providing up to three times the
IOPS even during periods of high I/O.

PowerMaxOS 5978 integration with PowerPath includes advanced host and array information sharing,
including hostname, operating system version, and cluster details. Host I/O Limits and Service Levels are
known to PowerPath as well, and automated host information mapping is included for simplified
management. PowerMax machine learning integrates with PowerPath/VE to improve application
performance using I/O tagging, currently supported with Oracle only.

Configuration Tools

• End-user tools for configuration and management
  – Solutions Enabler
  – Unisphere for PowerMax

The initial configuration of the PowerMax and VMAX All Flash arrays is done at the Dell EMC factory with
SymmWin and Simplified SymmWin. These software applications run on the Management Module Control
Station (MMCS) of the arrays, and are restricted for use by Dell EMC personnel only. Once the arrays
have been installed, Solutions Enabler (SYMCLI) and Unisphere for PowerMax can be used to manage
them.

Management Tools

• Solutions Enabler and Unisphere for PowerMax
  – Installed in local, remote, or embedded configurations

[Diagram: three deployment options – Local (clients access a management server running SE and Unisphere that
is attached directly to the array), Remote (clients access a Unisphere/management server that reaches the
arrays through a SYMAPI server), and Embedded (eManagement runs on the array itself). SRDF-linked arrays are
shown managed from each option.]

Local, remote, or embedded instances of Solutions Enabler (SE) and Unisphere for PowerMax
(Unisphere) can be used to monitor, manage and configure PowerMax and VMAX All Flash arrays.
Solutions Enabler provides Command Line Interface (CLI) access, and Unisphere provides a graphical
User Interface (GUI).

In a local configuration, SE and Unisphere are loaded onto a management server that is connected to the
array(s). A SYMAPI server is used, and accessed by the management server in a remote configuration.
Users typically access the management hosts through clients that are configured in the data center. The
newest implementation of management tools for PowerMax and VMAX All Flash arrays is Embedded
Management, or eManagement (eMgmt). eMgmt provides individual instances of array management tools
running on the array. eMgmt includes Solutions Enabler, Unisphere, SMI-S—an industry standard
intended to facilitate the management of storage devices from multiple vendors in Storage Area
Networks—and DBA—Data Base Analyzer, used with Unisphere for viewing storage at database object
levels. eMgmt can be used to monitor both local and remotely attached arrays.

Solutions Enabler Integration with Array Operating Systems

[Diagram of the software layers: Unisphere, the REST API, and SYMCLI sit above SYMAPI; SYMAPI communicates
with PowerMaxOS/HYPERMAX OS on the array; SymmWin on the MMCS accesses the operating environment directly.]

This diagram illustrates the software layers and where each component resides.

Dell EMC's Solutions Enabler APIs are the storage management programming interfaces that provide an
access mechanism for managing the PowerMax and VMAX All Flash arrays. They can be used to
develop storage management applications. SYMCLI resides on a host system to monitor and perform
control operations on the arrays. SYMCLI commands are invoked from the host operating system
command line—shell. The SYMCLI commands are built on top of SYMAPI library functions, which use
system calls that generate low-level I/O SCSI commands to the storage arrays.

Unisphere for PowerMax is the graphical user interface that makes API calls to SYMAPI to access the
array.

SymmWin, running on the MMCS, accesses PowerMaxOS/HYPERMAX OS directly.

Dell EMC Solutions Enabler – Introduction

• Symmetrix Command Line Interface (SYMCLI)
• Comprehensive command set for managing PowerMax and VMAX All Flash arrays
  – Invoked from the host operating system command line
  – Scripts that may provide further integration with operating system and applications
• Security and access controls
  – Monitor only
  – Host-based and user-based controls

Solutions Enabler command line interface (SYMCLI) is used to perform control operations on PowerMax
and VMAX All Flash arrays, and the array devices, tiers, groups, directors, and ports. Some of the array
controls include setting array-wide metrics, creating devices, and masking devices.

You can invoke SYMCLI from the local host to make configuration changes to locally connected
PowerMax and VMAX All Flash arrays, or to an RDF-linked array.
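For example, a first SYMCLI session typically begins with discovery and listing commands. The following is a
minimal sketch; the three-digit array ID used with -sid is a placeholder for the last digits of an actual
array serial number:

   # Discover the arrays reachable from this host and summarize them
   symcfg discover
   symcfg list

   # List the devices on a specific array
   symdev list -sid 123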

Unisphere for PowerMax – Introduction

• Management Console for PowerMax and VMAX All Flash arrays
• Performance Analyzer
  – Installed by default
  – PostgreSQL
• APIs for Automation and Provisioning

Dell EMC Unisphere for PowerMax is the management console for PowerMax and VMAX All Flash arrays.

In previous versions of Unisphere, Performance Analyzer was an optional component. With Unisphere
9.0.x and above, the installation of Performance Analyzer is done by default during the installation of
Unisphere. Also, with Unisphere 9.0.x and above, PostgreSQL replaces MySQL as the database for
Performance Analyzer. Unisphere for PowerMax also provides a comprehensive set of APIs which can be
used by orchestration services like SRM, OpenStack, and VMware.
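As an illustration, the REST API can be queried over HTTPS. The following sketch assumes a Unisphere 9.0
installation; the hostname, port, and credentials are placeholders, and the version segment of the URL varies
by release:

   # List the arrays known to this Unisphere instance through the REST API
   curl -k -u admin:password "https://unisphere.example.com:8443/univmax/restapi/90/system/symmetrix"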

Unisphere for PowerMax – Functionality

• Manage eLicenses, Users, and Roles
• Storage Configuration Management
  – SL-based provisioning
• Configure and Monitor Alerts
• Performance Monitoring
  – Real time, root cause, and historical
  – Dashboards
    › Predefined
    › User-customized

You can use Unisphere for PowerMax for a variety of tasks, including managing eLicenses, user accounts
and roles, and performing array configuration and volume management operations, such as SL-based
provisioning on PowerMax and VMAX All Flash arrays.

With Unisphere, you can also configure and monitor alerts and alert thresholds.

In addition, Unisphere provides tools for performing analysis and historical trending of performance data
with Performance Analyzer. Performance Analyzer provides a view of high frequency metrics in real time,
system heat maps, and graphs detailing system performance. You can also drill down through data to
investigate issues, monitor performance over time, run scheduled and ongoing reports (queries), and
export that data to a file. Users can use various predefined dashboards for many of the system
components, or customize their own dashboard view.

Lesson: Storage Provisioning Overview
This lesson covers the following topics:

• Factory Preconfiguration

• Service Level based provisioning

• Introduction to configuration changes with Unisphere and SYMCLI


This lesson covers factory pre-configuration and storage provisioning concepts for PowerMax and VMAX
All Flash arrays. An introduction to configuration changes with Unisphere for PowerMax and SYMCLI is
also provided.

Environment Logical Architecture


The logical architecture for an environment with PowerMax or VMAX All Flash arrays consists of:

• Host or hosts attached to a SAN
  ▪ Multiple SANs recommended for redundancy
• Connectivity from the SAN to the array through front-end (FE) directors and ports
  ▪ Minimum of two or more ports on different directors is recommended for redundancy
• Global cache
• Connectivity to physical disks through back-end (BE) directors and ports
  ▪ Ports from two directors are connected to every disk for redundancy

Factory Preconfiguration

Disk Group
• Collection of physical drives
Data Pools
• Collection of Data Devices (TDATs) in each Disk
Group
• Performance capability is known based on drive
type, speed, capacity, quantity of drives, and RAID
protection
Storage Resource Pool (SRP)
• Collection of Data Pools
Service Level (SL)
• Expected average response time target


Disk Groups in PowerMax and VMAX All Flash arrays are similar to previous generation VMAX arrays. A
Disk Group is a collection of physical drives. Each drive in a Disk Group shares the same performance
characteristics, determined by the rotational speed and technology of the drives—15K, 10K, 7.2K, or
Flash—and the capacity.

Data Pools are a collection of data devices. Each individual Disk Group is preconfigured with data devices
(TDATs). All the data devices in the Disk Group have the same RAID protection. Thus, a given Disk Group
only has data devices with a single RAID protection. All the data devices in the Disk Group are the
same fixed size. All available capacity on the disk is consumed by the TDATs. All the data devices
(TDATs) in a Disk Group are added to a Data Pool. There is up to a sixteen-to-one relationship in VMAX All
Flash and PowerMax arrays, which is dynamically configured by PowerMaxOS. The performance
capability of each Data Pool is known and based on the drive type, speed, capacity, quantity of drives, and
RAID protection.

One Storage Resource Pool (SRP) is preconfigured.

The available Service Levels are also preconfigured. Disk Groups, Data Pools, Storage Resource Pools,
and Service Levels cannot be configured or modified by Solutions Enabler or Unisphere. They are created
during the configuration process in the factory.

Protection Options – Data Pools

Option   Characteristics                                  Protection   Performance             Array Support
RAID 1   • Maintains a duplicate copy of a device         Highest      Fast Read, Fast Write   PowerMax
           on two drives
RAID 5   • Parity-based protection (3+1 and 7+1)          High         Fast Read, Good Write   VMAX All Flash (7+1 only)
         • Striped data and parity                                                             PowerMax 2000
                                                                                               PowerMax 8000 (7+1 only)
RAID 6   • Two parity drives (6+2 and 14+2)               Highest      Fast Read, Fair Write   VMAX All Flash (14+2 only)
         • Data availability is primary consideration                                          PowerMax (6+2 only)
         • Performance is a secondary consideration

PowerMax and VMAX All Flash arrays are preconfigured with Data Pools and Disk Groups as discussed
earlier. The Data Devices in the Data Pools are configured with one of the data protection options listed on
the slide. The choice of the data protection option is made during the ordering process, and the array is
configured with the chosen and available options.

RAID 5 is based on the industry standard algorithm and can be configured with three data and one parity,
or seven data and one parity. While the latter provides more capacity per dollar, there is a greater
performance impact in degraded mode where a drive has failed. All surviving drives must be read to
rebuild the missing data. VMAX All Flash and PowerMax 8000 systems support RAID 5 only in a 7+1
configuration. PowerMax 2000 arrays support both 3+1 and 7+1 RAID 5 protection.

RAID 6 focuses on availability. With the new larger capacity disk drives, rebuilding may take multiple days,
increasing the exposure to a second disk failure. VMAX All Flash supports RAID 6 only in a 14+2
configuration. PowerMax arrays support RAID 6 only in a 6+2 configuration. Random read performance is
similar across all protection types, assuming you are comparing the same number of drives. The major
difference is write performance. With mirrored devices, for every host write, there are two writes on the
back end. With RAID 5, each host write results in two reads and two writes. For RAID 6, each host write
results in three reads and three writes.
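As a simple illustration of these write penalties, assume a purely random workload of 1,000 host writes per
second and ignore cache effects:

   RAID 1: 1,000 x 2 back-end writes     = 2,000 back-end I/Os per second
   RAID 5: 1,000 x (2 reads + 2 writes)  = 4,000 back-end I/Os per second
   RAID 6: 1,000 x (3 reads + 3 writes)  = 6,000 back-end I/Os per second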

Storage Resource Pool (SRP)

• Collection of Data Pools
• Factory Preconfiguration includes one SRP
• Not configurable with Solutions Enabler or Unisphere
• Multiple SRPs may be configured

[Diagram: one Storage Resource Pool containing three Flash RAID 5 (7+1) Data Pools – Data Pool 0, Data Pool 1,
and Data Pool 2 – with compression ratios 1:1, 8:1, and 16:1.]

A Storage Resource Pool (SRP) is a collection of Data Pools, which are configured from Disk Groups. A
Data Pool can only be included in one SRP. The different Data Pools represent different compression ratios.
SRPs are not configurable using Solutions Enabler or Unisphere. The factory preconfigured array includes
one SRP that contains all Data Pools in the array. If required, multiple SRPs can be configured by
qualified Dell EMC personnel. If there are multiple SRPs, one of them must be marked as the default.

Service Level Based Provisioning

Service Level (SL)
• Defines the ideal performance operating range of an application
• Preconfigured

Storage Group (SG)
• Can be explicitly associated with an SRP
• Implicitly associated with the Default SRP and Optimized SL

A Service Level (SL) defines the ideal performance operating range of an application. Each SL contains
an expected maximum response time range. The response time is measured from the perspective of the
front-end adapter. The SL can be combined with a workload type to further refine the performance
objective on VMAX All Flash arrays. SLs are predefined and are prepackaged with the array, and are not
customizable by Solutions Enabler or Unisphere.

A Storage Group (SG) is a logical grouping of devices that are used for device masking, control, and
monitoring. A Storage Group can be associated with an SRP, enabling devices in the SGs to allocate
storage from any pool in the SRP. SL-based provisioning is covered in more detail in subsequent
modules in the course.
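As an illustration, an SL and SRP can be assigned when a Storage Group is created with SYMCLI. This is a
sketch only; the array ID, group name, and SRP name are placeholders:

   # Create a Storage Group associated with the Diamond SL and an SRP
   symsg -sid 123 create App1_SG -slo Diamond -srp SRP_1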

Service Level Details

Dell EMC Array          Operating System    Service Levels Behavior
PowerMax Family         PowerMaxOS 5978     • Front-end queue management
                                            • Predictable and consistent application performance
VMAX All Flash Family   PowerMaxOS 5978     • Front-end queue management
                                            • Predictable and consistent application performance
VMAX All Flash Family   HYPERMAX OS 5977    • Diamond and Optimized Service Levels
                                            • All flash drives

PowerMaxOS uses response time targets to prioritize performance of Storage Groups using front end
queuing, using a floor and a ceiling. SGs never go below their set floor, and never go above their set
ceiling, ensuring consistency when adding new applications to an existing array. Previous applications
continue to perform as they have. There is no impact when adding new applications, and predictable
performance can be assigned using Service Levels.

5978 Family Service Levels

Service Level   Expected Average Response Time   Limit
Diamond         0.6 ms                           Upper
Platinum        0.8 ms                           Upper
Gold            1.0 ms                           Upper
Silver          3.6 ms                           Upper and Lower
Bronze          7.2 ms                           Upper and Lower
Optimized       N/A                              N/A

With PowerMax arrays, a single tier of flash storage is configured. Multiple Service Levels were
reintroduced with PowerMax arrays to enable users to set expectations for applications to provide
predictable and consistent performance. Users can prioritize data access and set priority on critical, high
priority applications while managing lower priority applications using Service Levels in PowerMax arrays.
Service Levels have a target response time, and, in some cases, an upper and lower response time limit
as well.

Diamond, Platinum, and Gold SLs have the highest priority and performance. All have an upper response
time limit, but no lower response time limit, ensuring they are serviced as fast as possible.

Silver and Bronze SLs have both an upper and a lower limit, designed to enable higher priority SLs to be
unaffected. These SLs are managed such that their average response time is greater than or equal to the
lower response time limit.

Host View of Storage

• Auto-provisioning Groups are used to allocate storage to hosts
• Thin Devices are presented to hosts
  – Open Systems host sees thin devices as FBA SCSI disk drives
  – Mainframe host sees thin devices as 3380 or 3390 CKD volumes
• Thin Device – Size Metrics
  – Sector – 16 blocks (512-byte blocks) – 8 KB
  – Track Size – 16 sectors – 16 × 8 KB = 128 KB
  – Cylinder Size – 15 tracks – 15 × 128 KB = 1920 KB
  – Maximum Device Size – 35791394 cylinders = 65536 GB = 64 TB

Auto-provisioning groups are used to allocate storage to hosts. PowerMax and VMAX All Flash arrays are
100% virtually provisioned and thin devices are presented to the hosts. From the perspective of an open
systems host, the thin device is simply seen as one or more FBA SCSI devices. In the mainframe, thin
devices are seen as CKD 3380 or 3390 volumes. Standard SCSI commands such as SCSI INQUIRY and
SCSI READ CAPACITY return low-level physical device data, such as vendor, configuration, and basic
configuration, but have very limited knowledge of the configuration details of the storage system.

Knowledge of array-specific information, such as director configuration, cache size, number of devices,
mapping of physical-to-logical, port status, flags, and so on, requires a different set of tools. Solutions
Enabler and Unisphere are tools that are used to gather and display array-specific information.

Host I/O operations are managed by the operating environment which runs on the arrays. Thin devices are
presented to the host with the following configuration or emulation attributes:

• Each device has N cylinders. The number is configurable.

• Each cylinder has 15 tracks (heads).

• Each device track in a fixed block architecture (FBA) is 128 KB (256 blocks of 512 bytes each).

• Maximum Thin Device size that can be configured on a VMAX All Flash is 35791394 cylinders or about
64 TB.

Unisphere device creation requests can be specified in cylinders, MB, GB, or TB. Solutions Enabler
device creation requests can be specified in cylinders, MB, or GB.
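For example, thin devices can be created directly with the symdev command. This is a sketch only; the array
ID, capacity, and device count are placeholders:

   # Create four 100 GB thin devices (TDEVs)
   symdev -sid 123 create -tdev -cap 100 -captype gb -N 4 -v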

Storage Allocation

Auto-provisioning groups are used to allocate storage to hosts:

Initiator Group (IG)
• Fibre Channel (WWN) or iSCSI initiators
• Port flags set on the Initiator Group
• FCID Lockdown per initiator

Port Group (PG)
• Front-end ports
• A port can belong to multiple Port Groups
• A Port Group contains either all physical ports (Fibre Channel) or all virtual targets (Fibre Channel or iSCSI)
• Ports must have the ACLX flag enabled

Storage Group (SG)
• Thin devices
• A device can belong to more than one Storage Group
• Can be associated with an SRP and an SL

Masking View (MV)
• One of each type of group is associated together to form a Masking View

Auto-provisioning Groups are used for device masking on the PowerMax and VMAX All Flash arrays.

An Initiator Group contains the World Wide Name (WWN) or iSCSI name of a host initiator, also known as
a Host Bus Adapter (HBA). An Initiator Group may contain a maximum of 64 initiator addresses or 64 child
initiator group names. Initiator Groups cannot contain a mixture of host initiators and child IG names or
types. Port flags are set on an Initiator Group basis, with one set of port flags applying to all initiators in the
group. However, the FCID lockdown is set on a per-initiator basis. An individual initiator can only belong to
one Initiator Group. However, once the initiator is in a group, the group can be a member in another
Initiator Group. It can be grouped within a group. This feature is called Cascaded Initiator Groups, and is
only allowed to cascade one level.

A Port Group may contain a maximum of 32 front-end ports. Front-end ports may belong to more than one
Port Group. Before a port can be added to a Port Group, the ACLX flag must be enabled on the port. A
Port Group contains either physical ports (fiber) or virtual targets (iSCSI). A mix of port types in a Port
Group is not supported.

Storage Groups can only contain devices or other Storage Groups. No mixing is permitted. A Storage
Group with devices may contain up to four-thousand logical volumes. A logical volume may belong to
more than one Storage Group. There is a limit of sixteen-thousand Storage Groups per PowerMax or
VMAX All Flash array.

A parent SG can have up to 64 child Storage Groups. One of each type of group is associated together to
form a Masking View.
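As an illustration, the groups and the Masking View can be created with the symaccess command. This is a
sketch only; the array ID, group names, WWN, director:port pairs, and device range are placeholders:

   # Create the initiator, port, and storage groups
   symaccess -sid 123 create -name App1_IG -type initiator -wwn 10000000c9876543
   symaccess -sid 123 create -name App1_PG -type port -dirport 1D:4,2D:4
   symaccess -sid 123 create -name App1_SG -type storage devs 00123:00126

   # Associate one group of each type to form a Masking View
   symaccess -sid 123 create view -name App1_MV -ig App1_IG -pg App1_PG -sg App1_SG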

Managing Configuration and Provisioning

• Execute using Unisphere or SYMCLI
  – Unisphere – various wizards and tasks
  – SYMCLI
    › symconfigure
    › symaccess
    › symsg
    › symdev
• Perform configuration and storage provisioning
  – Thin device management – creation, deletion, attribute modification
  – Front-end port management – attributes, association
  – Array metrics
  – Manage Auto-provisioning groups – storage provisioning

Configuration and provisioning are managed with Unisphere for PowerMax or SYMCLI. Unisphere has
numerous wizards and tasks to help achieve various objectives. The symconfigure SYMCLI command
is used for the configuration of thin devices and for port management. The symaccess SYMCLI
command is used to manage Auto-provisioning groups which enables storage allocation to hosts (LUN
Masking). The symsg SYMCLI command is used to manage Storage Groups. Arrays running
PowerMaxOS 5978 or HYPERMAX OS 5977 support the management of devices using the symdev
create, symdev modify, and symdev delete commands.

Configuration Architecture

[Diagram: SYMCLI and Unisphere on the host call SYMAPI, which sends requests through the front end (FE) of
the local array to the MMCS, where SymmWin scripts run. Configuration requests for a remote array travel over
the SRDF links (RA to RA) to the MMCS of the remote array.]

The Configuration Manager architecture enables SymmWin scripts to run on the MMCS. Configuration
change requests are generated either by the symconfigure SYMCLI command, or a SYMAPI library call
generated by a user making a request through the Unisphere GUI. These requests are converted by
SYMAPI on the host to syscalls and transmitted to the array through the channel interconnect. The front
end routes the requests to the MMCS, which invokes SymmWin procedures to perform the requested
changes. In the case of SRDF connected arrays, configuration requests can be sent to the remote array
over the SRDF links.
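For example, a device-creation change is typically validated and then applied with symconfigure. This is a
sketch only; the array ID and the change being made are placeholders:

   # Validate the change without applying it
   symconfigure -sid 123 -cmd "create dev count=2, size=5 GB, emulation=FBA, config=TDEV;" preview

   # Apply the change
   symconfigure -sid 123 -cmd "create dev count=2, size=5 GB, emulation=FBA, config=TDEV;" commit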

Gatekeeper Devices

• 3-cylinder thin devices (~6 MB)
• Receive low-level SCSI I/O from SYMCLI/GUI
• Used as the target of SYMCLI/SYMAPI commands
  – Commands are passed through gatekeepers to the array for action
  – Locked during the passing of commands
  – Many commands flowing to the array from many applications on the same host
    can cause a gatekeeper shortage
• Must be accessible from the host running the commands

Solutions Enabler is a Dell EMC software component used to control the storage features of Symmetrix,
VMAX All Flash and PowerMax arrays. It receives user requests from SYMCLI, GUI, or other means, and
generates system commands that are transmitted to the array for action. Gatekeeper devices are LUNs
that act as the target of command requests to array-based functionality. These commands arrive in the
form of disk I/O requests. The more commands that are issued from the host, and the more complex the
actions required by those commands, the more gatekeepers are required to handle those requests in a
timely manner.

When Solutions Enabler successfully obtains a gatekeeper, it locks the device, and then processes the
system commands. Once Solutions Enabler has processed the system commands, it closes and unlocks
the device, freeing it for other processing. A gatekeeper is not intended to store data and is usually
configured as a small three-cylinder device—approximately 6 MB. Gatekeeper devices should be mapped
and masked to single hosts only and should not be shared across hosts.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Configuration Administration Overview 40
Unisphere: Job List

• List of jobs
– Not yet run – Can be run on demand or
scheduled for later execution
– Jobs that are running, successfully completed,
or failed

• Job List can be accessed by clicking:


– Job List link in the Events section dropdown
– Job List link in the status bar

41 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Most of the configuration tasks in Unisphere can be added to the Job List for execution at a later time. The
Job List shows all the jobs that are yet to be run (Created status), jobs that are running, jobs that have run
successfully, and jobs that have failed. You can go to the Job List by clicking the Job List link in the
Events section dropdown or by clicking the Job List link in the status bar.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Configuration Administration Overview 41
Unisphere: Job List Example

42 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

An example of a Job List is shown here. In this example, an SRDF director was added to the
configuration. Double-clicking the job displays the job details, shown on the right of the screen.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Configuration Administration Overview 42
Configuration Changes Preparation – SYMCLI

• Verify that configuration changes can be made safely:


– symconfigure verify –sid <SymmID>

• Check usage of the configured Storage Resource Pools


– symcfg list –srp –sid <SymmID>

• Consider impact on I/O:


– To make devices not ready:
› symdev not_ready <symDev> -sid <SymmID>

• After allocation/de-allocation of storage to a host update the host operating


system environment before attempting I/O

43 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Before making configuration changes, it is important to know the current array configuration.

Verify that the current array configuration is a viable configuration for host-initiated configuration changes.
The command symconfigure verify -sid <SymmID> returns successfully if the array is ready for
configuration changes.

The capacity usage of the configured Storage Resource Pools can be checked using the command
symcfg list –srp –sid <SymmID>.

To understand the impact that a configuration change operation can have on host I/O, check the product
documentation.

After allocating storage to a host, you must update the host operating system environment. Attempting
host activity with a device after it has been removed or altered, but before you have updated the device
information of the host, can cause host errors.
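
Put together, the pre-change checks from this slide look like the following sketch; the array ID 217 and
device 000AD are assumed examples:

symconfigure verify -sid 217
symcfg list -srp -sid 217
symdev not_ready 000AD -sid 217    (take a device offline before a disruptive change)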

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Configuration Administration Overview 43
SYMCLI – Query/Abort Configuration Sessions

• Query
– symconfigure query –sid <SymmID>

• Abort
– Configuration change sessions can be terminated prematurely using the
abort command.
– Premature termination is only possible before the point of no return
– symconfigure –sid <SymmID> abort –session_id <SessID>

44 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Configuration change sessions can be viewed using the symconfigure query command. If there are
multiple sessions running, all session details are shown. In rare instances, it might become necessary to
cancel configuration changes. To cancel configuration changes, use the symconfigure abort
command as long as the point of no return has not been reached.
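
For example, with an assumed array ID of 217 and session ID of 2201:

symconfigure query -sid 217
symconfigure -sid 217 abort -session_id 2201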

Aborting a change that involves RDF devices might also necessitate terminating the corresponding
changes on the remote array.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Configuration Administration Overview 44
Module Summary

Key points covered in this module:

• PowerMax and VMAX All Flash arrays

• Key features

• Storage Provisioning concepts

• Available tools for managing PowerMax and VMAX All Flash arrays

45 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered an overview of the PowerMax and VMAX All Flash arrays. Key features and storage
provisioning concepts were covered, as well as tools for managing the arrays.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Configuration Administration Overview 45
Module: Provisioning and Service Levels

Upon completion of this module, you should be able to:

• Provide an overview of Virtual Provisioning

• Explain PowerMaxOS 5978 Service Level implementation

46 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on Virtual Provisioning and Service Levels.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 46
Lesson: Virtual Provisioning
This lesson covers the following topics:

• Virtual Provisioning overview

• Thin Provisioning elements

• Service Levels

47 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson provides an overview of Virtual Provisioning, Thin Provisioning elements, and Service Levels.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 47
Storage Provisioning

• 100% Virtually Provisioned


– Thin Devices are presented to Hosts

• Arrays are preconfigured


– Disk Groups
– Data/Virtual Provisioning Pools
– Storage Resource Pool(s)
– Service Levels

48 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax and VMAX All Flash arrays are all 100% virtually provisioned. Arrays are preconfigured at the
factory with the elements for thin provisioning.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 48
Virtual Provisioning (Thin Provisioning)
• Virtual Provisioning (Thin Provisioning) – The ability to present a LUN to a compute system with
more capacity than what is physically allocated to the LUN

• Capacity-on-demand from the Storage Resource Pool
– Physical storage allocated only when the compute system requires it
– Extent Size – One Track – 128 KB

[Diagram: Three 10 TB thin devices present their full reported capacity to compute systems while only
3 TB, 4 TB, and 3 TB are allocated. The allocations come from RAID 5 (3+1) Data Pools (Pool 0, Pool 1,
and Pool 2, with 3 TB, 2 TB, and 1 TB used) in the Storage Resource Pool.]

49 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

One of the biggest challenges for storage administrators is balancing the storage space that is required by
various applications in their data centers. Administrators typically allocate storage space based on
anticipated storage growth to reduce the management overhead and application downtime required to add
new storage later on. This allocation generally results in the overprovisioning of storage capacity, which
leads to higher costs; increased power, cooling, and floor space requirements; and lower capacity utilization.
These challenges are addressed by Virtual Provisioning.

Virtual Provisioning is the ability to present a logical unit—Thin LUN—to a compute system, with more
capacity than what is physically allocated to the LUN on the storage array. Physical storage is allocated to
the application on-demand from a shared pool of physical capacity. Virtual Provisioning provides more
efficient utilization of storage by reducing the amount of allocated, but unused physical storage.

The shared storage pool, called the Storage Resource Pool, contains one or more Data Pools with internal
devices that are called Data Devices. When a write is performed to a portion of the thin device, the array
allocates a minimum allotment of physical storage from the pool and maps that storage to a region on the
thin device, including the area targeted by the write. The allocation operation is performed in small units of
storage called virtually provisioned device extents. The virtually provisioned device extent size is one track
(128 KB).

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 49
Thin Provisioning

• Preconfigured thin provisioning pools


• Create host-addressable volumes (TDEVs) using
– Unisphere for PowerMax
– Solutions Enabler

• Physical storage allocated on host write

50 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax and VMAX All Flash arrays are preconfigured at the factory with thin provisioning pools ready
for use. Create host-addressable thin devices (TDEVs) using Unisphere or Solutions Enabler. TDEVs can
be added to existing Storage Groups. When the host writes to the TDEVs, physical storage is
automatically allocated from the default Storage Resource Pool.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 50
Thin Provisioning Components
PowerMax and VMAX All Flash Arrays
[Diagram: Thin Provisioning stack – Service Levels (Diamond, Platinum, Gold, Silver, Bronze,
Optimized) applied to Storage Groups (VP_ProdApp1, VP_ProdApp2), which draw from Storage
Resource Pool SRP_1, built on Virtual Provisioning Pools Pool 0 – Pool F (RAID 5 (7+1)) in Disk Group
DG 0 (3.84 TB SSD).]

51 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The elements related to Thin Provisioning are Disk Groups, Virtual Provisioning Pools, Storage Resource
Pools, Service Levels, and Storage Groups.

Disk Groups, Data Pools with Data Devices (TDATs), Storage Resource Pools, and Service Levels all
come preconfigured on the array and cannot be modified using management software. Solutions Enabler
and Unisphere give the end user visibility to the preconfigured elements, but no modifications are allowed.
Storage Groups are logical collections of thin devices. Storage Groups and thin devices can be
configured—created/deleted/modified, and so on—with Solutions Enabler and Unisphere. In the example
shown here, the array has been configured with one Disk Group, 16 Virtual Provisioning Pools, one
Storage Resource Pool, and the SLs. This is an example using a PowerMax array.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 51
Disk Group and Virtual Provisioning Pool
PowerMax and VMAX All Flash Arrays

• Disk Group
– Collection of physical disks with same characteristics
› Capacity
– Preconfigured with Data Devices (TDATs)
› Single RAID protection
› Fixed hyper sizes – 64 hypers per disk

• Virtual Provisioning Pool
– Many-to-1* relationship with Disk Group
– All TDATs in disk group added to data pool
– Performance capability is known

[Diagram: Virtual Provisioning Pool 0 – RAID 5 (7+1) – built on Disk Group DG 0 of 3.84 TB NVMe drives]

*Number of VP Pools to DG is based on the compressibility of the data


52 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A Disk Group is a collection of physical drives sharing physical and performance characteristics. Drives
are grouped based on technology, capacity, form factor, and RAID protection type. PowerMax and VMAX
All Flash arrays support up to 512 internal Disk Groups.

Each Disk Group is automatically configured with data devices (TDATs) upon creation. All the data
devices in the disk group are of a single RAID protection type, and are all the same size. Each drive in the
group has the same number of hypers, all sized the same. Each drive has a minimum of 16 hypers. Larger
drives may have more hypers.

A Virtual Provisioning Pool is a collection of data devices of the same emulation and RAID protection.
PowerMax and VMAX All Flash arrays support up to 512 Virtual Provisioning Pools. In PowerMax and
VMAX All Flash arrays, based on the compression ratio, PowerMaxOS dynamically creates many pools in
the same disk group. The number is based on the compressibility of the data. There is up to a many-to-1
relationship between the Virtual Provisioning Pool and the Disk Group. The performance capability of each
pool is known and is based on the drive type, speed, capacity, quantity of drives, and RAID protection.

Data devices provide the dedicated physical space that is used by thin devices. Data devices are internal
devices.

Disk Group, Virtual Provisioning Pools, and data devices (TDATs) cannot be modified using management
software. Solutions Enabler and Unisphere for PowerMax give the end user visibility to the preconfigured
elements, but no modifications are allowed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 52
Storage Resource Pool

• Collection of Virtual Provisioning Pools


– A virtual provisioning pool can only be in one SRP

• Factory preconfiguration includes one SRP


– Contains all the configured data pools

Storage
Resource
SRP_1
Pool
Pool 0 - Pool F
Virtual
Provisioning RAID 5 (7+1)
Pool

53 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A Storage Resource Pool (SRP) is a collection of Virtual Provisioning Pools. An SRP can have up to 512
Virtual Provisioning Pools. Individual pools can only be part of one SRP. By default, a single SRP is
configured, which contains all the configured Virtual Provisioning Pools.

When multiple SRPs are configured, one of the SRPs must be marked as the default SRP.

SRP configuration cannot be modified using management software. Solutions Enabler and Unisphere give
the end user visibility into the preconfigured SRPs, but no modifications are allowed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 53
Display SRP Details – Unisphere

54 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Configured SRPs can be displayed in Unisphere for PowerMax under Storage > Storage Resource
Pools by double-clicking the SRP. Details are shown on the right.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 54
Display SRP Details – SYMCLI

symcfg list -srp -v -sid <SymmID>

55 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Configured SRPs can also be displayed with SYMCLI using the symcfg list -srp -v -sid
<SymmID> command.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 55
Storage Groups

• Logical collection of thin devices


– Used for LUN masking and/or Service Level allocation

• Can be explicitly associated with an SRP


– By default an SG is associated with the default SRP

[Diagram: Storage Group VP_ProdApp1 associated with one of the Service Levels (Diamond, Platinum,
Gold, Silver, Bronze, Optimized) and with Storage Resource Pool SRP_1]

56 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A Storage Group (SG) is a logical collection of thin devices that are managed together. Typically, they
constitute the devices that are used for a single application.

A Storage Group can be explicitly associated with an SRP or an SL or both. Associating an SG with an
SRP defines the physical storage on which data in the SG can be allocated. The association of the SL and
Workload Type defines the response time target for that data. By default, devices within an SG are
associated with the default SRP and are managed by the Optimized SL. Changing the SRP association on
an SG results in all the data being migrated to the new SRP.
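
A minimal SYMCLI sketch of changing these associations is shown below. It assumes an example array
ID of 217 and an existing SG named VP_ProdApp1; confirm the exact set options in the Solutions
Enabler CLI guide for your release:

symsg -sid 217 -sg VP_ProdApp1 set -sl Diamond    (change the Service Level)
symsg -sid 217 -sg VP_ProdApp1 set -srp SRP_1     (change the SRP; data migrates to the new SRP)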

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 56
Thin Device Considerations

• Upon creation
– By default associated with default SRP and the Optimized SL
– Device is automatically in the ready state

• Devices could be added to an existing SG during creation


– Device inherits SRP and SL from SG

• No extents allocated when device is created


– Extents allocated as a result of host write or preallocation request

57 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

When a thin device is created, it is implicitly associated with the default SRP and is managed by the
Optimized SL. As a result of being associated with the default SRP, thin devices are automatically in a
ready state upon creation.

During the creation of thin devices, you could optionally add them to an existing Storage Group. The thin
device then inherits the SRP and SL set on the SG.

No extents are allocated during the thin device creation. Extents are allocated only as a result of a host
write to the thin device or a preallocation request.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 57
PowerMax and VMAX All Flash Service Levels

• Quality-of-service controls for individual Storage Groups


– Ensure that applications have consistent and predictable performance

• Performance ceiling
– All Service Levels (except Optimized)
– Maximum response time
– Lower priority workloads see elongated response times

• Performance floor
– Silver and Bronze Service Levels only
– Minimum response time

58 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax and VMAX All Flash systems running PowerMaxOS 5978 use response time management to
set performance expectations on applications. With a single tier of all flash storage, data is stored on high-
speed flash storage without the need to move any data for better performance. Access is managed per
Storage Group with a specified Service Level. All Service Levels except Optimized have a ceiling, which
defines the longest acceptable time for each I/O operation on the SG to complete. The Silver and Bronze
Service Levels also have a performance floor, which defines the shortest time that any I/O operation on
the SG takes to complete. The Optimized Service Level is exempt from ceiling and floor limits.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 58
Service Levels Overview - 5978
Service Level Biasing feature/enhancement for SCM pools

SG = 0 used for Guest OS will automatically be set to Bronze SL (as previously), but no delays will be
imposed

Maximum delays are cut in half, from 10x base to 5x base:

Service Level          Base Delay (ms)   Max Delay (ms)
Diamond or Optimized   n/a               n/a
Platinum               n/a               5
Gold                   <1                5
Silver                 3                 15
Bronze                 6                 30

59 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Service levels are offered with various ranges of performance expectations which are defined by their own
characteristics of a target response time. The target response time is the average response time expected
for the storage group based on the selected service level.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 59
Service Level Priorities & Compliance

SL         Priority  Compliance Effect                  RT Differentiation
Diamond    Highest   Never
Platinum             Only assists higher priority SL
Gold                 Only assists higher priority SLs
Silver               Only assists higher priority SLs   Minimum average response time enforced (RT “floor” ~ 3 ms)
Bronze               Only assists higher priority SLs   Minimum average response time enforced (RT “floor” ~ 6 ms)
Optimized  Lowest *  Does NOT assist other SLs          * Changed for PowerMax

60 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Diamond, Platinum, and Gold: These service levels have the highest priority and performance. Each
has an upper response time limit but no lower response time limit, which ensures that they are serviced as
fast as possible.

Silver and Bronze: These service levels have both an upper and lower limit designed to allow higher-
priority service levels to be unaffected.

Optimized: This service level does not have a target response time nor an upper or lower limit. Optimized
is designed to use all allowable resources, equal to that of Diamond, and is not managed to assist any
other service level.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 60
Response Time Management

• Throttling of lower-priority SGs


– Diamond throttles Platinum, Gold, Silver, and Bronze SGs
– Platinum throttles Gold, Silver, and Bronze SGs
– Gold throttles Silver and Bronze SGs
– Silver throttles Bronze SGs
– Optimized is exempt from throttling
› Response time may degrade as the system load increases

• Front-end queue management


• Real-time machine learning
– Models workload characteristics
– Predictive function anticipates workload demands for an SG

61 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMaxOS is continually monitoring the system to ensure that lower-priority applications are not
disruptive to higher-priority applications. If the response time of a higher-priority application approaches
the upper limit of its selected SL, the system begins to manage any lower-priority applications. If a Storage
Group begins to exceed the boundaries of the Service Level, the system ensures that lower priority SGs
are throttled using front-end queue management. The Optimized SL is exempt from throttling, however,
response times may degrade as the load on the system increases.

PowerMaxOS uses real-time machine learning to model workload characteristics. These models provide a
predictive function that allows PowerMaxOS to anticipate workload demands for SGs. With these
anticipated workload demands, the system adapts as necessary to changes in block size, write ratio, or
I/O load.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 61
Service Level Biasing for SCM - 5978
To give SG priority for data placement on SCM, set SL = Diamond

If there is not enough space to fit all Diamond data on SCM, “hot” data is given priority

Service Level Biasing is only applicable for mixed boxes with EFD + SCM
– For example, using the Diamond SL on an EFD-only system will NOT prioritize data placement in the
EFD uncompressed (128K) pool

To avoid placing SG data on SCM, set SL = Bronze or Silver

62 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax sets a service level bias in order to apply the priority of promotions and demotions.

Promotion:

• Diamond – Highest promotion priority. During optimal utilization, PowerMaxOS attempts to put all
data labeled Diamond on SCM drives.

• Platinum, Gold, Optimized – All data has equal priority.

• Silver, Bronze – Excluded from promotion.

Demotion:

• Silver, Bronze – Highest demotion priority.

• Diamond – Data is demoted when there is a need for more active Diamond data to be promoted. Data
can also be demoted if the SCM usage exceeds pool reserved capacity.

• Platinum, Gold, Optimized – Equal demotion priority. Demotions occur when there is a need to create
available space in SCM for higher priority data or more active data with the same priority.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 62
Service Level Usage Examples

Critical Applications

• Use Diamond SL to protect critical applications requiring the fastest response time
• Use lower SLs for other applications
– For example, assign Diamond SL to the mission-critical application and Bronze to less-critical
applications, such as batch jobs

Service Providers

• Ability to provide different levels of performance within the array
• Introduces delays to establish a range of performance
– For example, using Silver and Bronze in PowerMax and VMAX All Flash introduces delays,
simulating a less expensive tier of storage

63 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Service Levels in PowerMax and VMAX All Flash arrays have many valuable uses. Mission-critical
applications that require almost immediate response times are not jeopardized by other applications in the
array when proper SLs are assigned. For Service Providers, SLs provide the ability to
have different levels of performance within the array. Using lower SLs, such as Silver or Bronze,
introduces delays so applications are not seeing the high level of performance normally seen with all flash
storage.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 63
Availability of Service Levels

Storage Groups/Applications
• Containing Open Systems FBA devices
– All Service Levels available

• Containing mainframe CKD devices


– Diamond, Bronze, and Optimized only

64 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

All six Service Levels are available for SGs containing Open Systems FBA devices. For SGs containing
mainframe CKD devices, only Diamond, Bronze, and Optimized Service Levels are available.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 64
Create Storage Group with Service Level
Unisphere for PowerMax

Storage Group
Dashboard

Service Levels
Dashboard

65 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Create a Storage Group with a Service Level in Unisphere through either the Storage Group Dashboard or
the Service Level Dashboard. From the Storage Groups Dashboard, click Create to open the Provision
Storage wizard. From the Service Levels Dashboard, select the Service Level and click Provision to
open the wizard.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 65
Renaming Service Levels – Unisphere

66 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Renaming Service Levels with Unisphere for PowerMax is a simple process. From the Service Levels
Dashboard, hover to the right of the Service Level name and click the pencil icon. Enter the new SL name
and click the checkmark to save the change. In this example, the Diamond Service Level name has been
changed to CriticalApps.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 66
Change SL of SG – Unisphere

67 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To change the Service Level of a Storage Group in Unisphere for PowerMax, select the SG from the
Storage Groups Dashboard. Click the Modify button, and change the Service Level to the desired level
using the Service Level dropdown selection.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 67
Create Storage Group with Service Level
Solutions Enabler and RestAPI

Solutions Enabler

• Add the –sl flag

Example:
symsg create DemoSG –sid <SymmID> -sl <SLName>

RestAPI

• sloId = Service Level name

68 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Creating a Storage Group with a Service Level in Solutions Enabler requires using the –sl flag, as seen
in the example, where the SG DemoSG is created with the specified Service Level.

With RestAPI, input the Service Level name in the “sloId” field. With both Solutions Enabler and
RestAPI, if a Service Level is not selected, it is set to “None”.
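
As an illustration only, here is a hedged curl sketch of creating an SG with a Service Level through the
REST API. The endpoint version and payload field names (storageGroupId, srpId,
sloBasedStorageGroupParam, sloId) are recalled from the Unisphere REST API documentation and
should be verified against the REST API guide for your release:

curl -X POST "https://<UnisphereHost>:8443/univmax/restapi/90/sloprovisioning/symmetrix/<SymmID>/storagegroup" \
  -u <user>:<password> -H "Content-Type: application/json" \
  -d '{"storageGroupId": "DemoSG", "srpId": "SRP_1", "sloBasedStorageGroupParam": [{"sloId": "Gold"}]}'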

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 68
Compliance – Unisphere

69 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Information on an SG that is out of Service Level compliance can be found in the Storage Groups
Dashboard. Double-click the Storage Group, and choose the Compliance tab for detailed information. In
this example, a significant spike in activity happened on Thursday. Further investigation of this SG and
its activity can be done using the Performance tab.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 69
Module Summary

Key points covered in this module:

• Overview of Virtual Provisioning

• Service Level implementation and management

70 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered Virtual Provisioning and Service Levels concepts.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Provisioning and Service Levels 70
Module: Device and Port Management

Upon completion of this module, you should be able to:

• Create, delete, and expand Thin Devices

• Manage port attributes and associations

71 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on device and port management.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 71
Lesson: Device Management
This lesson covers the following topics:

• Device types

• Creation, deletion, and expansion of devices

72 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers device types and the creation, deletion, and expansion of devices.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 72
Device Types

• Devices that can be managed with management software


– Thin Devices (TDEV)
› Thin Gatekeeper Devices
– Thin BCV Devices (BCV+TDEV)
– SRDF Thin Devices (RDF1 or RDF2)

• Preconfigured devices – cannot be managed with management software


– Data devices
– Internal Thin Devices (Int+TDEV)
› Used by Data Services Hypervisor VMs
o Tools VMs, eManagement VMs, and eNAS Control Station VMs

73 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Solutions Enabler (SE) and Unisphere can be used to create and delete thin devices. Thin gatekeeper
devices are thin devices which have a capacity of three cylinders—approximately 6 MB. Thin BCV devices
and thin SRDF devices can also be managed with SE and Unisphere. The arrays come with factory
preconfigured devices which cannot be managed with SE and Unisphere. They are the data devices that
are used in the data pools and internal thin devices which are used by Data Services Hypervisor VMs.

On PowerMax and VMAX All Flash arrays, the operating system provides a data services hypervisor
running natively. The Data Services Hypervisor provides storage infrastructure services through virtual
machines running on the embedded hypervisor. Storage to these virtual machines is provided by the
internal thin devices.

This lesson focuses on the creation and deletion of thin devices and thin gatekeeper devices. SRDF thin
devices and thin BCV devices are covered in other Dell EMC training courses.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 73
Thin Device Attributes

• Set at or after device creation


– SCSI3_persist_reserve – Enabled by default

• Dynamic RDF capable


– No specific attribute needs to be set

Name Used by
SCSI3_persist_reserve UNIX and Windows cluster software
DIF1 Oracle to ensure data integrity
AS400_GK IBM AS400 host control software STM

74 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The attributes that are listed here can be set on thin devices at or after device creation.

The SCSI-3 persistent reservation attribute, sometimes called the PER bit, is used by various UNIX and
Windows cluster software products. It is enabled by default.

The Data Integrity Field (DIF) is a setting on a device that is relevant to an Oracle environment and all
hosts that support the DIF protocol. Oracle objects that are built on devices that have the DIF attribute
send 520-byte Command Descriptor Blocks (CDBs) rather than the normal 512-byte CDBs. The extra 8
bytes are a form of checksum that validates the 512 bytes of data. When the array receives a CDB on a
device that has the DIF attribute, it validates the Oracle data and honors the write request. If the checksum
and the data do not match, it rejects the write request. The DIF setting is likely to have many different
versions of Data Integrity. PowerMaxOS and HYPERMAX OS support the DIF1 format.

The AS400_GK attribute on a PowerMax or VMAX All Flash thin device is required when an AS400 device
is used with IBM Server Task Manager (STM) host control software. This attribute is also used with the
Celerra NAS for Celerra gatekeeper devices.

All PowerMax and VMAX All Flash thin devices are dynamic RDF capable by default. No specific attribute
needs to be set.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 74
Thin Device Creation – SYMCLI

• symdev syntax
symdev create –tdev –cap <#> [-captype <cyl|mb|gb|tb>] [-bcv] [-N <#>]
– Default size unit is cylinders
– Default emulation type is FBA

75 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Also, you can use the symdev syntax for the creation of devices as shown on the slide. See the latest
Solutions Enabler Array Management CLI User Guide for more details. The size can be specified in
cylinders (CYL), megabytes (MB), gigabytes (GB), or in terabytes (TB). Cylinders is the default.

Arrays running PowerMaxOS 5978 or HYPERMAX OS 5977 enable the creation of externally visible
(TDEV) devices up to 64 TB. The device configuration type for thin devices is TDEV. To create BCV thin
devices, use the option -bcv. The option –N sets the number of devices to create.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 75
Thin Device Creation – symdev Examples

• Create (1) 10 GB FBA emulation thin device

– symdev –sid 217 create –tdev –cap 10 –captype gb

• Create (5) 10 GB FBA emulation BCV thin devices

– symdev –sid 217 create –tdev –bcv –cap 10 –captype gb –N 5

76 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are two examples of thin device creation using the symdev command.

To display all nonprivate Symmetrix devices that are configured in one or more Symmetrix arrays that are
connected to this host, use the symdev list command:

symdev list -bcv

Symmetrix ID: 000197600217

      Device Name            Dir                 Device
---------------------------- ------- -------------------------------------
                                                                      Cap
Sym   Physical               SA :P   Config       Attribute   Sts    (MB)
---------------------------- ------- -------------------------------------
000C5 Not Visible            ???:??? BCV+TDEV     N/Asst'd    RW    10241

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 76
Gatekeeper Creation – SYMCLI

• symdev syntax
– Default emulation type is FBA

• Example: Create (6) FBA gatekeeper devices


– symdev create –gk –N 6 –sid 217

77 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Also, you can use the symdev command to create a gatekeeper. The –gk option creates the gatekeeper
devices with the proper size.

The –gk option automatically sets the FBA emulation.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 77
Thin Device Creation – Unisphere for PowerMax

• Create Volumes Wizard


– Used to create Thin Devices and
Gatekeeper devices
– Launched by
› Create button in Volumes listing under
Storage

• Storage Groups
– Create New Storage Group Wizard includes
selecting hosts and ports
• Service Levels
– Provision Wizard includes selecting hosts
and ports

78 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Thin devices and gatekeeper devices can be created in Unisphere for PowerMax by using the Create
Volumes wizard. Devices can also be created from the Storage Group wizard or the Provision wizard
under Service Levels.

This lesson focuses on the Create Volumes wizard. The Create Volumes wizard is launched from the
Volumes selection under the Storage dropdown.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 78
Create Volume – Thin Devices

Optionally add
devices to an
existing Storage
Group

79 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

On PowerMax and VMAX All Flash arrays, the Create Volume wizard only supports the creation of thin
devices (TDEV), thin BCV devices (BCV+TDEV), or thin gatekeepers (Virtual Gatekeeper).

Configuration: To create thin devices, select TDEV in the Configuration drop-down selector.

Emulation: FBA is the default. You can also create CELERRA_FBA or AS/400_D910_099 emulation
devices.

Number of Volumes: Type in the required number of devices.

Volume Capacity: You can type in the required capacity of the devices or use the capacity field drop-down
to pick an existing device size. You can specify the capacity units in Cyl, MB, or GB.

Add to Storage Group: This entry is an optional field. You can choose to select an existing Storage Group
to which the newly created devices will be added. To choose an existing Storage Group, click the down
arrow.

The Advanced Options button enables you to optionally give the new devices a Volume Identifier. The
Volume Identifier is equivalent to the device name and number that is specified in SYMCLI. Also, users
can choose to enable Mobility ID, which enables FBA device mobility by ensuring the WWN is unique, and
to allocate the full volume capacity. After specifying the requirements, add the job to the job list by using
the Add to Job List option. Alternatively, you can run the job immediately using the Run Now option from
the Add to Job List dropdown.

The preferred method is to add the job to the job list as the job list enables running multiple jobs together
and also enables scheduling of jobs.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 79
Create Volume – Thin Gatekeepers

Optionally add
devices to an
existing Storage
Group

80 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Notice when Virtual Gatekeeper is chosen, there is no option for volume capacity. Because Gatekeepers
were selected, the wizard knows to create three-cylinder devices for this purpose.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 80
Job List – Group Jobs

81 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Go to the Job List by clicking the Job List link in the banner area, or by selecting Job List under Events
in the left navigation pane. You can group multiple jobs and then run the group of jobs as a single job.
Highlight the list of jobs to be grouped and click the Group button.

In the Group Jobs dialog, provide a name for the group. Optionally, you can reorder the jobs and schedule
the grouped job to run on a certain date and time.

The job group is displayed in the Jobs List. By selecting the grouped job, you can modify it, run the job, or
schedule it to run later.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 81
Job Group Details

Job Details

82 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view details of a job or job group, double-click the job.

Click the More Actions icon—three vertical dots—to Ungroup a grouped job, or to delete a job from the
Jobs List.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 82
Run Job Group

83 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To run the job, click Run and click OK in the confirmation dialog. The Status of the job changes to
RUNNING.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 83
Job Succeeded

84 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

If the jobs complete successfully, the Status changes to SUCCEEDED. To see the Tasks details, double-
click the job.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 84
Unisphere – Volumes View

85 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The newly created devices can be seen in the Volumes View. Go to the Volumes View by choosing
Volumes from the Storage section dropdown. The display has been scrolled to show the newly created
devices 00A0:00AA.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 85
Thin Device Expansion – SYMCLI

• symdev syntax
symdev modify <SymDevName> -tdev -cap <#> [-captype <cyl|mb|gb|tb>]
symdev modify -tdev -cap <#> [-captype <cyl|mb|gb|tb>] -devs <SymDevStart>:<SymDevEnd>

• Examples:
– Expand device 000AD to 20 GBs
symdev modify 000AD –cap 20 –captype gb –tdev

– Expand devices 000AA-000AF to 300 GBs


symdev modify –devs 000AA:000AF -cap 300 –captype gb -tdev

86 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

TDEVs can be increased in size with online device expansion. The Solutions Enabler symdev modify
command is used to expand device capacity. Devices can be expanded up to 64 TB, online and non-
disruptively. Shown here is an example of the command to expand a single device, device 000AD, to 20
GB in size. Also shown is an example of the command to expand a range of devices to 300 GB.

The –captype option specifies the units of capacity, either cylinders, megabytes, gigabytes, or
terabytes. The default is cylinders.
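
After an expansion completes, the new capacity can be confirmed by displaying the device details; the
device and array IDs below are the assumed examples from above:

symdev show 000AD -sid 217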

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 86
Thin Device Expansion – Unisphere

87 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A single volume can be expanded in Unisphere for PowerMax from the Volume page. Select an existing
volume and click the Expand button. In this example, the capacity of the volume is 10 GB. An additional
capacity of 5 GB is added to expand the volume capacity to 15 GB. Select Run Now to run the job now.
After the job completes, rescan storage on the host to verify that the operation completed successfully.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 87
Storage Group Capacity Expansion

88 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Unisphere for PowerMax also supports modifying the size of multiple volumes in a Storage Group with a
single operation. From the Storage Group Dashboard, select a Storage Group and click the Modify
button. In the Modify Storage Group dialog, increase the Volume Capacity setting. In this example,
DemoGroup has five 10 GB volumes. New volume capacity can be typed in or the drop-down menu can
be used to add additional capacity to the SG. PowerMaxOS supports expanding a volume up to 64 TB.
Notice that each volume has increased from 10 GB to 15 GB, adding 25 GB to the SG, for a total capacity
of 75 GB.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 88
Deleting Devices

• Thin devices
– Must not be mapped to a front-end port
– Must not have any allocations or written tracks

• Unisphere
– Select devices to be deleted in Volume listing and click
Delete (Trash Can icon)

• Data Devices (TDATs) cannot be deleted with Solutions


Enabler or Unisphere

89 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Solutions Enabler or Unisphere for PowerMax can be used to delete thin devices. The device to be
deleted must not be mapped to a front-end port and must not have any allocations or written tracks.

With SYMCLI, thin devices are deleted using the symdev delete command. To delete devices in Unisphere, go
to the Volumes listing page, select the devices, and then click the Trash Can icon to delete. To run the
device deletion, click OK in the confirmation dialog.

To free up all allocations or written tracks, you can use the SYMCLI command symdev –sid <SymmID>
free –all –devs <SymDevStart>:<SymDevEnd>.

To free up all allocations or written tracks in Unisphere, go to the Volumes listing page, select the devices,
and then click the More Actions button—three vertical dots—and choose Allocate/Free/Reclaim. In the
dialog choose Free Volumes, check Free all allocations for the volume (written and unwritten), and
run the job.

Data devices (TDATs) cannot be deleted with Solutions Enabler or Unisphere.
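
A hedged SYMCLI sketch of the free-then-delete sequence is shown below; the device range
00A0:00A5 and array ID 217 are assumed examples, and the -devs range option on delete should be
confirmed in the CLI guide for your release:

symdev -sid 217 free -all -devs 00A0:00A5
symdev -sid 217 delete -devs 00A0:00A5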

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 89
Lesson: Port Management
This lesson covers the following topics:

• Director emulations

• Port attributes

• Port association

90 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers director emulations, port attributes, and port association.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 90
Director Emulations

Director Emulations

• Slice A - Infrastructure Management (IM)


• Slice B – HYPERMAX OS Data Services (ED)
• Slice C - Back End Emulation (DN)
• Slice D-H - Remaining Emulations
– FA, SE, RF, RE, EF*, FE, DX
• Each emulation appears only once and
consumes CPU cores
– I/O module ports mapped to emulation
– Maximum 16 ports per director

*PowerMax 2000 and VMAX All Flash 250 arrays do not support mainframe (EF) attach

91 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In the PowerMax and VMAX All Flash arrays, there are eight slices per director.

Slice A is used for the Infrastructure Manager (IM) system emulation. The goal of IM emulation is to place
common infrastructure tasks on a separate instance so that it can have its own CPU resources. The IM
performs all environmental monitoring and servicing. All environmental commands, syscalls, and FRU
monitoring are issued on the IM emulation only. DAE FRUs are monitored by the IM through the DN
emulation. If the DN emulation is down, access to DAE FRUs is affected.

Slice B is used by HYPERMAX OS Data Services (ED) system emulation. ED consolidates various
HYPERMAX OS functionalities to enable easier and more scalable addition of features. Its main goals are
to reduce I/O path latency and introduce better scalability for various HYPERMAX OS applications. EDS
also manages Open Replicator data services.

Slice C is used for back end emulation—DN – SAS back end.

Slices D through H are used for the remaining emulations. Only those emulations that are required are
configured.

Each emulation appears only once per director and consumes cores as needed. A maximum of 16 front-
end I/O module ports are mapped to an emulation. In order for a front-end port to be active, it must be
mapped to an emulation.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 91
Director Emulations Example
C:\Users\Administrator>symcfg list -dir all -sid 217

Symmetrix ID: 000197600217 (Local)

S Y M M E T R I X D I R E C T O R S

Ident Type         Engine Cores Ports Status
----- ------------ ------ ----- ----- ------
IM-1A IM 1 3 0 Online
IM-2A IM 1 3 0 Online
ED-1B EDS 1 4 0 Online
ED-2B EDS 1 4 0 Online
DF-1C DISK 1 5 4 Online
DF-2C DISK 1 5 4 Online
FA-1D FibreChannel 1 8 9 Online
FA-2D FibreChannel 1 8 9 Online
SE-1E GigE 1 2 8 Offline
SE-2E GigE 1 2 8 Offline
RF-1F RDF-BI-DIR 1 2 1 Online
RF-2F RDF-BI-DIR 1 2 1 Online

92 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

There is a single emulation instance of a specific type per director. The output from the symcfg list –
dir command displays the director emulations. The emulation instances that are seen in this example
output are:

1A & 2A - Infrastructure Manager (IM)

1B & 2B - HYPERMAX OS Data Services (EDS)

1C & 2C - Disk adapter (the output shows it as DF; it is the back-end emulation)

1D & 2D - Fibre Channel Frontend Adapter (FA)

1E & 2E - iSCSI/Gigabit Ethernet (SE)

1F & 2F - Fibre RDF (RF)

Also shown is the engine the emulations are running on, the number of cores each emulation is using, the
number of ports associated with the emulation type, and status. Notice the number of ports associated
with the FA and RF emulations.

With PowerMaxOS, all director emulations are capable of supporting multiple cores. The actual number of
cores assigned to a director is fixed. Also, all director emulations support a variable number of ports. Ports
are either physical or virtual. Virtual ports are associated with FA directors. You can associate and
disassociate ports from the FA and RF emulations if needed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 92
Add Director Emulations – Solutions Enabler

• New symconfigure command syntax examples

symconfigure –sid 217 commit –cmd “add dir slot_num = 1 type = FA;”
– add dir slot_num = <director number> type = <FA|FE|SE|RF|RE>

symconfigure –sid 217 commit –cmd “remove dir 1f;”


– remove dir <director number>

• Supported director types


– FA – Fibre channel front-end
– FE – FCoE front-end (VMAX3 only)
– SE – iSCSI front-end
– RF – Remote Fibre (SRDF)
– RE – Remote Ethernet (SRDF)

• No duplicate emulations

93 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Solutions Enabler 8.2 and above supports the ability to add and remove directors through the
symconfigure CLI. New options, add dir and remove dir, are used to support director
management. Examples of these CLI options are shown here.

When adding or removing directors, only front-end fibre channel (FA), Fibre Channel over Ethernet (FCoE
– FE on VMAX3 arrays only) and iSCSI (SE) can be used. Also, in SRDF configurations, remote fibre (RF)
or remote Ethernet (RE) can be added or removed. All other emulations cannot be added or removed.
Directors can be modified in slices D, E, F, G, and H only, as others—A, B, and C—are reserved for
internal and back-end emulations. An available slice must be present to add an emulation. If an emulation
already exists on a director, another instance of the emulation cannot be added to that director.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 93
Add Director Emulations – Unisphere

Supported director types


• FA – Fibre channel front-end
• FE – FCoE front-end
• SE – iSCSI front-end
• RF – Remote Fibre (SRDF)
• RE – Remote Ethernet (SRDF)
No duplicate emulations

94 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Adding Director Emulations can also be done using Unisphere for PowerMax. The same rules and
restrictions that apply to Solutions Enabler also apply to Unisphere add/remove activities.

From the System section, choose Hardware, and click the Manage Emulation button. Choose add or
remove, select the director slot (director number) and emulation type, and choose Add to Jobs List or
Run Now to add or remove an emulation.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 94
Logical Port Layout
PowerMax 2000
[Diagram: PowerMax 2000 logical port layout – slots 0 through 10 on each director (odd/A and even/B).
The layout shows the MMCS/MM module, NVMe flash back-end modules (ports 12, 13, 16, and 17),
Universal/FE modules (ports 4–11 and 24–31), the compression and deduplication module in slot 7,
reserved ports, and the Fabric (SIB) module with ports 0 and 1.]

95 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The logical port layout for a PowerMax 2000 array is shown here. PowerMax 2000 engines support 32
ports per director, ports 0 through 31. Ports 0, 1, 2, 3, 20, 21, 22, and 23 are reserved and not currently
used. Ports 4 through 11 and 24 through 31 can be used for front-end connectivity, and a compression
and deduplication module is installed in slot 7. Ports 12, 13, 16, and 17 are used for back-end connectivity.
On the SIB, ports 0 and 1 are used for inter-engine connectivity, as the PowerMax 2000 with its maximum
configuration of two bricks, does not require a fabric. Port numbers do not become available unless an I/O
module is inserted in the slot. Each FA emulation also supports 32 virtual ports numbered 32 to 63.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 95
Logical Port Layout
PowerMax 8000
Multi-engine system – “Born as” Multi-Engine System
• Compression and deduplication module located in Slot 7
• Subsequent engines added have compression and deduplication in Slot 7

Single-engine system – “Born as” Single-Engine System
• Compression and deduplication module located in Slot 9
• Subsequent engines added have compression and deduplication in Slot 9

96 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The logical port layout for PowerMax 8000 arrays is shown here. It is important to understand what the
system was “Born as”, that is, how it was initially ordered from the factory. With systems born as multi-
engine systems, ports 4 through 11 and 24 through 31 can be used for front-end connectivity. The
compression and deduplication module is installed in slot 7. Upgrades adding more bricks to this
configuration also have the compression and deduplication module in slot 7.

In systems that are born as single-engine systems, notice that the compression and deduplication
module is installed in slot 9. When this module is installed in slot 9, it eliminates the ability to use ports 28
through 31 for front-end connections. Upgrades adding more bricks to a single-engine configuration also
have the compression and deduplication module installed in slot 9, decreasing the number of front-end
ports over the entire system. In both multi-engine and single-engine configurations, ports 0, 1, 2, 3, 20, 21,
22, and 23 are reserved and not currently used. Ports 12, 13, 16 and 17 are used for back-end
connectivity. Ports 0 and 1 on the SIB are used to connect redundantly to the two InfiniBand switches, or
the fabric, in multi-engine systems.

In single-engine systems, the SIB is not installed. If a single-engine system is upgraded, SIB modules and
a fabric are included as part of the upgrade process and must be installed into the existing system. All
engines then have SIB modules to connect to the fabric for inter-director and inter-engine
communications. Port numbers do not become available unless an I/O module is inserted in the slot. Each
FA emulation also supports 32 virtual ports numbered 32 to 63.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 96
Logical Port Layout
VMAX All Flash 450, 850 and 950
[Diagram: VMAX All Flash 450/850/950 logical port layout – slots 0 through 10 on each director (odd/A
and even/B). The layout shows the MMCS/MM module, Vault to Flash modules, flash back-end modules
(ports 12 through 19), Universal/FE modules, the compression module in slot 9, and the Fabric (SIB)
module with ports 0 and 1.]


97 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

VMAX All Flash 450, 850 and 950 model logical port layouts are the same. Slot 9 contains a compression
module by default. This slot and the associated ports—ports 28 to 31—are not available for FE
connectivity in these models. Ports 12 through 19 are used for back-end connectivity. On the SIB, ports 0
and 1 are used for connectivity to the fabric in each director. Port numbers do not become available unless
an I/O module is inserted in the slot. Each FA emulation also supports 32 virtual ports numbered 32 to 63.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 97
Logical Port Layout
VMAX 250
[Diagram: VMAX All Flash 250 logical port layout – slots 0 through 10 on each director (odd/A and
even/B). The layout shows the MMCS/MM module, three Vault to Flash modules (slots 0, 1, and 6), the
flash back-end module in slot 4, an empty slot 5, Universal/FE modules (ports 4–11 and 24–31), the
compression module in slot 7, and the directly connected inter-director links (SIB) in slot 10.]

98 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

VMAX All Flash 250 models are designed to support 32 ports per director, ports 0 through 31. Ports 0, 1,
2, 3, 20, 21, 22, and 23 are reserved and not currently used. Ports 4 through 11 and 24 through 31 can be
used for front-end connectivity. VMAX 250 directors have up to three Vault to Flash I/O Modules in slots 0,
1, and 6, versus four in the other VMAX All Flash models. Slot 4 is used for the backend connections to
the disk drives in the DAEs, and Slot 5 is empty and unused. Slot 7 is used for the hardware compression
I/O Module, which is installed by default in every VMAX All Flash model. Finally, slot 10 is used for the
directly connected 56 Gb/s inter-director links, as no switches are used in the VMAX 250 models.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 98
System Hardware

99 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Unisphere System Hardware view for an array shows front-end, RDF, and back-end director
information. Clicking the down arrow on a director, as shown here with FA-1D and SE-1E, shows a listing
of the associated ports and status information. The Available Ports icon displays ports not in use, and the
type and speed of the port.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 99
Host Types

• Physical servers running UNIX offered by different vendors


– HP-UX from Hewlett Packard
– Solaris from Oracle
– IBM-AIX from IBM
– Linux from various companies
• Windows servers running Microsoft Windows
• Virtual Machines running on Hypervisors
• Mainframe
– z/OS
– z/TPF
– z/VM
– Linux on System Z

100 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax and VMAX All Flash arrays can be attached to a wide variety of operating systems—too
numerous to list here. In the open systems world, the most widely used operating systems are MS-
Windows and UNIX flavors such as Solaris, HP-UX, AIX, and Linux.

In recent years as VMware has grown in popularity, it is also common to find these arrays attached to
VMware ESXi servers. For a complete list of supported hosts and operating systems, consult the E-Lab
navigator accessible through the Dell EMC Support website.

PowerMaxOS supports mainframe attach on PowerMax and VMAX All Flash arrays. Operating system
support for mainframe includes z/OS, z/TPF, z/VM, and Linux running on System Z. PowerMax 2000 and
VMAX All Flash 250 arrays do not support mainframe attach.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 100
Front-End Port Flag Requirements

• Different operating systems need different bits or flags set on the front-end
port to recognize devices
• For Fibre Channel ports, the ACLX flag enables storage auto-provisioning for
the port
• Port flag settings can be overridden at the initiator group level
• Hosts should access devices over two or more ports
– With path management software
› Higher Availability
› Load balancing

If hosts A and B have the same port flag requirements and host C has different requirements, the initiators on host C can be set to override the port flags on the front-end port of the array.

[Diagram: Hosts A, B, and C sharing the array front-end ports]
101 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Many vendors require that specific fibre/SCSI flags are set to communicate with the storage array.
PowerMax and VMAX All Flash arrays permit the setting of flags at the front-end port level. Front-end
ports can be shared by multiple hosts as shown in the diagram. Sometimes hosts sharing the front-end
ports may have different bit/flag requirements.

PowerMax and VMAX All Flash arrays permit port flags to be overridden by flags set at the initiator or
initiator group level to accommodate hosts with different bit/flag requirements. The auto-provisioning
SYMCLI command symaccess or Unisphere for PowerMax is used to allocate storage to hosts.

The auto-provisioning process automatically maps and masks the devices. Most hosts typically access
storage through multiple front-end ports. Host-based path management software—for example,
PowerPath—is used to provide higher availability and load balancing.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 101
Simple Support Matrices – Dell EMC E-Lab

https://elabnavigator.dell.com/eln/elnhome
102 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Browse to the Dell EMC E-Lab Interoperability Navigator website
(https://elabnavigator.dell.com/eln/elnhome). Under Simple Support Matrices, choose Storage. Click the
link for the desired Simple Support Matrix. The Director Bit Settings Simple Support Matrix, shown here,
lists the port flag settings that are required for the various operating systems.

The host connectivity guides for the different operating systems can be found on the E-Lab website also.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 102
Excerpt from Simple Support Matrix

For most operating systems, the required flags are enabled by default

103 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Here is an excerpt from the Simple Support Matrix for Director Bit Settings in a Fibre Channel Switched
environment for VMAX All Flash arrays. For most operating systems, the required flags are enabled by
default. For HP-UX systems, the Volume Set Addressing flag has to be enabled. The Simple Support
Matrix also lists settings for iSCSI and FCoE. See the Simple Support Matrix for more details.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 103
Port Settings for Common Hosts

• HP
– Volume Set Addressing (V)
– SPC-2 Compliance (SPC2)
– SCSI-3 Compliance (SC3)

• IBM AIX, Solaris, Linux, Windows, VMware


– SPC-2 Compliance (SPC2)
– SCSI-3 Compliance (SC3)

• Most of the required flags are enabled by default


• ACLX flag is required for auto-provisioning

104 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are the common SCSI bus and Fibre port settings that are used by the common operating
systems. The ACLX flag needs to be enabled on the port to use auto-provisioning groups on PowerMax
and VMAX All Flash arrays.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 104
ACLX Port Considerations

• ACLX flag is required for auto-provisioning


– Enabled by default

• ACLX device
– Preconfigured
› No user management of ACLX device
– Default LUN Address of 0
– Visibility on Port
› Controlled by Show_ACLX_Device attribute
› Enabled on one FA port by default

105 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Auto-provisioning groups require ACLX-enabled ports. By default, the ACLX flag is enabled on all FA
ports. PowerMax and VMAX All Flash arrays come preconfigured with one ACLX device. A user cannot
create, delete, or change the attributes of the ACLX device. The device is visible to hosts at the default
address of 000. The device is only visible on front-end ports that have the Show_ACLX_Device port
characteristic set to Enabled.

When arrays come out of the factory, the first ACLX enabled port typically has the Show ACLX device flag
enabled. All other ACLX enabled ports typically have the flag disabled. As a result, the ACLX device is
visible to hosts only on one port.
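
The Show_ACLX_Device characteristic is a port attribute, so as a sketch it could be toggled with the symconfigure set port syntax covered later in this module—assuming the flag spelling matches the attribute name shown on the slide:

symconfigure -sid 217 -cmd "set port 1D:5 Show_ACLX_Device=enable;" commit     (flag spelling assumed)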

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 105
Unisphere – Port Details and Attributes

106 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

From the System Hardware page, select a front-end port. In this example, FA-1D, Port 5 has been
chosen. Notice that details of the port are shown on the right of the screen. Scroll down to see the port
attributes enabled/disabled state on this port. A Performance tab is also available for performance
information about this port.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 106
Unisphere – Set Port Attributes

107 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To set port attributes, click the More Actions icon (three vertical dots). Clicking the Set Port Attributes
selection opens the Set Port Attributes dialog.

The current port flag settings are shown in the dialog. In this example, Volume Set Addressing is disabled.

Make the desired changes in the Set Port Attributes dialog. For example, enable Volume Set Addressing
by selecting the box next to it.

After making the desired changes, click Add to Job List. The task is listed in the Job List view, and the
command can be run from there. Alternatively, you can choose Run Now.

Front-end port attributes or characteristics can be set with the SYMCLI symconfigure command. The
symconfigure syntax is # set port DirectorNum:PortNum FlagName=enable|disable;
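
For example, a minimal sketch that enables Volume Set Addressing on FA-1D port 5 from the example above; the flag name spelling in the command file is an assumption here:

symconfigure -sid 217 -cmd "set port 1D:5 Volume_Set_Addressing=enable;" commit     (flag name assumed)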

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 107
Listing Free Ports

C:\Users\Administrator>symcfg list –free -port -sid 217

Symmetrix ID: 000197600217

Slot Port FCISDRE Gb/sec Status


---- ---- ------- ------ -------
1 7 Y...YY. 16 Powered
2 7 Y...YY. 16 Powered

Legend:
Flags:
(F)A : Y = Yes, . = No
F(C)OE : Y = Yes, . = No
F(I)CON : Y = Yes, . = No
(S)E : Y = Yes, . = No
(D)X : Y = Yes, . = No
(R)F : Y = Yes, . = No
R(E) : Y = Yes, . = No

108 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

For PowerMax and VMAX All Flash arrays, there is only a single emulation instance of a specific type—
FA, DS, RF, and so on—available per director board as discussed earlier. If you need more connectivity,
you can add additional ports to an existing emulation instance. That instance uses all cores that are
configured to it to drive the workload across all ports that are assigned to it.

A capability attribute on each physical port determines the set of front-end emulations to which the port
may be assigned. You can associate—assign—unused ports to front-end emulations and disassociate—
free—ports from the FA and RF emulation types.

Ports that are available to be associated with an emulation can be listed with SYMCLI or with Unisphere
as shown here. The Slot numbers refer to the directors.

In this example, the available ports are port 7 on directors 1 and 2, which are 16 Gb/s Fibre Channel
ports. To view the free ports, use the symcfg list -free -port command in SYMCLI, or the Available Ports
tab on the System Hardware page in Unisphere.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 108
Disassociate Ports

• Disassociate considerations
– Front-end port must not be in a port group
– RDF port must not have any RDF groups configured
– Port needs to be offline

• symconfigure syntax
– disassociate port <portnum>,[<portnum>,…] from dir <dirnum>;
– Example: Disassociate ports 30 and 31 from director 1D
disassociate port 30,31 from dir 1D;

• Unisphere
– Select desired port from port listing and choose Disassociate from the More Actions icon

109 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Before disassociating, ensure that a front-end port is not in a port group or the RDF port does not have
any RDF groups configured. Ports have to be offline before they can be disassociated from a given
director. You can offline the port with SYMCLI or Unisphere for PowerMax.

The SYMCLI symconfigure syntax is shown here with an example. In Unisphere, from the System
Hardware page, select the port to be disassociated from the Front-End or RDF port listing. Choose
Disassociate from the More Actions (three vertical dots) icon.
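
As a sketch, the full sequence for the example above—assuming the symcfg offline action mirrors the online action shown in the next slide's notes:

symcfg -sid 217 -fa 1D -p 30 offline     (offline action assumed)
symcfg -sid 217 -fa 1D -p 31 offline
symconfigure -sid 217 -cmd "disassociate port 30,31 from dir 1D;" commit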

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 109
Associate Ports

• symconfigure syntax
– associate port <portnum>,[<portnum>,…] to dir <dirnum>;
– Example: Associate port 7 to director 1D
associate port 7 to dir 1D;

• Online the port after association

110 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Free/Available ports can be associated with a desired director emulation. The SYMCLI symconfigure
syntax is shown here with an example. In Unisphere for PowerMax, select an available port from the
Available Ports listing and then click the Associate button. In the Port Association dialog, select the
desired emulation to which the port should be associated and then click OK to complete the association.
In this example, port 7 is associated with the fibre channel emulation on director 1.

Once the port has been associated, it needs to be brought online. Use the SYMCLI symcfg -fa
<dirnum> -p <portnum> online command or use Unisphere to enable the port. The port can be
enabled from the Front-End Director port list view—you saw this view when you were setting port
attributes in Unisphere.
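
Putting both steps together for this slide's example—a minimal sketch using the commands above:

symconfigure -sid 217 -cmd "associate port 7 to dir 1D;" commit
symcfg -sid 217 -fa 1D -p 7 online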

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 110
Lab: Port Management with Unisphere and SYMCLI

This lab covers:


• Port management with Unisphere for PowerMax
• Port management with SYMCLI

111 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers port management with Unisphere and SYMCLI.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 111
Module Summary

Key points covered in this module:

• Creation, deletion, and expansion of thin devices

• Management of port attributes and port associations

112 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered device creation, deletion, expansion, and port management.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Device and Port Management 112
Module: Storage Allocation using Auto-provisioning
Groups

Upon completion of this module, you should be able to:

• Describe storage allocation using auto-provisioning groups

• Explain the benefits, features, and considerations of host I/O limits

• Articulate host considerations for storage allocation

• Perform SL-based provisioning with Unisphere for PowerMax

• Perform SL-based provisioning with SYMCLI

113 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on allocation of storage to hosts using auto-provisioning groups. Auto-provisioning
groups, host I/O limits, and host considerations while allocating storage are discussed. Using Unisphere
for PowerMax and SYMCLI to perform SL-based storage provisioning is shown.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 113
Lesson: Auto-provisioning Groups Overview
This lesson covers the following topics:

• Auto-provisioning Groups

• Host I/O Limits

114 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson provides an overview of auto-provisioning groups and host I/O limits. SYMCLI syntax to
manage auto-provisioning groups is introduced.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 114
Auto-provisioning Overview

• Easy to provision storage in environments with clusters and hosts using multiple paths to the array
• Managed with SYMCLI or Unisphere for PowerMax
• SYMCLI
– symaccess command manages all auto-provisioning groups and Masking Views
– symsg command manages Storage Groups (SGs)
› Used for auto-provisioning and performance definition:
o Performs many of the functions that symaccess performs on SGs
o Also used to set Host I/O limits, SRP, SL
• Unisphere for PowerMax
– Storage Groups and Hosts sections

115 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

As the number of volumes in a single array continues to increase, auto-provisioning offers a flexible
scheme for provisioning storage in large enterprises. Auto-provisioning groups enable storage
administrators to create groups of host initiators (Hosts), front-end ports (Port Groups), and logical devices
(Storage Groups). These groups are then associated to form a Masking View, from which all controls are
managed. Masking Views reduce the number of commands that are needed for masking devices and
enable easy management of LUN masking.

Auto-provisioning in PowerMax and VMAX All Flash arrays is achieved by using the symaccess
SYMCLI command or with Unisphere for PowerMax. The symaccess command can manage Storage
Groups, Port Groups, Hosts, and Masking Views.

The symsg SYMCLI command manages Storage Groups and is used for auto-provisioning.

In Unisphere, the Storage Groups and Hosts sections are used to manage auto-provisioning. The Storage
section has the Storage Groups Dashboard. Port Groups, Hosts (Initiator Groups), and Masking Views are
managed under the Hosts section.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 115
Auto-provisioning Groups
Group Names
• Up to 64 characters long
• Case insensitive
• Unique per group type

Initiator/Host Group – contains FC WWNs or iSCSI IQNs
Port Group – contains ports
Storage Group – contains devices

One of each group type is associated to form a Masking View.
116 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Auto-provisioning groups are used for device masking in PowerMax and VMAX All Flash arrays.

An Initiator Group (IG) contains the world wide name (WWN) or the iSCSI Qualified Name (IQN) of a host
initiator. A host initiator is also known as an HBA, or host bus adapter. An IG may contain a maximum of 64
initiator addresses or 64 child IG names. IGs cannot contain a mixture of host initiators and child IG
names.

Port flags are set on an Initiator Group basis, with one set of port flags applying to all initiators in the
group. However, FCID lockdown is set on a per initiator basis. An individual initiator can only belong to
one IG, but once the initiator is in a group, that group can be a member of another IG—a group within a
group. This feature is called cascaded Initiator Groups and is only allowed to a cascaded level of one.

A Port Group may contain a maximum of 32 front-end ports. Front-end ports may belong to more than one
Port Group. Before a port can be added to a Port Group, the ACLX flag must be enabled on the port.

Storage Groups can only contain devices or other Storage Groups. No mixing is permitted. A Storage
Group with devices may contain up to 4K PowerMax or VMAX All Flash logical volumes. A logical volume
may belong to more than one Storage Group. There is a limit of 16K Storage Groups per PowerMax or
VMAX All Flash array. A parent Storage Group can have up to 64 child Storage Groups.

One of each type of group is associated together to form a Masking View.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 116
Auto-provisioning Flexibility

• Initiators can be dynamically added or removed from Initiator Groups


• Ports can be dynamically added or removed from Port Groups
• Storage can be dynamically added or removed from Storage Groups
• Automatic session rollback in the event of a session failure
– Audit log contains messages relating to the rollback

117 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Once the groups have been created, auto-provisioning represents an easy way to handle provisioning. It
enables you to mask multiple devices, ports, and HBAs by placing them into groups. These groups can be
dynamically altered to give the host access to new storage.

With the symaccess command, all groups and views are backed up to a file, and can be restored from a
backup file.

When an auto-provisioning session fails on a PowerMax or VMAX All Flash array, the system
automatically rolls back the ACLX database to the state it was in prior to initiating the session. This
rollback feature recovers the database and releases the session lock automatically. The audit log contains
any messages relating to the rollback.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 117
Provisioning Limits

Object – PowerMax and VMAX All Flash Maximums

Devices – 16K per director; 4K per Storage Group
Initiator Group (IG) – 64 initiators (or 64 child IGs) per IG
Storage Group (SG) – 16K SGs per array; 64 child SGs per parent SG
Port Group (PG) – 16K PGs per array; 32 ports per PG
Masking View (MV) – 16K MVs per array
LUN Addresses – 4K LUN addresses per director port

118 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The table shows the provisioning limits for PowerMax and VMAX All Flash arrays.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 118
Storage Groups

• Collection of devices
– Used for LUN masking

• Can be explicitly associated with SRP and SL


– Default – SG is associated with Default SRP and Optimized SL
– A thin device may only be in one SG

119 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A Storage Group is a logical collection of thin devices that are to be managed together. A Storage Group
can be explicitly associated with an SRP, an SL, or both. By default, devices within an SG are associated
with the default SRP and are managed by the Optimized SL.

A thin device may only be included in one SG. This restriction ensures that a single device is not
managed by more than one SL and does not have data allocated from more than one SRP.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 119
Cascaded Storage Groups

• Storage Group with other Storage Groups as members
– Single level of cascading
• Parent SG
– Inherits all the devices in the child SGs
– Cannot inherit same device from more than one child
• Child SG
– Contains devices only
– SRP and SL set on child
– May only be contained in a single parent
• Masking is typically done on the parent

[Diagram: Parent SG containing Child SG1 and Child SG2]

120 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Cascaded Storage Groups are Storage Groups that contain other Storage Groups. A Storage Group that
has other Storage Groups as members is called the parent. A child Storage Group contains only devices
and is contained within a parent Storage Group. Cascading of Storage Groups enables individual
policies—SRP and SL, where applicable—on each child, with a Masking View on the parent Storage Group.

Only a single level of cascading is permitted. A parent Storage Group may not be a child of another
Storage Group. Storage Groups can only contain devices or other Storage Groups. No mixing is
permitted.

Empty SGs can be added to a parent SG if the parent SG inherits at least one device when the parent SG
is in a view. A parent SG cannot inherit the same device from more than one child Storage Group. A child
Storage Group may only be contained by a single parent Storage Group.

Masking is not permitted for a child Storage Group which is contained by a parent Storage Group already
part of a Masking View. Masking is not permitted for the parent Storage Group which contains a child
Storage Group that is already part of a Masking View.

A child Storage Group cannot be deleted until it is removed from its parent Storage Group.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 120
Create Storage Group – SYMCLI Syntax

symaccess example: Create SG and add devices


#symaccess create -sid 225 -name SG_1 -type storage devs 50:52

symsg example: Create SG, and set SL


#symsg create -sid 225 SG_1 -sl Gold
#symsg -sid 225 -sg SG_1 addall -devs -range 50:52

Devices 50, 51, and 52

121 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The example shows how to use both symaccess and symsg commands to create Storage Groups. The
symaccess command enables you to create the Storage Group and simultaneously add devices or child
Storage Groups.

The symsg command enables you to create an empty Storage Group first and then populate it with
devices or child Storage Groups. The symsg command also enables you to set the SL and Host I/O limits
when the Storage Group is created.

See the latest Solutions Enabler Array Management CLI User Guide for more details and options while
creating and managing Storage Groups with the symaccess and symsg commands.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 121
Other Common SG Operations – SYMCLI

Action: Add devices
  symaccess -sid 80 -name MyGrp -type storage add devs CD:F4
  symsg -sg MyGrp -sid 80 addall -devs -range CD:F4

Action: Remove devices
  symaccess -sid 80 -name MyGrp -type storage remove devs CD:F4
  symsg -sg MyGrp -sid 80 rmall -devs -range CD:F4

Action: Display group info
  symaccess list -type storage -sid 80
  symaccess show -name MyGrp -type storage -sid 80
  symsg list -sid 80
  symsg show MyGrp -sid 80

Action: Delete a group
  symaccess -sid 80 -name MyGrp -type storage delete
  symsg delete MyGrp -sid 80

122 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Here are some other commonly performed Storage Group operations. Storage Groups can be renamed if
needed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 122
Set SRP/SL – symsg Syntax

symsg -sg <SgName> -sid <SymmID>

set [-sl <SLName> |-nosl]

[-srp <SRPName> | -nosrp]

-sl

Diamond, Platinum, Gold, Silver, Bronze, Optimized (default)

-nosl
Removes any explicitly set SL

123 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

By default, Storage Groups use the default SRP and are managed by the Optimized SL. The valid
arguments for the -sl and -nosl options are listed. Workload type is no longer used with PowerMax
and VMAX All Flash arrays running PowerMaxOS, as previously mentioned. The -nosl option removes
any explicitly set SL and WL type; the SG is then managed by the Optimized SL. The -nosrp option removes
any explicitly set SRP; the SG then uses the default SRP.
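
A minimal sketch using this syntax; the SRP name is hypothetical:

symsg -sid 225 -sg SG_1 set -sl Diamond -srp SRP_1     (SRP_1 is a placeholder name)
symsg -sid 225 -sg SG_1 set -nosl -nosrp

The first command pins SG_1 to the Diamond SL on SRP_1; the second returns the SG to the Optimized SL and the default SRP.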

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 123
PowerMaxOS Data Reduction

[Graphic: Native Data Reduction — Intelligent Inline Data Reduction]

• Supports all data services
• Granular
– Storage Group (Application) level
– Can compress and/or dedupe existing data
• Use VMAX Sizer for proper configuration
• Data is reduced by I/O module
– If hardware fails, software is used
• Performance-Optimized
– Balances performance and efficiency
– Hot data not compressed
• Data deduplication (dedupe) and compression on PowerMax
• Data compression on VMAX All Flash

124 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMaxOS includes data reduction, which maximizes the PowerMax and VMAX All Flash value
proposition by providing the best space savings. Intelligent inline data reduction delivers higher space
efficiency, reducing the overall cost per usable TB. It works with all VMAX trusted data services such as
SnapVX, SRDF, eNAS, and D@RE. Compressed data can be encrypted in real time, which is unique.

Data reduction operates granularly at the Storage Group (Application) level so customers can target those
workloads that benefit the most. Data reduction techniques can be applied to existing data. In addition to
effective capacity calculations, cache requirements must be met to support compression. VMAX Sizer
must be used to size a system that will have data reduction enabled.

Data is reduced as it moves from the system cache to the back end drives using a data reduction I/O
Module on each director. If the hardware I/O module fails, software-based reduction is used.

PowerMaxOS compression is performance-optimized and smart enough to ensure the most active data is
not compressed. This optimization enables the system to deliver maximum throughput using cache and
SSD technology, and ensures that system resources are balanced and always available when required.

PowerMaxOS running on PowerMax arrays provides compression and deduplication (dedupe) together.
PowerMaxOS running on VMAX All Flash arrays provides compression only.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 124
PowerMaxOS Data Reduction by Platform

Feature                               PowerMax                               VMAX All Flash
Data Reduction Technology             Inline Compression and                 Inline Compression
                                      Inline Deduplication
I/O Module Type                       Data Reduction Module                  Compression Module
Adaptive Compression Engine (ACE)     Yes                                    Yes
  (algorithms learn from the incoming
  workload to create a customized
  back end)
Compression Algorithm                 Deflate                                LZS
Extended Data Compression (EDC)       Yes                                    No
  (further compresses compressed data)
SRP Type                              Open Systems (FBA)                     Open Systems (FBA)

125 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMaxOS data reduction technology differs slightly between PowerMax and VMAX All Flash arrays. In
PowerMax arrays, both compression and deduplication are done inline using a data reduction hardware
I/O module. In VMAX All Flash arrays, inline compression is used, without deduplication. VMAX All Flash
arrays use a compression hardware I/O module.

Both systems employ an Adaptive Compression Engine, or ACE, which is a combination of multiple core
components working together to achieve maximum system efficiency. Intelligent algorithms learn from
incoming workloads and dynamically create a customized back end, catering to the incoming workload.
ACE changes the backend compression pool layout as needed to ensure it operates at optimal levels for
both performance and space efficiency. The algorithms also identify the busiest data in the system and
that data is not compressed, minimizing decompression overhead.

Deflate compression is used in PowerMax, while LZS is used in VMAX All Flash arrays. Both of these
techniques are lossless data compression algorithms.

PowerMax systems include Extended Data Compression, or EDC. EDC compresses already compressed
data to gain further capacity savings. Data that qualifies for EDC is data that has not been accessed for 30
days. To be eligible for EDC, the data must also belong to a data reduction enabled Storage Group, and
must not be already compressed by EDC.

Data reduction applies to Open Systems Fixed Block Architecture (FBA) data only.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 125
PowerMax Deduplication

Dedupe Function/Component – Description

Hardware Acceleration
• Inline process using the same data reduction module as ACE
• Data is passed through the module to generate a unique Hash ID

Deduplication Algorithm
• Identifies identical data patterns based on Hash ID
• Performed as data is passed through the data reduction module

Hash Table
• Unique Hash IDs stored in a table in system memory
• Representation of data in a dedupe relationship

Deduplication Management Object (DMO)
• Only exists when a dedupe relationship exists
• Stores and manages pointers between front-end devices and the single instance of data stored on disk
126 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax uses inline deduplication to reduce redundant copies of data that would consume storage
capacity. Pointers are used to replace the redundant copies, and provide access to multiple sources for
the subsequent requests for that data. Deduplication in PowerMax arrays is accomplished through a
series of functions and components that are described in the table above.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 126
PowerMaxOS Write I/O Flow
[Flowchart: PowerMaxOS write I/O flow. A write is initiated. If data reduction is not enabled on the Storage Group, or the data is active (hot), the normal I/O flow is performed. Otherwise the data is compressed and a hash ID is created*. If the hash ID is not already in the hash table, it is added to the table and the data is allocated to disk. If the hash ID is already in the table, the entry is updated; then, if fewer than 5 FE pointers exist, the write is added to an existing DMO (or a new DMO is created if none exists).]

*Hash IDs and hash tables are not used in VMAX All Flash arrays.
127 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The write I/O flow for PowerMax is shown here. With VMAX All Flash arrays, the I/O flow does not involve
a hash ID and hash table, which are used for deduplication, but the compression flow is the same.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 127
PowerMax Data Reduction IO Flow

1 Host write
• Write to TDEV
• Destage at a later time

2 Destage
• Data Reduction Module
– Compresses data
– Generates unique hash

3 I/O Engine
• Checks hash table – no entry in table
• Adds hash entry to table
• Writes data to TDAT
• Links hash entry to TDAT

[Diagram: TDEV 021 → Data Reduction Module → I/O Engine with hash table entry 1-22DB-CEDCDC → TDAT FFCF6]

128 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Writes to PowerMax arrays are handled much in the same way as on VMAX Family arrays. Writes are stored in
cache, and the write is immediately acknowledged to the host. At a later point, the array destages the data
from cache to back-end disks. With PowerMax data reduction, the destage process differs. Once the
decision to destage has been made, data is placed in the I/O Engine. From there, the Data Reduction
module compresses the data and creates a unique hash for the data. The I/O Engine then checks the
hash table to see if there is already an entry for that data. Since it is a new write, there is not, and the hash
is added to the table. The compressed data is written to the back-end TDAT, and the hash is linked to it.
The write is now complete, and the TDEV points to the data on the TDAT.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 128
Duplicate Write I/O Flow

1 Host write
• Write to a second TDEV
• Destage at a later time

2 Destage
• Data Reduction Module
– Compresses data
– Generates unique hash

3 I/O Engine
• Checks hash table – sees duplicate entry
• Updates table with additional pointer

[Diagram: TDEVs 021 and A4E both linked through hash table entry 1-22DB-CEDCDC to TDAT FFCF6]

129 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Some time later, a duplicate write is done to a different TDEV. The Data Reduction module compresses
the data and generates the dedupe hash, which is identical to the previous write. The I/O Engine then
checks the hash table and sees that there is already a hash entry for this data. Data does not need to be
destaged. The pointer in the table is updated for the new write.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 129
Compression Settings – Unisphere

130 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Compression enables users to compress user data on Storage Groups. Compression is enabled by
default on PowerMax and VMAX All Flash arrays, and can be turned on and off at the Storage Group
level. If a Storage Group is cascaded, enabling compression at the parent level enables compression for
each of the child Storage Groups. The user has the option to disable compression on one or more of the
child Storage Groups if desired. To turn off the feature on a particular Storage Group in Unisphere, clear
the Compression check box when creating or modifying Storage Groups. Disabling compression does not
automatically decompress data, but new I/O is not compressed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 130
Compression Settings – Solutions Enabler

• Enable compression on selected Storage Group

symsg -sid <SymmID> -sg <sg_name> set -compression

• Disable compression on selected Storage Group

symsg -sid <SymmID> -sg <sg_name> set -nocompression

• Create new Storage Group with compression enabled

symsg -sid <SymmID> create <sg_name> -compression -srp <srp_name> -sl <sl_name>

131 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To enable or disable compression on a particular Storage Group using Solutions Enabler, use the symsg
set commands that are shown here.

When creating a new Storage Group with compression enabled, use the symsg create command that
is shown.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 131
Host I/O Limits Overview

• Set limits on the front-end bandwidth and IOPS consumed by applications


• Limits are set on a per Storage Group basis
– Storage Group is associated with a Masking View
– Limits are distributed across the directors in the Port Group of the associated
Masking View
› Distribution can be dynamic
– I/O limits may be placed on parent and child Storage Groups

132 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Host I/O Limits feature enables users to place limits on the front-end bandwidth and IOPS that are
consumed by applications.

Limits are set on a per Storage Group basis. As users build Masking Views with these Storage Groups,
limits for maximum front-end IOPS or MB/s are distributed across the directors within the associated
Masking View. The system then monitors and enforces against these set limits.

The Host I/O Limits can be managed and monitored using both Solutions Enabler and Unisphere for
PowerMax.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 132
Host I/O Limits – Benefits

• Ensures that applications cannot exceed their set limit, reducing the potential of
impacting other applications
• Provides greater levels of control on performance allocation in multi-tenant
environments
• Enables predictability needed to service more customers on the same array
• Simplifies quality-of-service management by presenting controls in industry-
standard terms of IOPS and throughput
• Provides the flexibility of setting either IOPS, throughput, or both, based on
application characteristics and business needs
• Manages expectations of application administrators with regard to performance

133 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The benefits of Host I/O Limits are listed here.

Host I/O limits are beneficial whenever an array is shared among multiple tenants by enabling the setting
of consistent performance SLAs. They prevent applications from using more than their allotted share of
front-end resources.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 133
Host I/O Limits – Features

• Cascaded Storage Groups


– Limits can be set on parent, on child, or on both
– Sum of all child limits may exceed the limit of the parent
› Total I/O rate of all the children remains limited by the limit of the parent
– Individual child limit may not exceed the limit of the parent
• Dynamic I/O Distribution
– Mode can be Never, OnFailure, or Always
› Never: Implies static even distribution (default)
› OnFailure: On a failure, I/O limits are redistributed to online ports
› Always: I/O limits are dynamically distributed based on demand

134 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

For Cascaded Storage Groups, users may set up a cascaded SG configuration where there are optional
limits that are assigned to each individual child SG. The parent SG may also have its own assigned limit.
The sum of child limits may exceed the limit of the parent. However, the combined I/O rate of all child SGs
remains limited by the limit of the parent. Also, the individual child SG limits may not exceed the assigned
limit of the parent.

Host I/O distribution is governed by the Dynamic Mode setting. The default mode is Never which implies a
static even distribution of configured limits across the participating directors in the Port Group. The
OnFailure mode causes adjustment of the fraction of the configured Host I/O limits available to a
configured port based on the number of ports that are online. Setting the dynamic distribution to Always
causes dynamic distribution of the configured limits across the configured ports, enabling the limits on
each individual port to adjust to fluctuating demand.

For example, if the mode is set to OnFailure in a two-director Port Group which is part of a Masking View,
both directors are assigned half of the total limit. If one director goes offline, the other director is
automatically assigned the full amount of the limit. This assignment makes it possible to ensure that the
application is running at full speed regardless of a director failure.
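
In SYMCLI, Host I/O limits are set on the Storage Group with the symsg set action—a minimal sketch, assuming the -iops_max, -bw_max, and -dynamic option names match your Solutions Enabler version:

symsg -sid 225 -sg SG_1 set -iops_max 10000     (option names assumed)
symsg -sid 225 -sg SG_1 set -bw_max 200
symsg -sid 225 -sg SG_1 set -dynamic OnFailure

This would cap SG_1 at 10,000 IOPS and 200 MB/s across the ports of its Masking View and, on a director failure, redistribute the limit to the surviving ports.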

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 134
Host I/O Limits – Considerations

• Only one limit per Storage Group


• Devices in multiple Storage Groups can only adhere to one limit
• A Storage Group with a Host I/O limit can be associated with, at most, one
Port Group in any provisioning view
• Usually, the total Host I/O limits may only be achieved with proper host load
balancing between directors

135 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Only one limit can be set per Storage Group, and devices in multiple Storage Groups can only adhere to
one limit.

At any given time, an SG with a Host I/O Limit can be associated with, at most, one Port Group (PG) in
any provisioning view. If the SG with a Host I/O Limit is in a provisioning view with a PG, the SG and PG
combination have to be used when creating other provisioning views on the SG.

Usually, the total Host I/O Limits may only be achieved with proper host load balancing between directors.
Load balancing is achieved using multipathing software on the hosts, such as PowerPath.

Solutions Enabler also supports bandwidth limits at the initiator level.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 135
Port Groups

• Contain valid front-end ports
• A port can belong to more than one Port Group
• Ports must have ACLX flag enabled

[Diagram: Port Group containing ports 1D:6 and 2D:6]

Create Port Group – SYMCLI Example:

#symaccess create -sid 225 -name PG_1 -type port -dirport 1D:6,2D:6

136 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Port Groups contain front-end director and port identification. A port can belong to more than one Port
Group. Ports can be Fibre Channel or iSCSI. On PowerMax and VMAX All Flash arrays, you cannot mix
different types of ports, that is, Fibre Channel and iSCSI, within a single Port Group. Ports must have the
ACLX flag enabled. The ACLX flag is enabled by default.

Ports can be added and removed. When a Port Group is no longer associated with a Masking View, it can
be deleted.

The SYMCLI example that is shown creates a new PG named PG_1 containing two front-end ports.

See the latest Solutions Enabler Array Management CLI User Guide for more details and options while
creating and managing Port Groups with the symaccess command.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 136
Other Common PG Operations – SYMCLI

Action symaccess
Add port to Port Group symaccess -sid 80 -name MyPorts -type port -dirport 1D:6 add

Remove port from group symaccess -sid 80 -name MyPorts -type port remove -dirport 1D:6

symaccess list -name MyPorts -sid 80


Display group contents
symaccess show MyPorts -type port -sid 80

Delete a group symaccess -sid 80 -name MyPorts -type port delete

List the group or groups that symaccess list -sid 80 -type port -dirport 1D:6
a port belongs to

137 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are some of the operations that are commonly performed on a Port Group. Port Groups can
be renamed if needed.

See the latest Solutions Enabler Array Management CLI User Guide for more details and options while
creating and managing Port Groups with the symaccess command.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 137
Initiator Groups (IG)

• Container of host initiators


– Can be cascaded
– Cannot mix host initiators and child IG names
– Cannot mix host initiator types

• An initiator may belong to only one IG


• A child IG may belong to one or more parent IGs
• Cascaded IG Example
– Child_IG1 contains WWN1 & WWN2
– Child_IG2 contains WWN3 & WWN4
– Parent_IG contains Child_IG1 and Child_IG2

138 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

An Initiator Group is a container of one or more host initiators, which are Fibre WWNs or iSCSI IQNs.
Each Initiator Group can contain up to 64 initiator addresses or 64 child IG names. Initiator Groups cannot
contain a mixture of host initiators and child IG names. Thus an IG contains only host initiators or it
contains only child IG names.

You cannot mix different types of initiators, that is, Fibre Channel WWNs and iSCSI IQNs, within a single
Initiator Group. Also, all child IG names that are added to a parent IG must contain the same initiator type.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 138
Create Initiator Group – SYMCLI Example

#symaccess create -sid 225 -name IG_1 -type initiator -consistent_lun -wwn 2100001b321e9dd5
#symaccess -sid 225 -name IG_1 -type initiator add -wwn 2101001b323e9dd5

OR

#symaccess create -sid 225 -name IG_1 -type initiator -consistent_lun -file HBA_WWNS

File HBA_WWNS contains:
  wwn:2100001b321e9dd5
  wwn:2101001b323e9dd5

[Diagram: Initiator WWNs 2100001b321e9dd5 and 2101001b323e9dd5]

139 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

You can create an Initiator Group using the WWN of the HBA, or a file containing WWNs or another
Initiator Group name. Use the -consistent_lun option if the devices of an SG in a view must be seen
on the same LUN on all ports of the Port Group. If the -consistent_lun option is set on the IG,
PowerMaxOS ensures that the host LUN number that is assigned to devices is the same for the ports. If
this option is not set, the first available LUN on each individual port is chosen.

For arrays running PowerMaxOS or HYPERMAX OS 5977 or higher, you can create an Initiator Group
using the iSCSI IQN of the HBA also. To support iSCSI targets, the symaccess command includes the
-iscsi_dirport and -iqn options.
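
As a sketch, an iSCSI-based IG might be created with the -iqn option; the IQN value here is hypothetical:

#symaccess create -sid 225 -name IG_iscsi -type initiator -iqn iqn.1998-01.com.vmware:esxhost1     (IQN is a placeholder)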

See the latest Solutions Enabler Array Management CLI User Guide for more details and options while
creating and managing Initiator Groups with the symaccess command.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 139
Other Common IG Operations – SYMCLI

Action symaccess
symaccess -sid 80 -name MyInit -type initiator add -wwn
Add initiator to group 10000000c92ab6de

Remove initiator from symaccess -sid 80 -name MyInit -type initiator


Initiator Group remove -wwn 10000000c92ab6de

symaccess list -name MyInit -sid 80

Display group contents    symaccess show MyInit -type initiator -sid 80
                          symaccess show MyInit -type initiator -sid 80 -detail

Delete a group symaccess -sid 80 -name MyInit -type initiator delete

List the group or groups to symaccess list -sid 80 -type initiator -wwn 10000000c93124ae
which an initiator belongs

140 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are some of the operations that are commonly performed on an Initiator Group. Initiator
Groups can be renamed if needed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 140
Masking View

• Association of one Initiator Group, one Port Group, and one Storage Group
– Devices in SG become visible to host initiators in the IG through the ports in the
PG

• Create Masking View – SYMCLI example

#symaccess create view -sid 225 -name MV_1 -sg SG_1 -pg PG_1 -ig IG_1

[Diagram: Masking View MV_1 associating IG_1 (host initiators), PG_1 (front-end ports on the SAN), and SG_1 (devices)]
141 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A Masking View is created by associating one Initiator Group, one Port Group, and one Storage Group. So
a Masking View is a container of a Storage Group, a Port Group, and an Initiator Group.

When you create a Masking View, the devices in the Storage Group become visible to the host. The
devices are masked and mapped automatically.

See the Solutions Enabler Array Management CLI User Guide for more details and options while creating
and managing Masking Views with the symaccess command.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 141
Other Common Masking Operations – SYMCLI
Action – symaccess

Rename a Masking View
  symaccess -sid 80 rename view -name MyView -new_name YourView

Delete a Masking View
  symaccess -sid 80 -name MyView delete view

Display view info
  symaccess list view -name MyView -sid 80
  symaccess show view MyView -sid 80

Backup
  symaccess -f symm80.bak -sid 80 backup

Restore
  symaccess -f symm80.bak -sid 80 restore

142 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are some of the operations that are commonly performed on Masking Views. The
symaccess backup command backs up the entire masking database.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 142
Non-Disruptive Device Moves Between SGs

• Moving devices from one SG to another SG does not disrupt host visibility
for the devices
– Certain conditions must be met*

• symsg syntax

symsg -sg <SgName> -sid <SymmID>

  move dev <SymDevName> <DestSgName> [-force]
    [-devs <<SymDevStart>:<SymDevEnd> | <SymDevName>
    [,<<SymDevStart>:<SymDevEnd> | <SymDevName>>...]> |
    -file <DeviceFileName> [-tgt]]

  moveall <DestSgName> [-force]

*See notes
143 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax and VMAX All Flash arrays support moving devices from one SG to another SG without
disrupting host visibility for the devices. Moving a device to another SG does not disrupt the host visibility
for the device, if any one of the conditions is met:

• Moves between child SGs of a parent SG, when the view is on the parent SG.

• Moves between SGs when a view is on each SG, and both the Initiator Group (IG) and the Port
Group (PG) are common to the views.

• Moves between SGs when a view is on each SG, and they have a common IG. They have different
PGs but the same set of ports, or the target PG is a superset of the source PG.

• Moves when source SG is not in a Masking View.

If none of the conditions are met, the operation is rejected, but the move can be forced by specifying the
-force flag. Forcing a move may affect the host visibility of the device.

The symsg syntax is shown here.
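
For example, a minimal sketch of a non-disruptive move between two child SGs of the same parent (SG names and device number are hypothetical):

symsg -sg ChildSG1 -sid 225 move dev 0050 ChildSG2

Because the Masking View is on the parent SG, the first condition above is met and the device stays visible to the host throughout the move.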

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 143
Non-Disruptive Cascaded SG Conversion
Stand-alone SG to Cascaded SG
• Parent SG retains the name of the original stand-alone SG
• If the original stand-alone SG is part of any Masking Views, after conversion all views will be moved to the new parent SG
• Existing Host I/O limits can be migrated to the new parent or child SG
• If the original stand-alone SG was FAST managed, only the child SG will be FAST managed after conversion

Cascaded SG to Stand-alone SG
• Allowed only if a Cascaded SG has a single child SG
• New stand-alone SG retains the name of the original parent SG
• If the original parent SG is part of any Masking Views, after conversion all views will be moved to the new stand-alone SG
• Existing Host I/O limits on the parent SG are migrated to the new stand-alone SG
• If the original child SG was FAST managed, the stand-alone SG will be FAST managed after conversion

144 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax and VMAX All Flash arrays support non-disruptive conversion of a stand-alone SG to a
cascaded SG, or of a cascaded SG to a stand-alone SG. This conversion enables FAST-managed SGs
containing devices with a single SL to be expanded to include devices in a second SL, without disrupting
the availability of those devices to host applications.

To convert a stand-alone SG to a cascaded configuration, the command supplies the name of the stand-
alone SG being converted and the name of the new child SG. Upon successful completion, the parent SG
retains the name of the stand-alone group and the child SG is given the new child name. If the SG starts in
one or more Masking Views, at the end of the operation all of the views are moved to the parent Storage
Group. If the SG starts with Host I/O Limits configured, these limits can be migrated to the parent SG or to
the child SG.

To convert a cascaded SG to a stand-alone configuration, the command supplies the name of the parent
SG being converted to a stand-alone SG. This conversion is allowed only if the cascaded SG has a single
child SG. Upon successful completion, the stand-alone SG retains the name of the parent group. If the
parent Storage Group starts in one or more Masking Views, at the end of the operation all of the views are
moved to the stand-alone SG. If the parent SG starts with Host I/O Limits configured, these limits are
migrated to the stand-alone SG.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 144
Cascaded SG Conversion – symsg Syntax

• Convert stand-alone SG to cascaded SG


symsg -sid <SymmID>
convert -cascaded <SgName> <ChildSgName>
[-host_IO <on_parent | on_child>]
-host_io must be specified if stand-alone SG has Host I/O defined

• Convert cascaded SG to stand-alone SG


symsg -sid <SymmID>
convert –standalone <SgName>
[-host_IO <keep_parent | keep_child>]
-host_io must be specified if Host I/O has been defined on both parent and child
SGs

• Unisphere for PowerMax can also be used

145 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symsg convert –cascaded command enables the non-disruptive conversion of a stand-alone
SG to a cascaded SG consisting of a parent SG and a single child SG. If the stand-alone SG has a Host
I/O Limit, the user must specify if, after the conversion, the limit will be set on the parent or the child SG.

The symsg convert –standalone command enables the non-disruptive conversion of a cascaded
SG consisting of a parent SG and a single child SG to a stand-alone SG. If either the parent SG or the
child SG has a Host I/O Limit that is defined, it is set on the stand-alone SG. But if both parent and child
SGs have a Host I/O Limit, the user must supply the host_IO option.
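
A minimal sketch of each direction, using the syntax above (SG names are hypothetical):

symsg -sid 225 convert -cascaded SG_1 SG_1_child -host_IO on_child
symsg -sid 225 convert -standalone SG_1 -host_IO keep_parent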

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 145
Lesson: Host Considerations – Storage Allocation
This lesson covers the following topics:

• HBA Flags

• Rescan SCSI Bus

146 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers host considerations that are related to storage provisioning. HBA flag settings and the
commands to rescan the SCSI bus on common server platforms are shown.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 146
HBA Flag Settings for Common Hosts

• HP
– Volume Set Addressing (V)
– SPC-2 Compliance (SPC2)
– SCSI-3 Compliance (SC3)

• IBM AIX, Solaris, Linux, Windows, VMware


– SPC-2 Compliance (SPC2)
– SCSI-3 Compliance (SC3)

147 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Earlier, the required SCSI and Fibre port settings were set at the array port level. Shown here are the
common SCSI bus and Fibre port settings used by common operating systems. The port flag
settings can be overridden at the initiator or Initiator Group level.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 147
Setting HBA Flags

• HBA flags that can be set on Initiator Groups


– Disable_Q_Reset_on_UA [D]
– Environ_Set [E]
– Volume_Set_Addressing [V]
– Avoid_Reset_Broadcast [ARB]
– OpenVMS [OVMS]
– SCSI_3 [SC3]
– SPC2_Protocol_Version [SPC2]
– SCSI_Support1 [OS2007]

• HBA flags that can be set on initiators


– All of the above except Volume Set Addressing

148 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMax and VMAX All Flash arrays enable you to set the HBA port flags on a per initiator or Initiator
Group basis. This feature allows specific host flags to be enabled and disabled on the director port. If a
flag conflicts with any initiator in the group, it cannot be set for the group. After a flag is set for a group, it
cannot be changed on an initiator basis.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 148
Set Port Flags – SYMCLI

• To set port flags, use the symaccess command

Port level:
symaccess -sid <SymmID> -wwn <wwn> | -iscsi <iscsi>
set hba_flags <on <flag,flag,flag...> <-enable |-disable> |
off [flag,flag,flag...]>

Group level:
symaccess -sid <SymmID> -name <GroupName> -type initiator
set ig_flags <on <flag> <-enable |-disable> |
off [flag]>
set consistent_lun <on | off [-force]>

149 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To set—or reset—the HBA port flags on a port, use the following SYMCLI syntax:

# symaccess -sid <SymmID> -wwn <wwn> | -iscsi <iscsi> set hba_flags <on <flag>
<-enable |-disable> |off [flag]>

To set—or reset—the HBA port flags on an Initiator Group, use the following SYMCLI syntax:

# symaccess -sid <SymmID> -name <GroupName> -type initiator set ig_flags <on
<flag> <-enable |-disable> |off [flag]>

If a flag conflicts with any initiator in the group, it cannot be set for the group. After a flag is set for a group,
it cannot be changed on an initiator basis.

See the Solutions Enabler Array Management CLI User Guide for more details on overriding port flags
with the symaccess command.
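
For example, a sketch that enables the SPC-2 flag on an Initiator Group; the group name is hypothetical and the flag abbreviation is assumed to be the bracketed name from the earlier slide:

symaccess -sid 80 -name IG_1 -type initiator set ig_flags on SPC2 -enable     (flag abbreviation assumed)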

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 149
Commands to Scan SCSI Bus for Devices (1of 3)

• Solaris
– devfsadm -C
• IBM AIX
– lsdev -Cc adapter -Fname | grep fc (identifies the Fibre Channel adapters such as fcs0, fcs1)
– cfgmgr -l fcs? (? represents the adapter number, such as 0)
• HP-UX
– ioscan -fnC disk (scans host bus and identifies new devices)
– insf -e (creates device special files)

150 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

After devices have been provisioned to a host by the creation of a Masking View, the operating system on
the host must recognize the device. A SCSI bus rescan must be initiated from the host to recognize the
device. The bus rescan commands vary from operating system to operating system.

The commands that are shown here are taken from the Dell EMC Host Connectivity Guides. While they
work reliably in most cases, they may not work for every version of a particular operating system. Verify
the accuracy of these commands by checking the vendor documentation.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 150
Commands to Scan SCSI Bus for Devices (2 of 3)

• Linux (does not work for all drivers)


– cd /sys/class/scsi_host/host? (? represents the host bus adapter instance,
for example, host0 or host1)
– ls -al scan
– echo '- - -' > scan (dashes are wildcards for the channel, target, and LUN numbers)
• QLogic scan utility available from the QLogic website
– ./ql-dynamic-tgt-lun-disc.sh
• Emulex lun_scan utility from the Emulex website
– ./lun_scan.sh all

151 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Since there are several flavors of commercially available Linux, there are various ways that the SCSI bus
on those systems can be rescanned. The methods that are documented here are taken from the Linux
Host Connectivity Guide.
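On many distributions, the rescan-scsi-bus.sh script from the sg3_utils package offers a driver-independent
alternative to the sysfs method shown above. Treat this as a sketch; the script's availability and options
vary by distribution, so confirm against the vendor documentation:

# rescan-scsi-bus.sh -a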

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 151
Commands to Scan SCSI Bus for Devices (3 of 3)

• Windows
– Uses the DISKPART CLI utility
› C:\> diskpart
DISKPART> rescan
DISKPART> exit
– Windows Disk Management GUI can also be used to perform a rescan
• Solutions Enabler commands to rescan the bus
– symcfg scan
– symntctl rescan (Windows only)

152 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In addition to the vendor-supplied commands, Dell EMC also has some commands in Solutions Enabler
that are designed to scan the SCSI bus. The Dell EMC commands are convenient to use, but the vendor-
supplied commands are the most reliable.
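As a minimal sketch, the DISKPART rescan shown above can also be run noninteractively from a script
file (the file name here is illustrative):

C:\> echo rescan > rescan.txt
C:\> diskpart /s rescan.txt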

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 152
Commands to Scan SCSI Bus – VMware

ESXi Server
• Command issued on ESXi server with esxcli enabled
esxcli storage core adapter rescan --all
• The preferred method is to have the command issued from a host that is
network-attached to the ESXi server and has esxcli installed
esxcli -s 10.127.94.252 -u root -p <password> storage
core adapter rescan --all
• VMware vSphere GUI can also be used to rescan for new devices into the
ESXi server

153 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The CLI commands that are shown here are useful for rescanning the SCSI bus. The preferred method of
using vCLI (esxcli) is to run it on a host that is network-attached to the ESXi console. Also, the VMware
vSphere GUI can be used to rescan the SCSI bus.
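If only a single HBA needs to be rescanned, esxcli also accepts one adapter instead of --all; the adapter
name below is illustrative:

esxcli storage core adapter rescan --adapter vmhba1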

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 153
Steps to Replace HBA

1. View old HBA WWN


symaccess list logins

2. Swap out the old HBA board with the new HBA
3. Discover the WWN of the new HBA
symaccess discover hba or symaccess list hba

4. Use the replace action


symaccess -sid 80 replace -wwn <WWN_old> -new_wwn <WWN_new>

5. Use the rename action to establish the new alias for the HBA
symaccess discover hba -rename

154 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

If a host adapter fails or needs replacement, replace the adapter and assign devices to a new adapter by
using the replace action in the following form:

# symaccess -sid <SymmID> replace -wwn <wwn> -new_wwn <NewWWN>

# symaccess -sid <SymmID> replace -iscsi <iscsi> -new_iscsi <NewiSCSI>
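As a worked sketch of the full sequence, using hypothetical WWNs on array 80:

# symaccess -sid 80 list logins
# symaccess -sid 80 replace -wwn 10000000c9812345 -new_wwn 10000000c9876543
# symaccess discover hba -rename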

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 154
Lesson: Service Level Based Provisioning with
Unisphere
This lesson covers the following topics:

• Managing Hosts (Initiator Groups)

• Storage Provisioning Wizard

• Managing Storage Groups

• Managing Port Groups

• Managing Masking Views

155 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers SL-based provisioning of PowerMax and VMAX All Flash storage using Unisphere for
PowerMax.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 155
Managing Hosts – Initiator Groups

Hosts View
• Create, Modify, Provision Storage to Host, Set Flags, Delete, View Details

Host Details

156 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In Unisphere for PowerMax, Initiator Groups are called hosts. The configured hosts can be listed by
clicking Hosts under the Hosts panel. From the Hosts view, you can create new Hosts or Host Groups
(cascaded Initiator Groups). Clicking a host displays the details of that host on the right of the screen.
When a host is selected, you can modify it, provision storage to it, and, using the More Actions button,
set flags or delete the host.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 156
Create Host Wizard

157 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To provision storage to a host, first use the Create Host wizard to create the Initiator Group for the host.
The Create Host wizard is available by clicking the Create button in the Hosts view and selecting Create
Host.

Enter the desired name of the host, and select Fibre Channel or iSCSI. The available initiators are shown
according to the technology chosen. Select the WWNs of the HBAs of your host and click the greater than
(>) button to add them to the list.

If a host is not yet zoned, you can type in the WWN using the + symbol to Add User Defined Initiator to
Host.

You can optionally click the Set Host Flags button to override or set any port flag settings.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 157
Create Host Wizard – Set Host Flags: Consistent LUNs

158 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To set host flags, click the Set Host Flags button. In this example, Consistent LUNs are enabled by
checking the Consistent LUNs box. You can also override or enable any of the other listed port flags
using the checkboxes that are associated with the flags. In this example, no overrides are made. To close
the Set Host Flags dialog, click OK.

To complete the Create Host process, add the task to a Job List or choose Run Now.
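For reference, the SYMCLI equivalent of the Consistent LUNs checkbox uses the consistent_lun action
from the symaccess syntax shown earlier; the SID and group name here are illustrative:

# symaccess -sid 217 -name DemoHost_IG -type initiator set consistent_lun on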

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 158
Host – Detailed View

159 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view the details of a host, select the host in the host list view. Details display on the right of the screen.
In this example, the host has one Masking View and one Initiator. The Consistent LUNs option is enabled.
Select the Masking View to view details of the Masking View associated with this host.
From here, you can create a Masking View, rename the Masking View, and view path details.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 159
Host – Modify

160 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Modify button enables you to add or remove initiators from an existing Host. To remove an initiator,
select the initiator from the Initiators in Host listing on the right, and select the less than (<) button. To add
a new initiator, select an available initiator and click the greater than (>) button. To complete the add or
remove, click either Add to Job List or Run Now.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 160
Provision Storage to Host Wizard

• Wizard simplifies provisioning storage to hosts


– Creates Storage Groups with desired
› SL
› Volumes with specified capacity
o Uses existing devices or creates devices as needed
– Creates Port Group or uses existing Port Groups
– Creates Masking View

• Launched from
– Hosts listing
– Detailed view of host

161 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Provision Storage to Host wizard simplifies the process of provisioning storage to a host. The wizard
creates the desired Storage Groups, Port Group, and Masking View. The Storage Groups are created with
the required Service Levels and capacity.

The wizard can create standalone Storage Groups or cascaded Storage Groups. The wizard is typically
launched from the context of a host (Initiator Group), either from the hosts listing or the detailed view of a
host.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 161
Storage Provisioning Wizard

162 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In this example, the Storage Provisioning wizard has been launched from the context of an existing host.
The host does not have to be specified. Type a name for the Storage Group being created. Hover the
mouse pointer over any of the Service Level, Volume, or Volume Capacity dropdowns, and two icons are
displayed on the right. The Pencil icon enables you to add multiple volume sizes to the Storage Group,
and optionally set Volume Identifier Names. The Plus icon enables you to add multiple Storage Groups to
this host. Each Storage Group can have a different Service Level.

In this example, a single Storage Group with devices is created, and the Service Level, Number of
Volumes, and Capacity are specified. Click Next to continue with the Provisioning wizard.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 162
Select Port Group

163 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Once all Storage Group details have been completed, choose the Port Group for this request. You can
choose an existing Port Group or create a new one.

To view host-invisible ports—unmasked and unmapped—select Include port not visible to the host.
The wizard shows the Port Group recommendation dialog if the port selections do not match the Dell EMC
best practice recommendation. Once complete, click Next to go to the Summary page.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 163
Optionally Set Host I/O Limits

164 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To limit the amount of bandwidth and IOPS that are consumed by a Storage Group, use Host I/O limits. To
set Host I/O Limits, click the Set Host I/O Limits button. Set the desired values in the Host I/O limits
dialog and click OK to return to the Provisioning wizard.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 164
Run Suitability Check

165 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

On the review page, the Suitability Check verifies whether the array can meet the Service Levels for the
provisioning request. For the Suitability Check to work, the arrays must be registered for performance
data collection. The review screen also shows the names of the Storage Group, Host, and Port Group.
The Masking View name can be edited as needed.

To receive alerts when the performance of the Storage Group changes relative to its SL target, select
Enable SL Compliance Alerts.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 165
Provisioning Job – Success

166 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The job has been successfully run. The provisioning task either finds existing devices or creates new
devices as needed to satisfy the provisioning request. To view the details of the job, click Show Task
Details. In this example, volumes 000E7 and 000E8 were created for the request. Notice that a Masking
View, DemoHostSG_MV, was also created.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 166
New Masking View

167 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To see a listing of all Masking Views, click Masking Views in the Hosts menu.

The new Masking View DemoHostSG_MV is listed. To see information about the view, select the
Masking View. Details are shown on the right of the screen.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 167
Storage Groups Details

Storage Group Details


168 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Storage Group management is done in the Storage Groups section. To see a listing of the Storage
Groups, click Storage Groups under the Storage section. From this view you can create SGs, modify
SGs, provision an existing SG to a host, view details, and set Host I/O limits.

Details of the selected Storage Group are shown on the right. To view additional details, click the View All
Details link.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 168
View All Details – Storage Group

169 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Additional details, including capacity, compliance, and performance information can be seen from the All
Details view.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 169
Create SG – Provisioning Wizard

170 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

New Storage Groups can be created by clicking the Create button in the Storage Groups listing page. The
Provisioning wizard that is shown on the screen is launched. This wizard is similar to the Provision Storage
to Host wizard. The only difference is that you choose the host to which the storage should be provisioned.
In the Provision Storage to Host wizard, the host is selected before starting the wizard.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 170
Modify Storage Group

171 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To modify a Storage Group, select a Storage Group from the Storage Group listing and click Modify to
launch the Modify Storage Group dialog. For cascaded Storage Groups, the dialog always shows the
parent and child SGs, even if the Modify button is clicked from the context of a child SG.

You can make the desired changes: change the SL, add volumes, or modify the size of multiple volumes
(PowerMaxOS supports expanding a volume up to 64 TB). You can also add a new child SG by clicking
the Plus icon. In this example, the size of each of the five volumes is increased from 10 GB to 15 GB.

You can run the Suitability Check when modifying Storage Groups. Once the desired changes are made,
add the job to the job list or run it now.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 171
Managing Port Groups

172 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To show the list of Port Groups configured on an array, choose Port Groups in the Hosts section. From
this view, you can create new Port Groups or click a Port Group to modify or delete the Port Group.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 172
Create Port Group

173 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To create a Port Group, click the Create button in the Port Groups view.

In the Create Port Group dialog, type a name for the Port Group and select ports from the available list.

Click Add to Job List or Run Now to complete the request. The new Port Group is listed in the Port
Groups view.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 173
Port Group Details

Port Group Details

174 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To see the details of a specific Port Group, select it in the Port Groups view. Details are displayed on the
right of the screen.

All Port Groups have a link to a ports listing. In this example, the link is the number 2, which is the number
of ports in this Port Group. Clicking the Ports link shows a listing of the ports. If the Port Group is part of
one or more Masking Views, a Masking View link is shown. In this example, the Port Group is part of one
Masking View. Clicking the link displays details about the Masking View.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 174
Port Group – Ports: Add/Remove

175 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Clicking the Ports link in the details of a Port Group displays the ports listing. To add ports to the Port
Group, click the Add Ports button. Highlighting a port in the port listing, as shown in the example, activates
the Remove button, which removes the selected port from the Port Group.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 175
Create Masking View

176 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Provisioning wizard creates Masking Views as part of the provisioning process. However, you can
choose to manually create a Masking View by clicking the Create Masking View button in the Masking
View listing.

Creating a Masking View requires the manual selection of Host, Port Group, and Storage Group. The
Host, Port Group, and Storage Group must already exist.

In the Create Masking View dialog, type a name for the Masking View and pick a Host, PG, and SG from
the list of available groups. Optionally, click the Set Dynamic LUNs button if you want to change the host
LUN address. The Starting LUN number should be specified. To close the LUN address dialog, click OK.

To complete the creation of the Masking View, click OK. The new Masking View is listed in the Masking
Views page.
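For reference, a minimal SYMCLI sketch of the same operation, using illustrative names for the
preexisting groups, is:

# symaccess -sid 217 create view -name DemoHost_MV -ig DemoHost_IG -pg DemoHost_PG -sg DemoHost_SG

The optional -lun argument to create view sets the starting host LUN address, analogous to the Set
Dynamic LUNs option in the dialog.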

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 176
Masking View – Details

Masking View Details

177 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To see the details of a specific Masking View, select it in the Masking Views listing. Details are displayed
on the right of the screen. The details frame has links for Host, Port Group, and Storage Group. Clicking
these links shows a listing of those objects.

Selecting a Masking View also enables buttons to Rename the Masking View and View Path Details. Use
the More Actions (three vertical dots) button to delete the Masking View.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 177
Masking View – View Path Details

• One view to see all the components in a Masking View


• View can be used to troubleshoot connectivity issues

178 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The View Path Details of a Masking View button enables you to see all the components that make up the
Masking View. The path details page contains three tree lists for each of the component groups in the
Masking View: Hosts, Ports, and Storage Groups.

The parent group is the default top-level group in each expandable tree view. It contains a list of all
components in the Masking View, including child entries, which are also expandable.

To filter the Masking View, select one or more items in the list view (hold the Shift key to multiselect). As
each selection is made, the filtered results table is updated to reflect the current combination of filter
criteria.

This view can be useful for troubleshooting. As an example, you could filter the view by choosing only one
of the hosts and one of the ports. This view enables you to see which of the initiators is logged in to the
array.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 178
iSCSI Masking View
[Diagram: iSCSI Masking View. A Microsoft host (IQN iqn.2015-05.com.microsoft:host1) has two NICs:
10.127.200.9/24 on VLAN 80 and 10.127.100.9/24 on VLAN 81. Each NIC, under Multipath I/O, makes a
TCP/IP connection to an iSCSI target node: Director 1E Port 4 (iqn.1992-04.com.emc:6000098, Network
Portal 10.127.200.10/24, VLAN 80) and Director 2E Port 4 (iqn.1992-04.com.emc:6000097, Network
Portal 10.127.100.10/24, VLAN 81).]

Masking View components:
Initiator Group: iqn.2015-05.com.microsoft:host1
Port Group: iqn.1992-04.com.emc:6000098, iqn.1992-04.com.emc:6000097
Storage Group: 052, 053, 054

179 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

iSCSI is supported with the 10 GbE 4-port I/O module on arrays running HYPERMAX OS 5977 Q3 2015
SR and above. The iSCSI configuration supports multiple iSCSI targets (IQNs) and IP addresses on SE
emulation, to support the whole range of possible storage configurations allowed by the iSCSI
architecture. The purpose of this diagram is to explain iSCSI Masking View management.

Once you have configured all the iSCSI components, you can build a Masking View and add the iSCSI
components to it.

Starting at the host, there are two NICs. Each NIC is assigned an IP address, network prefix, and VLAN
by the person who administers the host. The host runs multipathing software and typically presents one
host IQN. Shown here is an example of a Microsoft host. The IQN of the host is added to an Initiator
Group. Most host-based MPIO implementations present a single initiator IQN, with the host name
embedded in it, to the iSCSI target nodes.

The NIC and the iSCSI Target form a TCP connection. They are part of a session, which is the primary
communication link between the Initiator and the Target. The IQN of the Target is added to a Port Group
(PG). There can be multiple target IQNs in a Port Group.

Next are the devices. The devices are added to a Storage Group (SG). The Storage Group contains
Symmetrix volume IDs, which is no different from creating a Storage Group in a Fibre Channel
environment.

After the Masking View is created, the host must discover its storage. Depending on the operating system,
the procedure to discover a target differs. When using IP, discover the target by its IP address.
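As an illustration on a Linux host running open-iscsi, discovery against the first network portal in the
diagram might look like this; the exact procedure for your operating system is in the host connectivity
guide:

# iscsiadm -m discovery -t sendtargets -p 10.127.200.10
# iscsiadm -m node --login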

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 179
Set Host I/O Limits on Parent SG
C:\>symsg -sid 217 -sg app_server_parent set -bw_max 200 -dynamic always

C:\>symsg -sid 217 show app_server_parent


Name: app_server_parent
Symmetrix ID             : 000197600217
Last updated at          : Tue Nov 27 17:42:58 2018
Masking Views            : Yes
FAST Managed             : No
Service Level Name       : <none>
Workload                 : <none>
SRP Name                 : <none>
VP Saved (%)             : N/A
Compression Enabled      : No
Compression Ratio        : N/A
Host I/O Limit           : Defined
Host I/O Limit MB/Sec    : 200
Host I/O Limit IO/Sec    : NoLimit
Dynamic Distribution     : Always
Number of Storage Groups : 2
Storage Group Names      : app_server_app1 (IsChild)
                           app_server_app2 (IsChild)
------------- Output Truncated ----------------------------------
180 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In this example, Host I/O Limits are set on a parent SG. A bandwidth limit is also set, and dynamic
distribution is set to Always.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 180
SG Listing – Check Host I/O Setting

C:\>symsg list -sid 217

S T O R A G E G R O U P S

Symmetrix ID: 000197600217

                                 Flags   Number  Number Child
Storage Group Name               EFM SLC Devices GKs    SGs
-------------------------------------------------------------
app_server_app1                  FXX CSX       2   0      0
app_server_app2                  FXX CSX       2   0      0
app_server_parent                F.X PD        4   0      2
---------- Output Truncated -----------------

181 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symsg list command shows the Storage Groups. Host I/O Limits are defined on the parent, as
indicated by the D in the L column. The S in the L column of the child SGs indicates that the children are
currently sharing the parent's Host I/O Limits. There is no explicit setting on the children.

Legend:

Flags:

  Device (E)mulation  A = AS400, F = FBA, 8 = CKD3380, 9 = CKD3390,
                      M = Mixed, . = N/A
  (F)ast              X = Fast Managed, . = N/A
  (M)asking View      X = Contained in Mask View(s), . = N/A
  Cascade (S)tatus    P = Parent SG, C = Child SG, . = N/A
  Host IO (L)imit     D = Defined, S = Shared, B = Both, . = N/A
  (C)ompression       X = Compression Enabled, . = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 181
Set Host I/O Limits on Child SG
C:\>symsg -sid 217 -sg app_server_app2 set -bw_max 100

C:\>symsg -sid 217 show app_server_app2


Name: app_server_app2
Symmetrix ID : 000197600217
Last updated at : Tue Nov 27 17:46:14 2018
Masking Views : Yes
FAST Managed : Yes
Service Level Name : Gold
Workload : <none>
SRP Name : <none>
VP Saved (%) : N/A
Compression Enabled : Yes
Compression Ratio : N/A
Host I/O Limit : Defined (Shared)
Host I/O Limit MB/Sec : 100 (200)
Host I/O Limit IO/Sec : NoLimit (NoLimit)
Dynamic Distribution : Always
Number of Storage Groups : 1
Storage Group Names : app_server_parent (IsParent)
------------- Output Truncated ----------------------------------

182 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In this example, Host I/O Limits are explicitly defined on a child SG. There is an explicit setting on the
parent as well. The bandwidth limit set here is lower than the limit of the parent. The symsg show output
shows that the bandwidth limit for this SG is 100 MB/sec, while the limit on the parent is 200 MB/sec.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 182
SG Listing – Check Host I/O Setting Again

C:\>symsg list -sid 217

S T O R A G E G R O U P S

Symmetrix ID: 000197600217

                                 Flags   Number  Number Child
Storage Group Name               EFM SLC Devices GKs    SGs
-------------------------------------------------------------
app_server_app1                  FXX CSX       2   0      0
app_server_app2                  FXX CBX       2   0      0
app_server_parent                F.X PD        4   0      2
---------- Output Truncated -----------------

183 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symsg list command shows the Storage Groups. The app2 Storage Group shows a B in the L
column, indicating that Host I/O Limits are defined on both the parent and the child.

Legend:

Flags:

  Device (E)mulation  A = AS400, F = FBA, 8 = CKD3380, 9 = CKD3390,
                      M = Mixed, . = N/A
  (F)ast              X = Fast Managed, . = N/A
  (M)asking View      X = Contained in Mask View(s), . = N/A
  Cascade (S)tatus    P = Parent SG, C = Child SG, . = N/A
  Host IO (L)imit     D = Defined, S = Shared, B = Both, . = N/A
  (C)ompression       X = Compression Enabled, . = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 183
Host I/O Limit – Demand Report by PG

C:\>symsg -sid 217 list -demand -by_pg -pg app_server_pg

Symmetrix ID: 000197600217

Port Group              IO Limit         Bandwidth Limit
----------------------- ---------------- --------------------------------------
                        Maximum  Number  Port Grp Maximum      Number
                  Flags Demand   Nolimit Speed    Demand       NoLimit Excess
Name              HD    (IO/Sec) SGs     (MB/Sec) (MB/Sec) (%) SGs     (MB/Sec)
----------------- ----- -------- ------- -------- -------- --- ------- --------
app_server_pg     YY           0       1     2000      200  10       0    +1800

Legend:
Flags:
(H)ost IO Limit Exists Y = Yes, N = No, M = Mixed, . = N/A
(D)ynamic Distribution Y = Yes, N = No, . = N/A

184 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

You can run the symsg list -demand -by_pg command to view quota information sorted by Port
Group. The -pg option limits the output to the specified Port Group. The -v option is supported for further
detail.

The columns display the IOPS and bandwidth quotas that are enforced within Port Groups, along with the
demand against those quotas.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 184
Host I/O Limit – Demand Report by Port
C:\>symsg -sid 217 list -demand -by_port
Symmetrix ID: 000197600217

Director       IO Limit         Bandwidth Limit
-------------- ---------------- --------------------------------------
               Maximum  Number  Port     Maximum      Number
         Flags Demand   Nolimit Speed    Demand       NoLimit Excess
DIR:PORT HD    (IO/Sec) SGs     (MB/Sec) (MB/Sec) (%) SGs     (MB/Sec)
-------- ----- -------- ------- -------- -------- --- ------- --------
01D:004  NN           0       1     1000        0   0       1    +1000
01D:005  MY           0       3     1000      100  10       2     +900
01D:032  NN           0       2      500        0   0       2     +500
01D:033  NN           0       0        -        0   -       0        -
02D:004  NN           0       3     1000        0   0       3    +1000
02D:005  MY           0       3     1000      100  10       2     +900
02D:032  NN           0       2      500        0   0       2     +500
02D:033  NN           0       0        -        0   -       0        -
Legend:
Flags:
(H)ost IO Limit Exists Y = Yes, N = No, M = Mixed, . = N/A
(D)ynamic Distribution Y = Yes, N = No, . = N/A

185 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

You can run the symsg list -demand -by_port command to view quota information sorted by front-
end director ports. The -v option is supported for further detail.

The columns display the IOPS and bandwidth quotas that are enforced by front-end directors, along with
the demand against those quotas.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 185
Setting Host I/O Limits – symsg Syntax

• Can be set during SG creation or on existing SG


– During creation
symsg -sid <SymmID> create <SgName>
[-bw_max <MBperSec>]
[-iops_max <IOperSec>]
[-dynamic <NEVER | ALWAYS | ONFAILURE>]
– Existing SG
symsg -sg <SgName> -sid <SymmID>
set <[-bw_max <MBperSec> | NOLIMIT ]
[-iops_max <IOperSec> | NOLIMIT ]
[-dynamic <NEVER | ALWAYS | ONFAILURE>]>

186 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Host I/O Limits can be set with the symsg command when the SG is created or on an existing SG.

Options:
• -bw_max – Limits the bandwidth, specified in megabytes per sec. The valid range for bandwidth is
from 1 MB/Sec to 100,000 MB/Sec. NOLIMIT removes any set limits.
• -iops_max – Limits the IOPS. The valid range for IOPS is from 100 IOPS to 2,000,000 IOPS and
must be specified in units of 100 IOPS. NOLIMIT removes any set limits.
• -dynamic – Sets the mode for the dynamic I/O distribution discussed earlier in this lesson.
NEVER is the default.
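For example, the following hypothetical commands create an SG with an IOPS limit of 5000 (a multiple of
100, as required) and later remove that limit; the SG name is illustrative:

C:\>symsg -sid 217 create app_sg3 -iops_max 5000 -dynamic ONFAILURE
C:\>symsg -sid 217 -sg app_sg3 set -iops_max NOLIMIT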

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 186
Lab: Service Level Based Provisioning with Unisphere

This lab covers:


• Service level based provisioning with Unisphere
for PowerMax
• Service level based provisioning with SYMCLI

187 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In this lab, the Unisphere for PowerMax Storage Provisioning wizard is used to perform SL-based
provisioning to an open systems host.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 187
Lab: Cascaded Storage Groups and SL Type Modifications

This lab covers:


• Convert Standalone SG to Cascaded SG
• Manage SL

188 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers converting a standalone Storage Group to a cascaded Storage Group, and managing Service Levels.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 188
Lab: Managing Host I/O Limits

This lab covers:


• Host I/O Limits

189 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers the management of Host I/O Limits.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 189
Module Summary

Key points covered in this module:

• Auto-provisioning Groups

• Host I/O Limits

• Host considerations for Storage Allocation

• SL-based provisioning with Unisphere for PowerMax

• SL-based provisioning with SYMCLI

190 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered storage allocation of PowerMax and VMAX All Flash storage to hosts using auto-
provisioning groups. An overview of auto-provisioning groups, Host I/O Limits, and host considerations
while allocating storage was presented. SL-based storage provisioning with Unisphere for PowerMax and
SYMCLI was covered in detail.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Storage Allocation using Auto-provisioning Groups 190
Module: Management in a Virtualized Environment

Upon completion of this module, you should be able to:

• Manage Virtual Servers with Unisphere for PowerMax

• Describe the Dell EMC Virtual Storage Integrator (VSI) for VMware vSphere Client features for
PowerMax and VMAX All Flash arrays

191 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on the management of PowerMax and VMAX All Flash storage in a virtualized
environment using Unisphere for PowerMax. The Dell EMC Virtual Storage Integrator (VSI)
for VMware vSphere Client features are also covered.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 191
Lesson: Virtual Server Management – Unisphere for
PowerMax
This lesson covers the following topics:

• Virtual Server Management with Unisphere for PowerMax

192 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers virtual server management with Unisphere for PowerMax.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 192
Virtual Servers – Management

• HOME > VMWARE > vCenters and ESXi

193 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Using Unisphere for PowerMax, you can discover vCenter and ESXi hosts. Once the Virtual Server is
discovered, you can view its details. To see a listing of all the discovered virtual servers, choose
VMWARE > vCenters and ESXi from the HOME screen. This page is also used to register new servers.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 193
Register vCenter and ESXi

194 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To register a new server, click the Register vCenter/ESXi Server button. Enter the Server Name or IP
address, Username, and Password. Choose Run Now from the ADD TO JOB LIST dropdown.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 194
ESXi Host – Details

195 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To see the details of a specific ESXi host, select it in the listing. Details are shown on the right. For more
details, double-click the server name.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 195
ESXi Host – View All Details

196 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Double-clicking the server name displays more details about the ESXi host. The Details tab includes
properties of the host, such as Memory and Virtual Machines, and array-related storage details, including
Masking Views, Storage Groups, and capacity information. Tabs for Masking Views, Virtual Machines,
and Performance are also provided.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 196
ESXi Host – Masking Views

197 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

From the Masking Views tab, clicking the View Path Details button brings you to the HOSTS > Masking
Views page on the associated array. This page displays the Masking View Path Details of the ESXi host,
including Hosts, Ports, Storage, and Volumes.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 197
ESXi Host – Details: Virtual Machines

198 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Double-click a Virtual Machine (VM) in the listing in the Virtual Machines tab to display details about the VM.
Details are shown on the right of the screen and include a link to the Virtual Disks associated with the VM.
Double-clicking a Virtual Disk in the listing shows advanced details (not shown), such as Physical Disk
information, about the Virtual Disk.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 198
ESXi Server – Performance

199 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Performance tab displays performance information for the ESXi host, such as Storage Group
performance details, and Front-end Director and Port details.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 199
Lesson: Dell EMC VSI for VMware vSphere Client
This lesson covers the following topics:

• Dell EMC VSI 9.0 for VMware vSphere Client features for PowerMax and VMAX All Flash

200 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers the Dell EMC VSI 9.0 for VMware vSphere Client features for PowerMax and VMAX All
Flash arrays.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 200
VMware vSphere Client

HTML5

201 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The vSphere Client is an HTML5-based client. You manage the vSphere environment with the vSphere Client
by connecting to vCenter Server Appliance.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 201
Dell EMC VSI 9.0 for VMware vSphere Client

• Enables VMware administrators to provision and manage Dell EMC storage


systems
– PowerMax and VMAX All Flash storage arrays running PowerMaxOS
– XtremIO storage arrays
– Unity/UnityVSA

• Can run with VSI 8.x in the same environment to support HYPERMAX OS
arrays
• Documentation
– VSI for VMware vSphere Web Client Product Guide
– VSI for VMware vSphere Web Client Release Notes
– Dell EMC Simple Support Matrix

202 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Dell EMC Virtual Storage Integrator (VSI) version 9.0 for VMware vSphere Client is a plug-in for VMware
vCenter. It enables VMware administrators to provision and manage the Dell EMC storage systems that
are listed here for VMware ESXi hosts.

Tasks that administrators can perform with VSI include storage provisioning, storage mapping, managing
data protection systems, and viewing information such as capacity utilization.

VSI consists of a GUI and the Dell EMC Solutions Integration Service (SIS). SIS is the programming
interface that provides communications to the storage and data protection systems. The administrator
uses VMware vCenter Client to provision and manage storage.

VSI 9.0 supports the HTML5 vSphere Client and PowerMaxOS only. Users may run VSI 9.0 and
8.x in the same VMware environment to support HYPERMAX OS arrays.

Refer to the listed documentation, found on the Dell EMC support website at
https://www.dell.com/support/home/en-us/product-support/product/vsi-for-vmware-vsphere-web-client/docs,
for detailed information about the installation and configuration process.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 202
Dell EMC VSI 9.0 Features

• Storage system and Storage Group administration


• Provision VMFS 6 datastores
• View VMFS 6 datastore PowerMax/VMAX All Flash storage properties
• Dell EMC VSI 9.0 requirements for PowerMax and VMAX All Flash
– PowerMaxOS 5978.144.144 or later
– Unisphere for PowerMax 9.x
– Masking views for the ESX/ESXi hosts must exist on the array
– Array is connected to the vCenter host (FC or iSCSI)
– Array must be registered in VSI

203 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Dell EMC VSI plug-in enables storage system and Storage Group administration for PowerMax and
VMAX All Flash arrays. You can provision datastores that are built on array storage to ESXi hosts. The
VSI plug-in automatically provisions storage to the ESXi host and creates a datastore. VSI shows the
properties of the datastores.

To provision and manage PowerMax and VMAX All Flash, VSI requires a supported version of
PowerMaxOS and Unisphere for PowerMax 9.x. The ESXi hosts must have a masking view on the array,
and the array must be registered in Dell EMC VSI.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 203
VSI – High-Level Deployment Steps

• Download Dell EMC


VSI OVA from Dell
EMC Support
• Deploy Dell EMC VSI
OVA
– Use VMware vSphere
Client

• Register VSI plug-in


with VMware vCenter
• Register arrays

204 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The high-level steps to deploy Dell EMC VSI for VMware vSphere Client are shown. Refer to the VSI for
VMware vSphere Web Client Product Guide and VSI for VMware vSphere Web Client Release Notes
found on https://www.dell.com/support for detailed steps.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 204
Register Arrays

Add array

205 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To register an array, log in to the vSphere Client. From the Dashboard, select vCenters. Choose Storage
Systems and click the + icon to add an array.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 205
Register Arrays – Type and Connection Settings

206 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In the Register Storage System dialog, choose the type of array and enter the connection settings. For
PowerMax and VMAX All Flash systems, enter the Unisphere host name or IP address and port, and the
username and password. Click NEXT to continue to the Storage Systems to Register dialog. Choose the
array being registered, and click NEXT.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 206
Register Arrays – Add Storage Group

Add Storage
Group

207 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In the Storage Groups dialog, click the + icon to add a new Storage Group. Provide a name for the SG,
choose the storage system, enter the capacity for the SG, and click ADD.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 207
Register Arrays – Storage Access

208 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Select the Storage Group and click NEXT to provide storage access. Select the users or groups to allow
access to the Storage Groups and click NEXT.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 208
Register Arrays – Complete Registration

209 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To complete the registration, click FINISH. The registered array is shown with a status of Connected.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 209
Register Arrays – Retrieve Arrays

Array Details

Storage
Groups

210 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view details of the registered array, click the down arrow next to the array. To display the Storage
Group that was created, click the Storage Groups tab.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 210
VSI – Provision Datastore

211 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Dell EMC VSI can be used to provision a new datastore to an ESXi host with PowerMax or VMAX All
Flash storage. In the vSphere Client, go to the Hosts and Clusters view. Right-click a host or cluster, and
then select Dell EMC VSI Actions, New Datastore.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 211
Provision Datastore – Steps 1 and 2

212 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Create Datastore on Dell EMC Storage dialog has multiple steps. Shown here are steps 1 and 2. In
step 1, choose the Type of datastore. Dell EMC VSI 9.0 defaults to a VMFS 6 datastore. To continue to
step 2, Datastore Settings, click NEXT. Provide a name, and verify the location for the datastore. Click
NEXT to continue.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 212
Provision Datastore – Steps 3 and 4

213 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Select the storage system in the Storage System Selection dialog, and click NEXT to continue to Storage
Settings. Enter a Capacity for the datastore. In this example, the datastore capacity is 20 GB. Select the
Storage Group, in this example, VSI_DEMO, and click NEXT.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 213
Provision Datastore – Steps 5 and 6

214 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In step 5, select the Initiator Group, and click NEXT. Review and verify all entries in the Ready to
Complete dialog, and click FINISH. Once the array has completed its task, Dell EMC VSI rescans the
ESXi host and then creates the datastore on the newly presented devices. The newly created datastore is
now available to the ESXi host.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 214
Datastore Details

Datastores
Menu

215 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In the vSphere Client, select your datastore from the Datastores menu. Details of the datastore are
displayed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 215
EMC VSI – Dashboard

216 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Storage Group is now displayed on the Dell EMC VSI Dashboard.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 216
Module Summary

Key points covered in this module:

• Management of Virtual Servers with Unisphere for PowerMax

• Dell EMC VSI 9.0 for VMware vSphere Client features for PowerMax and VMAX All Flash arrays

217 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered the management of storage in a virtualized environment. Management of virtual
servers with Unisphere for PowerMax was shown, along with the Dell EMC VSI 9.0 for VMware vSphere
Client features for PowerMax and VMAX All Flash arrays.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Management in a Virtualized Environment 217
Module: Monitoring and Workload Planning with
Unisphere for PowerMax
Upon completion of this module, you should be able to:
• Monitor Storage Resource Pool (SRP) Reports
• Monitor Storage Group Compliance
• Monitor Data Reduction
• Perform Workload Planning
• View SRP Headroom
• Run the Suitability Check

218 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on monitoring and workload planning with Unisphere for PowerMax (Unisphere).
Unisphere is used to monitor Storage Resource Pool reports, compliance, and data reduction. The SRP
Headroom and Suitability Check workload planning features are also covered.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 218
Lesson: Monitor SRP
This lesson covers the following topics:

• SRP details

• SRP reports
– Storage Group Demand Report
– Service Level Demand Report
– Compressibility Report

• Utilization Alerts

219 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers monitoring SRPs with Unisphere for PowerMax. Unisphere is used to view SRP
reports and utilization alerts.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 219
Storage – Storage Resource Pools

220 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

From the Storage section, click Storage Resource Pools to display the configured SRPs. In this example,
there is one SRP configured, named SRP_1. The used and allocated capacity is shown as a percentage
of the overall capacity, and the total usable and subscribed capacity is shown in terabytes (TB).

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 220
View SRP Details

221 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view the details of an SRP, click the checkbox next to it. Details, such as Emulation, Overall Efficiency,
and capacity information, are shown on the right. Clicking the checkbox also enables the Modify and Add
eDisks buttons to allow modifications to the SRP.

SRP details can be shown in SYMCLI using the symcfg show command:

C:\Users\Administrator>symcfg show -srp SRP_1 -sid 217

Symmetrix ID : 000197600217

Name : SRP_1
Description :
Default SRP : FBA
Effective Used Capacity (%) : 11
Usable Capacity (GB) : 12518.0
Used Capacity (GB) : 1376.7
Free Capacity (GB) : 11141.3
User Subscribed Capacity (GB) : 1264.5
Reserved Capacity (%) : 10
Compression State : Enabled
Compression Ratio : 1.5:1
Usable by RDFA DSE : Yes

Disk Groups (1):


------------------ Output Truncated ------------------------------------

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 221
Reports

222 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SYSTEM HEALTH, SG COMPLIANCE, CAPACITY and DATA PROTECTION reports for an SRP are
available from the Dashboard. From the Dashboard, click the CAPACITY selection, and select an SRP
from the System dropdown to enable the actions buttons to retrieve the reports.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 222
Storage Group Demand Report

223 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Storage Group Demand report shows the subscribed and used capacity in GB. It also shows the allocated %
from the perspective of the subscribed capacity for the SGs. GB used and compression ratio are also shown, along
with Snapshot information for the SG.

This report can be generated using the SYMCLI symcfg list command:

C:\>symcfg list -srp -demand -type sg -sid 217

STORAGE RESOURCE POOLS

Symmetrix ID : 000197600217

Name : SRP_1
Usable Capacity (GB) : 12518.0
SRDF DSE Allocated (GB) : 0.0
-------------------------------------------------------------------------------
                                                               Snapshot
                                 Subscribed      Allocated     Allocated
SG Name                          (GB)            (GB)     (%)  (GB)
-------------------------------- --------------- -------- ---  ----------
esxi-94-161-GK_SG                            0.1      0.0   0         0.0
esxi-94-163_GK_SG                            0.1      0.0   0         0.0
esxi-94-161_Data                           240.0     60.0  25        60.0
esxi-94-163_Data                           240.0      0.0   0         0.0
NDM_SRC_253_TGT_217_SG1                     22.5      0.0   0         0.0
NDM_SRC_253_TGT_217_SG2                     22.5      0.0   0         0.0
NDM_SRC_253_TGT_217_SG3                     22.5      0.1   0         0.0
NDM_SRC_253_TGT_217_SG4                     22.5     18.3  81         0.0
nestedesxi55_prod1                          20.0      1.8   8         0.0
nestedesxi55_prod2                          20.0      0.0   0         0.0
DemoGroup                                   75.0      0.0   0         0.0
esxi-88-68_sg                               40.0      4.7  11         0.0
esxi-88-67_sg                               40.0      5.2  13         0.0

------------------ Output Truncated ------------------------------------

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 223
Service Level Demand Report

224 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Service Level Demand report shows the allocated and subscribed capacity in GB. It also shows the
allocated % (as a percentage of subscription) and subscribed % (as a percentage of the overall SRP capacity).

This report can be generated using the SYMCLI symcfg list command:

C:\>symcfg list -srp -demand -type sl -detail -sid 217

STORAGE RESOURCE POOLS

Symmetrix ID : 000197600217

Name : SRP_1
Usable Capacity (GB) : 12518.0
SRDF DSE Allocated (GB) : 0.0
Snapshots Allocated (GB): 60.0

---------------------------------------------------------------
Service Level                      Subscribed     Allocated
Name                     Workload  (GB)           (GB)      (%)
------------------------ --------  -------------- --------- ---
<none>                   N/A                  0.1       0.0   0
Diamond                  <none>            1174.4      78.5   6
Optimized                N/A                 89.9      18.4  20
                                   -------------- --------- ---
Total                                      1264.5      96.8   7

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 224
Compressibility Report

225 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Compressibility report shows the maximum data reduction for all SGs in the SRP when compression
is enabled in the system. The display includes the Storage Group name, number of volumes in the group,
the allocated and used capacity in GB, and the Target Ratio. The Target Ratio is the expected
compression based on the last 24 hours of samples.

This report can be generated using the SYMCLI symcfg list command:

C:\>symcfg list -sid 217 -sg_compression

STORAGE GROUPS

Symmetrix ID: 000197600217

Name : SRP_1

                                 Number   Allocated    Used   Estimated
Storage Group Name               Devices  (GB)         (GB)   Ratio
--------------------------------------------------------------------------
esxi-94-161-GK_SG                     24        0.0     0.0   N/A
esxi-94-163_GK_SG                     24        0.0     0.0   N/A
esxi-94-161_Data                      24       60.0    60.0   16.0:1
esxi-94-163_Data                      24        0.0     0.0   16.0:1
NDM_SRC_253_TGT_217_*                  1        0.0     0.0   N/A
NDM_SRC_253_TGT_217_*                  1        0.0     0.0   N/A
<not_in_sg>                           25        0.0     0.0   N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 225
Utilization Alerts
• Utilization alerts are enabled by default

226 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To display Pool Threshold Alerts, click the Settings icon from the top banner area. Expand the Alerts
section, and choose Symmetrix Threshold and Alerts. Alert Thresholds can be set on Storage Resource
Pools (SRPs), System Meta Data, Local Replication Utilization, and Backend and Frontend Meta Data
Usage. These utilization alerts are enabled by default with the default threshold policies shown. The
default threshold policies cannot be modified or deleted.

To set up customized thresholds, click the Create button. In the Create Symmetrix Threshold Alert dialog,
select the category from the dropdown menu. To create a threshold alert for an SRP, add it to the
Instances to Enable field, and set the threshold levels for Warning, Critical and Fatal. For all other alerts,
choose the category and set the levels for Warning, Critical, and Fatal. To enable the threshold alert, click
the OK button.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 226
Lesson: Monitor Compliance
This lesson covers the following topics:

• Service Levels

• Storage Group compliance

• Storage Group performance

227 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers the monitoring of Storage Group compliance and performance.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 227
SL – Expected Average Response Times

228 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The available Service Levels and the expected average response time for each SL are displayed as shown.
Clicking the Service Levels link in the Storage section brings up this view. For compliance, the measured
response time of a Storage Group must lie within the compliance range of its SL.
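A similar listing is available from SYMCLI. This sketch assumes a Solutions Enabler release that supports
the -slo option; verify the option name in the Solutions Enabler documentation for your version:

C:\>symcfg list -slo -detail -sid 217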

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 228
Renaming Service Levels

229 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To rename a Service Level, hover the mouse pointer over the Service Level and click the pencil icon.
Type the new name over the existing name and click the checkmark.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 229
Dashboard – SG Compliance

230 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Selecting SG Compliance from the Dashboard gives a summary of Storage Group compliance. The
Compliance panel displays the number of SGs that are Critical, Marginal, Stable, and SGs that have No
Status. To view the detailed Compliance Report for all SGs, click the link on the bottom of the panel.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 230
Compliance Report

231 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To assess the performance of a Storage Group, the weighted response times for the past 4 hours and for
the past 2 weeks are calculated. The two values are then compared to the maximum response time
associated with the given SL for the SG. If both calculated values fall within or under the SL-defined
response time band, the compliance state is STABLE. If one of them is in compliance and the other is out
of compliance, the compliance state is MARGINAL. If both are out of compliance, the compliance state is
CRITICAL.
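
As a hypothetical illustration: if an SL's compliance range tops out at a maximum response time of 2.0 ms,
an SG whose 4-hour weighted response time is 1.5 ms and whose 2-week value is 1.8 ms is STABLE;
values of 1.5 ms and 2.5 ms yield MARGINAL; values of 2.5 ms and 3.0 ms yield CRITICAL. The numbers
are illustrative only.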

This report can be exported to a .pdf file using the Export button. It can be set to run as a report and
scheduled to be distributed by email on a user-defined schedule using the Schedule button.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 231
Storage Group Compliance

• Critical and Marginal – SGs which do not meet the SL
• Stable – SGs which meet the SL
• No Status – SGs with no explicit SL
• Total – All SGs

232 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Icons for Critical, Marginal, Stable, No Status, and Total are displayed on the Dashboard. Clicking the icon
directs you to the appropriate listing.

For example, clicking the Total icon directs you to a listing of all the Storage Groups configured on the
array. Clicking Stable directs you to the listing of Storage Groups which are performing within the SL
target. Marginal indicates that the performance is below the SL target, while Critical indicates performance
well below the SL target. No Status is the listing of SGs on which an SL has not been explicitly set.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 232
Stable Storage Groups

233 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Here is an example of a listing of Stable Storage Groups. To see the details of the compliance of a specific
Storage Group, select the Storage Group from the listing. Details of the SG are shown on the right. For
more details, click the VIEW ALL DETAILS link.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 233
Stable SG – View All Details

234 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The detailed view of the Storage Group shows detailed information about the SG, and includes tabs for
Compliance, Volumes, and Performance information for the SG.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 234
Stable SG Example – Compliance Tab

235 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Compliance tab of a Storage Group shows details of Compliance for the SG. The display shows
Response Time, Workload Skew, IOPS, and I/O Mixture for the SG.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 235
Response Time Details

236 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view details of the Response Time for the SG, click the VIEW DETAILS link in the Response Time
panel.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 236
Stable SG Example – Volumes Tab

237 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view volumes in the SG, click the Volumes tab. To view details of a volume, select the volume. Details
are shown on the right of the screen.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 237
Stable SG Example – Performance Tab

238 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Performance for the SG is displayed using the Performance tab. You can see the graphs in more detail by
maximizing each graph individually.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 238
Performance Dashboard

239 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Performance Dashboard includes array-level performance information, along with SG, Hosts, and
components such as FE, BE, and SRDF directors. Performance information about Disk Technology is also
included. You can see the graphs in more detail by maximizing each graph individually. Custom
Dashboards can be created from this screen.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 239
Performance Dashboard – Storage Groups

240 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Storage Group Performance Dashboard displays an overview of the performance for all Storage
Groups.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 240
Performance Dashboard – SG Workload

241 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view details of a Storage Group, select a Storage Group. Here is an example of the Performance
Dashboard for a Storage Group showing the SG Workload. You can see the graphs in more detail by
maximizing each graph individually.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 241
Performance Dashboard – SG IO Profile


242 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Here is an example of the Performance Dashboard for a Storage Group showing its IO Profile. You can
see the graphs in more detail by maximizing each graph individually, and get more details by clicking the
Information icon.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 242
Performance Dashboard – SG Performance Thresholds

243 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Here is an example of the Performance Thresholds for a Storage Group. To view more details, click any of
the categories.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 243
Performance Dashboard – SG Noisy Neighbor

244 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The SG Noisy Neighbor tab helps identify potential issues with a Storage Group. This Dashboard charts
key performance metrics and details the relationship between the SG and the associated front-end
directors and ports. It also shows other SGs that are sharing ports that could potentially contribute to
performance issues.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 244
Storage Group – Performance Analyze View

245 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To analyze performance on a Storage Group, choose Analyze from the Performance section. Drill down
to the Storage Group by selecting the array and then the Storage Group. Real Time, Diagnostic, and
Historical tabs are available for viewing performance information about the SG.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 245
Compliance Alert

246 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

You can configure Unisphere for PowerMax to alert you when the performance of a Storage Group,
relative to its SL target, changes. Once configured, Unisphere for PowerMax assesses the performance of
the storage every 30 minutes and delivers the appropriate alert level.

To open the Compliance Alert Policies list view, choose Settings from the banner area, and select
Compliance Alert Policies under Alerts. Click Create to open the Create Compliance Alert Policy dialog
box. Select the storage system on which the Storage Groups are located. Select one or more Storage
Groups and click Add. By default, compliance policies are configured to generate alerts for all compliance
states.

• Critical: Storage group performing well below SL target

• Marginal: Storage group performing below SL target

• Stable: Storage group performing within the SL target

In this example, the DemoHostSG is selected, and compliance alerts are enabled for Stable, Marginal,
and Critical. To change this default behavior, clear the box for any of the states for which you do not want
to generate alerts.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 246
Lesson: Monitor Data Reduction
This lesson covers the following topics:

• Monitoring Data Reduction in PowerMax and VMAX All Flash arrays

247 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers the monitoring of overall efficiency and data reduction in PowerMax and VMAX All
Flash arrays.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 247
Overall Efficiency

248 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Data reduction savings are presented as ratios and are available in both Unisphere for PowerMax and Solutions
Enabler. The capacity report provides a single location to view system efficiency, capacity, and system resource
usage. The data is displayed in three sections: Array Usage, Efficiency, and System Usage. There are two levels of
detail available.

As part of the Q1 2021 PowerMaxOS release, there is a further breakdown of the data reduction ratio under the
Efficiency section. A flyover display reveals additional information about the data reduction ratio. The data presented
relates specifically to data reduction-enabled allocations.

Calculating efficiency ratios:

Overall Efficiency Ratio: The range of values that describes the capacity savings that a user may experience
from data reduction and other data services that offer capacity savings, such as Data Reduction, non-zero
allocation, overprovisioning, and SnapVX.

  (Subscribed Total + Snapshot Total) / User Used

Data Reduction Ratio: Savings representing the combination of inline compression and inline deduplication,
presented as a ratio. When calculating the data reduction ratio using the values presented in the usage portion of
the capacity report, the ratio may reflect a different value because performance optimization can leave
compressible data uncompressed. The enabled percent being less than 100 may also be a factor.

  (Subscribed Allocated Non-Shared + Modified Non-Shared) / User Used

Data Reduction Ratio on Reducible: Represents the data reduction savings using only data reduction enabled
allocations that have been reduced.

  Reducible Capacity / (Reducible Capacity - (Compression And Dedupe Savings + Pattern Detection Savings))

Enabled Percent: The amount of subscribed host allocations that have Data Reduction enabled.

Virtual Provisioning Savings: Savings achieved relative to provisioned capacity and total usable capacity,
displayed as a ratio. Subscribed capacity may exceed the maximum usable capacity.

  Subscribed Total Capacity / Allocated Non-Shared

Snapshot Savings: A representation of savings resulting from the use of SnapVX to create local replication data.

  Snapshot Capacity Total / Modified Non-Shared
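
A hypothetical worked example of the first and last formulas: with Subscribed Total = 100 TB, Snapshot
Total = 40 TB, and User Used = 10 TB, the Overall Efficiency Ratio is (100 + 40) / 10 = 14.0:1; with
Snapshot Capacity Total = 40 TB and Modified Non-Shared = 8 TB, the Snapshot Savings ratio is
40 / 8 = 5.0:1. These figures are illustrative only, not taken from a real array.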

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 248
Compression Ratio – Storage Group

249 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view the Compression Ratios on Storage Groups in Unisphere, view the Storage Group Demand
Report from the Capacity section of the Dashboard.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 249
View Compression – Volume

250 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view the Compression Ratio of a given volume, choose Storage Groups under the Storage section.
Select the SG, and click the Volumes tab. Select the volume and the right panel displays details on the
volume, including the Compression Ratio.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 250
View SRP Compression – Solutions Enabler

symcfg show -srp SRP_1 -sid 217 | more

Symmetrix ID : 000197600217

Name : SRP_1
Description :
Default SRP : FBA
Effective Used Capacity (%) : 11
Usable Capacity (GB) : 12518.0
Used Capacity (GB) : 1419.6
Free Capacity (GB) : 11098.4
User Subscribed Capacity (GB) : 1264.5
Reserved Capacity (%) : 10
Compression State : Enabled
Data Reduction Ratio : 1.4:1
Usable by RDFA DSE : Yes

...

251 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Use the symcfg show command to display SRP compression settings with Solutions Enabler. An
example of the command syntax and output is shown here.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 251
View SG Compression – Solutions Enabler
symsg -sid 217 show esxi-94-161_Data

Name: esxi-94-161_Data

Symmetrix ID : 000197600217
Last updated at : Mon Jun 18 16:47:46 2018
Masking Views : Yes
FAST Managed : Yes
Service Level Name : Diamond
Workload : <none>
SRP Name : SRP_1
VP Saved (%) : 75.0
Compression Enabled : Yes
Compression Ratio : 1.5:1
Host I/O Limit : None
Host I/O Limit MB/Sec : N/A
Host I/O Limit IO/Sec : N/A
Dynamic Distribution : N/A
Number of Storage Groups : 1
Storage Group Names : esxi-94-161_parent_SG (IsParent)
Number of Gatekeepers : 0
...

252 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To view the compression settings for a particular Storage Group, use the symsg show command. An
example of the command syntax and output is shown here.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 252
Lab: Monitoring SRP and SL Compliance with Unisphere

This lab covers:


• Monitoring SRP with Unisphere for PowerMax
• Monitoring SL Compliance with Unisphere for
PowerMax

253 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers the monitoring of SRP and Compliance with Unisphere for PowerMax.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 253
Lesson: Workload Planning
This lesson covers the following topics:

• Data Exclusion Windows

• SRP Headroom

• Suitability Check

254 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers the workload planning features of Unisphere for PowerMax: Data Exclusion Windows,
SRP Headroom, and Suitability Check.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 254
Workload Planning – Unisphere for PowerMax
Simple, Automated, Workload-Aware, Service Level Based Sizing

Workload Planner

Features Supported by WLP


• Headroom Indicator
• Suitability Check

255 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Planning is greatly assisted by a software layer in the system that is known as Workload Planner, or WLP.
The main function of WLP is to abstract the array as a provider of services. Think of it as a mediator that
converts array status—disk, ports, directors—to array capabilities. WLP helps Storage Administrators
plan for workloads being considered.

The workload planning features supported by Unisphere for PowerMax are Headroom Indicator and
Suitability Check. These features enable you to plan based on Service Level and workload.

In some environments, the array abruptly degrades in performance as new workloads are provisioned.
Headroom Indicator gauges the remaining capacity per Service Level so you can plan how many more
workloads can be provisioned.

When you are ready to provision, a Suitability Check can be run. The Suitability Check determines if the
capacity and the Service Level request can be met by the array with its current workload. Using the
workload planning features requires that the arrays are registered for performance data collection in
Unisphere for PowerMax.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 255
Data Exclusion Windows

256 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Data Exclusion Windows feature enables excluding specified performance stats that affect reporting
such as headroom, suitability, and compliance. Peaks in storage system statistics can occur due to
anomalies or unusual events, or recurring maintenance during off-hours that fully loads the storage
system.

Using Data Exclusion Windows enables Storage Administrators to focus on specific performance for
planning purposes.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 256
Data Exclusion Windows Settings

• Blue shaded area indicates Included
• Gray shaded area indicates Excluded

257 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Data Exclusion Windows page enables users to view and set One-Time Exclusion Periods and
Recurring Exclusion Periods for a selected storage system. It consists of two panels. The One-Time
Exclusion Periods panel displays 84 component utilizations—two weeks' worth of data—in a single chart.
This chart enables you to set the one-time exclusion period value from a given time slot. This exclusion
results in all time slots, prior to the selected time slot, being ignored for the purposes of calculating
compliance and admissibility values. The Recurring Exclusion Periods panel displays the same data in a
one-week format. This format enables selection of repeating recurring exclusion periods during which any
collected data is ignored.

One-Time Exclusions can be set up to a maximum of two weeks; however, they are cleared automatically
when the selected time runs off the cycle. Recurring Exclusions remain set until removed.

To set exclusions, click the data point. When the data point is clicked, it changes to a gray color indicating
the point is excluded. If that same gray excluded data point is clicked again, it is added back to be included
in the reporting. Included data points show as a blue color. Once complete, choose either Set One-Time
Exclusion or Set Recurring Exclusions and save the selections. Exclusions can be set on all components
for suitability, or on back-end components only for headroom.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 257
SRP Headroom
• Useful for workload planning
– Displays the available headroom for an SL
– Assumes all the remaining capacity is this type

258 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The SRP Headroom indicator in the Dashboard is useful for workload planning. It displays the space
available for a particular SL if all remaining capacity is of that type.

The capacity for an SL indicates the amount that you can provision and be assured that the array is able to
meet the SL compliance requirements.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 258
Suitability Check

• Part of Provision Storage wizard


• Suitability Check can be performed when:
– Provisioning storage to a new host with Provision Storage to Host
– Modifying an existing Storage Group in a Masking View
› Adding more storage
› Modifying Service Level

• Determines if the array can handle the capacity and Service Level request
• Optional
• If check fails, does not prevent provisioning

259 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Suitability Check is an optional step that can be performed. This check can be performed by the
Provision Storage wizard when provisioning storage to a host. It can also be run when modifying an
existing SG which is part of a Masking View. An example of modification to an SG is the addition of more
storage or a change to the Service Level.

The Suitability Check determines if the array can handle the changes to the capacity and Service Level.
The provisioning process can be continued even if the Suitability Check fails.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 259
Suitability Check – Provision Storage to Host

260 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Provision Storage wizard includes a Run Suitability Check button on the Summary page.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 260
Suitability Check – Provision Storage Wizard

261 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Suitability Check is the default step on the Summary page of the Provision Storage wizard. In order
for the Suitability Check to work, the arrays must be registered for performance data collection.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 261
Suitability Check – Modify SG: Add Volumes


262 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Suitability Check can be run when modifying a Storage Group that is a part of a Masking View. Any
changes to the number of volumes or the Service Level enables you to run the Suitability Check. In this
example, volumes have been added to an SG. Results of the Suitability Check are displayed in a bar
chart, showing Front End, Back End, and Cache. To see the existing and additional workload of each
component, hover the mouse pointer over the bar associated with the component.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 262
Suitability Check – Modify SG Service Level


263 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In this example, only the Service Level of the SG was changed. The number of Volumes is unchanged.
The Suitability Check returns that the modifications to this SG should meet the Service Level.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 263
Lab: Workload Planning with Unisphere

This lab covers:


• Monitoring for available headroom
• Running Suitability Check when allocating more
storage to a host

264 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers the workload planning features of Unisphere for PowerMax.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 264
Module Summary

Key points covered in this module:

• Monitoring SRP Reports

• Monitoring Compliance

• Monitoring Compression

• Workload Planning
• SRP Headroom
• Suitability Check

265 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered monitoring and workload planning with Unisphere for PowerMax. Unisphere for
PowerMax was used to monitor SRP, SGs, Compliance, and Compression. The SRP Headroom and
Suitability Check workload planning features were also covered.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Monitoring and Workload Planning with Unisphere for PowerMax 265
Module: Introduction to Business Continuity

Upon completion of this module, you should be able to:

• Describe the various PowerMax and VMAX Family Business Continuity features and integrated
solutions.

266 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module describes the Business Continuity features and integrated solutions of the PowerMax and
VMAX Family of arrays.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 266
Local Replication with TimeFinder SnapVX Overview

• Creates local point-in-time copies of production data
• Target device required to mount replica
• Highly scalable
  – Manually, a single source volume can have up to 256 snapshots
  – With snapshot policy, the number goes up to 1024
• Highly efficient
• Snapshots share point-in-time tracks called snapshot deltas

(Diagram: a production volume with multiple snapshots and a linked target)

267 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

TimeFinder SnapVX is a local replication technology that was introduced with the VMAX3 platform running
HYPERMAX OS 5977, and is supported on all VMAX All Flash and PowerMax arrays as well. SnapVX
creates local point-in-time copies of data without requiring pairing between source and target volumes.
Targets are not required to create the snapshot and are only required to mount and use a replica.
TimeFinder SnapVX is highly scalable. A single source volume can have up to 256 snapshots. Up to 1024
target volumes can be linked to the snapshots of a single volume. The snapshots are made as efficient as
possible by sharing point-in-time tracks which are called snapshot deltas. SnapVX also provides emulation
modes for the classic Dell EMC local replication software options of TimeFinder Mirror, Clone, and VP
Snap.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 267
Local Replication Suite
(Diagram: a production volume with snapshots and a linked target, supporting both open systems and
mainframe environments)

268 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Local Replication Suite includes TimeFinder SnapVX with cloud-scalable snaps and clones to protect
your data. For file-level local replication, SnapSure is available, and for mainframe environments,
Compatible Flash is provided.

Dedicated target devices are no longer required. TimeFinder SnapVX offers shorter create and terminate
times and removes the dependency on cache when scaling. It provides zero-impact and cloud-scalable
snaps to protect the data. TimeFinder SnapVX works in open systems and mainframe environments. It
provides the underlying technology which supports Data Protector for z Systems (zDP).

This course covers TimeFinder SnapVX snapshots with open systems hosts.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 268
Remote Replication Overview

• SRDF/Synchronous and SRDF/Asynchronous


• SRDF Concurrent, SRDF Cascaded, and SRDF/Star
• SRDF/Metro
• Non-Disruptive Migration
• Open Replicator
• RecoverPoint
• PowerProtect

269 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Dell EMC Symmetrix Remote Data Facility (SRDF) is a replication technology that enables the mirroring of
a data center with minimal impact to the performance of the production application. SRDF provides
disaster recovery and data mobility solutions for the PowerMax and VMAX Family storage arrays in both
open systems and mainframe data centers. SRDF enables storage systems to be in the same room,
different buildings, or hundreds to thousands of kilometers apart. Non-Disruptive Migration (NDM)
migrates data without application downtime. The migration takes place over a metro distance, typically
within a data center. Open Replicator enables copying data from qualified arrays within a storage area
network (SAN) infrastructure to or from arrays running PowerMaxOS. RecoverPoint is a data
protection solution designed to provide production data integrity at local and remote sites. PowerProtect
provides data backup and restore facilities for a PowerMax array.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 269
SRDF/Synchronous
(Diagram: an application host writes to the R1 at the primary site; the data is mirrored synchronously over a
limited distance to the R2 at the secondary site)

270 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF/Synchronous (SRDF/S) maintains a real-time—synchronous—mirrored copy of production data at
a physically separated storage system. The production volumes are labeled R1s and the copies are
labeled R2s. Host writes are written simultaneously to both arrays in real time before the application I/O
completes. Acknowledgments are not sent to the host until the data is stored in cache on both arrays.
SRDF/S can be used only for limited distance—up to 125 miles or 200 km.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 270
SRDF/Asynchronous
(Diagram: an application host writes to the R1 at the primary site; the data is mirrored asynchronously over
unlimited distance to the R2 at the secondary site)

271 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF/Asynchronous (SRDF/A) mirrors data from the R1 devices while maintaining a dependent-write
consistent copy of the data on the R2 devices at all times. SRDF/A can be used for unlimited distance.
Host writes are collected for a configurable interval into delta sets. Delta sets are transferred to the remote
array in timed cycles. The copy of the data at the secondary site is typically only seconds behind the
primary site.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 271
Concurrent SRDF
(Diagram: an application host writes to the R1 at the primary site; the data is mirrored concurrently to two
R2 devices at separate sites)

272 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Concurrent SRDF is a disaster recovery solution where data is mirrored from the primary site concurrently
to two R2 devices. Usually, one copy running in SRDF/S mode is maintained at a nearby location and
offers zero data loss if the primary site fails. The second copy, operating in SRDF/A mode, offers an out-of-
region recovery site with a Recovery Point Objective (RPO) of seconds to minutes.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 272
Cascaded SRDF
(Diagram: data flows from the application host to the primary site, then to the secondary site, and on to the
tertiary site)

273 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Cascaded SRDF is a three-site configuration that uses a bunker site and combines synchronous and
asynchronous modes. Data from a primary site is synchronously replicated to a secondary site and then
asynchronously replicated to a tertiary site. The major benefit provided with a cascading configuration is
its inherent capability to continue replicating from the secondary site to the tertiary site when the primary
site goes down.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 273
SRDF/Star

(Diagram: production and secondary sites; the application host writes to the production site)

274 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF/Star is a three-site disaster recovery solution consisting of primary, secondary, and tertiary sites.
The secondary site synchronously mirrors the data from the primary site, and the tertiary site
asynchronously mirrors the production data. When an outage occurs at the primary site, SRDF/Star allows
the user to quickly move operations and re-establish remote mirroring between the remaining sites.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 274
SRDF/Metro
(Diagram: two SRDF/Metro configurations. With a single host running multi-path software, the host has
R/W access to both the R1 at Site A and the R2 at Site B. With clustered hosts, each host has R/W access
to both sides of the device pair.)

275 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF/Metro allows both R1 and R2 devices to be Read/Write accessible to hosts. Hosts can
write to both the R1 and R2 side of the device pair, and R2 devices assume the same
external device identity as their R1 mate. This shared identity causes the R1 and R2 devices
to appear to hosts as a single virtual device across the two arrays. SRDF/Metro can be
deployed with either a single multi-path host or with a clustered host environment. For
single host configurations, multi-pathing software directs parallel reads and writes to each
array. For clustered host configurations, host I/Os can be issued by multiple hosts accessing
both sides of the SRDF device pair. In both configurations, writes to the R1 or R2 devices
are synchronously copied to the paired device. Any write conflicts are resolved by the
SRDF/Metro software to maintain consistent images on the SRDF device pairs.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 275
Non-Disruptive Migration
(Diagram: hosts with multi-path software attached to source and target arrays. Pass-through mode: source
VMAX (5876) to target VMAX3 (5977), VMAX All Flash (5977 or 5978), or PowerMax (5978). Metro-based
mode: source VMAX3 or VMAX All Flash (5977) to target VMAX All Flash or PowerMax (5978).)
276 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Non-Disruptive Migration (NDM) is a method for migrating data without application downtime. NDM is
supported across SRDF/S distances. However, because of the requirement that the host sees both the
source and target storage, migrations are typically performed between arrays within a data center. There
are two supported NDM implementations that are dependent mainly on the source array software version.
If migrating from a VMAX array running Enginuity 5876 to a PowerMax running PowerMaxOS 5978, Pass-
through mode is used. If migrating from a VMAX3 or VMAX All Flash array running HYPERMAX OS 5977
or later to a PowerMax running PowerMaxOS 5978, Metro-based mode is used.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 276
Open Replicator

(Diagram: an application host, an array running PowerMaxOS, and a third-party array connected through a
SAN)

277 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Open Replicator enables the creation of full or incremental copies of data from qualified third-party arrays within
a SAN infrastructure to or from arrays running PowerMaxOS. Open Replicator uses the Solutions Enabler
SYMCLI symrcopy command.

Use Open Replicator to:

• Pull from source volumes on qualified remote arrays to a volume on an array running PowerMaxOS

• Perform online data migrations from qualified storage to an array running PowerMaxOS

For pull operations, the volume can be in a live state during the copy process. The local hosts and
applications can begin to access the data as soon as the session begins, even before the data copy
process has completed. The pull can also be performed in cold mode to a static volume.
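
A minimal sketch of a hot pull session is shown below; the device pair file name (device_pairs.txt) and
session name (or_pull) are hypothetical, and the exact pair-file format and flags vary by Solutions Enabler
release, so verify against the symrcopy documentation:

C:\>symrcopy -file device_pairs.txt create -pull -hot -copy -name or_pull
C:\>symrcopy -file device_pairs.txt activate
C:\>symrcopy -file device_pairs.txt query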

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 277
PowerProtect Storage Direct Integration
(Diagram: an application host writes production data to a PowerMax or VMAX Family array; SnapVX copy
images are sent through FAST.X (DX) to a vdisk on a Data Domain system, and only changed data is
copied)
278 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The PowerMax and VMAX Family Business Continuity integrated solutions include PowerProtect,
RecoverPoint, and AppSync.

PowerProtect is an integration between PowerMax, VMAX All Flash or VMAX3 arrays, and Data Domain
storage systems to back up production data. TimeFinder SnapVX is used to create a replica—or a
snapshot—of a LUN. PowerProtect copies the snapshot to a vdisk on the Data Domain system. The vdisk
is seen by the source array as a FAST.X encapsulated LUN. Change tracking is enabled for the replica,
and therefore, only changes made are copied, providing performance increases and space savings.
PowerProtect eliminates the performance impact on applications, provides faster backup and recovery,
and reduces the costs and complexity of traditional backup solutions.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 278
RecoverPoint Integration
(Diagram: an application host at the New York site; SnapVX snapshots are replicated through
RecoverPoint Appliances (RPAs) over FC/WAN to the London site)


279 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The RecoverPoint solution also leverages native PowerMax and VMAX Family snap
capabilities to create point-in-time consistent snaps of production volumes in a consistency
group. The snapshots are used to synchronize the production volumes with the copy
volumes. The data path is through the RecoverPoint Appliances (RPAs). The solution
supports manual, continuous, and periodic snapshots. Replication is asynchronous.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 279
AppSync

280 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Dell EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring, and
cloning critical Microsoft and Oracle applications and VMware environments. After defining service plans,
application owners can protect, restore, and clone production data quickly with item-level granularity by
using the underlying Dell EMC replication technologies.

On PowerMax arrays, the Essentials software package contains AppSync in a starter bundle. The
AppSync Starter Bundle provides the license for a scale-limited, yet fully functional version of AppSync.
The Pro software package contains the AppSync Full Suite.

AppSync supports the following applications:

• Oracle

• Microsoft SQL Server

• Microsoft Exchange

• SAP HANA

• VMware VMFS

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 280
Module Summary

Key points covered in this module:

• Described the various PowerMax and VMAX Family Business Continuity features and integrated
solutions.

281 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module described the Business Continuity features and integrated solutions of the PowerMax and
VMAX Family of arrays.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: Introduction to Business Continuity 281
Module: TimeFinder SnapVX Operations

Upon completion of this module, you should be able to:

• Describe TimeFinder SnapVX concepts

• Replicate a VMFS datastore using TimeFinder SnapVX

282 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on TimeFinder SnapVX local replication technology on PowerMax and the VMAX All
Flash arrays. Concepts, terminology, and operational details of creating snapshots and presenting them to
target hosts are discussed. Use of TimeFinder SnapVX for replication in a virtualized environment is also
presented.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 282
Lesson: TimeFinder SnapVX Concepts and Operations
This lesson covers the following topics:

• TimeFinder SnapVX concepts

• Performing TimeFinder SnapVX operations

283 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers the concepts of TimeFinder SnapVX. Operational examples using SYMCLI are
presented in detail.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 283
TimeFinder SnapVX Overview

• Create local point-in-time copies (snapshots) of data without requiring pairing between source and
  target volumes
  – Target volumes are required only if host access to point-in-time data is wanted
• Highly scalable
  – Manually, a single source volume can have up to 256 snapshots
  – With snapshot policy, the number goes up to 1024
• Highly space-efficient
  – Sharing of point-in-time tracks among different snapshots

(Diagram: a source volume with snapshots taken at 6 AM and 7 AM for backup and at 8 AM for testing)

284 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

TimeFinder SnapVX provides a highly efficient mechanism for taking periodic point-in-time copies of
source data without the need for target devices. Target devices are required only for presenting the point-
in-time data to another host. Up to 1024 target volumes can be linked per source volume. Sharing
allocations between multiple snapshots makes it highly space efficient. A write to the source volume will
only require one snapshot delta to preserve the original data for multiple snapshots.

If a source track is shared with a target or multiple targets, a write to this track will preserve the original
data as a snapshot delta that is shared by all the targets. A write to a target is applied only to that specific
target.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 284
TimeFinder SnapVX Terminology

• Source Volume
– A device whose point-in-time copy is to be preserved
• Snapshot
– Preserved point-in-time image of a source volume
• Snapshot Delta
– Original source volume tracks at the point-in-time of the snapshots that were
preserved during host writes to the source volume
• Linked Target Volume
– A device that is used to provide access to point-in-time data by linking it to a
snapshot
• Storage Resource Pool
– A collection of data pools that provide physical storage for thin devices

285 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The terminology used in SnapVX is described here. Note that all host accessible devices in a PowerMax
or VMAX All Flash array are thin devices.

Host writes to source volumes will create snapshot deltas in the SRP. Snapshot deltas are the original
point-in-time data of tracks that have been modified after the snapshot was established.

SRP configuration must be specified when ordering the system, prior to installation. The source and target
volumes can be associated with the same SRP or different SRPs. Snapshot deltas will always be stored in
the SRP of the source volume.
• Allocations owned by the source will be managed by its Service Level (SL).
• Allocations for the target will be managed by the SL of the target.
• Snapshot deltas will be managed by the Optimized SL.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 285
Redirect-on-Write – Preserving Point-in-Time Data
(Diagram: before a host write, the source volume and its Backup snapshot point to the same TDAT tracks
in the Storage Resource Pool; after a host write, the new data is redirected to a new location while the
snapshot continues to point to the original tracks)

286 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

When the snapshot is created, both the source device and the snapshot point to the location of data in the
SRP. When a source track is written to, the new write is asynchronously written to a new location in the
SRP. The source volume will point to the new data. The snapshot will continue to point to the location of
the original data. The preserved point-in-time data becomes the snapshot delta. This is the Redirect-on-
Write (ROW) technology.

Under some circumstances, SnapVX uses Asynchronous Copy on First Write (ACOFW) on arrays other
than PowerMax and VMAX All Flash. This might be done to prevent degradation of performance for the
source device. For example, if the original track was allocated on a Flash drive, it would be better to copy
the original data down to a lower tier and accommodate the new write on the Flash drive.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 286
Snapshot Generations and Time-to-Live

• Generation Numbers
  – Any snapshot of a source volume with a unique name is assigned Generation Number 0 (most
    recent)
• Time-to-Live (TTL)
  – At time of creation or later, an expiration time for the snapshot can be set
  – Snapshots automatically terminate when the TTL expires

(Diagram: a source volume with snapshots Backup 6 AM (generation 1), Backup 7 AM (generation 0), and
Testing 8 AM (generation 0))

287 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Each snapshot is assigned a generation number. If the name assigned to the snapshot is reused, then the
generation numbers are incremented. The most recent snapshot with the same name will be designated
as generation 0, the one prior as generation 1, and so on. If each snapshot is given a unique name, they
will all be generation 0. Terminating a snapshot will result in reassignment of generation numbers.

Snapshots are kept until they are terminated, unless a Time-to-Live (TTL) is set. TTL is used to
automatically terminate a snapshot at a set time. This can be specified at the time of snapshot creation or
can be modified later. PowerMaxOS will terminate the snapshot at the set time. If a snapshot has linked
targets, it will not be terminated. It will be terminated only when the last target is unlinked. TTL can be set
as a specific date or as a number of days from the current time.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 287
Secure Snapshots

• Prevent snapshot data deletion
  – Date/time delta from current time or absolute date
• Snapshot automatically deleted
  – Expiration date has passed, and snapshot has no links
• Secure snapshot expiration date can be extended
  – Cannot be shortened
• Existing snapshot can be converted to a secure snapshot
  – Secure snapshot cannot be converted into a traditional snapshot
• Expired secure snapshots with links are not deleted
  – No longer considered secure

288 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Secure snapshots prevent administrators or other high-level users from intentionally or unintentionally
deleting snapshot data. When creating a secure snapshot, you assign it an expiration date/time either as a
delta from the current date or as an absolute date. Once the expiration date passes, and if the snapshot
has no links, PowerMaxOS automatically deletes the snapshot. Prior to its expiration, administrators can
only extend the expiration date—they cannot shorten the date or delete the snapshot. A snapshot can be
converted to a secure snapshot, but a secure snapshot may not be converted to a traditional snapshot. If a
secure snapshot expires, and it has a volume linked to it, or an active restore session, the snapshot is not
deleted. However, it is no longer considered secure.
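
A sketch of creating a secure snapshot with a 30-day expiration follows; the -secure and -delta flags on
the establish action are assumptions based on this description, so verify the exact syntax against the
symsnapvx documentation for your Solutions Enabler release:

C:\>symsnapvx -sid 1888 -sg RDF1_SG establish -name secure_backup -secure -delta 30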

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 288
Accessing Point-in-Time Data – Linked Targets

• Snapshots should be linked to a target volume if host access to the point-in-time data is required
• Linked targets can be in:
  – No Copy Mode (default)
  – Copy Mode
• Mode can be specified at the time of linking the snapshot to the target or can be modified later

(Diagram: a source volume with Backup 6 AM, Backup 7 AM, and Testing 8 AM snapshots; a target
volume is linked to one of the snapshots)

289 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A snapshot has to be linked to a target volume to provide access to point-in-time data to a host. The link
can be in No Copy or Copy mode. Copy mode linked targets will provide full volume copies of the point-in-
time data of the source volumes—similar to full copy clones. Copy mode linked targets will have a useable
copy of the data even after termination of the snapshot—provided the copy has completed.

A snapshot can have both No Copy mode and Copy mode linked targets. Default is to create No Copy
mode linked targets. This can be changed later if desired.

Writing to a linked target will not affect the snapshot. The target can be re-linked to the snapshot to revert
to the original point-in-time.

A snapshot can be linked to multiple targets. But a target volume can be linked to only one snapshot.

There is no benefit to having No Copy mode linked targets in an SRP different from the source SRP.
Writes to the source volume will only create snapshot deltas which will be stored in the SRP of the source
volume. The writes will not initiate any copy to the target.

A target volume that is larger than the source can be linked to a snapshot. This is enabled by default. The
environment variable SYMCLI_SNAPVX_LARGER_TGT can be set to DISABLE to prevent this.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 289
Linked Target – Undefined/Defined Tracks

(Diagram: a target volume linked to the Backup snapshot of a source volume; the target's tracks start out
undefined and are resolved through the snapshot's pointers into the Storage Resource Pool)

• Undefined Tracks: Location of data for the target has to be resolved through snapshot pointers
• Defined Tracks: Location of data for the target points directly to the appropriate tracks in the SRP

290 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

When a snapshot is linked to a target, the process of defining the tracks for the target is initiated internally.
In the undefined state, the location of data for the target has to be resolved through the pointers for the
snapshot. In the defined state, data for the target points directly to the corresponding locations in the SRP.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 290
Relinking and Unlinking Targets

• Relink operation unlinks the target from the current snapshot and links it to a
different snapshot
– Relink must be between the same source and target devices

• Relink to the same snapshot refreshes the target with original point-in-time
– Useful if target has been modified and you want to revert to the original snapshot

• Unlink operation disassociates the linked target from the snapshot


– Copy mode - Copies all relevant tracks from the snapshot’s point-in-time to the
linked target volume to create a complete copy of the point-in-time that will remain
available after the target is unlinked.
– No Copy mode - Does not copy data to the linked target volume but still makes
the point-in-time accessible using pointers to the snapshot.
Refer to the Dell EMC TimeFinder SnapVX Local Replication Technical Note for the
complete details

291 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Relink provides a convenient way of checking different snapshots to select the appropriate one to access.
A link between the snapshot of the source volume and the target must exist for the relink operation. Relink
can also be performed with a different snapshot of the same source volume or a different generation of the
same snapshot of the source volume.

The Unlink operation removes the relationship between a snapshot and the corresponding target. Copy
mode linked targets can be unlinked after the copying completes. This will provide a full, independent,
useable point-in-time copy of the source data on the target device.

No Copy mode linked targets can be unlinked at any time. The unlinked target behavior depends on the
storage system OS. Prior to HYPERMAX OS 5977.810.184, users could not access the data on a nocopy
target after having been unlinked. With PowerMaxOS and HYPERMAX OS 5977.810.184 and later, users
can access the data on fully-defined nocopy targets after having been unlinked. This functionality is
possible through the shared allocations.

When a target is unlinked, the allocation sharing remains in place. Even after unlinked, termination of a
snapshot results in the target owning the snapshot delta. And an updated write to the source track results
in the target owning the original track. The target also takes ownership of any shared source tracks if the
source is deallocated after unlink.

This enhanced functionality allows the user to continue to access the target data after unlink in the same
way that previously required full copy targets, but without duplicating the entire back-end data from the
point-in-time to the target.
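
As a sketch, moving the hypothetical linked Storage Group TargetSG to a different generation of the same
snapshot and then unlinking it might look like the following; verify the flags against the symsnapvx
documentation for your release:

C:\>symsnapvx -sid 1888 -sg RDF1_SG -snapshot_name backup -generation 1 relink -lnsg TargetSG
C:\>symsnapvx -sid 1888 -sg RDF1_SG -snapshot_name backup -generation 1 unlink -lnsg TargetSG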

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 291
Restore to Source

• Restore from snapshot


– Snapshots can be directly restored to the source volume
– Source volume data is set back to the point-in-time of the snapshot
– Only changed data has to be restored from the snapshot delta—inherently
differential operation

• Restore from linked target


– Two-step process:
1. Create a snapshot of the linked target
2. Link this snapshot with the source volume—which will now be a linked target

292 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Because a restore changes the data on the source volume, from the perspective of the host the source
volume should be unmounted prior to the restore operation and then remounted. To restore from a linked
target, a snapshot of the target must be established, and this snapshot should be linked to the source
volume. The source volume cannot be unlinked until the copy completes, so the link should be created in
Copy mode.
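
A minimal sketch of restoring the backup snapshot to its source devices and then ending the restore
session follows; the terminate -restored step is an assumption based on standard symsnapvx usage, so
verify it against the documentation for your release:

C:\>symsnapvx -sid 1888 -sg RDF1_SG -snapshot_name backup -generation 0 restore
C:\>symsnapvx -sid 1888 -sg RDF1_SG -snapshot_name backup -generation 0 terminate -restored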

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 292
Cascading Snapshots

(Diagram: a Backup snapshot of the source volume is linked to a first target; a host process obfuscates
sensitive data on that target; a backup_test snapshot of the first target is then linked to a second target)

293 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Presenting sensitive data to test or development environments often requires that the source of the data
be disguised beforehand. Cascaded snapshots provide this separation and disguise. There is no limit to
the number of cascaded hops that can be created as long as the overall limit for SnapVX is maintained. If
no change to the data is required before linking the snapshots to the test or development environments,
there is no need to create a cascaded relationship.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 293
Reserved Capacity

• Reserved Capacity is a percentage of the Storage Resource Pool (SRP) that can only be allocated
  to new host writes
• When the SRP reaches the point where only the Reserved Capacity is left:
– Snapshot fails when new allocations for snapshot deltas are required
› The snapshot has to be terminated
– Copy to targets halt
› Copy resumes if free space is made available in the SRP or if the Reserved Capacity is
lowered

294 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Reserved capacity ensures that there will be sufficient capacity available in the SRP to accommodate new
host writes. When the allocated capacity reaches the point where only reserved capacity remains, SnapVX
allocations for snapshot deltas and copy processes will be affected.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 294
Expanding Storage Groups with Active Snapshots

• New volumes can be added to Storage Groups based on the growth of the
application
• Existing snapshots do not include the new volumes
• New snapshots include the newly added volumes
• If the SG is restored from an earlier snapshot, the new volumes are set as
Not Ready to the host
– The volumes will remain NR even after the restored session is terminated. User
has to decide the best course of action to include these for the application and
host.
• Likewise, if the SG of the linked targets has been expanded after the
snapshot, these volumes will be set as Not Ready to the host

295 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Care needs to be exercised when expanding SGs with existing snapshots. If there are more volume(s) in
the SG than are contained in the snapshot, then a restore from the snapshot will set these additional
volume(s) to Not Ready. This is because these volume(s) were not present when the snapshot was taken.

Of course subsequent snapshots, after the SG expansion, will contain all the volumes. Similarly, if the
linked target SG has been expanded and has more devices than the snapshot, then the additional
volumes in the linked target SG will be set to Not Ready.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 295
Online Device Expansion with SnapVX

• Expand SnapVX source or target devices


• Snapshot data remains the same size
• The ability to restore a smaller snapshot to an expanded source
device
• Target link and relink operations depend on the size of the source
device when the snapshot was taken, not its size after expansion
• Key User Benefits:
– Expand while retaining local and remote protection
– Reduces need to heavily overprovision

296 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

PowerMaxOS 5978 supports online device expansion for Local Replication (LREP) configurations. As with
standalone and SRDF devices, this means an administrator can increase the capacity of thin devices that
are part of an LREP relationship without any service disruption. In the past, you would need to delete an
existing snapshot before expanding the device. ODE reduces the need for customers to heavily
overprovision TDEVs to avoid having to expand later. Devices eligible for expansion are those that are
part of SnapVX sessions and legacy sessions that use CCOPY, SDDF, or Extent.

After a source device expansion, the snapshot data remains the same size. ODE also enables the ability
to restore a smaller snapshot to an expanded source device. Note that the target link and relink operations
depend on the size of the source device when the snapshot was taken, not its size after expansion.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 296
Creating Snapshots

C:\>symsnapvx -sid 1888 -sg RDF1_SG establish -name backup

Execute Establish operation for Storage Group RDF1_SG (y/[n]) ? y

Establish operation execution is in progress for the storage group RDF1_SG. Please wait...

Polling for Establish.............................................Started.


Polling for Establish.............................................Done.
Polling for Activate..............................................Not Needed.

Establish operation successfully executed for the storage group RDF1_SG

297 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The most convenient and preferred way of performing TimeFinder SnapVX operations is using Storage
Groups. In this example, we are creating a snapshot named backup for the devices in the Storage Group
RDF1_SG.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 297
Listing Snapshots

C:\>symsnapvx -sid 1888 list -sg RDF1_SG -detail

Storage Group (SG) Name : RDF1_SG


SG's Symmetrix ID : 000197601888 (Microcode Version: 5978)

----------------------------------------------------------------------------------------------
Snapshot Total
Sym Flags Dev Size Deltas Non-Shared
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks) (Tracks) (Tracks)
----- --------------- ---- --------- ------------------------ ---------- ---------- ----------
000D1 backup 0 .... .... Wed Nov 28 14:26:48 2020 81930 0 0
backup 1 .... .... Wed Nov 28 14:24:48 2020 81930 820 20
backup 2 .... .... Wed Nov 28 14:22:45 2020 81930 3552 0
---------- ----------
4372 20

298 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

We have created three successive snapshots using the same name. Note that each snapshot is given a
generation number. As discussed earlier, the most recent snapshot is designated as generation 0. As
there is workload on the source devices, the changes are accumulated in snapshot deltas. The non-
shared tracks are unique to the specific snapshot.

These are the tracks that will be returned to the SRP if the snapshot is terminated.

Note that the output has been edited to fit the slide. The Expiration Date is not shown. As we did not
specify a time-to-live during the establish operation, the Expiration Date is NA.
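
When a snapshot is no longer needed, terminating it returns its non-shared tracks to the SRP. A minimal
sketch, terminating the oldest generation shown above:

C:\>symsnapvx -sid 1888 -sg RDF1_SG -snapshot_name backup -generation 2 terminate -nop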

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 298
Setting Time-to-Live

C:\>symsnapvx -sid 1888 -sg RDF1_SG -snapshot_name backup -generation 0 set ttl -delta 2

Execute Set TTL operation for Storage Group RDF1_SG (y/[n]) ? Y

SetTimeToLive operation successfully executed for the storage group RDF1_SG

C:\>symsnapvx -sid 1888 list -sg RDF1_SG -detail

Storage Group (SG) Name : RDF1_SG


SG's Symmetrix ID : 000197601888 (Microcode Version: 5978)

----------------------------------------------------------------------------------------------
Snapshot
Sym Flags Dev Size
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks) Expiration Date
----- ------------- ---- --------- ------------------------ --------- ------------------------
000D1 backup 0 .... .... Wed Nov 28 14:26:48 2020 81930 Fri Nov 30 14:42:44 2020
backup 1 .... .... Wed Nov 28 14:24:48 2020 81930 NA
backup 2 .... .... Wed Nov 28 14:22:45 2020 81930 NA

299 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

We can set the time-to-live even after creating the snapshot. The parameter –delta is used to specify the
number of days for expiration from the time the snapshot was created. Note that the output has been
edited to fit the slide.
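
The time-to-live can also be set at creation time. A sketch, assuming the -ttl option of the establish
action takes the same -delta argument:

C:\>symsnapvx -sid 1888 -sg RDF1_SG establish -name backup -ttl -delta 2 -nop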

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 299
Creating Secure Snapshots

C:\>symsnapvx -sid 1888 -sg RDF1_SG establish -name secure_backup -secure -delta 4:12 -nop

Establish operation successfully executed for the storage group RDF1_SG

C:\>symsnapvx -sid 1888 list -sg RDF1_SG -detail

Storage Group (SG) Name : RDF1_SG


SG's Symmetrix ID : 000197601888 (Microcode Version: 5978)

---------------------------------------------------------------------------------------------

Sym Flags
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp Expiration Date
----- -------------------------------- ---- --------- ----------------------- ---------------
000D1 secure_backup 0 .... .X.. Wed Nov 28 15:08:17 2020 Mon Dec 03
backup 0 .... .... Wed Nov 28 14:26:47 2020 Fri Nov 30
backup 1 .... .... Wed Nov 28 14:24:47 2020 NA
backup 2 .... .... Wed Nov 28 14:22:45 2020 NA

300 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A secure snapshot is an optional setting that prevents the accidental or intentional deletion of snapshots.
The –secure option creates a snapshot with a secure expiration time either as a number of days plus
hours from the current host time or an absolute date and hour in the future. In this example, the secure
snapshot cannot be terminated until four days and 12 hours have passed from the current host time. The
secure expiration time is set using the –delta option as shown or using the –absolute <Date:Hour>
option.

Flags:

(F)ailed : X = General Failure, . = No Failure

: S = SRP Failure, R = RDP Failure

(L)ink : X = Link Exists, . = No Link Exists

(R)estore : X = Restore Active, . = No Restore Active

(G)CM : X = GCM, . = Non-GCM

(T)ype : Z = zDP snapshot, . = normal snapshot

(S)ecured : X = Secured, . = Not Secured

(E)xpanded : X = Source Device Expanded, . = Source Device Not Expanded

(B)ackground: X = Background define in progress, . = No Background define

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 300
Linking Snapshot to Target

C:\>symsnapvx -sid 1888 link -sg RDF1_SG -snapshot_name backup -gen 2 -lnsg snapvx_tgt_sg -nop

Link operation successfully executed for the storage group RDF1_SG

C:\>symsnapvx -sid 1888 list -sg RDF1_SG -detail

Storage Group (SG) Name : RDF1_SG


SG's Symmetrix ID : 000197601888 (Microcode Version: 5978)
------------------------------------------------------------------------------------------
Snapshot
Sym Flags Dev Size
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks)
----- -------------------------------- ---- --------- ------------------------ ----------
000D1 secure_backup 0 .... .X.. Wed Nov 28 15:08:17 2020 81930
backup 0 .... .... Wed Nov 28 14:26:47 2020 81930
backup 1 .... .... Wed Nov 28 14:24:47 2020 81930
backup 2 .X.. .... Wed Nov 28 14:22:45 2020 81930

301 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The first command shown here links the generation 2 backup snapshot to a Storage Group named
snapvx_tgt_sg using the -lnsg flag. The target device is contained in the snapvx_tgt_sg Storage Group.
The default for linking is the No Copy mode. After issuing the list detail command for the snapped Storage
Group, we see that the generation 2 snapshot now has the Link Exists Flag set.
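
If a full, standalone copy of the point-in-time data is wanted on the target devices, the link can be created
in Copy mode instead of the default No Copy mode. A sketch using the same snapshot and target Storage
Group:

C:\>symsnapvx -sid 1888 link -sg RDF1_SG -snapshot_name backup -gen 2 -lnsg snapvx_tgt_sg -copy -nop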

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 301
When Target is Modified

C:\>symsnapvx -sid 1888 list -sg RDF1_SG -detail -linked

Storage Group (SG) Name : RDF1_SG


SG's Symmetrix ID : 000197601888 (Microcode Version: 5978)

---------------------------------------------------------------------------------------------
Sym Link Flgs Remaining Done
Dev Snapshot Name Gen Dev FCMDS Snapshot Timestamp (Tracks) (%)
----- ----------------------------- ---- ----- ----- ------------------------ ---------- ----
000D1 backup 2 000DD ..XX. Wed Nov 28 14:22:44 2020 80141 0
----------
80141

302 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

When a target is written to, the original point-in-time snapshot is unaffected—it remains pristine. The
Modified Flag is set to Modified Target Data, as shown here. The Done (%) and Remaining (Tracks)
columns indicate the tracks that have changed because of the writes.
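
If the changes on the target are no longer needed, the target can be refreshed back to the original
point-in-time data with a relink, or detached with an unlink. A sketch using the same snapshot:

C:\>symsnapvx -sid 1888 relink -sg RDF1_SG -snapshot_name backup -gen 2 -lnsg snapvx_tgt_sg -nop
C:\>symsnapvx -sid 1888 unlink -sg RDF1_SG -snapshot_name backup -gen 2 -lnsg snapvx_tgt_sg -nop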

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 302
Expanded Snapshot Source Device

C:\>symsnapvx -sid 1888 list -sg RDF1_SG -detail

Storage Group (SG) Name : RDF1_SG


SG's Symmetrix ID : 000197601888 (Microcode Version: 5978)

-----------------------------------------------------------------------------------------
Snapshot
Sym Flags Dev Size
Dev Snapshot Name Gen FLRG TSEB Snapshot Timestamp (Tracks)
----- -------------------------------- ---- --------- ------------------------ ----------
000D1 backup 0 .X.. ..X. Fri Jan 04 10:55:43 2021 81930

Flags:

(E)xpanded : X = Source Device Expanded, . = Source Device Not Expanded

303 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

If the source device of a snapshot has been expanded, the (E)xpanded Flag is set. In this case, device
000D1 was expanded from 10 GB to 15 GB while the snapshot was linked to a target device. The snapshot
size remains the same.
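
As a sketch of the expansion itself, assuming the symdev modify syntax for online expansion of thin
devices:

C:\>symdev -sid 1888 modify 000D1 -tdev -cap 15 -captype gb -nop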

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 303
Lesson: Replicating VMFS Datastore Using Unisphere
This lesson covers the following topics:

• Using TimeFinder SnapVX to create a snapshot of a VMFS datastore presented to the Primary
ESXi server

• Linking the snapshot to a target and presenting it to the Secondary ESXi server

304 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers replicating a VMware VMFS datastore using TimeFinder SnapVX. A snapshot of the
VMFS datastore presented to the Primary server will be created and linked to a target device. The target is
then accessed on a Secondary ESXi server.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 304
Primary ESXi Datastore

305 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Using the vSphere Web Client, you find that the Primary ESXi server—esxi-88-67—has access to the
Datastore named Production_Datastore. Click on the Production_Datastore link to view the Datastore
details.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 305
Datastore Details

306 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Note the naa number of the Extent. You will use this number to correlate the device with the PowerMax
volume, using Unisphere for PowerMax. Click on the Datastore browser tab to view the contents of the
Datastore.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 306
VM Resident on Production Datastore

307 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Browsing the Production_Datastore shows that it contains the folder StudentVM. This folder contains the
StudentVM.vmx and other files pertaining to the VM StudentVM.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 307
Datastore Browser

308 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

You can open a console to the StudentVM and examine the data. For the purposes of this example, a
folder named Production_data has been created on StudentVM. The objective is to use TimeFinder
SnapVX to take a snapshot of the PowerMax device hosting the Production_Datastore.

You have to identify a suitable target device accessible to a Secondary ESXi server. Then you can link the
snapshot to the target device. Subsequently you should be able to power on a snapshot of the StudentVM
on the Secondary ESXi Server.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 308
Identifying Device Hosting Production Datastore

309 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In Unisphere for PowerMax, navigate to the STORAGE > Storage Groups > Volumes page for the
Primary ESXi Server Storage Group. In this example, the Storage Group named
EMBEDDED_NAS_DM_SG is masked to the ESXi server. The Storage Group contains one volume. The
volume 00024 is the PowerMax device that our Datastore is located on. A listing of the volume details
shows the WWN for it.

This matches with the naa number shown previously in the vSphere Web Client. This confirms that the
Primary ESXi Server has access to device 00024. This device is in SID:217 and its capacity is 10 GB. In
order to take a snapshot of this device and link it to a target, you have to identify a 10 GB device on
SID:217 that has been masked to the Secondary ESXi Server.
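
The WWN can also be confirmed from the command line. For example, the symdev show output for the
device includes a Device WWN field, which should match the naa identifier displayed in the vSphere Web
Client:

C:\>symdev -sid 217 show 00024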

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 309
LUNs Accessible to Secondary ESXi Server

310 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Using the vSphere Web Client we find that the Secondary ESXi server has access to a few devices. Note
the naa number of the highlighted device. This correlates with the WWN of the PowerMax device 000AD.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 310
Create Snapshot of Production Device

311 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In Unisphere for PowerMax, SnapVX operations can only be performed on Storage Groups. As this is the
first time you will be creating a snapshot for the Production Device, navigate to the STORAGE > Storage
Groups page and select the Storage Group.

The Storage Group PW_DR was created when the production device was masked to the Primary ESXi
Server. To proceed, click the Protect button.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 311
Protect Storage Group Wizard (1 of 3)

312 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Protect Storage Group Wizard opens. Select the Point in Time using SnapVX radio button and then
click NEXT.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 312
Protect Storage Group Wizard (2 of 3)

313 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In this example, the snapshot is named datastore_backup and a 5 day Time To Live is set. To proceed,
click NEXT.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 313
Protect Storage Group Wizard (3 of 3)

314 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Review the SnapVX Summary page and then select Run Now from the ADD TO JOB LIST drop down
menu.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 314
Link Snapshot to Target

315 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To link the snapshot to the target Storage Group, navigate to the DATA PROTECTION > Storage
Groups > SnapVX Snapshots page, select the snapshot, and click the Link button. The Link Snapshot
dialog box opens. In our example, we select an Existing Target Storage Group. The target device is in the
Storage Group named secondary_esxi_88_68.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 315
Mount Datastore

[root@esxi-88-68:~] esxcfg-volume -l
Scanning for VMFS-3/VMFS-5 host activity (512 bytes/HB, 2048 HBs).
VMFS UUID/label: 5c018873-d2208d22-7d8f-005056955e4a/Production_Datastore
Can mount: Yes
Can resignature: Yes
Extent name: naa.60000970000197600217533030304144:1 range: 0 - 9983 (MB)

[root@esxi-88-68:~] esxcfg-volume -r 5c018873-d2208d22-7d8f-005056955e4a


Resignaturing volume 5c018873-d2208d22-7d8f-005056955e4a
[root@esxi-88-68:~]

316 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Mounting the datastore can be done using the vSphere Web Client when a vCenter Server is configured.
In our example, there is no vCenter Server deployed, so the esxcfg-volume command is used from the
ESXi shell. Open a PuTTY session to the ESXi server and issue the commands shown. In this case, the
-r option is used to resignature the volume, identified by its UUID; the resignatured copy is then mounted
as a new datastore.
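
If the copy must keep its original signature (for example, when the original datastore is not presented to
the same host), the volume could instead be mounted without resignaturing, using the -m option, or -M
for a mount that persists across reboots:

[root@esxi-88-68:~] esxcfg-volume -m 5c018873-d2208d22-7d8f-005056955e4a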

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 316
View Datastore Using Web Client

317 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Now that the datastore has been mounted, we can view the snapshot in the Datastores tab of the Web
Client Storage page. Select the Register a VM link to proceed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 317
Register VM

318 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Register VM dialog box opens. Select the datastore and then select the Student VM. Right click on
the StudentVM.vmx file and select Register VM.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 318
Datastore Browser

319 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The VM is now available on the secondary ESXi server. Select the VM and click the Power on link.
Answer the Virtual Machine Message question. Choose I Copied It.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 319
VM on Secondary ESXi Server

320 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

You can open a console to the VM on the Secondary ESXi server and verify that this VM has the same
data as the VM on the Primary ESXi server at the point in time of the snapshot.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 320
Lab: TimeFinder SnapVX Operations

This lab covers


• Creating snapshots
• Accessing snapshot data from a secondary host
• Restoring to source from snapshots
• Restoring to source from modified target

321 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers creating and linking TimeFinder SnapVX snapshots to target devices. It also covers
restoring snapshot data to the source device as well as restoring modified target data back to the source
device.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 321
Lab: TimeFinder SnapVX Replication of VMFS Datastore

This lab covers


• Identifying and correlating source and target
devices with devices accessible by primary and
secondary ESXi servers
• Creating TimeFinder SnapVX snapshots using
Unisphere for PowerMax
• Accessing the linked target from the secondary
ESXi server and powering on the Virtual Machine
on the secondary ESXi server
• Expanding the primary and secondary volumes

322 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers performing TimeFinder SnapVX replication of a VMFS Datastore using Unisphere for
PowerMax and VMware vSphere client.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 322
Module Summary

Key points covered in this module:

• TimeFinder SnapVX concepts

• Replicating a VMFS datastore using TimeFinder SnapVX

323 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered TimeFinder SnapVX local replication technology on PowerMax arrays. Concepts,
terminology, and operational details of creating snapshots and presenting them to target hosts were
discussed. Use of TimeFinder SnapVX for replication in a virtualized environment was also presented.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: TimeFinder SnapVX Operations 323
Module: SRDF/Synchronous Operations

Upon completion of this module, you should be able to:

• Create Dynamic SRDF Groups and Dynamic SRDF Pairs

• Perform SRDF/S operations using SYMCLI and Unisphere for PowerMax

• Perform Online Device Expansion using Unisphere for PowerMax

324 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on SRDF operations in synchronous mode. Use of SYMCLI and Unisphere for
PowerMax to perform SRDF operations and online device expansion are presented in detail.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 324
Lesson: SRDF Initial Setup Operations
This lesson covers the following topics:

• Listing the SRDF environment

• Creating Dynamic SRDF Groups

• Creating Dynamic SRDF Pairs

• Changing SRDF mode and suspending and resuming SRDF links

325 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers initial SRDF setup operations. Creating dynamic SRDF groups and SRDF pairs using
SYMCLI is presented in detail. Basic SRDF operations are also discussed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 325
Listing Environment

C:\>symcfg list

S Y M M E T R I X

Mcode Cache Num Phys Num Symm

SymmID Attachment Model Version Size (MB) Devices Devices

000197601888 Local PowerMax_8000 5978 397312 22 331

000197902249 Remote PowerMax_2000 5978 408576 0 277

C:\>

326 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In our example, a PowerMax 8000 array—SID:1888—and a PowerMax 2000 array—SID:2249—have
been configured with RF emulation. The remote adapters of each array are zoned to access the remote
adapters of the other array.

The commands shown are executed from a host attached to SID:1888, the Local PowerMax 8000 array.
The Num Phys Devices column indicates that the host from which the command was executed has
physical access to 22 devices on SID:1888. The Num Symm Devices column indicates the total number of
devices that have been configured on the storage arrays.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 326
Listing Remote Adapters
C:\>symcfg -sid 1888 list -ra all

Symmetrix ID: 000197601888 (Local)

S Y M M E T R I X R D F D I R E C T O R S

Remote Local Remote Status


Ident Port SymmID RA Grp RA Grp Dir Port
----- ---- ------------ -------- -------- --------------

RF-1F 11 000197902249 101 (64) 101 (64) Online Online


RF-2F 11 000197902249 101 (64) 101 (64) Online Online

C:\>symcfg -sid 2249 list -ra all

Symmetrix ID: 000197902249 (Local)

S Y M M E T R I X R D F D I R E C T O R S

Remote Local Remote Status


Ident Port SymmID RA Grp RA Grp Dir Port
----- ---- ------------ -------- -------- --------------

RF-1F 27 000197601888 101 (64) 101 (64) Online Online


RF-2F 27 000197601888 101 (64) 101 (64) Online Online
327 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here are the outputs from the symcfg list -ra all command on both storage arrays. The
listing shows the RDF directors, their ports, and the SRDF group numbers configured on them. SID:1888
uses port 11 on RF-1F and RF-2F, and SID:2249 uses port 27 on RF-1F and RF-2F. Both directors and
their ports are online.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 327
Listing RDF Groups

C:\>symcfg -sid 1888 list -rdfg all

Symmetrix ID : 000197601888

S Y M M E T R I X R D F G R O U P S

Local Remote Group RDFA Info


------------ --------------------- --------------------------- ---------------
LL Flags Dir Flags Cycle
RA-Grp sec RA-Grp SymmID ST Name YLPD CHT Cfg CSRM time Pri
------------ --------------------- --------------------------- ----- ----- ---
101 (64) 10 101 (64)000197902249 OD SRDF_Sync1 X... ..X F-S -IS- 15 33

328 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here is the symcfg -sid 1888 list -rdfg all command output listing the configured SRDF
Groups and their attributes.

Group Flags :

Prevent Auto (L)ink Recovery : X = Enabled, . = Disabled

Prevent RAs Online Upon (P)ower On: X = Enabled, . = Disabled

Link (D)omino : X = Enabled, . = Disabled

(S)TAR/SQAR mode : N = Normal, R = Recovery, . = OFF

S = SQAR Normal, Q = SQAR Recovery

RDF Software (C)ompression : X = Enabled, . = Disabled, - = N/A

RDF (H)ardware Compression : X = Enabled, . = Disabled, - = N/A

RDF Single Round (T)rip : X = Enabled, . = Disabled, - = N/A

RDF (M)etro : X = Configured, . = Not Configured

RDFA Flags :

(C)onsistency : X = Enabled, . = Disabled, - = N/A

(S)tatus : A = Active, I = Inactive, - = N/A

(R)DFA Mode : S = Single-session, M = MSC, - = N/A

(M)sc Cleanup : C = MSC Cleanup required, - = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 328
Display SRDF Connectivity – symsan Command

C:\>symsan -sid 1888 list -sanrdf -dir all

Symmetrix ID: 000197601888

Flags Remote
------ ----------- ------------------------------------
Dir Prt Lnk
Dir:P CS S S Symmetrix ID Dir:P WWN
------ --- --- --- ------------ ------ ----------------
01F:11 SO O C 000197902249 01F:27 500009735823340B
01F:11 SO O C 000197902249 02F:27 500009735823344B
02F:11 SO O C 000197902249 01F:27 500009735823340B
02F:11 SO O C 000197902249 02F:27 500009735823344B

329 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symsan command can be used to discover and display SRDF connectivity between the arrays. The
symsan command helps in situations where SRDF groups have not yet been created between the storage
array pairs. The symsan command is particularly useful to determine the local and remote RDF directors,
as well as the full serial number of the remote array. The full serial number of the remote array is required
to create the first Dynamic SRDF group. Subsequent SRDF groups can be created by just specifying the
last few digits of the remote array. The output verifies that the RF on SID:1888 can indeed access the RF
on SID:2249 over the SAN.

Legend:

Director:

(C)onfig : S = Fibre-Switched, H = Fibre-Hub

G = GIGE, - = N/A

(S)tatus : O = Online, F = Offline, D = Dead, - = N/A

Port:

(S)tatus : O = Online, F = Offline, - = N/A

Link:

(S)tatus : C = Connected, P = ConnectInProg

D = Disconnected, I = Incomplete, - = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 329
Creating Dynamic SRDF Group

C:\>symrdf addgrp -label SRDF_Sync1 -sid 1888 -remote_sid 2249 -rdfg 101 -remote_rdfg 101
-dir 1F:11,2F:11 -remote_dir 1F:27,2F:27

Execute a Dynamic RDF Addgrp operation for group


'SRDF_Sync1' on Symm: 000197601888 (y/[n]) ? y

Successfully Added Dynamic RDF Group 'SRDF_Sync1' for Symm: 000197601888

C:\>

330 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symrdf addgrp command creates an empty Dynamic SRDF group on the source and the target
arrays and logically links them. The directors and the respective ports for the arrays are specified in the
command. The physical links and communication between the two arrays must exist for this command to
succeed.

Note that if this was the first SRDF group between these two arrays, the full 12-digit serial numbers of the
two arrays would need to be specified. Otherwise, an error message would be displayed.

The SRDF group number in the command (-rdfg and -remote_rdfg) is in decimal. In the PowerMax
family array, it is converted to hexadecimal. The decimal group numbers start at 01 but the hexadecimal
group numbers start at 00. Hence the hexadecimal group numbers will be off by one.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 330
Listing Configured SRDF Groups

C:\>symcfg -sid 1888 list -rdfg all

Symmetrix ID : 000197601888

S Y M M E T R I X R D F G R O U P S

Local Remote Group RDFA Info


------------ --------------------- --------------------------- ---------------
LL Flags Dir Flags Cycle
RA-Grp sec RA-Grp SymmID ST Name YLPD CHT Cfg CSRM time Pri
------------ --------------------- --------------------------- ----- ----- ---
101(64) 10 101(64) 000197902249 OD SRDF_Sync1 X... ..X. F-S -IS- 15 33

331 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The command shown gives detailed information on the currently configured SRDF Groups. The SRDF
Group we have just created is listed.

We have created an SRDF Group with the label SRDF_Sync1 and the SRDF Group number 101 in
decimal. Shown in parentheses is the hexadecimal value 64. It is convenient if the SRDF Group
numbers on the local and the remote arrays are identical; however, this is not a requirement.

Legend:

Group (S)tatus : O = Online, F = Offline

Group (T)ype : S = Static, D = Dynamic, W = Witness

Director (C)onfig : F-S = Fibre-Switched, F-H = Fibre-Hub

G = GIGE, E = ESCON, T = T3, - = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 331
Creating SRDF Device Pairs
C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt createpair -type R1 -establish -nop

An RDF 'Create Pair' operation execution is in progress for device


file 'rdf_pairs.txt'. Please wait...

Create RDF Pair in (1888,101)....................................Started.


Create RDF Pair in (1888,101)....................................Done.
Mark target device(s) in (1888,101) for full copy from source....Started.
Devices: 00BD-00C6 in (1888,101).................................Marked.
Mark target device(s) in (1888,101) for full copy from source....Done.
Merge track tables between source and target in (1888,101).......Started.
Devices: 00BD-00C6 in (1888,101).................................Merged.
Merge track tables between source and target in (1888,101).......Done.
Resume RDF link(s) for device(s) in (1888,101)...................Started.
Resume RDF link(s) for device(s) in (1888,101)...................Done.

The RDF 'Create Pair' operation successfully executed for device


file 'rdf_pairs.txt'.

332 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symrdf createpair command takes the dynamic capable device pairs listed in the text file—
rdf_pairs.txt—and makes them R1-R2 pairs. Devices are created as Dynamic capable by default. By
specifying -establish, the newly created R2 devices are synchronized with the data from the newly
created R1 devices. In this example, the file contains the following entries:

rdf_pairs.txt

BD 60

BE 61

BF 62

C0 63

C1 64

C2 65

C3 66

C4 67

C5 68

C6 69

The command has been executed from the host attached to SID:1888. The first column in the file lists the
devices on the PowerMax array on which the command is executed. The second column lists the
corresponding devices on the remote PowerMax—SID:2249. Specifying -type R1 makes the devices in
the first column R1s, and the devices in the second column become their corresponding R2s. The mode of
operation for newly created SRDF pairs is set to Adaptive Copy Disk Mode by default. Adaptive Copy Disk
Mode is discussed later in this module.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 332
Confirming Device Pairs
C:\>symrdf query -sid 1888 -rdfg 101 -file rdf_pairs.txt

Symmetrix ID : 000197601888 (Microcode Version: 5978)


Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)
RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 D..E Synchronized
N/A 000BE RW 0 0 RW 00061 WD 0 0 D..E Synchronized
Total ------- ------- ------- -------
Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0

333 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

As noted earlier, the SRDF mode is set to Adaptive Copy Disk Mode by default. The establish operation
synchronizes data from the new R1 device to the new R2 device. This command has been executed from
the host attached to SID:1888. The R1 devices are created on SID:1888 and their corresponding R2
devices are created on SID:2249. R1 device 0BD is paired with R2 device 060. R1 device 0BE is paired
with R2 device 061.

Legend for FLAGS:

(M)ode of Operation : A = Async, S = Sync, E = Semi-sync, D = Adaptive Copy


Disk Mode

: W = Adaptive Copy WP Mode, M = Mixed, T = Active

(C)onsistency State : X = Enabled, . = Disabled, M = Mixed, - = N/A

(E)xempt : X = Enabled, . = Disabled, M = Mixed, - = N/A

R1/R2 Device (S)ize : E = R1 EQ R2, 1 = R1 GT R2, 2 = R2 GT R1, - = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 333
Deleting Device Pairing

• Removes the pairing information from the array

• SRDF links must be suspended before issuing the symrdf deletepair command

• Canceling dynamic SRDF pairings changes the type of the device group
from R1 or R2 to Regular

• Devices in the device group are changed from SRDF devices to SRDF-
capable standard devices, and the SYMAPI database is updated

334 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symrdf deletepair command cancels the SRDF pairs listed in the specified device file. Before
deletepair can be invoked, the pairs must be suspended. The SRDF Group is not deleted by this
operation. If the SRDF Group is to be deleted, then the symrdf removegrp command is used after the
deletepair operation.

Example:

c:\symrdf suspend -sid 1888 -file rdf_pairs.txt -rdfg 101

c:\symrdf deletepair -sid 1888 -file rdf_pairs.txt -rdfg 101
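
If the now-empty SRDF Group itself is no longer needed, it can be removed as well:

c:\symrdf removegrp -sid 1888 -rdfg 101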

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 334
Identify SRDF Devices Accessible to Host (1 of 2)
C:\>symrdf list pd

Symmetrix ID: 000197601888

Local Device View


-----------------------------------------------------------------------------
STATUS FLAGS RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MTES Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- -------- -------- --- ---- -------------

000BD 00060 R1:101 RW RW RW D1.E 0 0 RW WD Synchronized


000BE 00061 R1:101 RW RW RW D1.E 0 0 RW WD Synchronized

Total -------- --------


Track(s) 0 0
MB(s) 0.0 0.0

335 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The SYMCLI command symrdf list pd gives a list of all SRDF devices accessible to the host. The
command has been executed on the host attached to SID:1888. In this example, the host has access to 2
SRDF devices—0BD and 0BE—from SID:1888. As can be seen under the RDF Type:G column, the
devices are type R1 and they have been created in SRDF Group 101. The mode of SRDF operation for
these pairs is Adaptive Copy Disk Mode, and currently all the R1-R2 pairs are in a Synchronized state.
The local R1 devices—the Sym Dev column of the output—0BD and 0BE are paired with corresponding
R2 devices—the Sym RDev column of the output—060 and 061.

Legend for FLAGS:

(M)ode of Operation : A = Async, S = Sync, E = Semi-sync, D = Adaptive Copy


Disk Mode

: W = Adaptive Copy WP Mode, M = Mixed, T = Active

Mirror (T)ype : 1 = R1, 2 = R2

(E)xempt : X = Enabled, . = Disabled, M = Mixed, - = N/A

R1/R2 Device (S)ize : E = R1 EQ R2, 1 = R1 GT R2, 2 = R2 GT R1, - = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 335
Identify SRDF Devices Accessible to Host (2 of 2)
C:\>symdev list pd

Symmetrix ID: 000197601888

Device Name Dir Device


---------------------------- ------- -------------------------------------
Cap
Sym Physical SA :P Config Attribute Sts (MB)
---------------------------- -------------------------------------
00043 \\.\PHYSICALDRIVE19 01E:000 TDEV N/Grp'd RW 15360
00044 \\.\PHYSICALDRIVE20 01E:000 TDEV N/Grp'd RW 15360
000A3 \\.\PHYSICALDRIVE5 01E:000 TDEV N/Grp'd RW 6
000A5 \\.\PHYSICALDRIVE6 01E:000 TDEV N/Grp'd RW 6
000BD \\.\PHYSICALDRIVE7 01E:000 RDF1+TDEV N/Grp'd RW 10241
000BE \\.\PHYSICALDRIVE8 01E:000 RDF1+TDEV N/Grp'd RW 10241
000BF \\.\PHYSICALDRIVE9 01E:000 TDEV N/Grp'd RW 10241
-----Output Truncated-----

336 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symdev list pd command gives the list of all the devices that the host can access on the array. This
command is used to correlate the host physical device name with the array device number. We see that
the host addresses the R1 devices as PHYSICALDRIVE7 and PHYSICALDRIVE8. The host physical
device names are used to format the devices, create partitions, and create and mount file systems.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 336
Manage SRDF Operations Using Storage Groups

symrdf command can be executed on storage groups using the –sg option

symrdf –sg <storagegroup> -sid <SymmID> -rdfg <grpNum> <Action>

Create SRDF pairs using storage groups

For example:

symrdf createpair -sid <SymmID> -sg <storagegroup> -rdfg <grpNum> -type


<r1|r2> -remote_sg <storagegroup> -establish

337 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Starting with Solutions Enabler 8.0.2 and the HYPERMAX OS Q1 2015 SR, SRDF operations can be
managed using Storage Groups. Storage Groups (SGs) are collections of devices on the array that are
used by an application, a server, or a collection of servers. Refer to the Dell EMC Solutions Enabler
Array Controls and Management CLI User Guide for more information on Storage Groups.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations: 337
Create Storage Group and Standard Devices
C:\>symsg -sid 1888 create RDF1_SG
C:\>symsg -sid 2249 create RDF2_SG
C:\>symdev -sid 1888 create -tdev -cap 2 -captype gb -N 2 -sg RDF1_SG -nop
C:\>symdev -sid 2249 create -tdev -cap 2 -captype gb -N 2 -sg RDF2_SG -nop
C:\>symrdf -sid 1888 createpair -sg RDF1_SG -rdfg 20 -type r1 -establish -remote_sg RDF2_SG -nop

338 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symsg command creates the Storage Groups, and the symdev create command with the -sg option
creates the thin devices and adds them to the Storage Groups. The symrdf createpair command pairs
the devices on both arrays to form the R1/R2 relationships.
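
Once createpair completes, the pair state can be checked directly against the Storage Group. A sketch
using the symrdf verify action:

C:\>symrdf -sid 1888 -sg RDF1_SG -rdfg 20 verify -synchronized

The command reports whether all device pairs in the Storage Group have reached the Synchronized
state.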

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 338
Display Storage Group and Devices Details
C:\>symsg -sid 1888 show RDF1_SG
Name: RDF1_SG

Symmetrix ID : 000197601888
Last updated at : Mon Nov 08 03:43:15 2021
Masking Views : No
FAST Managed : No
….(truncated)…

Devices (2):
{
------------------------------------------------------------------------------------
Sym Cap
Dev PdevName Device Config Attr Sts (MB)
------------------------------------------------------------------------------------
000BD N/A RDF1+ TDEV RW 2049
000BE N/A RDF1+ TDEV RW 2049
}
339 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symsg show command displays detailed group information for any specific storage group.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 339
symrdf Command Syntax

symrdf -sid <Symmetrix ID> -sg <storage_group_name> -rdfg <rdfgroupnumber> <Action>

Actions

• Establish
• Split
• Suspend
• Resume
• Failover
• Failback
• Update
• Restore
• Set Mode

340 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Users can perform a number of SRDF operations using host-based SYMCLI commands. Major SRDF
operations or actions include: suspend, restore, set mode, failover, resume, update, failback, split, and
establish.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 340
SRDF Modes
Synchronous—up to 200 kilometers

• Write acknowledged after target device has received and checked the data
• R1 and R2 devices always contain identical data

Asynchronous—unlimited distance
• Writes from production host are acknowledged immediately by local array
• Maintains a dependent-write consistent copy between the R1 and R2 devices

Adaptive Copy Disk—unlimited distance


• Designed to transfer large amounts of data without loss of performance
• New data accumulates on the R1, marked as invalid tracks, no guarantee of R2
data consistency if not synchronized

341 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In SRDF/Synchronous mode, the array responds to the host that issued a write operation to the source
device only after the array containing the target device acknowledges that it has received and checked the
data. Synchronous mode ensures that the source R1 and target R2 devices contain identical data.

SRDF/Asynchronous (SRDF/A) is a long-distance disaster restart solution with fast application response
times. SRDF/A maintains a dependent-write consistent copy between the R1 and R2 devices across any
distance with no impact to the application.

Adaptive copy disk mode is designed to transfer large amounts of data without loss of performance.
Adaptive copy mode allows the R1 and R2 devices to be more than one I/O out of synchronization. Unlike
the asynchronous mode, adaptive copy mode does not guarantee a dependent-write consistent copy of
data on R2 devices. Because the array cannot fully guard against data loss should a failure occur, Dell
EMC recommends:

1. Use adaptive copy disk mode to transfer the bulk of your data to target devices.

2. Then switch to synchronous mode to ensure full data protection or asynchronous mode to ensure full
data consistency.

The amount of data out of synchronization between the R1 and the R2 devices at any given time is
determined by the maximum skew value.

In adaptive copy disk mode (acp_disk), new data accumulates on the R1 until it can be transferred to the
R2.
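
The recommendation above translates into a simple command sequence. A sketch using the Storage
Group from this module: switch to adaptive copy disk mode for the bulk transfer, then back to
synchronous mode once the pairs are close to synchronized:

C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG set mode acp_disk -nop
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG set mode sync -nop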

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 341
Mixed SRDF Modes on Remote Adapters
C:\>symqos -sid 1888 -ra -dir 1F set io -sync 50 -async 40 -copy 10

C:\>symqos -sid 1888 list -ra -io

RA IO State : Enabled

System Defaults:

Synchronous IOs (%) : 70
Asynchronous IOs (%) : 20
Copy IOs (%) : 10

RDF Directors:

Flg IO Percent
Ident R Sync Async Copy
------ --- ---- ----- ----
RF-1F X 50 40 10
RF-2F . 70 20 10

342 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

RA CPU resource distribution for Synchronous, Asynchronous, and Copy modes can be set either system
wide—which affects all RAs—or on a subset of RAs. The resource distribution can be enabled or disabled.
The system defaults, as seen here, are 70/20/10 for Sync/Async/Copy modes.

As shown here, for purposes of illustration, the distribution can be changed for one of the directors if
necessary. In this case, RF-1F has been changed to 50/40/10 for Sync/Async/Copy modes.

Legend for Flg:

(R)A IO Set: X = Set, . = Default, - = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 342
Changing SRDF Mode of Operation
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG set mode sync -nop
C:\>symrdf -sid 1888 -sg RDF1_SG query
-----Output Truncated-----

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 S..E Synchronized
N/A 000BE RW 0 0 RW 00061 WD 0 0 S..E Synchronized

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0

343 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The symrdf set mode command will change the SRDF operation mode. In this example, the mode has
been changed to Synchronous for these two R1-R2 pairs. This is indicated by the S in the M column of the
output. In normal operations of SRDF, the R1 device presents a Read Write (RW) status to its host and
the corresponding R2 device presents a Write Disabled (WD) status to its host. Data written to the R1
device is sent over the links to the R2 storage system. The meaning of the R1/R2 Inv(alid) Tracks are
discussed throughout this module.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 343
Suspending SRDF Links
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG suspend -nop
C:\>symrdf -sid 1888 -sg RDF1_SG query
-----Output Truncated-----

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 7278 RW 0006C WD 0 0 S..E Suspended
N/A 000BE RW 0 7232 RW 0006D WD 0 0 S..E Suspended

Total ------- ------- ------- -------


Track(s) 0 14510 0 0
MB(s) 0.0 1813.8 0.0 0.0

344 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Suspend is a singular operation. Data transfer from the source devices to the target devices is stopped.
The links for these devices are logically set to Not Ready (NR). This operation affects only the targeted
device pairs. SRDF device pairs in other device groups and other SRDF Groups are not affected, even if
they share the same RDF directors. Physical links and the RA communication paths are still available.
New writes to the source devices accumulate as invalid tracks owed to the R2, shown in the R2 Inv Tracks
column. The R1s continue to be Read Write enabled and the R2s continue to be Write Disabled.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 344
Resuming SRDF Links
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG resume -nop
C:\>symrdf -sid 1888 -sg RDF1_SG query
-----Output Truncated-----

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 0006C WD 0 0 S..E Synchronized
N/A 000BE RW 0 0 RW 0006D WD 0 0 S..E Synchronized

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0

345 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Resume is a singular operation. To invoke this operation, the RDF pair(s) must already be in the
Suspended state. Data transfer from R1 to R2 is resumed. The pair state remains SyncInProg until all
accumulated invalid tracks for the pair have been transferred. After the transfer has been completed, the
pair state changes to Synchronized. Invalid tracks are transferred to the R2 in any order, so write
serialization is not maintained during the resynchronization. The link is set to Read Write. The R1s
continue to be Read Write enabled and the R2s continue to be Write Disabled.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 345
Lesson: SRDF Disaster Recovery Operations
This lesson covers the following topics:

• SRDF Failover

• SRDF Update

• SRDF Failback

346 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers SRDF Disaster Recovery operations. Device and link states under different conditions
are presented in detail. Host considerations when performing DR operations are also discussed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations: 346
SRDF Disaster Recovery Operations
Failover:
• Makes a copy of the data on target devices—R2s—available to the host accessing these devices on
the target array
• Invoked after a disaster—host, storage array, or site failure
• Can be used for maintenance operations on the source site: Provides data availability from the target
devices, during host, storage array, or site maintenance

Update:
• Begins transfer of accumulated invalid tracks from the R2s to the R1s, while production work
continues on the R2s

Failback:
• Resumes operation back on the primary host accessing the source devices—R1s. All changes that
are made to the R2s when failed over are transferred back to the source devices
• Primary host can access the R1 devices when the command completes without waiting for the data
transfer to complete

347 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The SRDF disaster recovery operations are:

• Failover from the source side to the target side, switching data processing to the target side

• Update the source side after a failover while the target side is still used for applications

• Failback from the target side to the source side by switching data processing to the source side

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations: 347
SRDF Failover

C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG failover

Execute an RDF 'Failover' operation for storage group 'RDF1_SG' (y/[n]) ? y

An RDF 'Failover' operation execution is
in progress for storage group 'RDF1_SG'. Please wait...

Write Disable device(s) in (1888,20) on SA at source (R1)..............Done.
Suspend RDF link(s) for device(s) in (1888,20).........................Done.
Read/Write Enable device(s) in (1888,20) on RA at target (R2)..........Done.

The RDF 'Failover' operation successfully executed for storage group 'RDF1_SG'.

348 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The failover operation can be executed from a host on the source side or the target side. This is true for
all symrdf commands. To perform operations from the target side, a device group of type RDF2 containing
the R2 devices can be created, or the storage group on the target array can be used with the -sg option.
In the event of an actual disaster, this is helpful as there would be no way of communicating with the
source array. The operation assumes there is a disaster situation and makes all efforts to enable data
access on the target array:
• Will proceed if possible
• Will give a message for any potential data integrity issue
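
The following is a minimal sketch of driving the same failover from a host attached to the target array,
assuming the R2 devices are in the Storage Group RDF2_SG created earlier and that the SRDF group
number is 20 on both arrays:

C:\>symrdf -sid 2249 -sg RDF2_SG -rdfg 20 failover -nop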

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 348
Query After Failover
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG query

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD WD 0 0 RW 00060 RW 7287 0 S..E Failed Over
N/A 000BE WD 0 0 RW 00061 RW 7333 0 S..E Failed Over

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0

349 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

As can be seen in the output, the R1 devices are Write Disabled, the SRDF links between the device pairs
are logically suspended, and the R2 devices are Read Write enabled. The host accessing the R2 devices
can now resume processing the application.

While in a true disaster situation, when the source host/storage array/site may be unreachable, it is not
possible to perform a graceful shutdown on the source side prior to a failover. However, if the failover is
due to testing or for a maintenance operation, a graceful shutdown is recommended. A failover leads to a
Write Disabled state of the R1 devices. If a device suddenly becomes Write Disabled from a Read Write
state, the reaction of the host can be unpredictable if the device is in use. Hence the recommendation is to
stop applications, unmount the filesystem, or unassign the drive letter prior to performing a failover for
maintenance operations.

For a clean, consistent, coherent point-in-time copy which can be used with minimal recovery on the target
side, some or all of the following steps may have to be taken on the source side:
• Stop all applications
• Unmount the file system—unmount or unassign the drive letter to flush the filesystem buffers from
the host memory down to the storage array
• Deactivate the Volume Group

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 349
SRDF Update

C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG update

An RDF 'Update R1' operation execution is
in progress for storage group 'RDF1_SG'. Please wait...

Suspend RDF link(s) for device(s) in (1888,20).........................Done.
Merge device track tables between source and target in (1888,20)......Started.
Devices: 00BD-00BE in (1888,20)........................................Merged.
Merge device track tables between source and target in (1888,20)......Done.
Resume RDF link(s) for device(s) in (1888,20)..........................Started.
Resume RDF link(s) for device(s) in (1888,20)..........................Done.

The RDF 'Update R1' operation successfully initiated for storage group 'RDF1_SG'.

350 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

While the target R2 device is still operational—Read Write Enabled to its local host—an incremental data
copy from the target R2 device to the source R1 device can be initiated. This is done to update the R1
mirror with changed tracks from the target R2 device. After an extended outage on the R1 side, a
substantial amount of invalid tracks could have accumulated on the R2. If a failback is now performed,
production starts from the R1. New writes to the R1 have to be transferred to the R2 synchronously. Any
track requested on the R1 that has not yet been transferred from the R2 has to be read from across the
links. This could lead to performance degradation on the R1 devices. The update operation helps to
minimize this impact.

When performing an update, the R1 devices are still Write Disabled. The links become Read Write
enabled because of the Updated state. The target devices remain Read Write during the update process.

The update operation can be used with the –until flag, which represents a skew value assigned to the
update process. For example, we can choose to update until the accumulated invalid tracks are down to
30000. Then a failback operation can be executed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 350
Query After Update
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG query

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD WD 5009 0 RW 00060 RW 0 0 S..E R1 Updated
N/A 000BE WD 4990 0 RW 00061 RW 0 0 S..E R1 Updated

Total ------- ------- ------- -------


Track(s) 9999 0 0 0
MB(s) 1249.9 0.0 0.0 0.0

351 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

When the update operation is performed after a failover, the links become Read Write enabled, but the
Source devices are still Write Disabled. Production work continues on the R2 devices.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 351
SRDF Failback
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG failback

An RDF 'Failback' operation execution is
in progress for storage group 'RDF1_SG'. Please wait...

Write Disable device(s) in (1888,20) on RA at target (R2)..............Done.
Suspend RDF link(s) for device(s) in (1888,20).........................Done.
Merge track tables between source and target in (1888,20).............Started.
Devices: 00BD-00BE in (1888,20)........................................Merged.
Merge track tables between source and target in (1888,20).............Done.
Resume RDF link(s) for device(s) in (1888,20)..........................Started.
Resume RDF link(s) for device(s) in (1888,20)..........................Done.
Read/Write Enable device(s) in (1888,20) on SA at source (R1)..........Done.

The RDF 'Failback' operation successfully initiated for storage group 'RDF1_SG'.

352 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

When the source site has been restored, or if maintenance is completed, you can return production to the
source site. The symrdf failback command sets the R2s to Write Disabled, the link to Read Write,
and the R1s to Read Write enabled. Merging of the device track tables between the source and target is
done. The SRDF links are resumed. The accumulated invalid tracks are transferred to the source devices
from the target devices. So all changes made to the data when in a failed over state will be preserved. As
noted earlier, the primary host can access the R1 devices and start production work as soon as the
command completes. If a track that has not yet been sent over from the R2 is required on the R1, SRDF
can preferentially read that track from across the links.

As the R2s will be set to Write Disabled, it is important to shut down the applications using the R2 devices
and perform the appropriate host-dependent steps to unmount file systems and deactivate volume groups. If
applications are still actively accessing the R2s when they are set to Write Disabled, the behavior of the
hosts accessing the R2s is unpredictable. In a true disaster, the failover process may not give an
opportunity for a graceful shutdown, but a failback event should always be planned and done gracefully.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 352
Query After Failback
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG query

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 7347 0 RW 00060 WD 0 0 S..E SyncInProg
N/A 000BE RW 7324 0 RW 00061 WD 0 0 S..E SyncInProg

Total ------- ------- ------- -------


Track(s) 14671 0 0 0
MB(s) 1833.9 0.0 0.0 0.0

353 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

As can be seen in the output, the R1s are set to Read Write, the R2s are set to Write Disabled, and the
links are set to Read Write. The pair states go into SyncInProg. The accumulated invalid tracks have been
transferred from the target array to the source array. Once all accumulated invalid tracks have been
transferred, the pair state will go to Synchronized. Because a failback operation sets the R2 devices to
Write Disabled, applications accessing the R2 devices must be stopped before the failback operation.
When a host suddenly loses RW access to a device while still actively accessing it, the results are
unpredictable.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 353
Lesson: SRDF Decision Support Operations
This lesson covers the following topics:

• SRDF Establish

• SRDF Restore

• Concurrent SRDF

354 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers SRDF Decision Support operations. Considerations for performing these operations
are presented in detail. Concurrent SRDF where one R1 device is simultaneously paired with two R2
devices is also discussed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations: 354
SRDF Decision Support Operations
Split
• Enables accessing both the R1 and R2 devices by their respective hosts
• Suspends the links between the R1-R2 pairs
• Read-Write enables the R2 device

Establish
• Resumes normal SRDF mirroring—source RW and target WD, link RW
• Save source R1 data—changes made to the R1 during split state are propagated to the R2
and changes made to the R2 are discarded

Restore
• Resumes normal SRDF mirroring—source RW and target WD, link RW
• Save target R2 data—changes made to the R2 during split state are propagated to the R1
and changes made to the R1 are discarded

355 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The decision support operations for SRDF devices are:

• Split an SRDF pair, which stops remote mirroring for the specified device pairs.

• Establish an SRDF pair by initiating a data copy from the source side to target side. The operation can
be full or incremental.

• Restore remote mirroring, which initiates a data copy from the target side to the source side. The
operation can be full or incremental.

As noted in the title, these are decision support operations and are not disaster recovery/business
continuance operations. In these situations, both the source and target sites are healthy and available.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations: 355
SRDF Split
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG split -nop

C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG query

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 7227 NR 00060 RW 6256 0 S..E Split
N/A 000BE RW 0 7252 NR 00061 RW 6237 0 S..E Split

Total ------- ------- ------- -------


Track(s) 0 14479 12493 0
MB(s) 0.0 1809.9 1561.6 0.0
356 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The split command suspends the links between source—R1—and target—R2—volumes. The source
devices continue to be Read Write enabled. The target devices are set to Read Write enabled. Writes to
the R1 devices accumulate as R2 Inv(alid) Tracks—these are the tracks now owed to the R2 devices.
Writes to the R2 devices accumulate as R1 Inv(alid) Tracks—these are the tracks owed to the R1 devices.
The RDF Pair state is displayed as Split.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 356
SRDF Establish
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG establish

Execute an RDF 'Incremental Establish' operation for storage group 'RDF1_SG' (y/[n]) ? y

An RDF 'Incremental Establish' operation execution is
in progress for storage group 'RDF1_SG'. Please wait...

Write Disable device(s) in (1888,20) on RA at target (R2)...............Done.
Suspend RDF link(s) for device(s) in (1888,20)..........................Done.
Resume RDF link(s) for device(s) in (1888,20)...........................Started.
Merge track tables between source and target in (1888,20)...............Started.
Devices: 00BD-00BE in (1888,20).........................................Merged.
Merge track tables between source and target in (1888,20)...............Done.
Resume RDF link(s) for device(s) in (1888,20)...........................Done.

The RDF 'Incremental Establish' operation successfully initiated for storage group
'RDF1_SG'.

357 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Establish operation will resume SRDF remote mirroring. Changes made to the source while in a split
state are transferred to the target. Changes made to the target are overwritten. The R2 devices are set to
Write Disabled. Hence applications should stop accessing the R2 devices prior to performing an establish
operation. The links are resumed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations: 357
Query After SRDF Establish
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG query

Symmetrix ID : 000197601888 (Microcode Version: 5978)


Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)
RDF (RA) Group Number : 20 (13)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 2283 RW 00060 WD 0 7710 S..E SyncInProg
N/A 000BE RW 0 2397 RW 00061 WD 0 7879 S..E SyncInProg

Total ------- ------- ------- -------


Track(s) 0 4680 0 15589
MB(s) 0.0 585.0 0.0 1948.6
358 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

As can be seen in the query output, the devices are reverted to their normal states—R1 RW;
R2 WD—and the links are resumed—RW. Changes made to the R2 devices during the split state are
discarded. Changes made to the R1 devices during the split state are propagated to the R2 devices.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 358
SRDF Restore
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG restore

An RDF 'Incremental Restore' operation execution is
in progress for storage group 'RDF1_SG'. Please wait...

Write Disable device(s) in (1888,20) on SA at source (R1)...............Done.
Write Disable device(s) in (1888,20) on RA at target (R2)...............Done.
Suspend RDF link(s) for device(s) in (1888,20)..........................Done.
Merge track tables between source and target in (1888,20)...............Started.
Devices: 00BD-00C6 in (1888,20).........................................Merged.
Merge track tables between source and target in (1888,20)...............Done.
Resume RDF link(s) for device(s) in (1888,20)...........................Started.
Resume RDF link(s) for device(s) in (1888,20)...........................Done.
Read/Write Enable device(s) in (1888,20) on SA at source (R1)...........Done.

The RDF 'Incremental Restore' operation successfully initiated for
storage group 'RDF1_SG'.

359 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Restore operation resumes SRDF remote mirroring. Changes made to the target while in a split state
are transferred to the source. Changes made to the source are overwritten. The R2 devices are set to
Write Disabled. Hence, applications should stop accessing the R2 devices prior to performing a restore
operation. The links are resumed. As data on the R1 devices changes without the knowledge of the host,
access to the R1 devices should also be stopped prior to performing a restore operation. As soon as the
command completes, the R1 devices can be accessed again without waiting for synchronization to
complete. Any required track on the R1 that has not yet been received from the R2 will be read across the
links.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations: 359
Query After SRDF Restore
C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG query

Symmetrix ID : 000197601888 (Microcode Version: 5978)


Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)
RDF (RA) Group Number : 20 (13)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 8205 0 RW 00060 WD 1938 0 S..E SyncInProg
N/A 000BE RW 8205 0 RW 00061 WD 1871 0 S..E SyncInProg

Total ------- ------- ------- -------


Track(s) 16410 0 3809 0
MB(s) 2051.3 0.0 476.1 0.0
360 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

As can be seen in the query output, the devices are reverted to their normal states—R1 RW;
R2 WD—and the links are resumed to RW. Changes made to the R1 devices during the split state are
discarded. Changes made to the R2 devices during the split state are propagated to the R1 devices.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 360
R1/R2 Personality Swap
Changes the personality of the SRDF devices
• Current R1 becomes new R2. Current R2 becomes new R1
Data flow is from the new R1—old R2—to the new R2—old R1
Can be performed in one of two ways:
• symrdf swap
• symrdf failover –establish

Useful for:
• Disaster Recovery drills
• Datacenter relocation
• Maintenance operations on local site hosts while continuing production work with Disaster
Recovery protection
• Selective load balancing for certain applications by swapping their device personalities and
moving their workload to the other array

361 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

An R1/R2 personality swap (or R1/R2 swap) refers to swapping the SRDF device designations of a
specified device pair. The source R1 devices become target R2 devices, and the target R2 devices
become source R1 devices.

Sample scenarios for R1/R2 Swap:

Symmetrix Load Balancing:

In our rapidly changing computing environments, it is often necessary to redeploy applications and storage
on a different storage array without having to give up disaster protection. An R1/R2 swap can enable this
redeployment with minimal disruption, while offering the benefit of load balancing across two storage
arrays.

Primary Data Center Relocation:

Sometimes a primary data center needs to be relocated to accommodate business practices. Businesses
might want to test their Disaster Recovery readiness without sacrificing DR protection. R1/R2 swaps allow
these customers to move their primary applications to their DR centers and continue to SRDF mirror back
to their Primary data center.

Post-Failover Temporary Protection Measure:

If the hosts on the source side are down for maintenance, an R1/R2 swap permits the relocation of
production computing to the target site without giving up the security of remote data protection. When all
problems have been resolved on the source-side hosts, you fail over again and swap the device
personalities to return to the original configuration.
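
As a sketch, assuming the same storage group used in this module and that the pairs are in a state
that permits swapping (for example, suspended or failed over):

C:\>symrdf -sid 1888 -rdfg 20 -sg RDF1_SG swap

The symrdf failover -establish form combines the failover, the personality swap, and an
incremental establish in the new direction into a single operation.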

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 361
Concurrent SRDF Devices

Concurrent SRDF is a 3-site disaster recovery solution using R11 devices that
replicate to two R2 devices.
There are three different types of Concurrent SRDF devices:
• R11 – Each R1 mirror is paired with a different R2 mirror on two different remote
storage arrays.
• R21 – This device is the R2 mirror for an R1 device and also acts as a R1 mirror for
another R2 device. This device is used in the secondary site of a Cascaded SRDF
configuration.
• R22 – Each R2 mirror is paired with a different R1 mirror on two different remote
storage arrays. Only one of the R2 mirrors can be Read Write on the links at a time.

362 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Concurrent SRDF is a 3-site disaster recovery solution using R11 devices that replicate to two R2 devices.
R11 devices operate as the R1 device for two R2 devices. Links to both R2 devices are active. R21
devices are configured and used for Cascaded SRDF environments. SRDF supports concurrent
SRDF/Star topologies using R22 devices. R22 devices have two SRDF mirrors, only one of which is active
on the SRDF links at a given time. R22 devices improve the resiliency of the SRDF/Star application, and
reduce the number of steps for failover procedures.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 362
Concurrent SRDF – R11 Devices

One R1 can be paired with two R2 devices, concurrently

Each of the two concurrent mirrors must belong to different SRDF groups—RA groups

SRDF Group 1
R2
Site B

R1
R11

Site A

SRDF Group 2
R2
Site C
363 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Concurrent SRDF allows two remote SRDF mirrors of a single R1 device. A concurrent R1 device has two
R2 devices associated with it. Each of the R2 devices is usually in a different array. Any combination of
SRDF modes is allowed:

R11 → R2 (Site B) in Synchronous mode and R11 → R2 (Site C) in Asynchronous mode

R11 → R2 (Site B) in Synchronous mode and R11 → R2 (Site C) in Adaptive Copy Disk mode

R11 → R2 (Site B) in Synchronous mode R11 → R2 (Site C) in Synchronous mode

R11 → R2 (Site B) in Asynchronous mode and R11 → R2 (Site C) in Asynchronous mode

Each of the R1 → R2 pairs is created in a different SRDF Group.
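
Because each leg is in its own SRDF group, the mode of each leg is set independently by
specifying the group. A hedged sketch (the storage group name CONC_SG is illustrative):

C:\>symrdf -sid 1888 -rdfg 101 -sg CONC_SG set mode sync
C:\>symrdf -sid 1888 -rdfg 102 -sg CONC_SG set mode async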

2 Synchronous remote mirrors: A write I/O from the host to the R11 device cannot be acknowledged to
the host as completed until both remote arrays signal the local array that the SRDF I/O is in cache at the
remote side.

SRDF swap is not allowed in this configuration.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 363
Concurrent SRDF – R11 Example
C:\>symrdf createpair -sid 1888 -file rdf_device_pairs_conc.txt -type R1 -rdfg 102 -establish

An RDF 'Create Pair' operation execution is in progress for device
file 'rdf_device_pairs_conc.txt'. Please wait...

Create RDF Pair in (1888,102)....................................Started.
Create RDF Pair in (1888,102)....................................Done.
Mark target device(s) in (1888,102) for full copy from source....Started.
Devices: 00BD-00C6 in (1888,102).................................Marked.
Mark target device(s) in (1888,102) for full copy from source....Done.
Merge track tables between source and target in (1888,102).......Started.
Devices: 00D2-00D3 in (1888,102).................................Merged.
Merge track tables between source and target in (1888,102).......Done.
Resume RDF link(s) for device(s) in (1888,102)...................Started.
Resume RDF link(s) for device(s) in (1888,102)...................Done.

The RDF 'Create Pair' operation successfully executed for device
file 'rdf_device_pairs_conc.txt'.

364 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

For the purpose of illustration, we show the R1 devices paired with two R2 devices on the same remote
array. The real use for R11 devices is to pair them with R2 devices on two different remote arrays,
perhaps at two different locations.

In this example, R1 devices 0BD and 0BE on SID:1888 are paired with R2 devices 060 and 061 on
SID:2249, as well as concurrently paired with R2 devices 062 and 063 on SID:2249. This was
accomplished by the following two commands:

C:\>symrdf addgrp -label SRDF_CONC -sid 1888 -remote_sid 2249 -dir 1F:11,2F:11
-remote_dir 1F:27,2F:27 -rdfg 102 -remote_rdfg 102

A new RDF group—number 102—has been created.

C:\>symrdf createpair -sid 1888 -file rdf_device_pairs_conc.txt -type R1 -rdfg 102 -establish

Where the file rdf_device_pairs_conc.txt contains:

0BD 062
0BE 063

This specifies that R1 devices 0BD and 0BE should now be concurrently paired with R2 devices 062 and
063 as well.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 364
Listing Concurrent SRDF Devices
C:\>symrdf -sid 1888 list -concurrent

Symmetrix ID: 000197601888

Local Device View


-----------------------------------------------------------------------------
STATUS FLAGS RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MTES Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- -------- -------- --- ---- -------------

000BD 00060 R1:101  RW RW RW  S1.E        0        0  RW  WD  Synchronized
      00062 R1:102  RW RW RW  D1.E        0        0  RW  WD  Synchronized
000BE 00061 R1:101  RW RW RW  S1.E        0        0  RW  WD  Synchronized
      00063 R1:102  RW RW RW  D1.E        0        0  RW  WD  Synchronized

Total -------- --------


Track(s) 0 0
MB(s) 0.0 0.0

365 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The output shows that R1 device 0BD is now concurrently paired with R2 device 060 in RDF Group 101
as well as with R2 device 062 in RDF Group 102. Note that one leg {0BD→060} is in Synchronous
mode and the other leg {0BD→062} is in Adaptive Copy Disk mode; likewise for the device pairs
{0BE→061} and {0BE→063}. If you want to change the other leg to Synchronous mode as well,
use the command symrdf -sid 1888 -rdfg 102 set mode sync.

So the way to deal with the two different legs is to call them out with the -rdfg flag and explicitly specify
which leg you want to operate on.
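
For example, one leg can be suspended for maintenance while the other leg continues replicating.
A sketch, reusing the pair file created earlier:

C:\>symrdf -sid 1888 -rdfg 102 -file rdf_device_pairs_conc.txt suspend -nop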

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 365
SRDF Consistency Protection (1 of 2)

SRDF Consistency Group


• An SRDF consistency group is a composite group of SRDF devices that are enabled for
consistency
• If a source R1 device in the consistency group cannot propagate data to its corresponding R2
device, SRDF consistency suspends data propagation from all the R1 devices in the group.

SRDF Daemon storrdfd


The SRDF daemon storrdfd provides consistency protection for:
• SRDF/Synchronous RDF-Enginuity Consistency Assist (ECA) consistency groups
• SRDF/Asynchronous Multi-session Consistency (MSC) consistency groups
• Concurrent SRDF

366 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF consistency preserves the dependent-write consistency of devices within a consistency group by
monitoring data propagation from source devices to their corresponding target devices. If a source R1
device in the consistency group cannot propagate data to its corresponding R2 device, SRDF consistency
suspends data propagation from all the R1 devices in the group.

A Composite group must be created using the RDF consistency protection option (-rdf_consistency) and
must be enabled using the symcg enable command for the SRDF daemon to begin monitoring and
managing the consistency group. Devices in a consistency group can be from multiple arrays or from
multiple SRDF groups in the same array.
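
A minimal sketch of that sequence, assuming a composite group named ConsGrp and the R1 devices
used in this module:

C:\>symcg create ConsGrp -type RDF1 -rdf_consistency
C:\>symcg -cg ConsGrp add dev 0BD -sid 1888
C:\>symcg -cg ConsGrp add dev 0BE -sid 1888
C:\>symcg -cg ConsGrp enable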

Consistency protection is managed by the SRDF daemon which is a Solutions Enabler process that runs
on a host with Solutions Enabler and connectivity to the array. Consistency protection is available for
SRDF/S, SRDF/A, and Concurrent SRDF modes. The storrdfd daemon ensures that there will be a
consistent R2 copy of the database at the point in time in which a data flow interruption occurs.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 366
SRDF Consistency Protection (2 of 2)

SRDF/S RDF-ECA
For SRDF/S RDF-ECA, the SRDF daemon
• Continuously polls SRDF/S sessions for data flow interruptions
➢ If an R1 cannot propagate data to its R2, it suspends SRDF links for all devices in the
consistency group

SRDF/A MSC

For SRDF/A MSC, the SRDF daemon:

• Performs cycle switching and cache recovery for all SRDF/A sessions within a consistency group
• Manages the R1 -> R2 commits for SRDF/A sessions in multi-cycle mode

367 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

RDF-ECA provides consistency protection for synchronous mode devices by performing suspend
operations across all SRDF/S devices in a consistency group. SRDF/A MSC will be discussed in the next
module.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 367
Lesson: Online Device Expansion
This lesson covers the following topic:

• Online Device Expansion

368 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers the steps to perform an online device expansion in Unisphere for PowerMax.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations: 368
Select Volume

369 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

From the navigation pane, select Storage and then select Storage Groups. From the Storage Groups
tab, select the desired storage group for which the volumes need to be expanded. Choose VOLUMES and
then select the desired volume from the available devices to be expanded. Then, select Expand.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 369
Volume Expansion Dialog

370 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

On the volume expansion dialog, enter the desired new volume size, select the RDF Group, and then
select Run Now from the ADD TO JOB LIST dropdown. For this example, the volume capacity is
expanded from 1000 GB to 2000 GB.
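
The same expansion can also be scripted with SYMCLI. A hedged sketch for this example, assuming
device 0BF and that the device participates in SRDF group 20:

C:\>symdev -sid 1888 modify 0BF -tdev -cap 2000 -captype gb -rdfg 20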

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 370
Verify Expansion

371 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Verify that the device has been expanded. In this example, it can be observed that the capacity of the
000BF device has been expanded from 1000 GB to 2000 GB.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 371
Lesson: SRDF/S Operations Unisphere for PowerMax
This lesson covers the following topics:

• Creating Dynamic SRDF Groups and SRDF Pairs

• Performing SRDF/S Operations

372 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers performing SRDF operations using Unisphere for PowerMax.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 372
Creating SRDF Groups

373 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To list the currently configured SRDF Groups, navigate to Data Protection in the navigation pane > SRDF
Groups. Click Create SRDF Group to launch the wizard.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 373
Create SRDF Group Wizard (1 of 4)

374 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Unisphere for PowerMax Create SRDF Group dialog is shown here. Choose the desired
Communication protocol FC or GigE, select the Remote Array ID, and enter an SRDF group label. Click
NEXT to proceed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 374
Create SRDF Group Wizard (2 of 4)

375 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Configure Local array dialog is next. Enter or select the SRDF Group Number. Choose the RA
directors and ports for the local array.

The Advanced Options include Local Link Domino and Local Auto Link Recovery.

Under certain conditions, the SRDF devices can be forced into the Not Ready state to the host if, for
example, the host I/Os cannot be delivered across the SRDF link. The domino attribute is used to stop all
subsequent write operations to both R1 and R2 devices to avoid data corruption.

If, during normal operation, all SRDF links fail, the array stores the SRDF states of the affected SRDF
devices. The Local Auto Link Recovery attribute enables the array to restore the devices to these states
automatically when the SRDF links become operational.

If selecting Advanced Options, click OK to proceed. Click NEXT to continue to the Configure Remote
dialog.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 375
Create SRDF Group Wizard (3 of 4)

376 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Configure Remote array dialog is next. Enter or select the Remote SRDF Group Number. Choose the
SRDF directors and ports for the remote array. Click NEXT to proceed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 376
Create SRDF Group Wizard (4 of 4)

377 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Shown here is the Review Summary. The Use Software Compression option is available for SRDF traffic
over Fibre Channel and GigE SRDF links. If software compression is enabled, PowerMaxOS compresses
the data before sending it across the SRDF links. The arrays at both sides of the SRDF links must support
software compression and must have the software compression feature enabled in the configuration file.

Select Run Now from the ADD TO JOB LIST drop down menu to continue.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 377
Creating Dynamic SRDF Pairs

378 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To create Dynamic SRDF pairs in Unisphere for PowerMax, navigate to the SRDF Groups page. Click
the SRDF group that you want to create SRDF Pairs in and then click the Create Pairs button to launch
the Create Pairs dialog.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 378
Create Pairs Wizard (1 of 5)

379 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In the dialog, choose the Mirror Type R1 or R2. Select the SRDF Mode. The options include Adaptive
Copy, Synchronous, Asynchronous, and Active. Select the Establish radio button. Optionally, you can
choose to bypass the check to ensure the target of the operation is not writable by the host. Click NEXT to
continue.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 379
Create Pairs Wizard (2 of 5)

380 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The next step is to choose the Local Volumes. In this example, we chose two volumes manually using the
Select Volumes (not shown) wizard. Click NEXT to proceed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 380
Create Pairs Wizard (3 of 5)

381 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Next, select the Remote Volumes. In this case, we selected two volumes manually using the Select
Volumes Wizard (not shown). Click NEXT to continue.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 381
Create Pairs Wizard (4 of 5)

382 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In the Sort Pairs dialog window, you can reorder the volume pairing by dragging and dropping remote
volumes. Here we can see that the Local Volumes selected are volumes 0F3 and 0F4. The Remote
Volumes are volumes 07F and 092. Click NEXT to proceed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 382
Create Pairs Wizard (5 of 5)

383 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Review the Pair Summary and select Run Now from the ADD TO JOB LIST drop down menu.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 383
SRDF Group Operations

384 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

From the SRDF Groups page, select the SRDF Group and click the More Actions button. Attributes that
can be set and other actions that can be performed on this SRDF Group are displayed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 384
Module Summary

Key points covered in this module:

• Creating Dynamic SRDF Groups and Dynamic SRDF Pairs

• Performing SRDF/S Operations using SYMCLI and Unisphere for PowerMax

• Performing Online Device Expansion using Unisphere for PowerMax

385 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered SRDF operations in Synchronous mode. Use of SYMCLI and Unisphere for
PowerMax to perform SRDF operations and online device expansion were presented in detail.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Synchronous Operations 385
Module: SRDF/Asynchronous Operations
Upon completion of this module, you should be able to:

• Describe SRDF/Asynchronous remote replication on PowerMax and VMAX Family arrays

• Perform SRDF/A operations

• Describe SRDF/A resiliency features

• Manage SRDF/A multi-session consistency

386 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module focuses on the SRDF/Asynchronous mode of remote replication. Concepts and operations for
SRDF/A in single and multi-session modes are presented. SRDF/A resiliency features are also discussed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 386
Lesson: SRDF/A Concepts and Operations
This lesson covers the following topics:

• Multi-cycle mode for SRDF/A on PowerMax arrays and VMAX All Flash arrays

• SRDF/A – System-level and group-level attributes

• Adding and removing device pairs to and from active SRDF/A sessions

387 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers SRDF/A multi-cycle mode on PowerMax and VMAX All Flash arrays. The attributes
that can be set for SRDF/A at a system and group level are discussed in detail. Methods for adding and
removing RDF device pairs to and from active SRDF/A sessions are presented.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 387
SRDF/A Multi-Cycle Mode
[Diagram: On the R1 side, a capture cycle (n) feeds a transmit queue of cycles (n-1 through n-m),
where m = transmit queue depth. On the R2 side, cycle (n-m) is received and cycle (n-m-1) is applied.]

1. Multiple cycles (one capture cycle and multiple transmit cycles) on the R1 side
2. Two cycles (receive and apply) on the R2 side
3. Each cycle switch creates a new capture cycle (n) and the existing capture cycle (n-1) is added
   to the queue of cycles (n-1 through n-m, where m>=1) to be transmitted to the R2 side by a
   separate commit action.
4. Only the data in the last transmit cycle (n-m) is transferred to the R2 side during a single commit.

R1 R2

388 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF/A Multi-Cycle Mode (MCM) allows more than two cycles on the R1 side: one capture cycle plus a queue of transmit cycles.

When the minimum_cycle_time has elapsed, the data from the capture cycle will be added to a
transmit queue and a new capture cycle will occur. The transmit queue is a feature of SRDF/A. It provides
a location for R1 captured cycle data to be placed so a new capture cycle can occur.

The capture cycle will occur even if no data is transmitted across the link. If no data is transmitted across
the link, the capture cycle data will again be added to the transmit queue. The transmit queue holds the
data until it is transmitted across the link. The transmit cycle will transfer the data in the oldest capture
cycle to the R2 first and then repeat the process.

The benefit of this is to capture controlled amounts of data on the R1 side. Each capture cycle will occur at
regular intervals and will not contain large amounts of data waiting for a cycle to occur.

Another benefit is that the data sent across the SRDF link in each cycle is smaller and should not
overwhelm the R2 side. The R2 side still has two delta sets: the receive and the apply.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 388
SRDF/A System Attributes
C:\>symcfg -sid 1888 list -v|more

Symmetrix ID: 000197601888 (Local)


Time Zone : Eastern Standard Time

Product Model : PowerMax_8000


Symmetrix ID : 000197601888

Microcode Version (Number) : 5978 (175A0000)


-----Output Truncated-----
Symmetrix Configuration Checksum : 20518DA7
Switched RDF Configuration State : Enabled
Concurrent RDF Configuration State : Enabled
Dynamic RDF Configuration State : Enabled
Concurrent Dynamic RDF Configuration : Enabled
RDF Data Mobility Configuration State: Disabled
-----Output Truncated-----
SRDF/A Maximum Host Throttle (Secs) : 0
SRDF/A Maximum Cache Usage (Percent) : 75
SRDF/A DSE Maximum Capacity (GB) : NoLimit

389 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The system attributes that pertain to SRDF/A are shown here. The use of Host Throttle, Maximum Cache
Usage, and DSE Maximum Capacity attributes will be explained later in this module.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 389
SRDF Group – SRDF/A Attributes

C:\>symcfg -sid 1888 list -rdfg 101 -rdfa

Symmetrix ID : 000197601888

S Y M M E T R I X R D F A G R O U P S

-------- ---------- -------- ----- --- --- --------- -----------------------


Write Pacing
RA-Grp Group Flags Cycle Pri Thr Transmit Delay Thr GRP DEV FLG
Name CSRM TDA time Idle Time (usecs) (%) SAU SAU P
-------- ---------- -------- ----- --- --- --------- ------- --- --- --- ---
101 (64) SRDF_Sync1 -IS- XIX 15 33 50 000:00:00 50000 60 I.- --- X

390 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The SRDF/A attributes of an SRDF Group can be listed with the –rdfa option as shown here. Note that
all attributes displayed here are default values. The RDF Group has just been created and no modification
to the attributes has been made.

Legend:

RDFA Flags :

(C)onsistency : X = Enabled, . = Disabled, - = N/A

(S)tatus : A = Active, I = Inactive, - = N/A

(R)DFA Mode : S = Single-session, M = MSC, - = N/A

(M)sc Cleanup : C = MSC Cleanup required, - = N/A

(T)ransmit Idle : X = Enabled, . = Disabled, - = N/A

(D)SE Status : A = Active, I = Inactive, - = N/A

DSE (A)utostart : X = Enabled, . = Disabled, - = N/A

Write Pacing Flags :

(GRP) Group-Level Pacing:

(S)tatus : A = Active, I = Inactive, - = N/A

(A)utostart : X = Enabled, . = Disabled, - = N/A

S(U)pported : X = Supported, . = Not Supported, - = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 390
List of Individual RDF Group

C:\>symrdf -sid 1888 list -rdfg 101

Symmetrix ID: 000197601888

Local Device View


-----------------------------------------------------------------------------
STATUS FLAGS RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MTES Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- -------- -------- --- ---- -------------

000BD 00060 R1:101 RW RW RW A1.E 0 0 RW WD Consistent


000BE 00061 R1:101 RW RW RW A1.E 0 0 RW WD Consistent

Total -------- --------


Track(s) 0 0
MB(s) 0.0 0.0

391 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The list of devices in SRDF Group 101 is displayed here. The two devices 0BD and 0BE are R1 devices
on the local PowerMax 1888. They have been paired with devices 060 and 061 on the remote array
(SID 2249). The SRDF mode is Asynchronous—as denoted by the (M)ode of Operation flag—and the SRDF
pairs are currently Consistent. The displays on this and the previous page are the results of the following
operations—seen earlier in the SRDF/Synchronous module:

Create RDF Group:

symrdf addgrp -label SRDF_Async1 -sid 1888 -remote_sid 2249 -rdfg 101 -remote_rdfg 101
-dir 1F:11,2F:11 -remote_dir 1F:27,2F:27

Create RDF device pairs:

symrdf createpair -sid 1888 -rdfg 101 -f rdf_device_pairs.txt -type R1 -establish -g async_dg1

The -g async_dg1 option adds the newly created RDF device pairs to a SYMCLI device group named
async_dg1.

rdf_device_pairs.txt

0BD 060

0BE 061
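
If the device group does not already exist, it can be created first; a minimal sketch:

C:\>symdg create async_dg1 -type RDF1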

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 391
Transitioning to SRDF/A Mode

• From Synchronous mode:


– If the devices are in Synchronized state, the R2 data is already consistent.
Enabling SRDF/A immediately provides consistent data on the R2.

• From Adaptive Copy Disk mode:


– Any invalid tracks owed to the R2 are synchronized. Two cycle switches after
Synchronization, SRDF/A provides consistent data on the R2.

392 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF/A can be enabled when the device pairs are operating in any of the listed modes. In the case of
Adaptive Copy to SRDF/A transition, it takes two additional cycle switches after resynchronization of data
for the R2 devices to be consistent.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 392
Example – Synchronous Mode to SRDF/A (1 of 2)
C:\>symrdf -sid 1888 -rdfg 101 query

Symmetrix ID : 000197601888 (Microcode Version: 5978)


Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)
RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 S..E Synchronized
N/A 000BE RW 0 0 RW 00061 WD 0 0 S..E Synchronized

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0

393 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Any SRDF/A operation—with the exception of operations using the Consistency Exempt option, discussed
later in the module—must be performed on all devices in an SRDF group. This means that the device file,
device group, or storage group used for the operation must include every device pair in the SRDF group.
This is in contrast with SRDF/S, where operations can be performed on a subset of devices in an SRDF
group.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 393
Example – Synchronous Mode to SRDF/A (2 of 2)
C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt set mode async -nop
C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt enable
C:\>symrdf -sid 1888 -rdfg 101 query -rdfa
---Output Truncated----
RDFA Session Status : Active – DSE
---Output Truncated----
R2 Data is Consistent : True
---Output Truncated----
Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)

RDF (RA) Group Number : 101 (64)


Source (R1) View Target (R2) View FLAGS
--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 AX.E Consistent
N/A 000BE RW 0 0 RW 00061 WD 0 0 AX.E Consistent

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0

394 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The mode of SRDF operation is set to Asynchronous for the device pairs listed in rdf_pairs.txt.
SRDF/A consistency is enabled. The symrdf query -rdfa command gives detailed information about
the SRDF/A state of the device pairs. As described earlier, the transition from Synchronous to
Asynchronous mode is immediate. The consistency state of the R2 devices is displayed in the query as
True, and the RDF pair state reflects Consistent.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 394
Example – Adaptive Copy Disk Mode to SRDF/A (1 of 3)
C:\>symrdf -sid 1888 -rdfg 101 query

Symmetrix ID : 000197601888 (Microcode Version: 5978)


Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)
RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 57 RW 00060 WD 0 0 D..E SyncInProg
N/A 000BE RW 0 105 RW 00061 WD 0 0 D..E SyncInProg

Total ------- ------- ------- -------


Track(s) 0 162 0 0
MB(s) 0.0 20.3 0.0 0.0

395 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In this example, the device pairs are in SRDF Adaptive Copy Disk Mode (D..E). There are R2 invalid
tracks.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 395
Example – Adaptive Copy Disk Mode to SRDF/A (2 of 3)
C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt set mode async -nop
C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt enable
C:\>symrdf -sid 1888 -rdfg 101 query -rdfa
---Output Truncated------
RDFA Session Number : 100
RDFA Cycle Number : 10
RDFA Session Status : Active – DSE
---Output Truncated------
R2 Data is Consistent : False

Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)


RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 AX.E SyncInProg
N/A 000BE RW 0 0 RW 00061 WD 0 0 AX.E SyncInProg

396 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The transition into SRDF/A is immediate, and the group has been enabled for consistency (AX.E).
However, the pair state is SyncInProg. The R2 devices do not have consistent data until the pairs are
synchronized and at least two further cycle switches have completed.

Legend for FLAGS:

(M)ode of Operation : A = Async, S = Sync, E = Semi-sync, D = Adaptive Copy Disk Mode,
                      W = Adaptive Copy WP Mode, M = Mixed, T = Active

(C)onsistency State : X = Enabled, . = Disabled, M = Mixed, - = N/A

(E)xempt : X = Enabled, . = Disabled, M = Mixed, - = N/A

R1/R2 Device (S)ize : E = R1 EQ R2, 1 = R1 GT R2, 2 = R2 GT R1, - = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 396
Example – Adaptive Copy Disk Mode to SRDF/A (3 of 3)
C:\>symrdf -sid 1888 -rdfg 101 query -rdfa

--Output Truncated----

Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)


RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 AX.E Consistent
N/A 000BE RW 0 0 RW 00061 WD 0 0 AX.E Consistent

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0

397 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Once synchronization completes and two further cycle switches have occurred, the pair state transitions
to Consistent, as this query shows. RDF Group 101 now has device pairs in an active SRDF/A session.
The next topic shows how to add another SRDF pair to this group without affecting the consistency of the
current SRDF/A session.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 397
Adding Devices to an Active SRDF/A Session

• Add the device pair to the active SRDF/A session

– Use the -cons_exempt flag with the add operation
– If Consistency has been enabled for the SRDF/A session, the -force option is
required

• Move the device pair into the SRDF Group

– Use the -cons_exempt flag when moving into a group with an active
SRDF/A session

398 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To add a device pair to an active SRDF/A session, create the pair with the -cons_exempt flag, or use
the movepair operation with -cons_exempt to move an existing pair into the SRDF group that hosts the
active session; see the sketch below.
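
A hedged sketch of adding a new pair directly into the active session (the pair file
new_pair.txt is hypothetical):

C:\>symrdf createpair -sid 1888 -rdfg 101 -file new_pair.txt -type R1 -establish -cons_exempt -force

Once the new pair synchronizes and the required cycle switches complete, it participates in the
SRDF/A session like the existing pairs.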

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 398
Removing Devices from Active SRDF/A Session

• Suspend the device pair in the active SRDF/A session

– Use the -cons_exempt flag with the suspend operation
– If Consistency has been enabled for the SRDF/A session, the -force option is
required

• Move the device pair to a different SRDF Group

– Use the -cons_exempt flag again if moving to another group with an active
SRDF/A session

399 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To remove a device pair, use the -cons_exempt flag to first suspend the link for the devices. Then use
the movepair operation to move the devices out of the active SRDF/A session to a different SRDF
Group; a sketch follows.
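
A sketch of the removal sequence (the pair file leaving_pair.txt and destination group 102
are hypothetical):

C:\>symrdf -sid 1888 -rdfg 101 -file leaving_pair.txt suspend -cons_exempt -force
C:\>symrdf -sid 1888 -rdfg 101 -file leaving_pair.txt movepair -new_rdfg 102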

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 399
SRDF/A Array-Wide Parameters
• rdfa_cache_percent:
– Defaults to 75, with a range of valid values from 0 to 100 percent
– It is the percentage of the Max# of System Write Pending Slots available to SRDF/A. The purpose is to
ensure that other applications can use some of the WP limit
– When SRDF/A hits its WP cache limit, it is forced to drop SRDF/A sessions to free up cache
– Setting it lower reserves some WP limit for non-SRDF/A cache usage. Setting it higher enables SRDF/A to use more
of the cache WP limit, potentially creating performance problems for other applications

• rdfa_host_throttle_time:
– Defaults to 0, with a range of valid values from 0 to 65535
– If greater than 0, this value overrides the rdfa_cache_percent and session_priority settings
– When the System WP Limit is reached, throttling delays a write from the host until a cache slot becomes free
– The value is the number of seconds to throttle host writes before dropping SRDF/A sessions. A value of 65535 means
wait forever

• dse_max_cap:
– Specifies the maximum number of GB in the SRP that DSE can use
– Default is NoLimit
– Can be set to a value between 1 and 100000

400 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The rdfa_cache_percent sets the percentage of write pending cache that can be used by SRDF/A.
The rdfa_cache_percent can range from 0 to 100 percent.

The rdfa_host_throttle_time sets the number of seconds to throttle host writes to SRDF/A devices
when cache is full, before dropping RDF/A sessions. Throttling delays a write from the host until a cache
slot becomes free. Values are from 0 to 65535.

The dse_max_cap setting specifies the maximum capacity, in GB, that DSE (Delta Set Extension) can
consume from the designated SRP (Storage Resource Pool). The Best Practices for Dell EMC SRDF/A
Delta Set Extension Technical Note provides more information.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 400
SRDF/A Configuration Parameters

• Array-wide parameters:
– Maximum Cache Usage
• symconfigure -sid 499 -cmd "set symmetrix rdfa_cache_percent=50;" commit
– Maximum Host Throttle
• symconfigure -sid 499 -cmd "set symmetrix rdfa_host_throttle_time=2;" commit
– DSE Maximum Capacity
• symconfigure -sid 499 -cmd "set symmetrix dse_max_cap=1000;" commit

• SRDF Group level parameters:

– Cycle Time
• symrdf -sid 499 -rdfg 20 set rdfa -cycle_time 3
– Session Priority
• symrdf -sid 499 -rdfg 20 set rdfa -priority 20

401 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The array-wide parameters are set using the symconfigure command as shown here.

The Group parameters for SRDF/A can be set using the symrdf command.

The Cycle Time is the minimum time to wait before attempting an SRDF/A cycle switch. Values range
from 1 to 60 seconds. The default minimum cycle time is 15 seconds.

The Session priority is the priority used to determine which SRDF/A sessions to drop if cache becomes
full. Values range from 1 to 64, with 1 being the highest priority—the last to be dropped.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 401
Lesson: SRDF/A Resiliency Features
This lesson covers the following topics:

• Transmit Idle

• Delta Set Extension

• Group-level Write Pacing

• Recovery after link loss

402 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers SRDF/A resiliency features such as Transmit Idle, Delta Set Extension, and Group-
level Write Pacing. A method for recovering after a link loss is also discussed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 402
Transmit Idle

• Transmit Idle is enabled by default when Dynamic SRDF Groups are created
• Keeps SRDF/A sessions active if a temporary link loss occurs
• When links are restored, data transmission resumes
• When all links are lost:
– Data transmission from source to target is halted
– Cycle switching continues
– Transmit queue depth increases
– Data accumulates in cache until SRDF/A cache usage reaches the DSE
threshold
– When maximum DSE capacity is reached, the session is dropped

403 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF/A Transmit Idle is a feature of SRDF/A that dynamically and transparently extends the Capture,
Transmit, and Receive phases of the SRDF/A cycle. Transmit Idle is enabled by default when Dynamic
SRDF groups are created. SRDF/A Transmit Idle is used to keep SRDF/A sessions active during
temporary link losses and mask the effects of an all SRDF links lost event. Without the SRDF/A Transmit
Idle feature, an all SRDF links lost event would normally result in the abnormal termination of SRDF/A.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 403
Delta Set Extension

• Extends cache available for SRDF/A by off-loading cycle data from cache to
disk
• Arrays are preconfigured with one or more SRPs before installation
• One SRP is designated for DSE allocations and supports DSE for all
SRDF/A sessions in the array
– The default SRP for DSE is the default SRP for FBA devices
• Data is paged to disk when array Write Pending count crosses the DSE
threshold
– Default threshold is 50% of array Write Pending limit
• When conditions become normal, data is read back from disk to cache and
transmitted to the Target array

404 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF/A DSE extends the cache space available for SRDF/A session cycles by offloading cycle data from
cache to preconfigured pool storage. DSE helps SRDF/A ride through larger and longer throughput
imbalances than cache-based buffering alone.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 404
Setting Maximum Cache Usage and DSE Capacity
C:\>symcfg -sid 1888 -v list

-----Output Truncated-----
SRDF/A Maximum Host Throttle (Secs) : 0
SRDF/A Maximum Cache Usage (Percent) : 75
SRDF/A DSE Maximum Capacity (GB) : NoLimit
-----Output Truncated-----

C:\>symconfigure -sid 1888 -cmd "set symmetrix rdfa_cache_percent=80;" commit

C:\>symconfigure -sid 1888 -cmd "set symmetrix dse_max_cap=1000;" commit

C:\>symcfg -sid 1888 -v list

-----Output Truncated-----
SRDF/A Maximum Host Throttle (Secs) : 0
SRDF/A Maximum Cache Usage (Percent) : 80
SRDF/A DSE Maximum Capacity (GB) : 1000
-----Output Truncated-----

405 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The default SRDF/A Maximum Cache Usage is 75 percent. The default SRDF/A DSE Maximum Capacity
is NoLimit. Use the symconfigure commands shown to modify these two settings.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 405
Verifying Designated SRP for DSE

C:\>symcfg -sid 1888 list -srp -detail

STORAGE RESOURCE POOLS

Symmetrix ID : 000197601888
C A P A C I T Y
-------------------------------- --- -------------------------------------------
Flg Usable Allocated Free Subscribed
Name DR (GB) (GB) (GB) (GB)
-------------------------------- --- ---------- ---------- ---------- ----------
SRP_1 FX 50073.8 1845.1 48228.7 2016.7
---------- ---------- ---------- ----------
Total 50073.8 1845.1 48228.7 2016.7

Legend:
Flags:
(D)efault SRP : F = FBA Default, C = CKD Default, B = Both, . = N/A
(R)DFA DSE : X = Usable, . = Not Used

406 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A listing of the SRP shows that SRP_1 is designated for DSE use.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 406
Query After Temporary Link Loss (1 of 2)
C:\>symrdf -sid 1888 -rdfg 101 query -rdfa

Symmetrix ID : 000197601888 (Microcode Version: 5978)

RDFA Session Number : 100


RDFA Cycle Number : 258
RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : No
RDFA Minimum Cycle Time : 00:00:15
RDFA Avg Cycle Time : 00:00:15
RDFA Avg Transmit Cycle Time : 00:00:15
Transmit Queue Depth on R1 Side : 42
Tracks not Committed to the R2 Side: 869312
Time that R2 is behind R1 : 00:10:18
R2 Image Capture Time : Sat Dec 25 09:39:50 2021
R2 Data is Consistent : True
RDFA R1 Side Percent Cache In Use : 0
RDFA R2 Side Percent Cache In Use : 0
R1 Side DSE Used Tracks : 30954
----Output Continued on Next Page-------

407 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The SRDF/A session is still active. The transmit queue depth on the R1 side increases as cycle switches
continue in multi-cycle mode (MCM). DSE spillover has started as can be seen from the R1 Side DSE
Used Tracks.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 407
Query After Temporary Link Loss (2 of 2)
----Output Continued from Previous Page-------
R2 Side DSE Used Tracks : 0
R1 Side Shared Tracks : 0
Transmit Idle Time : 00:10:07

Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)


RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 NA NA NA AX.E TransIdle
N/A 000BE RW 0 0 RW 00061 NA NA NA AX.E TransIdle
N/A 000BF RW 0 0 RW 00062 NA NA NA AX.E TransIdle

Total ------- ------- ------- -------


Track(s) 0 0 0 0
MB(s) 0.0 0.0 0.0 0.0

408 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The session has been in transmit idle for a little over 10 minutes and the pair state is reflected as
TransIdle.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 408
DSE Utilization
C:\>symcfg -sid 1888 list -srp -demand -type sl

STORAGE RESOURCE POOLS

Symmetrix ID : 000197601888

Name : SRP_1
Usable Capacity (GB) : 50073.8
SRDF DSE Allocated (GB) : 8.6
Snapshots Allocated (GB) : 60.0

--------------------------------------------------
Service Level Subscribed Allocated
Name (GB) (GB) (%)
------------------------ ---------- --------------
<none> 20.1 0.0 0
Diamond 1162.4 116.6 10
Optimized 89.9 18.4 20
Platinum 20.0 0.0 0
Gold 8.0 0.0 0
---------- ---------- ---
Total 1300.5 135.0 10

409 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A little over 8 GB has been allocated so far for DSE from the designated SRP—SRP_1 in this case.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 409
Group-Level Write Pacing

• Host-issued write I/Os are paced so their rate does not exceed the rate at
which DSE can offload the SRDF/A session’s cycle data to the DSE Storage
Resource Pool.
• Pacing responds to the spillover rate on the R1 side only.
• This prevents cache overflow on both the R1 and R2 sides.

410 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The system paces at the spillover rate until the usable capacity configured for DSE on the SRP reaches
its limit.

At that point, the system either drops SRDF/A or paces to the link rate. Whether to drop or pace is
user definable.

All existing pacing features are supported and can be utilized to keep SRDF/A sessions active.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 410
Activating Group-Level Write Pacing
C:\>symcfg -sid 1888 list -rdfg 101 -rdfa
-----Output Truncated-----
-------- ---------- -------- ----- --- --- --------- -----------------------
Write Pacing
RA-Grp Group Flags Cycle Pri Thr Transmit Delay Thr GRP DEV FLG
Name CSRM TDA time Idle Time (usecs) (%) SAU SAU P
-------- ---------- -------- ----- --- --- --------- ------- --- --- --- ---
101 (64) SRDF_sync1 XAS- XAX 15 33 50 000:00:00 50000 60 I.- --- X

C:\>symrdf -sid 1888 -rdfg 101 activate -rdfa_wpace –nop

C:\>symrdf -sid 1888 -rdfg 101 set rdfa_pace -wp_autostart on -nop

C:\>symcfg -sid 1888 list -rdfg 101 -rdfa


-----Output Truncated-----
-------- ---------- -------- ----- --- --- --------- -----------------------
Write Pacing
RA-Grp Group Flags Cycle Pri Thr Transmit Delay Thr GRP DEV FLG
Name CSRM TDA time Idle Time (usecs) (%) SAU SAU P
-------- ---------- -------- ----- --- --- --------- ------- --- --- --- ---
101 (64) SRDF_sync1 XAS- XAX 15 33 50 000:00:00 50000 60 AXX --- X

411 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A list command issued to the SRDF Group 101 shows current Group-level Write Pacing flags. They are
Inactive and Disabled by default. The command examples illustrate how to activate Group-level Write
Pacing and set Write Pacing Autostart to ON. A subsequent list command displays the result.

Legend:

Write Pacing Flags :

(GRP) Group-Level Pacing:

(S)tatus : A = Active, I = Inactive, - = N/A

(A)utostart : X = Enabled, . = Disabled, - = N/A

S(U)pported : X = Supported, . = Not Supported, - = N/A

(DEV) Device-Level Pacing:

(S)tatus : A = Active, I = Inactive, - = N/A

(A)utostart : X = Enabled, . = Disabled, - = N/A

S(U)pported : X = Supported, . = Not Supported, - = N/A

(FLG) Flags for Group-Level and Device-Level Pacing:

Devs (P)aceable : X = All Devices, . = Not all devices, - = N/A
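
Pacing can later be turned back off with the matching deactivate action. A minimal sketch, assuming deactivate accepts the same -rdfa_wpace flag as activate:

C:\>symrdf -sid 1888 -rdfg 101 deactivate -rdfa_wpace -nop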

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 411
Recovering After Extended Loss of Links

• If an extended loss of links occurs and SRDF/A has dropped, many R2
invalid tracks can build up on the R1 side.
• If there are invalid tracks, Dell EMC recommends making a Gold Copy of
the R2 before starting any resynchronization.
• Enable SRDF/A after the two sides are synchronized.
• Resynchronization before enabling SRDF/A can be performed by setting
the SRDF mode to Adaptive Copy Disk mode.

412 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

As noted earlier, during resynchronization, the R2 does not have consistent data. A copy of the consistent
R2 data prior to resynchronization can safeguard against unexpected failures during the resynchronization
process. When the link is resumed, if there are a large number of invalid tracks owed by the R1 to its R2, it
is recommended that SRDF/A not be enabled right away. Enabling SRDF/A right after link resumption
causes a surge of traffic on the link due to shipping of accumulated invalid tracks and the new data added
to the SRDF/A cycles. This could lead to SRDF/A consuming more cache and reaching the System Write
Pending limit. If this happens, SRDF/A would drop again. Like with SRDF/S, resynchronization should be
performed during periods of relatively low production activity.

Resynchronization in Adaptive Copy Disk mode minimizes the impact on the production host. New writes
are buffered and these, along with the R2 invalids, are sent across the link. The trade-off is that
resynchronization takes longer.

Resynchronization in Synchronous mode impacts the production host. New writes have to be sent
preferentially across the link while the R2 invalids are also shipped. Switching to Synchronous is possible
only if the distances and other factors permit. For instance, the norm might be to run in SRDF/S and
toggle into SRDF/A for batch processing because of its higher bandwidth requirement; if a loss of links
occurs during the batch processing, it might be possible to resynchronize in SRDF/S.

In either case, the R2 data is inconsistent until all the invalid tracks are sent over. Therefore, it is advisable
to enable SRDF/A after the two sides are completely synchronized.
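
As one way to capture the recommended Gold Copy, TimeFinder SnapVX can snapshot the R2 devices before resynchronization begins. A hedged sketch run against the R2-side array from these examples; the device range and snapshot name are illustrative assumptions:

C:\>symsnapvx -sid 2249 -devs 00060:00062 establish -name gold_copy -nop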

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 412
Recovery Example (1 of 4)
C:\>symrdf –sid 1888 –rdfg 101 query -rdfa

RDFA Cycle Number : 0


RDFA Session Status : Inactive

Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)


RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 470727 NR 00060 NA NA NA AX.E Partitioned
N/A 000BE RW 0 465205 NR 00061 NA NA NA AX.E Partitioned

Total ------- ------- ------- -------


Track(s) 0 935932 NA NA
MB(s) 0 116992 NA NA

413 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In this example, there is a workload on the devices while they are in an SRDF/A enabled state. A
permanent loss of links places the devices in a Partitioned state. Production work continues on the R1
devices, and the new writes arriving for the R1 devices are marked as invalid, or owed, to the R2. At some
point, SRDF/A is dropped and the session is marked Inactive. To reach this state, the maximum DSE
capacity must have been exceeded, leaving no choice but to drop SRDF/A.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 413
Recovery Example (2 of 4)
C:\>symrdf –sid 1888 –rdfg 101 query -rdfa

RDFA Cycle Number : 0


RDFA Session Status : Inactive
Time that R2 is behind R1 : 00:09:31
R2 Image Capture Time : Sat Dec 25 09:39:51 2021
R2 Data is Consistent : True

Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)


RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 472615 NR 00060 WD 0 0 AX.E Suspended
N/A 000BE RW 0 467117 NR 00061 WD 0 0 AX.E Suspended

414 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

When the links are restored, the pair state moves to Suspended. Even though the flags indicate SRDF/A
mode, the session status is Inactive. Also note that the R2 Data is Consistent. This is because the data
would be consistent up to the last Apply cycle. However, there are accumulated R2 Invalid tracks that are
owed to the R2 side.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 414
Recovery Example (3 of 4)
C:\>symrdf –sid 1888 –rdfg 101 –file rdf_pairs.txt disable –nop

C:\>symrdf –sid 1888 –rdfg 101 –file rdf_pairs.txt set mode acp_disk –nop

C:\>symrdf –sid 1888 –rdfg 101 –file rdf_pairs.txt resume –nop

C:\>symrdf –sid 1888 –rdfg 101 query –rdfa

Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)


RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 466384 RW 00060 WD 0 0 D..E SynchInProg
N/A 000BE RW 0 462423 RW 00061 WD 0 0 D..E SynchInProg

415 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

As mentioned, we will next place the device pairs in Adaptive Copy Disk mode. As consistency was
enabled when the links were lost, we have to first disable consistency before changing the mode to
Adaptive Copy Disk. The RDF pair state is still Suspended. Next we resume the links. Once the RDF pair
state moves to Synchronized, the mode can be changed to Asynchronous and consistency enabled, for
example with a device group:

symrdf –g async_dg1 set mode async

symrdf -g async_dg1 enable

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 415
Recovery Example (4 of 4)
C:\>symrdf –sid 1888 –rdfg 101 –file rdf_pairs.txt set mode async –nop

C:\>symrdf –sid 1888 –rdfg 101 –file rdf_pairs.txt enable –nop

C:\>symrdf –sid 1888 –rdfg 101 query –rdfa

Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)


RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 AX.E Consistent
N/A 000BE RW 0 0 RW 00061 WD 0 0 AX.E Consistent

416 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The device pairs are back in Asynchronous mode with consistency enabled, and the RDF pair state is Consistent.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 416
Failover/Failback in SRDF/A Mode

• If the primary site fails, data on R2 is consistent up to the last Apply cycle
– Partial data in the Receive cycle is discarded

• SRDF failover procedure can then be executed, and the workload can be
started on the R2 devices
– Consistency protection should be disabled before issuing symrdf failover
without the –force option

• Failback procedure after the primary site has been restored is identical to
Synchronous SRDF
– After symrdf failback command completion, workload can be restarted on
the R1 devices and SRDF/A can be enabled

417 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Again, it is advisable to make a copy of the R2 prior to executing a failback operation. When the workload
is resumed on the R1 devices immediately after a failback, accumulated invalid tracks have to be
synchronized from the R2 to the R1, and new writes must be shipped from the R1 to R2. If there is an
interruption now, data on the R2 is not consistent. Even though SRDF/A can be enabled right after a
failback, it should be enabled after the SRDF pairs enter the Synchronized state.
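
A minimal command sketch of this sequence, reusing the array ID, group, and device file from the earlier examples; disable consistency first, fail over, and later fail back, re-enabling SRDF/A only after the pairs are Synchronized:

C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt disable -nop
C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt failover -nop
(workload runs on the R2 devices; primary site is then restored)
C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt failback -nop
C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt set mode async -nop
C:\>symrdf -sid 1888 -rdfg 101 -file rdf_pairs.txt enable -nop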

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 417
Lesson: Independent Groups and Multi-Session Consistency
This lesson covers the following topics:

• Multiple independent SRDF/A groups

• Multi-Session Consistency (MSC)

• Managing MSC

418 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers managing independent SRDF/A groups and SRDF/A Multi-Session Consistency.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 418
Independent SRDF/A Groups (1 of 4)
C:\>symrdf –sid 1888 –rdfg 101 query –rdfa

RDFA Cycle Number : 100


RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : No
RDFA Minimum Cycle Time : 00:00:15

Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)


RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 AX.E Consistent
N/A 000BE RW 0 0 RW 00061 WD 0 0 AX.E Consistent
N/A 000BF RW 0 0 RW 00062 WD 0 0 AX.E Consistent

419 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Devices in RDF Group 101 are in an active SRDF/A session. The pair state is Consistent. The current
cycle number for this group is 100. The minimum cycle time is at the default of 15 seconds.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 419
Independent SRDF/A Groups (2 of 4)
C:\>symrdf –sid 1888 –rdfg 102 query -rdfa

RDFA Cycle Number : 265


RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : No
RDFA Minimum Cycle Time : 00:00:05

Remote Symmetrix ID : 000197803566 (Microcode Version: 5978)


RDF (RA) Group Number : 102 (65)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 AX.E Consistent
N/A 000BE RW 0 0 RW 00061 WD 0 0 AX.E Consistent

420 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Devices in RDF Group 102 are in an active SRDF/A session. The pair state is Consistent. The current
cycle number for this group is 265. The minimum cycle time for this group has been set to five seconds.
The two groups switch cycles independently of each other.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 420
Independent SRDF/A Groups (3 of 4)

C:\>symrdf –sid 1888 –rdfg 101 query -rdfa

RDFA Cycle Number : 153


RDFA Session Status : Active - DSE

Remote Symmetrix ID : 000197902249 (Microcode Version: 5978)


RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 NA NA 0 AX.E TransIdle
N/A 000BE RW 0 0 RW 00061 NA NA 0 AX.E TransIdle
N/A 000BF RW 0 0 RW 00062 NA NA 0 AX.E TransIdle

421 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Loss of links for RDF Group 101 causes the pair states to go into Transmit Idle.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 421
Independent SRDF/A Groups (4 of 4)
C:\>symrdf –sid 1888 –rdfg 102 query -rdfa

RDFA Cycle Number : 303


RDFA Session Status : Active - DSE
RDFA Consistency Exempt Devices : No
RDFA Minimum Cycle Time : 00:00:05

Remote Symmetrix ID : 000197803566 (Microcode Version: 5978)


RDF (RA) Group Number : 102 (65)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 0 RW 00060 WD 0 0 AX.E Consistent
N/A 000BE RW 0 0 RW 00061 WD 0 0 AX.E Consistent

422 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

However, as the links for RDF Group 102 are still available, it is not affected by the loss of links for RDF
Group 101. So the devices in RDF Group 102 continue to be consistent and the cycle switches proceed as
usual.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 422
SRDF/A Multi-Session Consistency
MSC
• Manages multiple SRDF/A sessions logically as if they were a single session:
– RDF Daemon for Open Systems
– Sessions can be within or across arrays
– Ensures a complete, restartable point-in-time copy on the remote side
[Diagram: two SRDF/A Delta Sets managed as a single MSC session]

423 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

If one or more source R1 devices in an SRDF/A Multi-Session Consistency (MSC) enabled SRDF
consistency group cannot propagate data to their corresponding target R2 devices, then the MSC process
suspends data propagation from all R1 devices in the consistency group. This halts all data flow to the R2
targets. The RDF Daemon—storrdfd—performs cycle-switching and cache recovery operations across
all SRDF/A sessions in the group. This ensures that a consistent R2 data copy of the database exists at
the point-in-time any interruption occurs. If a session has devices from multiple arrays, then the host
running storrdfd must have access to all the arrays to coordinate cycle switches. It is recommended
to have more than one host with access to all the arrays running the storrdfd daemon. In
the event one host fails, the surviving host can continue with MSC cycle switches.

A composite group must be created using the RDF consistency protection option—rdf_consistency—
and must be enabled using the symcg enable command. At this point, the RDF Daemon begins
monitoring and managing the MSC consistency group.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 423
SRDF/A MSC

• The RDF Daemon coordinates cycle switching of the SRDF/A MSC group
sessions as a single entity:
– Responsible for detecting failure conditions that would cause data on the R2 side
to become inconsistent
– When a failure condition is detected, cycle switching for all SRDF/A sessions
in the group is stopped in a manner that leaves the R2 side with consistent data

424 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The RDF process daemon maintains consistency for enabled composite groups across multiple arrays for
SRDF/A with MSC. For the MSC option—rdf_consistency—to work in an RDF consistency-enabled
environment, each locally-attached host performing management operations must run an instance of the
RDF Daemon—storrdfd. Each host running storrdfd must also run an instance of the base
daemon—storapid. Optionally, if the Group Naming Services (GNS) daemon is also running, it
communicates the composite group definitions back to the RDF Daemon. If the GNS daemon is not
running, the composite group must be defined on each host individually.

In MSC, the Transmit cycles on the R1 side of all participating sessions, as well as all the corresponding
Apply cycles on the R2 side, must be empty. The switch is coordinated and controlled by the RDF
Daemon.

All host writes are held for the duration of the cycle switch. This ensures dependent write consistency. If
one or more sessions in MSC complete their Transmit and Apply cycles ahead of other sessions, they
have to wait for all sessions to complete, prior to a cycle switch.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 424
SRDF/A Operations
• Set SYMAPI_USE_RDFD = ENABLE in the options configuration file
• Create a Composite Group (CG) with the -rdf_consistency option
– Group definition is passed to the RDF Daemon as a candidate group
– If the Daemon is not already running, it is started automatically
• Add all the devices in the multiple SRDF/A sessions to the CG
• Put all CG devices into Async mode
symrdf -cg <CGname> set mode async
• Enable CG devices for consistency protection
symcg -cg <CGname> enable
– The RDF Daemon is notified that the group should now be monitored
– The enable command must be done after the devices are put into Async mode
• When the devices become RW on the link, the RDF Daemon:
– Starts performing cycle switching
– Actively monitors the health of the group to maintain R2 data consistency

425 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To use the optional RDF Daemon, enable it in the SYMAPI options file and then start it. Managing MSC
requires the creation of Composite Groups. When the Composite Group is enabled, the cycle switching is
controlled by the RDF Daemon.
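
Putting the steps together, a hedged end-to-end sketch; the stordaemon start and the symcg add dev form are assumptions based on general Solutions Enabler usage, and the device and group numbers are illustrative:

C:\>stordaemon start storrdfd
C:\>symcg create msc_cg -type RDF1 -rdf_consistency
C:\>symcg -cg msc_cg -sid 1888 add dev 000BD -rdfg 101   (add dev syntax assumed; verify locally)
C:\>symrdf -cg msc_cg set mode async -nop
C:\>symcg -cg msc_cg enable -nop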

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 425
SRDF/A MSC

C:\>symcg create msc_cg –type RDF1 –rdf_consistency


C:\>symcg list

C O M P O S I T E G R O U P S

Number of Number of
Name Type Valid Symms RAGs DGs Devs BCVs VDEVs TGTs

msc_cg RDF1 Yes 1 2 0 4 0 0 0

C:\>symcg -cg msc_cg enable -nop

426 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The objective is to manage two SRDF/A groups as a single entity using MSC. Create the consistency
group named msc_cg with type RDF1.

Issue the symcg list command to list the consistency groups. The Number of RDF Groups (RAGs)
displays 2.

Next, enable MSC for the CG.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 426
Query of Composite Group
C:\>symrdf -cg msc_cg query -rdfa

Composite Group Name : msc_cg


Composite Group Type : RDF1
Number of Symmetrix Units : 1
Number of RDF (RA) Groups : 2
RDF Consistency Mode : MSC
RDFA MSC Info
{
MSC Session Status : Active
Consistency State : CONSISTENT
}

RDF (RA) Group Number : 102 (65)


RDFA Info:
{
Cycle Number : 79
Session Status : Active - MSC - DSE

RDF (RA) Group Number : 101 (64)


RDFA Info:
{
Cycle Number : 79
Session Status : Active - MSC - DSE

427 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The cycle numbers for the two groups have been reset to be the same. MSC has been enabled. The two
groups now cycle switch in unison even though their minimum cycle times are different.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 427
SRDF/A MSC at Work
C:\>symrdf -cg msc_cg query
RDF (RA) Group Number : 102 (65)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BG RW 0 130492 NR NA NA NA NA AX.E Partitioned

RDF (RA) Group Number : 101 (64)

Source (R1) View Target (R2) View FLAGS


--------------------------------- ------------------------ ----- ------------
ST LI ST
Standard A N A
Logical Sym T R1 Inv R2 Inv K Sym T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MCES STATE
--------------------------------- -- ------------------------ ----- ------------
N/A 000BD RW 0 8205 NR 00060 WD 0 0 AX.E Suspended
N/A 000BE RW 0 8205 NR 00061 WD 0 0 AX.E Suspended
N/A 000BF RW 0 8205 NR 00062 WD 0 0 AX.E Suspended

428 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A permanent loss of links for RDF Group 102 results in a Partitioned state for that group. But RDF Group
101 is suspended even though its links are still available. Note that the output is very verbose and has
been edited to show the relevant details for this example. When the failed links are restored, the RDF
Group moves from the Partitioned state to the Suspended state. Recovering from this state can be
accomplished with the command:

symrdf –cg msc_cg establish

Once the invalid tracks are marked, merged, and synchronized, MSC protection is automatically
reinstated. The user does not have to issue symcg –cg msc_cg enable again.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 428
MSC Cleanup

• If the link to the R2 side is available, the RDF Daemon performs the cleanup
automatically on the R1 side
• If the link is unavailable, then invocation of any SRDF command—such as
symrdf failover or split—from the R2 side performs the automatic
cleanup

429 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Cleanup is automatically performed by the RDF Daemon if the link to the R2 side is available.
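
For example, run from a host attached to the R2 side, a control action on the composite group triggers the cleanup; a minimal sketch using failover:

C:\>symrdf -cg msc_cg failover -nop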

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 429
Lab: SRDF/Asynchronous Operations

This lab covers:

• Single Session SRDF/Asynchronous groups
• Configuring Concurrent SRDF
• Configuring and managing SRDF/A Multi-Session
Consistency

430 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lab covers setting SRDF/Asynchronous mode of operation for SRDF device pairs and enabling
consistency protection. It also covers configuring Concurrent SRDF with one leg in SRDF/Synchronous
mode and the other in SRDF/Asynchronous mode. Configuring and managing SRDF/A Multi-Session
Consistency is covered as well.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 430
Module Summary

Key points covered in this module:

• SRDF/Asynchronous remote replication on PowerMax and VMAX Family arrays

• SRDF/A operations

• SRDF/A resiliency features

• SRDF/A Multi-Session Consistency

431 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module covered SRDF/Asynchronous mode of remote replication. Concepts and operations for
SRDF/A in Single and Multi-Session modes were presented. SRDF/A resiliency features were also
discussed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Asynchronous Operations 431
Module: SRDF/Metro

Upon completion of this module, you should be able to:

• Describe SRDF/Metro

• Configure SRDF/Metro

• Create and view SRDF/Metro Device Pairs

• Perform SRDF/Metro Online Device Expansion

432 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This module provides an overview of SRDF/Metro. Configuration and monitoring of SRDF/Metro is also
covered.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 432
Lesson: SRDF/Metro Introduction
This lesson covers the following topics:

• SRDF/Metro configurations

• Bias Facility

• SRDF Resiliency

• SRDF/Metro Extended Disaster Recovery (DR)

• Smart DR

433 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers the concepts of SRDF/Metro. Configurations, resiliency, Disaster Recovery (DR), and
Smart DR options are also discussed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 433
Introduction to SRDF/Metro
• Application High Availability (HA)
– At the PowerMax and VMAX All Flash array level
• Features
– R1 & R2
> Read/Write to host
> Read/Write on link
– Synchronous replication
– Metro distance of 100 km
– Bias and witness options
– Single or clustered hosts
• Managed by Solutions Enabler or Unisphere for PowerMax
[Diagram: a production host with Read/Write access to both R1 and R2, synchronous replication over a 100 km Metro distance, with a Witness]
434 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

SRDF/Metro provides High Availability (HA) for an application at the PowerMax and VMAX All Flash array
level. Typically, HA for an application is provided at the host level. SRDF/Metro provides the host
read/write capability to both R1 and R2 volumes while both volumes are read/write on the SRDF link.

Replication between the two sites is performed synchronously across the link with a Metro distance of 100
kilometers. A bias facility is used to determine which RDF volumes the host has access to if volumes in
the Metro configuration go Not Ready on the RDF link. The bias facility supports three methods for
determining which volumes to use: Device Bias, Array Witness—shown in this example—and Virtual
Witness (vWitness). SRDF/Metro is supported for single-host, as seen in this example, or clustered host
environments.

Solutions Enabler or Unisphere for PowerMax 9.0 or above is required to manage SRDF/Metro.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 434
SRDF/Metro Host Configurations

[Diagrams: on the left, a single host running multi-pathing software with Read/Write access to R1 and R2 over SRDF links; on the right, a host cluster in which each node has Read/Write access to one array]

435 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

On the left is an SRDF/Metro configuration with a standalone host. In this example, the host has
visibility to both PowerMax arrays—the R1 and R2 devices. It is using multi-pathing software, such as
PowerPath, to enable parallel reads and writes to each array. The identity of the R1 device is federated,
ensuring that the paired R2 device appears, through additional paths to the host, as a single virtualized
device. On the right is a clustered host environment. Each cluster node has dedicated access to an
individual array. In either case, writes to the R1 or R2 devices are synchronously copied to the paired
SRDF device. Should a conflict occur between writes to paired SRDF/Metro devices, the conflict is
internally resolved. This resolution ensures that a consistent image between paired SRDF devices is
maintained for the individual host or host cluster.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 435
SRDF/Metro Configuration Resiliency

• Device Bias: a configuration setting of the device pair
• Array Witness: a third physical array
• Virtual Witness (vWitness): a vApp on an ESXi server

436 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Equipment or communication failures can make either device unavailable or break the SRDF link. In such
an event, SRDF/Metro uses a facility called Bias to determine which side remains accessible to the host
system. There are three methods for deciding which side remains available during a failure situation.
Device Bias uses a configuration setting of the device pair to specify which side remains available. Array
Witness uses a third physical array to determine which side is accessible. Virtual Witness (vWitness) runs
in a virtual appliance (vApp) on an ESXi server to determine which side is accessible to the host or hosts.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 436
SRDF/Metro Resiliency – Device Bias

• No Witness configured
• Created with the –use_bias option
• R1 is always the winner in disasters
• If R2 bias is used, the system automatically performs an R1/R2 swap

437 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Device Bias is the simplest of the bias methods. When making device pairs available on the SRDF link,
indicate that the bias method should be used for the device pairs with the -use_bias option. By default, the
R1 side of the pair is configured as the bias side. However, if there is a failure on the array that contains
the bias device, the host loses device access. The Device Bias method provides no way to make the R2
device available to the host. When operating with Device Bias, the state of the device pair is ActiveBias.

If the witness options are not used, the establish and restore commands also require the -use_bias
option. Bias can be changed when all device pairs in the SRDF/Metro group have reached the
ActiveActive or ActiveBias pair state.

Bias applies only to RDF device pairs in an SRDF/Metro configuration.

If either Array Witness or vWitness options are used, but unavailable, Device Bias is used if the device
pair becomes Not Ready on the RDF link.
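
A minimal sketch of creating and establishing a bias-protected Metro pair, reusing the createpair form shown later in this module; the device file name is a hypothetical placeholder:

C:\>symrdf -sid 1888 -rdfg 1 -f metro_pairs.txt createpair -type R1 -metro -use_bias -establish -nop

A later re-establish or restore of suspended pairs would likewise carry the option, for example:

C:\>symrdf -sid 1888 -rdfg 1 -f metro_pairs.txt establish -use_bias -nop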

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 437
SRDF/Metro Resiliency – Array Witness
• Third witness array
– VMAX All Flash running HYPERMAX OS 5977.945.890+
– PowerMax running PowerMaxOS 5978.144.144+
[Diagram: a witness PowerMax with RA connectivity to both the R1 and R2 arrays]

438 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

With the Array Witness method, SRDF/Metro uses a third witness array to determine the bias side. The
witness array runs on a VMAX array running Enginuity, on a VMAX All Flash running HYPERMAX OS, or
on a PowerMax or VMAX All Flash running PowerMaxOS. On a VMAX All Flash array, HYPERMAX OS
5977.810.784 with an ePack containing fixes to support SRDF N-x connectivity must be used.
HYPERMAX OS 5977.945.890 or above contains all fixes and supports SRDF/Metro on VMAX All Flash
arrays. For PowerMax or VMAX All Flash arrays, PowerMaxOS 5978.144.144 or above supports
SRDF/Metro. In the event of a failure, the witness decides which side of the Metro group remains
accessible to hosts, giving preference to the bias side. This method chooses which side to continue
operations on when the Device Bias method may not result in continued host availability to a surviving
non-biased array. When operating with Array Witness, the state of the device pair is ActiveActive.

The witness array must have SRDF connectivity to both the R1-side array and R2-side array. SRDF
remote adapters (RAs) are required on the witness array with applicable network connectivity to both the
R1 side and R2 side arrays.

For complete redundancy, there can be multiple witness arrays. If the auto configuration process fails and
no other applicable witness arrays are available, SRDF/Metro uses the Device Bias method.
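
Witness SRDF groups can also be created from the CLI. A rough sketch only, assuming the -witness flag on symrdf addgrp available in recent Solutions Enabler releases; the SIDs, group numbers, and director values are illustrative and must match your environment, and a witness group is needed from each Metro-side array to the witness array:

C:\>symrdf addgrp -label metro_wit -sid 1888 -rdfg 10 -dir 1E -remote_sid 3566 -remote_rdfg 10 -remote_dir 1E -witness -nop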

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 438
SRDF/Metro Resiliency – Virtual Witness (vWitness)
• Deployed as a vApp on an ESXi server
– Requires HYPERMAX OS 5977.945.890 or above
– Managed with Solutions Enabler or Unisphere for PowerMax
[Diagram: a Witness vApp reachable from management hosts EMGMT1 and EMGMT2, with connectivity to both the R1 and R2 arrays]

439 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

vWitness is an additional resiliency option introduced in HYPERMAX OS 5977.945.890 and Solutions
Enabler. vWitness has similar capabilities to the Array Witness method. The difference is that it is
packaged to run in a virtual appliance (vApp) on a VMware ESXi server, rather than on an array. The
vWitness and the Array Witness options are treated the same in the operating environment, and can be
deployed independently or simultaneously. When deployed simultaneously, SRDF/Metro favors the Array
Witness option over the vWitness option, as the Array Witness option has better availability. For
redundancy, you can configure up to 32 vWitnesses. When operating with vWitness, the state of the
device pair is ActiveActive.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 439
SRDF/Metro Management

• SRDF/Metro pairs are managed at the RDF group level
• All the device pairs in the SRDF/Metro group are consistent
• Devices can be added to or removed from an active SRDF/Metro group with the –exempt* option
• Device pairs can be moved between SRDF/Sync or Adaptive Copy RDF groups and the
active SRDF/Metro group with the –exempt* option
• Only establish, suspend, and restore operations work with RDF pairs in Metro mode

440 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

All the devices in the RDF group should be managed together in Metro mode, similar to SRDF/A
management. By nature, all the device pairs in the same RDF group in SRDF/Metro mode are
consistent. Unlike the other SRDF modes, only the symrdf establish, suspend, and restore operations can
be performed on Metro device pairs.

*The exempt option is only available on arrays running PowerMaxOS 5978 using Solutions Enabler 9.x or
Unisphere 9.x.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 440
Device Add/Remove to/from Active SRDF/Metro Group
with -exempt
• Add existing (non-RDF) devices to an active SRDF/Metro group while a host is actively using the volumes being added
• Remove devices being protected using SRDF/Metro
Benefits
• Achieve zero RPO and RTO for existing applications using SRDF/Metro
• Parity with other SRDF modes
[Diagram: non-RDF devices being added to an SRDF/Metro group of R1/R2 pairs]

441 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Newly added devices synchronize R1->R2 invalid tracks under a new SRDF/Metro consistency exempt
status:

• Similar in concept to the previous SRDF/A consistency exempt functionality

The ActiveActive pair state is reached for devices only after the volumes have been added to the
SRDF/Metro session and track synchronization for added devices completes.

Once synchronized, the exempt status for these devices is cleared and SRDF/Metro operations for all
active devices continues normally.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 441
Exempt Support Restrictions

• Restore operations will be blocked while one or more devices in an
SRDF/Metro group are in an exempt status
• At least one device within the SRDF/Metro session must be non-exempt
‒ Management software does not allow all devices in the SRDF/Metro
session to be removed with exempt deletepair or movepair

442 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

An active SRDF/Metro RDF group cannot be empty, so there must be at least one device in the session
that is non-exempt.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 442
Create Pair Exempt – Unisphere (1 of 6)

443 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Let us look at how to use Unisphere to add RDF pairs to an active RDF metro group. Navigate to the
DATA PROTECTION > SRDF Groups page. Select the RDF group. In this case, there is an existing RDF
group 2 with 8 active metro pairs selected.

1. Choose Create Pairs

2. For the Select SRDF Mode, choose the Mirror Type that matches the other pairs in the same RDF
group from the same array. Notice that SRDF Mode is Active and grayed out.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 443
Create Pair Exempt – Unisphere (2 of 6)

444 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Search the non-RDF devices on the array to be used for creating the pairs, and confirm the selection.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 444
Create Pair Exempt – Unisphere (3 of 6)

445 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The selected volumes may optionally be added to the existing storage group that the existing pairs belong
to by checking the Add to Storage Group check box.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 445
Create Pair Exempt – Unisphere (4 of 6)

446 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Search for remote volumes and confirm the selection.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 446
Create Pair Exempt – Unisphere (5 of 6)

447 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The chosen remote volumes can also optionally be added to the existing storage group that the existing
pairs belong to, and the pairing can then be sorted.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 447
Create Pair Exempt – Unisphere (6 of 6)

448 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Review the pair summary then choose Run Now. Once finished, the Success message is displayed.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 448
Viewing SRDF/Metro Group Details

C:\>symcfg -sid 1888 list -rdfg 1 -metro

Symmetrix ID : 000197601888

S Y M M E T R I X R D F G R O U P S

Local Remote Group RDF Metro


------------ --------------------- --------------------------- -----------------
LL Flags Dir Witness
RA-Grp sec RA-Grp SymmID ST Name YLPD CHT Cfg CE S Identifier
------------ --------------------- --------------------------- -- --------------
1 ( 0) 1 1 ( 0) 000197601888 OD Uni_RDFG ...X ..X F-S -- - -

Legend:
Group (S)tatus : O = Online, F = Offline
Group (T)ype : S = Static, D = Dynamic, W = Witness
Director (C)onfig : F-S = Fibre-Switched, F-H = Fibre-Hub
G = GIGE, E = ESCON, T = T3, - = N/A

449 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Adding the –metro option displays more details. The RDF (M)etro flag is displayed as configured and the
(C)onfigured Type and (E)ffective Type are listed as B for Bias.

Group Flags:

Prevent Auto (L)ink Recovery : X = Enabled, . = Disabled

Prevent RAs Online Upon (P)ower On: X = Enabled, . = Disabled

Link (D)omino : X = Enabled, . = Disabled

(S)TAR/SQAR mode : N = Normal, R = Recovery, . = OFF

S = SQAR Normal, Q = SQAR Recovery

RDF Software (C)ompression : X = Enabled, . = Disabled, - = N/A

RDF (H)ardware Compression : X = Enabled, . = Disabled, - = N/A

RDF Single Round (T)rip : X = Enabled, . = Disabled, - = N/A

RDF (M)etro : X = Configured, . = Not Configured

RDF Metro Flags:

(C)onfigured Type : W = Witness, B = Bias, - = N/A

(E)ffective Type : W = Witness, B = Bias, - = N/A

Witness (S)tatus : N = Normal, D = Degraded,

F = Failed, - = N/A

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 449
Viewing SRDF/Metro Group Details (Contd.)
Group Flags :
T(Y)pe : N = Star Normal, R = Star Recovery,
S = SQAR Normal, Q = SQAR Recovery,
M = Metro, I = Data Migration,
T = MetroDR Metro, D = MetroDR DR,
V = VASA Async, G = Global Mirror,
P = PPRC,
Prevent Auto (L)ink Recovery : X = Enabled, . = Disabled
Prevent RAs Online Upon (P)ower On : X = Enabled, . = Disabled
Link (D)omino : X = Enabled, . = Disabled
RDF Software (C)ompression : X = Enabled, . = Disabled, - = N/A
RDF (H)ardware Compression : X = Enabled, . = Disabled, - = N/A
RDF Single Round (T)rip : X = Enabled, . = Disabled, - = N/A

RDF Metro Flags :


(C)onfigured Type : W = Witness, B = Bias, - = N/A
(E)ffective Type : W = Witness, B = Bias, - = N/A
Witness(S)tatus : N = Normal, D = Degraded,
F = Failed, - = N/A

450 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.


© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 450
Adding Devices to SRDF/Metro Group Using CLI

C:\>symrdf -sid 1888 -rdfg 1 -f addpairtometro.txt createpair -type R1 -metro –exempt

C:\>symrdf -sid 1888 list -rdfg 1

Symmetrix ID: 000197601888

Local Device View


-----------------------------------------------------------------------------
STATUS FLAGS RDF S T A T E S
Sym Sym RDF --------- ----- R1 Inv R2 Inv ----------------------
Dev RDev Typ:G SA RA LNK MTES Tracks Tracks Dev RDev Pair
----- ----- -------- --------- ----- -------- -------- --- ---- -------------

00100 0007F R1:1 ?? RW RW T1.E 0 0 RW WD ActiveBias


00101 00092 R1:1 ?? RW RW T1.E 0 0 RW WD ActiveBias

451 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The createpair command creates device pairs using the devices listed in the addpairtometro.txt
file and places them in the SRDF/Metro session. In this example, the device pair includes devices 100 and
101. The -exempt option indicates that data on the R1 side of the new RDF device pairs should be
preserved and host accessibility should remain on the R1 side.

After creating the new device pairs in RDF group 1, Solutions Enabler performs an establish on them,
setting the device pairs to RW on the RDF link with SyncInProg RDF pair state. Then the device pairs will
transition to the ActiveActive RDF pair state if the devices already in the group are using witness
protection. The pair state will be ActiveBias if configured using bias protection, as in this example. If the
devices already in the group are suspended, then the newly added devices will also be suspended.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 451
Move Pair Exempt – Unisphere (1 of 5)

452 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

We can move RDF pairs to an active metro group.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 452
Move Pair Exempt – Unisphere (2 of 5)

453 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Highlight the pairs to be moved, and choose Move.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 453
Move Pair Exempt – Unisphere (3 of 5)

454 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Choose the active Metro RDF group that the pairs are to be moved to. If a storage group is used as the
unit of replication, all the devices in the storage group are replicated with consistency. So choose the
storage group to remove the devices from, if necessary.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 454
Move Pair Exempt – Unisphere (4 of 5)

455 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

We can optionally add the device pairs to a new storage group, review the summary and run the job.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 455
Move Pair Exempt – Unisphere (5 of 5)

Before Moving Pairs

After Moving
Pairs

456 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

After the task succeeds, notice the device pairs moved into the metro group MetroR1—now 4 pairs versus
3 prior to the move.
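
For reference, the CLI equivalent of this move uses the movepair action with the exempt option; a hedged sketch in which the device file and group numbers are illustrative:

C:\>symrdf -sid 1888 -rdfg 2 -f movepairs.txt movepair -new_rdfg 1 -exempt -nop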

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 456
SRDF/Metro Extended DR with SRDF/A (1 of 3)
• Primary (bias) site protection, best practice
[Diagram: R11 (bias) paired with R2 over SRDF/Metro; the R11 also replicates over SRDF/A to a tertiary R2]

457 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Extended Disaster Recovery (DR) using SRDF/A in an SRDF/Metro environment includes protecting the
Primary, Secondary, or both sites. The best practice recommendation is to protect the Primary (bias) site
with SRDF/A to a tertiary site. In this case, the Primary device acts as an R11.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 457
SRDF/Metro Extended DR with SRDF/A (2 of 3)
• Secondary (non-bias) site protection, not recommended
[Diagram: R1 (bias) paired with R21 over SRDF/Metro; the R21 replicates over SRDF/A to a tertiary R2]

458 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Disaster Recovery in an SRDF/Metro configuration using SRDF/A from the Secondary site can be
achieved; however, it is not recommended. In this configuration, the Secondary device acts as an R21.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 458
SRDF/Metro Extended DR with SRDF/A (3 of 3)
• Primary and Secondary site protection
[Diagram: R11 (bias) paired with R21 over SRDF/Metro; each side replicates over SRDF/A to its own tertiary R2]

459 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

It is also possible to protect both the Primary and Secondary sites with SRDF/A to their respective tertiary
sites, as shown here.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 459
SRDF/Metro Smart DR

The Smart DR feature adds the following capabilities to SRDF/Metro:
• Metro Smart DR is a highly available (HA) disaster recovery (DR) solution
• Integrates SRDF/Metro (Metro) and SRDF/Async (SRDF/A)
• Achieved by closely coupling the SRDF/A sessions on each side of a Metro pair
• Witness configuration is required for all Smart DR configurations
• Ensures that only a single SRDF/A session will be sending data to the DR site
• Switches the data transfer to the other side, ensuring that the dependent-write consistent copy of data on the DR site is maintained
[Diagram: an R1/R2 Metro pair, each side with an SRDF/A session to a single DR device]

460 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Added with the PowerMaxOS 5978 Q3 2020 SR and Solutions Enabler/Unisphere for PowerMax 9.2,
SRDF/Metro Smart DR provides SRDF/Metro with a single asynchronous target R22 volume which may
be populated from either the R1 or R2 volume of an SRDF/Metro paired solution. Adding the capability to
use a single asynchronous target volume simplifies setup, maintenance capabilities, system requirements,
and reduces the amount of disk space required for a single target system.

The Smart DR feature adds the following capabilities to SRDF/Metro:

• Metro Smart DR is a two-region highly available (HA) disaster recovery (DR) solution

• Integrates SRDF/Metro (Metro) and SRDF/Async (SRDF/A) enabling HA DR for a Metro session

• Achieved by closely coupling the SRDF/A sessions on each side of a Metro pair to replicate to a single
DR device

• Witness configuration is required for all Smart DR configurations

• Ensures that only a single SRDF/A session will be sending data to the DR site

• Switches the data transfer to the other side, ensuring that the dependent-write consistent copy of data
on the DR site is maintained and stays up to date

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 460
Lesson: Implementing SRDF/Metro
This lesson covers the following topics:

• SRDF/Metro implementation using Unisphere for PowerMax

• Monitoring of SRDF/Metro with Unisphere for PowerMax

461 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers SRDF/Metro implementation. Unisphere for PowerMax is used to set up, manage, and
monitor SRDF/Metro.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 461
SRDF/Metro Implementation Steps

1. Identify the implementation configuration: Bias or Witness
– If Witness, decide whether the witness is physical or virtual
– If a physical witness, verify from both the Source and Target arrays that there is an empty RDF
group from the witness array to the Source/Target array
– If a virtual witness, verify that a vWitness is available
2. Create RDF groups for the application to use SRDF/Metro
3. Mask the future R1 (currently non-RDF) devices to the host
4. Create RDF pairs using the RDF groups from step 2 with either –use_bias
or -metro
5. Mask the new R2 devices to the host

462 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

When implementing SRDF/Metro, the future R1 devices can be accessible to the host throughout the
implementation, while the future R2 devices can be made accessible to the host only once the pairs are active.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 462
SRDF/Metro – Create Physical Witness Group

463 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

A Physical Witness requires the use of a third VMAX All Flash, or PowerMax array. From the DATA
PROTECTION menu, choose SRDF Groups, and click Create SRDF Group. This process must be done
on both the R1 and R2 sides of the Metro configuration. In this example, the R1 side configuration is
shown.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 463
Physical Witness Group – Select Remote

464 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

In the Create SRDF Group dialog, choose the Communication Protocol, Remote Array ID, and provide a
name for the Group. Select the SRDF/Metro Witness Group checkbox, and click NEXT.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 464
Physical Witness Group – Configure Local

465 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Choose an SRDF Group Number and select the ports from the Local array. Click NEXT to continue.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 465
Physical Witness Group – Configure Remote

466 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Select the SRDF Group Number and ports for the Remote array. Click NEXT to continue.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 466
Physical Witness Group – Review Summary

467 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Select Run Now from the ADD TO JOB LIST dropdown to create the Physical Witness Group on the
Local array. You must create a Witness Group on the Remote array as well, not shown here.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 467
Physical Witness Group Task Details

468 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

View the details of the Physical Witness SRDF Group creation. Click CLOSE when finished.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 468
SRDF/Metro – Configure Virtual Witness

469 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

To create a Virtual Witness, choose Virtual Witness from the DATA PROTECTION menu, and select the
Create button.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 469
Create Virtual Witness

470 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The vWitness has to be added to each of the two arrays in the SRDF/Metro configuration. Navigate to the
DATA PROTECTION > Virtual Witness page. Click the Create button to open the Create Virtual Witness
wizard. Enter a vWitness name, enter the IP address of the SE vApp, and select Add Virtual Witness to
remote arrays. Select Run Now from the ADD TO JOB LIST drop down menu to complete the task.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 470
Virtual Witness

471 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The newly created Virtual Witness is shown in the image.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 471
Protect Storage Group

472 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Now we select a non-RDF storage group. To protect a Storage Group using SRDF/Metro, select the SG in
the Storage Groups list, and click the Protect button.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 472
Select Technology – SRDF/Metro

473 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Choose Setup high Availability using SRDF/Metro and click NEXT.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 473
Configure Metro

474 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

The Remote Array ID will be auto-populated with the remotely attached array. Alternatively, choose the
Scan button to scan for remote arrays.

Choosing the SRDF Group can be done automatically, or manually. To choose a specific SRDF Group,
choose Manual, and click the Select button. Select the SRDF Group from the Select Group listing and
click OK. Establish SRDF Pairs is enabled by default. To disable this setting, uncheck the box.
Establishing SRDF Pairs initiates a copy from the source R1 to the target R2. Choose the type of
protection, either Bias or Witness, for the SRDF/Metro configuration. Leave the default settings for Remote
Storage Group Name and Remote Service Level, unless changes need to be made. Click NEXT to
continue.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 474
Review Metro

475 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Review the settings for SRDF/Metro and choose Run Now from the ADD TO JOB LIST dropdown.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 475
Protect Storage Group Task Details

476 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

Once the task has completed, choose the Show Task Details link to display the steps that were taken to
protect the Storage Group using SRDF/Metro. Click CLOSE when done reviewing the details.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 476
SRDF Protected Storage Groups

477 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

From the DATA PROTECTION menu, select Storage Groups. To display Storage Groups protected by
SRDF, select the SRDF tab. Storage Group demo_host shows a State of ActiveBias, indicating
SRDF/Metro using Bias. Selecting the Storage Group in the list displays details of the Storage Group on
the right of the screen.

© 2022 Dell Inc. or its subsidiaries. All Rights Reserved. Module: SRDF/Metro 477
Lesson: Configure SRDF/Metro Device Pairs
This lesson covers the following topics:

• Creating SRDF/Metro Device Pairs

• Viewing SRDF/Metro Device Pairs with Unisphere for PowerMax and Solutions Enabler

478 © 2022 Dell Inc. or its subsidiaries. All Rights Reserved.

This lesson covers SRDF/Metro device pairs. Solutions Enabler and Unisphere for PowerMax are used to
create and view SRDF/Metro device pairs. In contrast to the storage group workflow shown in the previous
lesson, RDF device pairs can also be created without storage groups: with a device file from the CLI, by
manually selecting the device pairs in Unisphere, or by using a device group.

Create SRDF/Metro Device Pairs


From the DATA PROTECTION menu, choose SRDF Groups. Create SRDF/Metro Device Pairs within an
SRDF Group by choosing the SRDF Group and clicking the Create Pairs button.

With Solutions Enabler, an SRDF/Metro configuration is created from a set of non-SRDF devices with the
symrdf createpair command and the -rdf_metro option. The -rdf_metro option indicates that the
device pairs will operate in an SRDF/Metro configuration; Witness protection is used by default when a
Witness array is available. Devices must be added to an empty RDF group or to an RDF group whose
device pairs are Not Ready on the link. If the RDF group already contains devices, they must be part of an
SRDF/Metro configuration.
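
A minimal device-file sketch of the command; the SRDF group number, file name, and device pairing are
illustrative, with array and device IDs borrowed from the examples later in this lesson. Each line of the
device file lists a local device and its remote partner:

C:\> type pairs.txt
00100 0007F

C:\> symrdf createpair -sid 1888 -rdfg 10 -file pairs.txt -type R1 -rdf_metro -establish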

Create SRDF/Metro Device Pairs – Continued


Select Active for the SRDF Mode and then click NEXT.

Select Local Volumes


Select the local volumes for the Create Pairs operation. Devices meeting the requested criteria (in this
example, one 2 GB volume) are auto-selected. Manual Selection allows you to choose specific volumes
from a listing based on the size and configuration of the volume. The Add to Storage Group selection
creates a new SG for the volume or volumes; in this example, no SG is being used. Click NEXT
to continue.

Select Remote Volumes


Select the Remote Volumes to be used for the Create Pairs operation. Click NEXT to continue.

Review Pair Summary


To complete the Create Pair operation, choose Run Now from the ADD TO JOB LIST dropdown on the
Review Pair Summary screen.

SRDF/Metro Device Pairs


The SRDF Group MetroStG displays an SRDF Mode of Active. View SRDF/Metro pairs in an SRDF
Group by clicking the SRDF Group Volumes link in the details panel of the SRDF Group. In this
example, device 00010 is the source (R1) volume and 000101 is the remote (R2) volume. The device
pair is in an ActiveBias state.
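
The pairs in an SRDF group can also be listed from Solutions Enabler; an example, again assuming an
illustrative SRDF group number of 10 on array 1888:

C:\> symrdf -sid 1888 -rdfg 10 list

The pair state column reports ActiveBias for each SRDF/Metro pair in this configuration.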

SRDF/Metro Device Pairs


In SRDF/Metro configurations, the External Identity WWN of the R2 is the same as the External Identity
WWN of the R1.

SRDF/Metro Device Pairs
R1:

C:\> symdev -sid 1888 show 100

Device Physical Name     : Not Visible
Device Symmetrix Name    : 00100
Symmetrix ID             : 000197601888
...
Device WWN               : 60000970000197601888533030313030
...
Device External Identity
{
    Device WWN           : 60000970000197601888533030313030

R2:

C:\> symdev -sid 2249 show 7f

Device Physical Name     : Not Visible
Device Symmetrix Name    : 0007F
Symmetrix ID             : 000197902249
...
Device WWN               : 60000970000197902249533030303736
...
Device External Identity
{
    Device WWN           : 60000970000197902249533030313030

To display the Device External Identity of SRDF/Metro devices in Solutions Enabler, use the symdev
show command. The Device External Identity of the R1 and R2 is the same in SRDF/Metro
configurations.

Lesson: Failure Scenarios and Operations
This lesson covers the following topics:

• List the failure scenarios and operations


This lesson covers the SRDF/Metro failure scenarios and operations.

Planned Outage – Suspend RDF Pair

(Diagram: Before the suspend, the R1 and R2 are both Read/Write to the application hosts and the pair is
in the ActiveBias state. After the suspend, the pair is Suspended with bias applied: the R1 remains
Read/Write and the R2 is Not Ready.)


For a planned SRDF outage, the suspend action is issued to take the device pairs out of the ActiveActive
or ActiveBias RDF pair state and into a Suspended RDF pair state. The suspend action must be issued
with the -force option. In this example, bias is used since there is no witness array. The R1 devices are
read/write to the host and the R2 devices are Not Ready to the host.
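
An example of the suspend, assuming the demo_host storage group on array 1888 and an illustrative
SRDF group number of 10:

C:\> symrdf -sid 1888 -sg demo_host -rdfg 10 suspend -force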

Planned Outage – Establish or Restore

(Diagram: After an establish, which keeps the R1 data, or a restore, which keeps the R2 data, the pair
returns to the Active state with both the R1 and R2 Read/Write to the application hosts.)


Once the planned outage is complete, the user can decide to keep the R1 or R2 data. If the user wants to
keep the data on the R1 side, they can execute an establish command. If the user determines that they
need to revert to the data on the R2 side, they can issue the restore command.
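
Hedged examples of both options, using the same illustrative names as the suspend example; depending
on the Solutions Enabler version, additional force flags may be required:

C:\> symrdf -sid 1888 -sg demo_host -rdfg 10 establish

C:\> symrdf -sid 1888 -sg demo_host -rdfg 10 restore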

Changing Host Accessibility – Suspend – Bias R2

(Diagram: Before the operation, the R1 and R2 are both Read/Write in the ActiveBias state. After the
suspend with bias set to R2, the personalities swap: the original R2 becomes the new R1 and remains
Read/Write, while the original R1 becomes the new R2 and is Not Ready. The pair is Suspended with bias
applied.)


If there is a planned application outage to the R1 devices, symrdf suspend -bias R2 -force is
used to swap personalities: what was the R1 becomes the R2, and what was the R2 becomes the R1,
which is the side that remains read/write to the host.

For a planned application outage to the R1 devices, which by default is the bias side, a half_swap of both
the R1 and R2 is allowed. Half_swaps force a bias change when the devices are Not Ready (NR) on the
RDF link.
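
A full form of the command, with the same illustrative array, group, and storage group names as the earlier
examples:

C:\> symrdf -sid 1888 -sg demo_host -rdfg 10 suspend -bias R2 -force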

Unplanned Outage – Link Failure
(Diagram: On an unplanned SRDF link failure, the pair transitions from the ActiveBias state to Partitioned.
With bias applied, the R1 remains Read/Write and operations continue on the R1 side; the R2 is Not
Ready.)



When an unplanned SRDF link failure occurs between the two sides, the device pairs change from the
ActiveActive or ActiveBias RDF pair state into a Partitioned RDF pair state. Operations continue on the
R1 side.

Unplanned Outage – Link Fixed
(Diagram: Once the link is fixed, the pair transitions from Partitioned to Suspended with bias applied. The
R1 remains Read/Write and the R2 is Not Ready; an establish or restore can then be run.)



If the application continues running on the R1 side, the RDF pair state will change from Partitioned to
Suspended once the RDF link failure is addressed. From this point either an establish or restore can be
executed.

Unplanned Outage Recovery Issue
(Diagram: While the pair is Partitioned, the original R2 has had a personality swap to become an R1,
leaving two R1s in the Metro configuration. The original R1 side is Not Ready to the host and the original
R2 side, now an R1, is Read/Write.)


An issue arises when the device pairs are in a Partitioned state if the R2 devices had a personality swap
and the R1 devices did not. In this case there are two R1s in the Metro configuration, as shown on the
slide.

Unplanned Outage Recovery Resolution
(Diagram: A half swap performed on the failed site converts the R1 there, the original R2, back to an R2,
restoring a valid R1/R2 pair in the configuration.)

When the SRDF link failure is resolved, the device pair state remains Partitioned because there are two
R1s in the Metro configuration.

One choice would be to take the application offline and perform a half_swap on the failed site, converting
the R1 there (the original R2) back to an R2. There would then be an R1 and an R2 in the configuration.
Bias would then be set on the original R1 side, and the device pairs would go into a Suspended state. The
application can now be brought up on the original R1 side and either an establish or restore can be
performed. The configuration returns to its original condition.
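
A sketch of the half_swap issued on the failed site, assuming its array ID is 2249, an illustrative SRDF
group number of 10, and a device file listing the affected pairs; the exact options may vary by Solutions
Enabler version:

C:\> symrdf -sid 2249 -rdfg 10 -file pairs.txt half_swap -force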

Lesson: SRDF/Metro Online Device Expansion
This lesson covers the following topics:

• Describe Online Device Expansion

• List the steps to perform online device expansion using Unisphere


This lesson covers SRDF/Metro online device expansion and the steps to perform online device
expansion using Unisphere.

SRDF/Metro Online Device Expansion
The Online Device Expansion feature provides the following functionality:
• Adds support for devices in SRDF/Metro Active or Suspended pair states
• Expansion does not impact read/write performance of the associated devices or applications
• Supports both Compatibility and Mobility IDs
• Supports the SRDF/Metro R1/R2 topology with a single command/operation
• Supports devices that have an Async DR target
• If the expansion operation fails on either site, both paired devices expose the same (original) size


The functionality provided by the SRDF/Metro Online Device Expansion feature is listed on the slide.
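
Online device expansion can also be driven from Solutions Enabler with the symdev modify command.
A hedged sketch, assuming device 001C6 on array 1888 and an illustrative SRDF group number of 10;
the -rdfg option is intended to expand both sides of the Metro pair in a single operation:

C:\> symdev modify 1C6 -sid 1888 -cap 1 -captype gb -rdfg 10 -nop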

Unisphere Online Device Expansion (ODE) Steps


Select a volume (001C6 in this example) from the available devices to be expanded, then click Expand.

Unisphere Online Device Expansion (ODE) Steps


On the volume expansion dialog, enter the desired new volume size (from 0.5 GB to 1 GB in this example),
then select Run from the ADD TO JOB LIST dropdown.

Unisphere Online Device Expansion (ODE) Steps


Click the Show Task Details link to view the task details.

Unisphere Online Device Expansion (ODE) Steps


Verify that the paired R1 device (001C6) has been expanded.
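
The new size can also be confirmed from the CLI; assuming the same device and array as above:

C:\> symdev -sid 1888 show 1C6

The device capacity fields in the output should reflect the expanded size on both sides of the pair.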

Lab: SRDF/Metro

This lab covers


• Create SRDF group for SRDF/Metro
• Protect storage group using SRDF/Metro
• Add masking view to remote array
• Create RDF pairs
• SRDF disaster recovery
• Metro Smart DR configuration preparation


This lab covers creating an SRDF group for SRDF/Metro, protecting a storage group using SRDF/Metro,
adding a masking view to the remote array, creating RDF pairs, SRDF disaster recovery, and preparing for
Metro Smart DR configuration.

Module Summary
Key points covered in this module:

• SRDF/Metro Overview

• Configuring SRDF/Metro with Unisphere for PowerMax

• SRDF/Metro Device Pairs

• SRDF/Metro Online Device Expansion


This module covered an overview of SRDF/Metro, configuring SRDF/Metro with Unisphere for PowerMax,
SRDF/Metro device pairs, and SRDF/Metro Online Device Expansion. Solutions Enabler and Unisphere for
PowerMax are used to configure and view SRDF/Metro details.

Course Summary

Key points covered in this course:


• PowerMax and VMAX Family configuration overview
• Storage provisioning concepts
• Managing ports and port characteristics
• Performing service provisioning to hosts
• Overview of storage management in a virtualized environment
• Using Unisphere for PowerMax for Compliance Monitoring and Workload Planning
• Local and remote replication offerings in PowerMax and VMAX Family arrays


This concludes the training.
