Paul Rogers
Redelf Janssen
Andre Otto
Rita Pleus
Alvaro Salla
Valeria Sokal
ibm.com/redbooks
International Technical Support Organization
March 2010
SG24-6983-03
Note: Before using this information and the product it supports, read the information in Notices on
page ix.
This edition applies to Version 1 Release 11 of z/OS (5694-A01) and to subsequent releases and
modifications until otherwise indicated in new editions.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
5.5 Implementing SMS policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
5.6 Monitoring SMS policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5.7 Assigning data to be system-managed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5.8 Using data classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
5.9 Using storage classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
5.10 Using management classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
5.11 Management class functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
5.12 Using storage groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
5.13 Using aggregate backup and recovery support (ABARS) . . . . . . . . . . . . . . . . . . . . . 260
5.14 Automatic Class Selection (ACS) routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
5.15 SMS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.16 SMS control data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
5.17 Implementing DFSMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
5.18 Steps to activate a minimal SMS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5.19 Allocating SMS control data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
5.20 Defining the SMS base configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
5.21 Creating ACS routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
5.22 DFSMS setup for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
5.23 Starting SMS and activating a new configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
5.24 Control SMS processing with operator commands . . . . . . . . . . . . . . . . . . . . . . . . . . 282
5.25 Displaying the SMS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
5.26 Managing data with a minimal SMS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 285
5.27 Device-independence space allocation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
5.28 Developing naming conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
5.29 Setting the low-level qualifier (LLQ) standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
5.30 Establishing installation standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
5.31 Planning and defining data classes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
5.32 Data class attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
5.33 Use data class ACS routine to enforce standards . . . . . . . . . . . . . . . . . . . . . . . . . . 295
5.34 Simplifying JCL use. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
5.35 Allocating a data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
5.36 Creating a VSAM cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
5.37 Retention period and expiration date . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
5.38 SMS PDSE support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
5.39 Selecting data sets to allocate as PDSEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
5.40 Allocating new PDSEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
5.41 System-managed data types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
5.42 Data types that cannot be system-managed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
5.43 Interactive Storage Management Facility (ISMF) . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
5.44 ISMF: Product relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
5.45 ISMF: What you can do with ISMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
5.46 ISMF: Accessing ISMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
5.47 ISMF: Profile option. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
5.48 ISMF: Obtaining information about a panel field . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
5.49 ISMF: Data set option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
5.50 ISMF: Volume Option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
5.51 ISMF: Management Class option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
5.52 ISMF: Data Class option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
5.53 ISMF: Storage Class option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
5.54 ISMF: List option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
7.29 Accessing a data set with DFSMStvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
7.30 Application considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
7.31 DFSMStvs logging implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
7.32 Prepare for logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
7.33 Update PARMLIB with DFSMStvs parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
7.34 The DFSMStvs instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
7.35 Interacting with DFSMStvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
7.36 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX, AS/400, CICS, DB2, DS6000, DS8000, Enterprise Storage Server, ESCON, eServer, FICON, FlashCopy, GDPS, Geographically Dispersed Parallel Sysplex, Hiperspace, HyperSwap, i5/OS, IBM, IMS, iSeries, Language Environment, Magstar, OS/390, OS/400, Parallel Sysplex, POWER5, PowerPC, PR/SM, pSeries, RACF, Redbooks, Redbooks (logo), RETAIN, RS/6000, S/390, System i, System Storage, System z, Tivoli, TotalStorage, VTAM, z/Architecture, z/OS, z/VM, z9, zSeries
Novell, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other
countries.
ACS, Interchange, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the
U.S. and other countries.
SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other
countries.
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Volume 1: Introduction to z/OS and storage concepts, TSO/E, ISPF, JCL, SDSF, and z/OS
delivery and installation
Volume 2: z/OS implementation and daily maintenance, defining subsystems, JES2 and
JES3, LPA, LNKLST, authorized libraries, Language Environment, and SMP/E
Volume 3: Introduction to DFSMS, data set basics, storage management hardware and
software, VSAM, System-Managed Storage, catalogs, and DFSMStvs
Volume 5: Base and Parallel Sysplex, System Logger, Resource Recovery Services (RRS),
Global Resource Serialization (GRS), z/OS system operations, Automatic Restart
Management (ARM), Geographically Dispersed Parallel Sysplex (GDPS)
Redelf Janssen is an IT Architect in IBM Global Services ITS in IBM Germany. He holds a
degree in Computer Science from University of Bremen and joined IBM Germany in 1988. His
areas of expertise include IBM zSeries, z/OS and availability management. He has written
IBM Redbooks publications on OS/390 Releases 3, 4, and 10, and z/OS Release 8.
Andre Otto is a z/OS DFSMS SW service specialist at the EMEA Backoffice team in
Germany. He has 12 years of experience in the DFSMS, VSAM and catalog components.
Andre holds a degree in Computer Science from the Dresden Professional Academy.
Rita Pleus is an IT Architect in IBM Global Services ITS in IBM Germany. She has 21 years
of IT experience in a variety of areas, including systems programming and operations
management. Before joining IBM in 2001, she worked for a German S/390 customer. Rita
holds a degree in Computer Science from the University of Applied Sciences in Dortmund.
Her areas of expertise include z/OS, its subsystems, and systems management.
Alvaro Salla is an IBM retiree who worked for IBM for more than 30 years in large systems.
He has co-authored many IBM Redbooks publications and spent many years teaching about large systems, from the S/360 to the S/390. He has a degree in Chemical Engineering from the University of Sao Paulo, Brazil.
Valeria Sokal is an MVS system programmer at an IBM customer. She has 16 years of
experience as a mainframe systems programmer.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
DFSMS is an operating environment that helps automate and centralize the management of
storage based on the policies that your installation defines for availability, performance,
space, and security.
The heart of DFSMS is the Storage Management Subsystem (SMS). Using SMS, the storage
administrator defines policies that automate the management of storage and hardware
devices. These policies describe data allocation characteristics, performance and availability
goals, backup and retention requirements, and storage requirements for the system.
DFSMS is an exclusive element of the z/OS operating system and is a software suite that
automatically manages data from creation to expiration.
Understanding DFSMS
Data management is the part of the operating system that organizes, identifies, stores,
catalogs, and retrieves all the data information (including programs) that your installation
uses. DFSMS is an exclusive element of the z/OS operating system. DFSMS is a software
suite that automatically manages data from creation to expiration.
DFSMSdfp helps you store and catalog information about DASD, optical, and tape devices so
that it can be quickly identified and retrieved from the system. DFSMSdfp provides access to
both record- and stream-oriented data in the z/OS environment. The z/OS operating system
enables you to efficiently manage e-business workloads and enterprise transactions 24 hours
a day. DFSMSdfp is automatically included with z/OS. It performs the essential data, storage,
and device management functions of the system.
Systems programmer
As a systems programmer, you can use DFSMS data management to:
Allocate space on DASD and optical volumes
Automatically locate cataloged data sets
Control access to data
Transfer data between the application program and the medium
Mount magnetic tape volumes in the drive
DFSMS components
DFSMS is an exclusive element of the z/OS operating system. DFSMS is a software suite
that automatically manages data from creation to expiration. The following elements comprise
DFSMS:
DFSMSdfp, a base element of z/OS
DFSMSdss, an optional feature of z/OS
DFSMShsm, an optional feature of z/OS
DFSMSrmm, an optional feature of z/OS
DFSMStvs, an optional feature of z/OS
DFSMSdfp Provides storage, data, program, and device management. It comprises
components such as access methods, OPEN/CLOSE/EOV routines, catalog
management, DADSM (DASD space control), utilities, IDCAMS, SMS, NFS,
ISMF, and other functions.
DFSMSdss Provides data movement, copy, backup, and space management functions.
DFSMShsm Provides backup, recovery, migration, and space management functions. It
invokes DFSMSdss for certain of its functions.
DFSMSrmm Provides management functions for removable media such as tape cartridges
and optical media.
DFSMStvs Enables batch jobs and CICS online transactions to update shared VSAM
data sets concurrently.
DFSMSdfp component
DFSMSdfp provides storage, data, program, and device management. It comprises
components such as access methods, OPEN/CLOSE/EOV routines, catalog management,
DADSM (DASD space control), utilities, IDCAMS, SMS, NFS, ISMF, and other functions.
Managing storage
The storage management subsystem (SMS) is a DFSMSdfp facility designed for automating
and centralizing storage management. SMS automatically assigns attributes to new data
when that data is created. SMS automatically controls system storage and assigns data to
the appropriate storage device. ISMF panels allow you to specify these data attributes.
For more information about ISMF, see 5.43, Interactive Storage Management Facility (ISMF)
on page 309.
Managing data
DFSMSdfp organizes, identifies, stores, catalogs, shares, and retrieves all the data that your
installation uses. You can store data on DASD, magnetic tape volumes, or optical volumes.
Using data management, you can complete the following tasks:
Allocate space on DASD and optical volumes
Automatically locate cataloged data sets
Control access to data
Transfer data between the application program and the medium
Mount magnetic tape volumes in the drive
z/OS UNIX System Services (z/OS UNIX) provides the command interface that interactive
UNIX users can use. z/OS UNIX allows z/OS programs to directly access UNIX data.
Figure 1-4 DFSMSdss functions
DFSMSdss component
DFSMSdss is the primary data mover for DFSMS. DFSMSdss copies and moves data to help
manage storage, data, and space more efficiently. It can efficiently move multiple data sets
from old to new DASD. The data movement capability that is provided by DFSMSdss is useful
for many other operations, as well. You can use DFSMSdss to perform the following tasks.
Space management
DFSMSdss can reduce or eliminate DASD free-space fragmentation.
Concurrent copy
When it is used with supporting hardware, DFSMSdss also provides concurrent copy
capability. Concurrent copy lets you copy or back up data while that data is being used. The
user or application program determines when to start the processing, and the data is copied
as though no updates have occurred.
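To illustrate, the following is a minimal sketch of a DFSMSdss job that dumps a group of data sets using concurrent copy. The job statement, data set names, and the TAPE esoteric unit name are illustrative assumptions; your installation's naming and unit conventions will differ.
//DSSBKUP  JOB (ACCT),'DFSMSDSS DUMP',CLASS=A,MSGCLASS=X
//* Logical data set dump of PAYROLL.** with concurrent copy
//STEP1    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//BACKUP   DD DSN=BACKUP.PAYROLL.DUMP,DISP=(NEW,CATLG),UNIT=TAPE
//SYSIN    DD *
  DUMP DATASET(INCLUDE(PAYROLL.**)) -
       OUTDDNAME(BACKUP) -
       CONCURRENT
/*
The CONCURRENT keyword requests concurrent copy when the supporting hardware is available.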
DFSMSrmm component
DFSMSrmm manages your removable media resources, including tape cartridges and reels.
It provides the following functions.
Library management
You can create tape libraries, or collections of tape media associated with tape drives, to
balance the work of your tape drives and help the operators that use them.
Volume management
DFSMSrmm manages the movement and retention of tape volumes throughout their life
cycle.
DFSMShsm component
DFSMShsm complements DFSMSdss to provide the following functions.
Storage management
DFSMShsm provides automatic DASD storage management, thus relieving users from
manual storage management tasks.
Space management
DFSMShsm improves DASD space usage by keeping only active data on fast-access storage
devices. It automatically frees space on user volumes by deleting eligible data sets, releasing
overallocated space, and moving low-activity data to lower cost-per-byte devices, even if the
job did not request tape.
Attention: You must also have DFSMSdss to use the DFSMShsm functions.
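For example, TSO/E users can drive some of these DFSMShsm functions directly with the DFSMShsm user commands; the data set names below are illustrative.
HMIGRATE 'HLQ.OLD.DATA'
HRECALL  'HLQ.OLD.DATA'
HBACKDS  'HLQ.IMPORTANT.DATA'
HMIGRATE moves an eligible data set to migration storage, HRECALL brings it back, and HBACKDS creates a backup version of the data set.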
DFSMStvs component
DFSMS Transactional VSAM Services (DFSMStvs) allows you to share VSAM data sets
across CICS, batch, and object-oriented applications on z/OS or distributed systems.
DFSMStvs enables concurrent shared updates of recoverable VSAM data sets by CICS
transactions and multiple batch applications. DFSMStvs enables 24-hour availability of CICS
and batch applications.
DFSMStvs is built on top of VSAM record-level sharing (RLS), which permits sharing of
recoverable VSAM data sets at the record level. Different applications often need to share
VSAM data sets. Sometimes the applications need only to read the data set. Sometimes an
application needs to update a data set while other applications are reading it. The most
complex case of sharing a VSAM data set is when multiple applications need to update the
data set and all require complete data integrity.
Transaction processing provides functions that coordinate work flow and the processing of
individual tasks for the same data sets. VSAM record-level sharing and DFSMStvs provide
the serialization, logging, and recovery functions that make this level of concurrent sharing
possible while maintaining data integrity.
The Storage Management Subsystem (SMS) is an operating environment that automates the
management of storage. Storage management uses the values provided at allocation time to
determine, for example, on which volume to place your data set, and how many tracks to
allocate for it. Storage management also manages tape data sets on mountable volumes that
reside in an automated tape library. With SMS, users can allocate data sets more easily.
The data sets allocated through SMS are called system-managed data sets or SMS-managed
data sets.
Access methods are identified primarily by the way that they organize the data in the data set.
For example, use the basic sequential access method (BSAM) or queued sequential access
method (QSAM) with sequential data sets. However, there are times when an access method
identified with one organization can be used to process a data set organized in another
manner. For example, a sequential data set (not extended-format data set) created using
BSAM can be processed by the basic direct access method (BDAM), and vice versa. Another
example is UNIX files, which you can process using BSAM, QSAM, basic partitioned access
method (BPAM), or virtual storage access method (VSAM).
Note: As an exception, the z/OS UNIX services component supports Hierarchical File
System (HFS) data sets, where the collection is a stream of bytes and there is no concept
of logically related data records.
Storage devices
Data can be stored on a magnetic direct access storage device (DASD), magnetic tape
volume, or optical media. As mentioned previously, the term DASD applies to disks or
simulated equivalents of disks. All types of data sets can be stored on DASD, but only
sequential data sets can be stored on magnetic tape. The types of data sets are described in
2.3, DFSMSdfp data set types on page 20.
DASD volumes
Each block of data on a DASD volume has a distinct location and a unique address, making it
possible to find any record without extensive searching. You can store and retrieve records
either directly or sequentially. Use DASD volumes for storing data and executable programs,
including the operating system itself, and for temporary working storage.
The following sections discuss the logical attributes of a data set, which are specified at data
set creation time in:
DCB/ACB control blocks in the application program
DD cards (explicitly, or through the Data Class (DC) option with DFSMS)
An ACS Data Class (DC) routine (overridden by a DD card)
After the creation, such attributes are kept in catalogs and VTOCs.
A data set name can be one name segment, or a series of joined name segments. Each
name segment represents a level of qualification. For example, the data set name
HARRY.FILE.EXAMPLE.DATA is composed of four name segments. The first name on the left
is called the high-level qualifier (HLQ), the last name on the right is the lowest-level qualifier
(LLQ).
Each name segment (qualifier) is 1 to 8 characters, the first of which must be alphabetic (A to
Z) or national (# @ $). The remaining characters can be alphabetic, numeric (0 - 9),
national, or a hyphen (-). Name segments are separated by a period (.).
Note: Including all name segments and periods, the length of the data set name must not
exceed 44 characters. Thus, a maximum of 22 name segments can make up a data set
name.
Large format data sets reduce the need to use multiple volumes for single data sets,
especially very large ones such as spool data sets, dumps, logs, and traces. Unlike
extended-format data sets, which also support greater than 65 535 tracks per volume, large
format data sets are compatible with EXCP and do not need to be SMS-managed.
You can allocate a large format data set using the DSNTYPE=LARGE parameter on the DD
statement, dynamic allocation (SVC 99), TSO/E ALLOCATE, or the access method services
ALLOCATE command.
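For example, the following DD statement (with an illustrative data set name and space request) allocates a large format data set whose primary allocation exceeds 65 535 tracks:
//TRACEOUT DD DSN=HLQ.TRACE.OUTPUT,DISP=(NEW,CATLG),
//            DSNTYPE=LARGE,UNIT=3390,
//            SPACE=(CYL,(50000,5000),RLSE)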
You can allocate a basic format data set using the DSNTYPE=BASIC parameter on the DD
statement, dynamic allocation (SVC 99), TSO/E ALLOCATE, or the access method services
ALLOCATE command, or the data class. If no DSNTYPE value is specified from any of these
sources, then its default is BASIC.
Objects
Objects are named streams of bytes that have no specific format or record orientation. Use
the object access method (OAM) to store, access, and manage object data. You can use any
type of data in an object because OAM does not recognize the content, format, or structure of
the data. For example, an object can be a scanned image of a document, an engineering
drawing, or a digital video. OAM objects are stored on DASD in a DB2 database, on optical
storage volumes, or on tape storage volumes.
The storage administrator assigns objects to object storage groups and object backup
storage groups. The object storage groups direct the objects to specific DASD, optical, or tape
devices, depending on their performance requirements. You can have one primary copy of an
object and up to two backup copies of an object. A Parallel Sysplex allows you to access
objects from all instances of OAM and from optical hardware within the sysplex.
An extended-format data set can occupy any number of tracks. A sequential data set that is
not extended format cannot occupy more than 65,535 tracks on a volume, even if the volume
has more tracks than that. An extended-format, striped sequential data set can contain up to
approximately four billion blocks. The maximum size of each block is 32 760 bytes.
System-managed DASD
You can allocate both sequential and VSAM data sets in extended format on a
system-managed DASD. Extended-format VSAM data sets also allow you to release partial
unused space and to use system-managed buffering (SMB, a fast buffer pool management
technique) for VSAM batch programs. You can select whether to use the primary or
secondary space amount when extending VSAM data sets to multiple volumes.
Data striping
Sequential data striping can be used for physical sequential data sets that cause I/O
bottlenecks for critical applications. Sequential data striping uses extended-format sequential
data sets that SMS can allocate over multiple volumes, preferably on separate channel paths
and control units, to improve performance. These data sets must reside on 3390 volumes that
are located on the IBM DS8000.
Sequential data striping can reduce the processing time required for long-running batch jobs
that process large, physical sequential data sets. Smaller sequential data sets can also
benefit because of DFSMS's improved buffer management for QSAM and BSAM access
methods for striped extended-format sequential data sets.
A stripe in DFSMS is the portion of a striped data set, such as an extended format data set,
that resides on one volume. The records in that portion are not always logically consecutive.
The system distributes records among the stripes such that the volumes can be read from or
written to simultaneously to gain better performance. Whether it is striped is not apparent to
the application program. Data striping distributes data for one data set across multiple
SMS-managed DASD volumes, which improves I/O performance and reduces the batch
window. For example, a data set with 28 stripes is distributed across 28 volumes.
Physical sequential data sets cannot be extended if none of the stripes can be extended. For
VSAM data sets, each stripe can be extended to an available candidate volume if extensions
fail on the current volume.
Data classes
Data class attributes define space and data characteristics of data sets that are normally
specified on JCL DD statements, TSO/E ALLOCATE commands, access method services
(IDCAMS) DEFINE commands, dynamic allocation requests, and ISPF/PDF panels. You can
use data class to allocate sequential and VSAM data sets in extended format for the benefits
of compression (sequential and VSAM KSDS), striping, and large data set sizes (VSAM).
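As a sketch, assuming the storage administrator has defined a data class (here called DCEXTCMP, a hypothetical name) with extended format and compression attributes, a job could request it explicitly on a DD statement; in practice the data class ACS routine usually assigns the data class automatically.
//SALESHST DD DSN=HLQ.SALES.HISTORY,DISP=(NEW,CATLG),
//            DATACLAS=DCEXTCMP,
//            SPACE=(CYL,(500,100),RLSE)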
Storage groups
SMS calculates the average preference weight of each storage group using the preference
weights of the volumes that will be selected if the storage group is selected for allocation.
Then, SMS selects the storage group that contains at least as many primary volumes as the
stripe count and has the highest average weight. If there are no storage groups that meet
these criteria, the storage group with the largest number of primary volumes is selected. If
multiple storage groups have the largest number of primary volumes, the one with the highest
average weight is selected. If there are still multiple storage groups that meet the selection
criteria, SMS selects one at random.
For striped data sets, ensure that there are a sufficient number of separate paths to DASD
volumes in the storage group to allow each stripe to be accessible through a separate path.
The maximum number of stripes for physical sequential (PS) data sets is 59. For VSAM data
sets, the maximum number of stripes is 16. Only sequential or VSAM data sets can be
striped.
Note: This support is invoked when allocating a new striped data set. Volumes are ranked
by preference weight from each individual controller. This support selects the most
preferred storage group that meets or closely meets the target stripe count. This allows
selection from the most preferred volume from individual controllers to meet the stripe
count (try to spread stripes across controllers).
Fast volume selection is activated automatically to avoid overutilizing system resources.
After selecting a storage group, SMS selects volumes by their preference weight. Primary
volumes are preferred over secondary volumes because they have a higher preference
weight. Secondary volumes are selected when there is an insufficient number of primary
volumes. If there are multiple volumes with the same preference weight, SMS selects one of
the volumes at random.
Volume preference
Volume preference attributes, such as availability, accessibility, and PAV capability, are
supported.
Data sets defined as large format must be accessed using QSAM, BSAM, or EXCP.
Large format data sets have a maximum of 16 extents on each volume. Each large format
data set can have a maximum of 59 volumes. Therefore, a large format data set can have a
maximum of 944 extents (16 times 59).
A large format data set can occupy any number of tracks, without the limit of 65,535 tracks
per volume. The minimum size limit for a large format data set is the same as for other
sequential data sets that contain data: one track, which is about 56,000 bytes. Primary and
secondary space can both exceed 65,535 tracks per volume.
Figure 2-9 on page 31 shows the creation of a data set using ISPF panel 3.2. Other ways to
create a data set are as follows:
Access method services
You can define VSAM data sets and establish catalogs by using a multifunction services
program called access method services (see the sketch after this list).
TSO ALLOCATE command
You can issue the ALLOCATE command of TSO/E to define VSAM and non-VSAM data
sets.
Using JCL
Any type of data set can be defined directly through JCL; for a large format data set, specify
DSNTYPE=LARGE on the DD statement.
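To illustrate the access method services option above, the following is a minimal sketch of an IDCAMS job that defines a VSAM key-sequenced cluster; the cluster name, key length, record sizes, space, and volume are illustrative assumptions.
//DEFKSDS  JOB (ACCT),'DEFINE KSDS',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(HLQ.EXAMPLE.KSDS) -
                  INDEXED -
                  KEYS(8 0) -
                  RECORDSIZE(100 200) -
                  CYLINDERS(10 5) -
                  VOLUMES(DASD01)) -
         DATA (NAME(HLQ.EXAMPLE.KSDS.DATA)) -
         INDEX (NAME(HLQ.EXAMPLE.KSDS.INDEX))
/*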
You can use the BPAM, BSAM, QSAM, BDAM, and EXCP access methods with VIO data
sets. SMS can direct SMS-managed temporary data sets to VIO storage groups.
Figure 2-10 Large format data set enhancement with z/OS V1R9
Updates have been made to the following commands and services to ensure that each can
handle large format data sets:
TSO TRANSMIT, RECEIVE
PRINTDS
Restriction: Types of data sets that cannot be allocated as large format data sets are:
PDS, PDSE, and direct data sets
Virtual I/O data sets, password data sets, and system dump data sets
BLOCKTOKENSIZE(REQUIRE | NOREQUIRE)
Using BLOCKTOKENSIZE(REQUIRE)
If your installation uses the default BLOCKTOKENSIZE(REQUIRE) setting in PARMLIB
member IGDSMSxx, you can issue the following command from the MVS console to see the
current BLOCKTOKENSIZE setting:
D SMS,OPTIONS
z/OS UNIX
z/OS UNIX System Services (z/OS UNIX) enables z/OS to access UNIX files. UNIX
applications also can access z/OS data sets. z/OS UNIX files are byte-oriented, similar to
objects. We differentiate between the following types of z/OS UNIX files.
For example, a physical sequential data set such as DATASET.TEST.SEQ1 might be defined
with DSORG=PS, RECFM=FB, LRECL=80, and BLKSIZE=27920.
See also z/OS MVS JCL Reference, SA22-7597 for information about the data set
specifications discussed in this section.
Logical records, when located in DASD or tape, are grouped into physical records named
blocks (to save space in DASD because of the gaps). Each block of data on a DASD volume
has a distinct location and a unique address (block number, track, and cylinder), thus making
it possible to find any block without extensive sequential searching. Logical records can be
stored and retrieved either directly or sequentially.
DASD volumes are used for storing data and executable programs (including the operating
system itself), and for temporary working storage. One DASD volume can be used for many
separate data sets, and space on it can be reallocated and reused. The maximum length of a
logical record (LRECL) is limited by the physical size of the media used.
Spanned records are specified as VS, VBS, DS, or DBS. A spanned record is a logical record
that spans two or more blocks. Spanned records can be necessary if the logical record size is
larger than the maximum allowed block size.
You can also specify the records as fixed-length standard by using FS or FBS, meaning that
the data set contains no truncated (short) blocks except possibly the last one.
In an extended-format data set, the system adds a 32-byte suffix to each block, which is
transparent to the application program.
Space values
For DASD data sets, you can specify the amount of space required in blocks, logical records
(with an average record length), tracks, or cylinders. You can specify a primary and a secondary space allocation.
When you define a new data set, only the primary allocation value is used to reserve space
for the data set on DASD. Later, when the primary allocation of space is filled, space is
allocated in secondary storage amounts (if specified). The extents can be allocated on other
volumes if the data set was defined as multivolume.
For example, if you allocate a new data set and specify SPACE=(TRK,(2,4)), this initially
allocates two tracks for the data set. As each record is written to the data set and these two
tracks are used up, the system automatically obtains four more tracks. When these four tracks
are used, another four tracks are obtained. The same sequence is followed until the extent
limit for the type of data set is reached.
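A DD statement corresponding to this example might look as follows; the data set name and DCB attributes are illustrative.
//NEWDS    DD DSN=HLQ.EXAMPLE.DATA,DISP=(NEW,CATLG),
//            UNIT=3390,SPACE=(TRK,(2,4)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)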
The procedure for allocating space on magnetic tape devices is not like allocating space on
DASD. Because data sets on magnetic tape devices must be organized sequentially, each
one is located contiguously. All data sets that are stored on a given magnetic tape volume
must be recorded in the same density. See z/OS DFSMS Using Magnetic Tapes, SC26-7412
for information about magnetic tape volume labels and tape processing.
A typical configuration has a master catalog (MCAT) with aliases (for example, FPITA and
VERA) that point to user catalogs (UCATs), which in turn catalog data sets such as
FPITA.DATA, FPITA.FILE1, and VERA.FILE1 on their volumes.
For detailed information about catalogs refer to Chapter 6, Catalogs on page 325.
For example, a cataloged reference such as // DD DSN=PAY.D2,DISP=OLD locates data set
PAY.D2 through the catalog, whereas an uncataloged data set such as PAY.D1 on volume
MYVOL1 must be referenced with the UNIT and VOL parameters.
See z/OS MVS JCL Reference, SA22-7597 for information about UNIT and VOL parameters.
Note: We strongly recommend that you do not have uncataloged data sets in your
installation because uncataloged data sets can cause problems with duplicate data and
possible incorrect data set processing.
The VTOC itself is a data set that can be located anywhere on the volume after cylinder 0,
track 0, and it describes the data sets (for example, A, B, and C) that reside on the volume.
Figure 2-16 Volume table of contents (VTOC)
The VTOC lists the data sets that reside on its volume, along with information about the
location and size of each data set, and other data set attributes. It is created when the volume
is initialized through the ICKDSF utility program.
The VTOC locates data sets on that volume. The VTOC is composed of 140-byte data set
control blocks (DSCBs), of which there are six types shown in Table 2-1 on page 47, that
correspond either to a data set currently residing on the volume, or to contiguous, unassigned
tracks on the volume. A set of assembler macros is used to allow a program or z/OS to
access VTOC information.
IEHLIST utility
The IEHLIST utility can be used to list, partially or completely, entries in a specified volume
table of contents (VTOC), whether indexed or non-indexed. The program lists the contents of
selected data set control blocks (DSCBs) in edited or unedited form.
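A minimal sketch of such a job follows; the volume serial VOL001 is illustrative, and the FORMAT keyword requests an edited listing.
//LISTVTOC JOB (ACCT),'LIST VTOC',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=*
//DD1      DD UNIT=3390,VOL=SER=VOL001,DISP=OLD
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3390=VOL001
/*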
DSCBs also describe the VTOC itself. CVAF routines automatically construct a DSCB when
space is requested for a data set on the volume. Each data set on a DASD volume has one or
more DSCBs (depending on its number of extents) describing space allocation and other
control information such as operating system data, device-dependent information, and data
set characteristics. There are seven kinds of DSCBs, each with a different purpose and a
different format number.
The first record in every VTOC is the VTOC DSCB (format-4). The record describes the
device, the volume the data set resides on, the volume attributes, and the size and contents of
the VTOC data set itself. The next DSCB in the VTOC data set is a free-space DSCB
(format-5) that describes the unassigned (free) space in the full volume. The function of the
various DSCBs depends on whether an optional VTOC index is allocated on the volume. The
VTOC index is organized like a B-tree, which makes searching the VTOC faster.
Table 2-1 on page 47 describes the various types of DSCBs, taking into consideration
whether the Index VTOC is in place or not.
Starting with z/OS V1R7, a new address space (DEVMAN) contains trace information about
CVAF events.
Table 2-1 DSCB types

Format 0 (Free VTOC DSCB): Describes unused DSCB records in the VTOC (contains 140
bytes of binary zeros). To delete a DSCB from the VTOC, a format-0 DSCB is written over it.
There is one for every unused 140-byte record in the VTOC. The DS4DSREC field of the
format-4 DSCB is a count of the number of format-0 DSCBs in the VTOC; this field is not
maintained for an indexed VTOC.

Format 1 (Identifier): Describes the first three extents of a data set or VSAM data space.
There is one for every data set or data space on the volume, except the VTOC.

Format 2 (Index): Describes the indexes of an ISAM data set. This data set organization is
old and is no longer supported. There is one for each ISAM data set (for a multivolume ISAM
data set, a format-2 DSCB exists only on the first volume).

Format 3 (Extension): Describes extents after the third extent of a non-VSAM data set or a
VSAM data space. There is one for each data set on the volume that has more than three
extents. There can be as many as 10 for a PDSE, HFS, extended format data set, or a VSAM
data set component cataloged in an integrated catalog facility catalog. PDSEs, HFS, and
extended format data sets can have up to 123 extents per volume; all other data sets are
restricted to 16 extents per volume. A VSAM component can have 7257 extents in up to 59
volumes (123 on each).

Format 7 (Free space, for certain devices): Only one field in the format-7 DSCB is an
intended interface; this field indicates whether the DSCB is a format-7 DSCB. You can
reference that field as DS1FMTID or DS5FMTID. A character 7 indicates that the DSCB is a
format-7 DSCB, and your program is not to modify it. This DSCB is not used frequently.
VTOC index
The VTOC index enhances the performance of VTOC access. The VTOC index is a
physical-sequential data set on the same volume as the related VTOC, created by the
ICKDSF utility program. It consists of an index of data set names in format-1 DSCBs
contained in the VTOC and volume free space information.
If the system detects a logical or physical error in a VTOC index, the system disables further
access to the index from all systems that might be sharing the volume. Then, the VTOC
remains usable but with possibly degraded performance.
If a VTOC index becomes disabled, you can rebuild the index without taking the volume offline
to any system. All systems can continue to use that volume without interruption to other
applications, except for a brief pause during the index rebuild. After the system rebuilds the
VTOC index, it automatically re-enables the index on each system that has access to it.
Next, we see more details about the internal implementation of the Index VTOC.
You can use ICKDSF to convert a non-indexed VTOC to an indexed VTOC by using the
BUILDIX command and specifying the IXVTOC keyword. The reverse operation can be
performed by using the BUILDIX command and specifying the OSVTOC keyword. For more
information, see Device Support Facilities User's Guide and Reference, Release 17,
GC35-0033, and z/OS DFSMSdfp Advanced Services, SC26-7400.
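As a sketch, a batch job to build a VTOC index on an existing volume (here the illustrative volume VOL123) might look like this:
//BLDIX    JOB (ACCT),'BUILD VTOC INDEX',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=ICKDSF
//VOLDD    DD UNIT=3390,DISP=OLD,VOL=SER=VOL123
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  BUILDIX DDNAME(VOLDD) IXVTOC
/*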
//EXAMPLE JOB
// EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=A
//SYSIN DD *
INIT UNITADDRESS(0353) NOVERIFY -
VOLID(VOL123)
/*
You use the INIT command to initialize volumes. The INIT command writes a volume label (on
cylinder 0, track 0) and a VTOC on the device for use by MVS. It reserves and formats tracks
for the VTOC at the location specified by the user and for the number of tracks specified. If no
location is specified, tracks are reserved at the default location.
The following example performs an online minimal initialization, and as a result of the
command, an index to the VTOC is created.
//EXAMPLE2 JOB
// EXEC PGM=ICKDSF
//XYZ987 DD UNIT=3390,DISP=OLD,VOL=SER=PAY456
//SYSPRINT DD SYSOUT=A
//SYSIN DD *
INIT DDNAME(XYZ987) NOVERIFY INDEX(X'A',X'B',X'2')
/*
For details on how to IPL the stand-alone version and to see examples of the commands,
refer to Device Support Facilities User's Guide and Reference, Release 17, GC35-0033.
With z/OS V1R10, only VSAM data sets are EAS-eligible. You can control whether VSAM
data sets can reside in cylinder-managed space by including or excluding EAVs in particular
storage groups. For non-SMS managed data sets, control the allocation to a volume by
specifying a specific VOLSER or esoteric name.
With z/OS V1R11, extended-format sequential data sets are now EAS-eligible. You can
control whether the allocation of EAS-eligible data sets can reside in cylinder-managed space
using both the methods supported in z/OS V1R10 and by using the new EATTR data set
attribute keyword.
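For example, a new data set can be made eligible to reside in cylinder-managed space by coding EATTR=OPT on the DD statement; the data set name and data class (assumed to request extended format) are illustrative.
//EASHIST  DD DSN=HLQ.EAS.HISTORY,DISP=(NEW,CATLG),
//            DATACLAS=DCEXTFMT,EATTR=OPT,
//            SPACE=(CYL,(60000,5000),RLSE),UNIT=3390
EATTR=NO, in contrast, keeps the data set out of the extended addressing space.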
D/T3380: 885, 1770, and 2655 cylinders per volume. D/T3390: 1113, 2226, 3339, and 10017
cylinders per volume.
DASD capacity
Figure 3-1 shows various DASD device types. 3380 devices were used in the 1980s. Capacity
went from 885 to 2,655 cylinders per volume. When storage density increased, new device
types were introduced at the end of the 1980s. Those types were called 3390. Capacity per
volume ranged from 1,113 to 3,339 cylinders. A special device type, model 3390-9, was
introduced to store large amounts of data that did not need very fast access. The track geometry
within one device category was (and is) always the same; this means that 3380 volumes have
47,476 bytes per track, and 3390 volumes have 56,664 bytes per track.
All 3390 models have 15 tracks per cylinder; the largest emulated volumes, the 3390-27 and
3390-54, provide approximately 27 GB and almost 54 GB per volume.
Today, for example, the IBM Enterprise Storage Server emulates the IBM 3390. On an
emulated disk or on a VM minidisk, the number of cylinders per volume is a configuration
option. It might be less than or greater than the stated number. If so, the number of bytes per
device will differ accordingly. The IBM ESS Model 1750 supports up to 32760 cylinders and
the IBM ESS Model 2107 supports up to 65520 cylinders.
Large volume support is available on z/OS operating systems, the ICKDSF, and DFSORT
utilities.
Large volume support must be installed on all systems in a sysplex prior to sharing data sets
on large volumes. Shared system and application data sets cannot be placed on large
volumes until all system images in a sysplex have large volume support installed.
The size of the logical volume defined does not have an impact on the performance of the
ESS subsystem. The ESS does not serialize I/O on the basis of logical devices, so an
increase in the logical volume size does not affect the ESS backend performance. Host
operating systems, on the other hand, serialize I/Os against devices. As more data sets
reside on a single volume, there will be greater I/O contention accessing the device. With
large volume support, it is more important than ever to try to minimize contention on the
logical device level. To avoid potential I/O bottlenecks on devices:
Exploit the use of Parallel Access Volumes to reduce IOS queuing on the system level.
Eliminate unnecessary reserves by using WLM in goal mode.
Multiple allegiance will automatically reduce queuing on sharing systems.
Parallel Access Volume (PAV) support is of key importance when implementing large
volumes. PAV enables one MVS system to initiate multiple I/Os to a device concurrently. This
keeps IOSQ times down and performance up even with many active data sets on the same
volume. PAV is a practical must with large volumes. We discourage you from using large
volumes without PAV. In particular, we recommend the use of dynamic PAV and HyperPAV.
As the volume sizes grow larger, more data and data sets will reside on a single S/390 device
address. Thus, the larger the volume, the greater the multi-system performance impact will be
of serializing volumes with RESERVE processing. You need to exploit a GRS Star
configuration and convert as many RESERVEs as possible into global (SYSTEMS) ENQ requests.
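One common way to do this is through the GRS resource name lists in a GRSRNLxx PARMLIB member; the entries below are typical examples of converting VTOC and catalog reserves, and any such conversion must be verified for your own environment before use.
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSVTOC)
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSIGGV2)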
Over the years, DASD volumes have grown from the 3390-3 (3 GB, 3339 cylinders) and
3390-9 (9 GB, 10017 cylinders) to 27 GB (32760 cylinders) and 54 GB "3390-54" (65520
cylinders) volumes, accompanied by functions such as Parallel Access Volume (PAV),
HyperPAV with base and alias UCBs, finer serialization granularity, and dynamic volume
expansion.
DASD architecture
In the past decade, as processing power has dramatically increased, great care and
appropriate solutions have been deployed so that the amount of data that is directly
accessible can be kept proportionally equivalent. Over the years DASD volumes have
increased in size by increasing the number of cylinders and thus GB capacity.
However, the existing track addressing architecture has limited growth to relatively small GB
capacity volumes. This has placed increasing strain on the 4-digit device number limit and the
number of UCBs that can be defined. The largest available volume is one with 65,520
cylinders or approximately 54 GB, as shown in Figure 3-3.
Rapid data growth on the z/OS platform is leading to a critical problem for various clients, with
a 37% compound rate of disk storage growth between 1996 and 2007. The result is that this
is becoming a real constraint on growing data on z/OS. Business resilience solutions
(GDPS, HyperSwap, and PPRC) that provide continuous availability are also driving this
constraint.
Serialization granularity
Since the 1960s, shared DASD has been serialized through a sequence of
RESERVE/RELEASE CCWs that are today under the control of GRS, as shown in Figure 3-3.
This was a useful mechanism as long as the volume of data so serialized (the granularity)
was not too great. But whenever such a device grew to contain too much data, bottlenecks
became an issue.
This relief builds upon prior technologies that were implemented in part to help reduce the
pressure on running out of device numbers. These include PAV and HyperPAV. PAV alias
UCBs can be placed in an alternate subchannel set (z9 multiple subchannel support).
HyperPAV reduces the number of alias UCBs over traditional PAVs and provides the I/O
throughput required.
Multiple allegiance
Multiple allegiance (MA) was introduced to alleviate the following constraint. It allows
serialization on a limited amount of data within a given DASD volume, which leads to the
possibility of having several (non-overlapping) serializations held at the same time on the
same DASD volume. This is a useful mechanism on which any extension of the DASD volume
addressing scheme can rely. In other terms, multiple allegiance provides finer (than
RESERVE/RELEASE) granularity for serializing data on a volume. It gives the capability to
support I/O requests from multiple systems, one per system, to be concurrently active against
the same logical volume, if they do not conflict with each other. Conflicts occur when two or
more I/O requests require access to overlapping extents (an extent is a contiguous range of
tracks) on the volume, and at least one of the I/O requests involves writing of data.
Requests involving writing of data can execute concurrently with other requests as long as
they operate on non-overlapping extents on the volume. Conflicting requests are internally
queued in the DS8000. Read requests can always execute concurrently regardless of their
extents. Without the MA capability, DS8000 generates a busy indication for the volume
whenever one of the systems issues a request against the volume, thereby causing the I/O
requests to be queued within the channel subsystem (CSS). However, this concurrency can
be achieved as long as no data accessed by one channel program can be altered through the
actions of another channel program.
HyperPAV feature
With the IBM System Storage DS8000 Turbo model and the IBM server synergy feature,
HyperPAV, together with PAV and multiple allegiance, can dramatically improve performance
and efficiency for System z environments. With HyperPAV technology:
z/OS uses a pool of UCB aliases.
For each z/OS image within the sysplex, aliases are used independently. WLM is not involved
in alias movement so it does not need to collect information to manage HyperPAV aliases.
Benefits of HyperPAV
HyperPAV has been designed to provide an even more efficient parallel access volume (PAV)
function. When implementing larger volumes, it provides a way to scale I/O rates without the
need for additional PAV alias definitions. HyperPAV exploits FICON architecture to reduce
overhead, improve addressing efficiencies, and provide storage capacity and performance
improvements, as follows:
More dynamic assignment of PAV aliases improves efficiency.
The number of PAV aliases needed might be reduced, taking fewer from the 64 K device
limitation and leaving more storage for capacity use.
The ability to do multiple I/O requests to the same volume nearly eliminates IOS queue time
(IOSQ), one of the major components in z/OS response time. Traditionally, access to highly
active volumes has involved manual tuning, splitting data across multiple volumes, and more.
With PAV and the Workload Manager, you can almost forget about manual performance
tuning. WLM manages PAVs across all members of a sysplex, too. The ESS, in conjunction
with z/OS, has the ability to meet the performance requirements on its own.
When you specify a yes value on the Service Coefficient/Service Definition Options panel,
you enable dynamic alias management globally throughout the sysplex. WLM will keep track
of the devices used by separate workloads and broadcast this information to other systems in
the sysplex. If WLM determines that a workload is not meeting its goal due to IOS queue time,
then WLM attempts to find alias devices that can be moved to help that workload achieve its
goal. Even if all work is meeting its goals, WLM will attempt to move aliases to the busiest
devices to minimize overall queuing.
Alias assignment
It is not always easy to predict which volumes should have an alias address assigned, and how many.
Your software can automatically manage the aliases according to your goals. z/OS can exploit
automatic PAV tuning if you are using WLM in goal mode. z/OS recognizes the aliases that
are initially assigned to a base during the Nucleus Initialization Program (NIP) phase. WLM
can dynamically tune the assignment of alias addresses. WLM monitors the device
performance and is able to dynamically reassign alias addresses from one base to another if
predefined goals for a workload are not met. WLM instructs IOS to reassign an alias.
Through WLM, there are two mechanisms to tune the alias assignment:
The first mechanism is goal based. This logic attempts to give additional aliases to a PAV
device that is experiencing IOS queue delays and is impacting a service class period that
is missing its goal. To give additional aliases to the receiver device, a donor device must
be found with a less important service class period. A bitmap is maintained with each PAV
device that indicates the service classes using the device.
The second mechanism is to move aliases to high-contention PAV devices from
low-contention PAV devices. High-contention devices will be identified by having a
significant amount of IOS queue. This tuning is based on efficiency rather than directly
helping a workload to meet its goal.
z/OS Image
The ESS and DS8000 support concurrent data transfer operations to or from the same
3390/3380 devices from the same system. A device (volume) accessed in this way is called a
parallel access volume (PAV).
PAV exploitation requires both software enablement and an optional feature on your
controller. PAV support must be installed on each controller. It enables the issuing of multiple
channel programs to a volume from a single system, and allows simultaneous access to the
logical volume by multiple users or jobs. Reads, as well as writes to other extents, can be
satisfied simultaneously. The domain of an I/O consists of the specified extents to which the
I/O operation applies, which correspond to the extents of the same data set. Writes to the
same domain still have to be serialized to maintain data integrity, and the same applies to
reads against a domain that is being written.
The implementation of N parallel I/Os to the same 3390/3380 device consumes N addresses
in the logical controller, thus decreasing the number of possible real devices. Also, a UCB is
required for each alias address.
PAV benefits
Workloads that are most likely to benefit from PAV functionality being available include:
Volumes with many concurrently open data sets, such as volumes in a work pool
Volumes that have a high read to write ratio per extent
Volumes reporting high IOSQ times
To resolve such issues, HyperPAV was introduced. With HyperPAV, all alias UCBs are
located in a pool and are used dynamically by IOS.
HyperPAV
Reduces the number of PAV-aliases needed per logical subsystem (LSS) by an order of magnitude, while still maintaining optimal response times
This is accomplished by no longer statically binding PAV-aliases to PAV-bases; WLM no longer adjusts the bindings
In HyperPAV mode, PAV-aliases are bound to PAV-bases only for the duration of a single I/O operation, thus significantly reducing the number of aliases required per LSS
DS8000 feature
HyperPAV is an optional feature on the DS8000 series, available with the HyperPAV indicator
feature number 0782 and corresponding DS8000 series function authorization (2244-PAV
HyperPAV feature number 7899). HyperPAV also requires the purchase of one or more PAV
licensed features and the FICON/ESCON Attachment licensed feature. The FICON/ESCON
Attachment licensed feature applies only to the DS8000 Turbo Models 931, 932, and 9B2.
HyperPAV allows many DS8000 series users to benefit from enhancements to PAV with
support for HyperPAV.
HyperPAV allows an alias address to be used to access any base on the same control unit
image, on a per-I/O basis. This capability also allows separate HyperPAV hosts to use one
alias to access separate bases, which reduces the number of alias addresses required to
support a set of bases in a System z environment, with no latency in targeting an alias to a
base. This functionality is also designed to enable applications to achieve equal or better
performance than is possible with the original PAV feature alone, while using the same or
fewer z/OS resources. The HyperPAV capability is offered on z/OS V1R6 and later.
Figure: HyperPAV alias pool in a z/OS image. Applications issue I/O to the base volumes of
logical subsystem (LSS) 0800 (base UA=01 on UCB 0801 and base UA=02 on UCB 0802),
while the alias UCBs (08F0 through 08F3, alias UA=F0 through F3) are kept in a pool and are
bound to a base only for the duration of an I/O.
HyperPAV feature
With the IBM System Storage DS8000 Turbo model and the IBM server synergy feature,
HyperPAV, together with PAV, Multiple Allegiance, and support for the IBM System z MIDAW
facility, can dramatically improve performance and efficiency for System z environments.
Note: HyperPAV was introduced and integrated in z/OS V1R9 and is available in z/OS
V1R8 with APAR OA12865.
EAV volumes
An extended address volume (EAV) is a volume with more than 65,520 cylinders. An EAV
increases the amount of addressable DASD storage per volume beyond 65,520 cylinders by
changing how tracks on volumes are addressed. The extended address volume is the next
step in providing larger volumes for z/OS. z/OS provided this support first in z/OS V1R10 of
the operating system. Over the years, volumes have grown by increasing the number of
cylinders and thus GB capacity. However, the existing track addressing architecture has
limited the required growth to relatively small GB capacity volumes, which has put pressure
on the 4-digit device number limit. Previously, the largest available volume was one with
65,520 cylinders, or approximately 54 GB. Access to the volumes includes the use of PAV, HyperPAV,
and FlashCopy SE (Space-efficient FlashCopy).
3390 Model A
A volume of this size has to be configured in the DS8000 as a 3390 Model A. However, a
3390 Model A is not always an EAV. A 3390 Model A is any device configured in the DS8000
to have more than 65,220 cylinders. Figure 3-8 illustrates the 3390 device types.
EAV benefit
The benefit of this support is that the amount of z/OS addressable disk storage is further
significantly increased. This provides relief for customers that are approaching the 4-digit
device number limit by providing constraint relief for applications using large VSAM data sets,
such as those used by DB2, CICS, zFS file systems, SMP/E CSI data sets, and NFS
mounted data sets. This support is provided in z/OS V1R10 and enhanced in z/OS V1R11.
Extended format sequential data sets are supported in z/OS V1R11. Support is provided for
all data set types, but PDSE, basic and large format sequential, BDAM, and PDS data sets
are not enabled to use the extended addressing space (EAS) of an EAV. When this document
references EAS-eligible data sets, it is referring to VSAM and extended-format sequential
data sets.
3390 Model A
With EAV volumes, an architecture is implemented that provides a capacity of hundreds of
terabytes for a single volume. However, the first releases are limited to a volume with 223 GB
or 262,668 cylinders.
Note: In a future release, various of these data sets may become EAS-eligible. All data set
types, even those listed here, can be allocated in the track-managed space on a device
with cylinder-managed space on an EAV volume. Eligible EAS data sets can be created
and extended anywhere on an EAV. Data sets that are not eligible for EAS processing can
only be created or extended in the track-managed portions of the volume.
Figure: EAV layout. Track-managed space begins at cylinder 0; cylinder-managed space
occupies the cylinders above the 65,520-cylinder boundary.
Multicylinder unit
A multicylinder unit (MCU) is a fixed unit of disk space that is larger than a cylinder. Currently,
on an EAV volume, a multicylinder unit is 21 cylinders and the number of the first cylinder in
each multicylinder unit is a multiple of 21. Figure 3-11 illustrates the EAV and multicylinder
units.
The cylinder-managed space is space on the volume that is managed only in multicylinder
units. Cylinder-managed space begins at cylinder address 65,520. Each data set occupies an
integral multiple of multicylinder units. Space requests targeted for the cylinder-managed
space are rounded up to the next multicylinder unit. The cylinder-managed space only exists
on EAV volumes.
The 21-cylinder value for the MCU is derived from being the smallest unit that can map out
the largest possible EAV volume and stay within the index architecture with a block size of
8,192 bytes, as follows:
It is also a value that divides evenly into the 1 GB storage segments of an IBM DS8000.
These 1 GB segments are the allocation unit in the IBM DS8000 and are equivalent to
1,113 cylinders.
These segments are allocated in multiples of 1,113 cylinders starting at cylinder 65,520.
One of the more important EAV design points is that IBM maintains its commitment to
customers that the 3390 track format, track image size, and number of tracks per cylinder will
remain the same.
Cylinder-managed space
The cylinder-managed space is the space on the volume that is managed only in
multicylinder units (MCUs). Cylinder-managed space begins at cylinder address 65520. Each
data set occupies an integral multiple of multicylinder units. Space requests targeted for the
cylinder-managed space are rounded up to the next multicylinder unit. The cylinder-managed
space exists only on EAV volumes. A data set allocated in cylinder-managed space may
have its requested space quantity rounded up to the next MCU.
Data sets allocated in cylinder-managed space are described with a new type of data set
control blocks (DSCB) in the VTOC. Tracks allocated in this space will also be addressed
using the new track address. Existing programs that are not changed will not recognize these
new DSCBs and therefore will be protected from seeing how the tracks in cylinder-managed
space are addressed.
Track-managed space
The track-managed space is the space on a volume that is managed in track and cylinder
increments. All volumes today have track-managed space. Track-managed space ends at
cylinder address 65,519, and each data set in it occupies an integral multiple of tracks. The
track-managed space allows existing programs and physical migration
products to continue to work. Physical copies can be done from a non-EAV to an EAV and
have those data sets accessible.
Dynamic volume expansion (DVE) simplifies data growth by allowing volume expansion
without taking volumes offline, and it significantly reduces the complexity of migrating to
larger volumes.
Note: For the dynamic volume expansion function, volumes cannot be in Copy Services
relationships (point-in-time copy, FlashCopy SE, Metro Mirror, Global Mirror, Metro/Global
Mirror, and z/OS Global Mirror) during expansion.
Note: All systems must be at the z/OS V1R10 level or above for the DVE feature to be
used when the systems are sharing the Release 4.0 Licensed Internal Microcode updated
DS8000 at an LCU level. Current DS8000 Licensed Internal Microcode is Version
5.4.1.1043.
A logical volume can be increased in size while the volume remains online to host systems for
the following types of volumes:
3390 model 3 to 3390 model 9
3390 model 9 to EAV volume sizes using z/OS V1R10
Dynamic volume expansion can be used to expand volumes beyond 65,520 cylinders
without moving data or causing an application outage.
Dynamic volume expansion is performed by the DS8000 Storage Manager and can be
requested using its Web GUI. 3390 volumes may be increased in size, for example from a
3390 model 3 to a model 9 or a model 9 to a model A (EAV). z/OS V1R11 introduces an
interface that can be used to make requests for dynamic volume expansion of a 3390 volume
on a DS8000 from the system.
The -cap parameter specifies the quantity of CKD cylinders that you want allocated to the
specified volume. 3380 volumes cannot be expanded. For 3390 Model A volumes (DS8000
only), the -cap parameter value can be in the range of 1 to 65,520 (increments of 1) or 65,667
to 262,668 (increments of 1113). For standard 3390 volumes, the -cap parameter value can
be in the range of 1 to 65,520 (849 KB to 55.68 GB).
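As an illustration only, the expansion can also be requested from the DS CLI; the following is a minimal sketch that assumes the chckdvol command with its -cap parameter and a placeholder CKD volume ID of 0A01:
dscli> chckdvol -cap 262668 0A01
The -cap value gives the new total number of cylinders; above the 65,520-cylinder boundary it must be one of the sizes in the 65,667 to 262,668 range, in increments of 1,113 cylinders, as described above.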
To access the DS Storage Manager GUI, point a Web browser at
http://lpar_IPaddr:8451/DS8000/Console and log in.
DS8000 shipped with pre-Release 3 code (earlier than Licensed Machine Code 5.3.xx.xx)
can also establish the communication with the DS Storage Manager GUI using a Web
browser on any supported network-connected workstation by simply entering into the Web
browser the IP address of the HMC and the port that the DS Storage Management server is
listening to:
http://<ip-address>:8451/DS8000
After accessing the GUI through a LOGON, follow these steps to access the device for which
you want to increase the number of cylinders.
To increase the number of cylinders on an existing volume, you can use the Web browser
GUI by selecting the volume (shown as volser MLDF64 in Figure 3-16) and then using the
Select Action pull-down to select Increase Capacity.
After you close the message, the panel in Figure 3-18 on page 81 is displayed where you can
specify the number of cylinders to increase the volume size. When you specify a new capacity
to be applied to the selected volume, specify a value that is between the minimum and
maximum size values that are displayed. Maximum values cannot exceed the amount of total
storage that is available.
Note: Only volumes of type 3390 model 3, 3390 model 9, and 3390 custom, can be
expanded. The total volume cannot exceed the available capacity of the storage image.
Capacity cannot be increased for volumes that are associated with Copy Services
functions.
Figure 3-18 Specifying the number of cylinders to increase the volume size
Specify Continue in Figure 3-19, and a requested size of 75,684 is processed for the
expansion of the volume.
Note: Remember that the number of cylinders must be as stated earlier. The reason an
MCU value is 21 cylinders is because it is derived from being the smallest unit that can
map out the largest possible EAV and stay within the index architecture (with a block size
of 8192 bytes). It is also a value that divides evenly into the 1 GB storage segments of a
DS8000. These 1 GB segments are the allocation unit in the DS8000 and are equivalent to
1113 cylinders. Data sets allocated in cylinder-managed space may have their requested
space quantity rounded up to the next MCU.
VTOC index
The VTOC index enhances the performance of VTOC access. The VTOC index is a
physical-sequential data set on the same volume as the related VTOC. It consists of an index
of data set names in format-1 DSCBs contained in the VTOC and volume free space
information.
An SMS-managed volume requires an indexed VTOC; otherwise, the VTOC index is highly
recommended. For additional information about SMS-managed volumes, see z/OS DFSMS
Implementing System-Managed Storage, SC26-7407.
Note: You can use the ICKDSF REFORMAT REFVTOC command to rebuild a VTOC index
to reclaim any no longer needed index space and to possibly improve access times.
The new index block size of 8,192 bytes is recorded in the format-1 DSCB for the index and
is necessary to allow for scaling to the largest volume sizes. The VTOC index space map (VIXM) has a new bit,
VIMXHDRV, to indicate that new fields exist in the new VIXM extension, as follows:
The VIXM contains a new field for the RBA of the new large unit map and new space
statistics.
The VIXM contains a new field for the minimum allocation unit in cylinders for the
cylinder-managed space. Each extent in the cylinder-managed space must be a multiple
of this on an EAV.
Note: If the VTOC index size is omitted when formatting a volume with ICKDSF and the
index is not preallocated, the default before this release was 15 tracks. In EAV Release 1
(that is, starting with z/OS V1R10), the default size for EAV and non-EAV volumes is
calculated and can be different from earlier releases.
ICKDSF utility
The ICKDSF utility performs functions needed for the installation, use, and maintenance of
IBM direct-access storage devices (DASD). You can also use it to perform service functions,
error detection, and media maintenance.
The ICKDSF utility is used primarily to initialize disk volumes. At a minimum, this process
involves creating the disk label record and the volume table of contents (VTOC). ICKDSF can
also scan a volume to ensure that it is usable, can reformat all the tracks, can write home
addresses, as well as other functions.
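For example, a minimal initialization job might look like the following sketch; the unit address, volume serial, and VTOC and index placement values are illustrative assumptions only and must be adjusted for your installation:
//INITVOL  EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  INIT UNITADDRESS(0A30) NOVERIFY -
       VOLID(WORK01) VTOC(0,1,14) INDEX(1,0,15)
/*
The INIT command writes the volume label and builds the VTOC (and, here, a VTOC index) on the device at the specified unit address.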
APAR PK56092
This APAR provides extended address volume (EAV) support up to 262,668 cylinders. If you
define a volume greater than 262,668 cylinders you will receive the following message when
running any ICKDSF command:
ICK30731I X'xxxxxxx' CYLINDER SIZE EXCEEDS MAXIMUM SIZE SUPPORTED
In this message, xxxxxxx contains the hexadecimal size of the volume you defined.
To convert a non-indexed VTOC to an indexed VTOC, use the BUILDIX command with the
IXVTOC keyword. The reverse operation can be performed by using the BUILDIX command
and specifying the OSVTOC keyword.
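A minimal sketch of such a BUILDIX job follows; the ddname and volume serial are placeholders:
//BLDIX    EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//DDVOL    DD UNIT=3390,VOL=SER=WORK01,DISP=OLD
//SYSIN    DD *
  BUILDIX DDNAME(DDVOL) IXVTOC
/*
Specifying OSVTOC instead of IXVTOC on the BUILDIX command converts the volume back to a non-indexed VTOC.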
Run REFORMAT NEWVTOC(cc,hh,n|ANY,n) to expand the VTOC. The new VTOC will be
allocated on the beginning location cc,hh with total size of n tracks. Overlay between the new
and old VTOC is not allowed. If cc,hh is omitted, ICKDSF will locate the new VTOC at the first
eligible location on the volume other than at the location of the old VTOC where free space
with n tracks is found. n must be greater than the old VTOC size. The volume must be offline
to use the NEWVTOC parameter.
Run REFORMAT EXTVTOC(n), where n is the total size of the new VTOC in tracks. There
must be free space available to allow for contiguous expansion of the VTOC; if there is no
such free space, the request cannot be satisfied. The EXTINDEX(n) keyword can be specified
on the same command to extend the VTOC index.
Restriction: EXTINDEX is valid only for MVS online volumes and only when EXTVTOC is specified.
F DEVMAN,ENABLE(REFVTOC)

F DEVMAN,{DUMP}
         {REPORT}
         {RESTART}
         {END(taskid)}
         {ENABLE(feature)}    (new with z/OS V1R11)
         {DISABLE(feature)}   (new with z/OS V1R11)
         {?|HELP}
Figure 3-25 Rebuild the VTOC for EAV volumes with DEVMAN
You can also use the F DEVMAN,ENABLE(REFVTOC) command after an IPL as well to
enable automatic VTOC and index reformatting. However, update the DEVSUPxx parmlib
member to ensure that it remains enabled across subsequent IPLs.
Using DEVMAN
The DEVMAN REPORT display has the following format, as shown in Figure 3-26.
Where:
FMID Displays the FMID level of DEVMAN.
APARS Displays any DEVMAN APARs that are installed (or the word NONE).
OPTIONS Displays the currently enabled options (in the example, REFVTOC is
enabled).
SUBTASKS Lists the status of any subtasks that are currently executing.
F DEVMAN,HELP
DMO0060I DEVICE MANAGER COMMANDS:
**** DEVMAN *********************************************************
* ?|HELP - display devman modify command parameters
* REPORT - display devman options and subtasks
* RESTART - quiesce and restart devman in a new address space
* DUMP - obtain a dump of the devman address space
* END(taskid) - terminate subtask identified by taskid
* ENABLE(feature) - enable an optional feature
* DISABLE(feature)- disable an optional feature
*--------------------------------------------------------------------
* Optional features:
* REFVTOC - automatic VTOC rebuild
* DATRACE - dynamic allocation diagnostic trace
**** DEVMAN *********************************************************
USEEAV(YES|NO)
Specifies, at the system level, whether SMS can select an extended address volume during volume selection processing. The check applies to new allocations and when extending data sets to a new volume.
YES - EAV volumes can be used to allocate new data sets or to extend existing data sets to new volumes.
NO - (Default) SMS does not select any EAV during volume selection.
SETSMS USEEAV(YES|NO)
Note: You can use the SETSMS command to change the setting of USEEAV without
having to re-IPL. This modified setting is in effect until the next IPL, when it reverts to the
value specified in the IGDSMSxx member of PARMLIB.
To make the setting change permanent, you must alter the value in SYS1.PARMLIB. The
syntax of the operator command is:
SETSMS USEEAV(YES|NO)
SMS requests will not use EAV volumes if the USEEAV setting in the IGDSMSxx parmlib
member is set to NO.
Specific allocation requests are failed. For non-specific allocation requests (UNIT=SYSDA),
EAV volumes are not selected. Messages indicating no space available are returned when
non-EAV volumes are not available.
For non-EAS eligible data sets, all volumes (EAV and non-EAV) are equally preferred (or they
have no preference). This is the same as today, with the exception that extended address
volumes are rejected when the USEEAV parmlib value is set to NO.
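For example (a sketch; the IGDSMSxx change and the use of the D SMS,OPTIONS display to verify the setting are assumptions about typical usage), an installation might code USEEAV(YES) in the IGDSMSxx member and then activate and check it immediately with:
SETSMS USEEAV(YES)
D SMS,OPTIONS
The display command shows the SMS options currently in effect, including USEEAV.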
If the allocation request is equal to or higher than the BreakPointValue, the system prefers to
satisfy the request from free space available from the cylinder-managed space. If the
preferred area cannot satisfy the request, both areas become eligible to satisfy the requested
space amount.
Note: You can use the SETSMS command to change the setting of BreakPointValue
without having to re-IPL. This modified setting is in effect until the next IPL when it reverts
to the value specified in the IGDSMSxx member of PARMLIB. To make the setting change
permanent, you must alter the value in SYS1.PARMLIB. The syntax of the operator
command is:
SETSMS BreakPointValue(0-65520)
For all EAS-eligible data sets - new data set attribute EATTR - allows a user to control whether a data set allocation can have extended attribute DSCBs (format 8 and 9)
Use the EATTR parameter to indicate whether the data set can support extended attributes
To create such data sets, you can include EAVs in specific storage groups, specify an EAV on the request, or direct the allocation to an esoteric containing EAV devices
By definition, a data set with extended attributes can reside in the extended address space (EAS) on an extended address volume (EAV)
EATTR attribute
This z/OS V1R11 support for extended format sequential data sets includes the EATTR
attribute, which has been added for all data set types to allow a user to control whether a data
set can have extended attribute DSCBs and thus control whether it can be allocated in the
EAS.
EAS-eligible data sets are defined to be those that can be allocated in the extended
addressing space and have extended attributes. This is sometimes referred to as
cylinder-managed space.
DFSMShsm checks the data set level attribute EATTR when performing non-SMS volume
selection. The EATTR data set level attribute specifies whether a data set can have extended
attributes (Format 8 and 9 DSCBs) and optionally reside in EAS on an extended address
volume (EAV). Valid values for EATTR are NO and OPT.
For more information about the EATTR attribute, see z/OS DFSMS Access Method Services
for Catalogs, SC26-7394.
Note: The EATTR specification is recorded in the format-1 or format-8 DSCBs for all data
set types and volume types and is recorded in the VVDS for VSAM cluster names. EATTR
is listed by IEHLIST, ISPF, ISMF, LISTCAT, and the catalog search interface (CSI).
//DD2 DD DSNAME=XYZ12,DISP=(,KEEP),UNIT=SYSALLDA,
// VOLUME=SER=25143,SPACE=(CYL,(10000,100),,CONTIG),
// EATTR=OPT
Note: The EATTR value has no effect for DISP=OLD processing, even for programs that
might open a data set for OUTPUT, INOUT, or OUTIN processing. The value on the EATTR
parameter is used for requests when the data set is newly created.
DFSMS provides an EAV migration assistance tracker program. The tracking of EAV
migration assistance instances uses the Console ID Tracking facility provided in z/OS V1R6.
The EAV migration assistance tracker helps you to do the following:
Identify select systems services by job and program name, where the invoking programs
might require analysis for changes to use new services. The program calls are identified
as informational instances for possible migration actions. They are not considered errors,
because the services return valid information.
Identify possible instances of improper use of returned information in programs, such as
parsing 28-bit cylinder numbers in output as 16-bit cylinder numbers. These instances are
identified as warnings.
Identify instances of programs that will either fail or run with an informational message if
they run on an EAV. These are identified as programs in error. The migration assistance
tracker flags programs that use these functions when the target volume of the operation is an
EAV and the function invoked did not specify the EADSCB=OK keyword.
DFSMS instances tracked by the EAV migration assistance tracker are shown in Figure 3-34.
SETCON command
SETCON TR=ON
As events are recorded by the Tracking facility, report the instance to the product owner. After
the event is reported, update the parmlib member so that the instance is no longer recorded
by the facility. In this way, the facility only reports new events.
DFSMSdfp utilities
Utilities are programs that perform commonly needed functions. DFSMS provides utility
programs to assist you in organizing and maintaining data. There are system and data set
utility programs that are controlled by JCL, and utility control statements.
The base JCL and certain utility control statements necessary to use these utilities are
provided in the major discussion of the utility programs in this chapter. For more details and to
help you find the program that performs the function you need, see Guide to Utility Program
Functions in z/OS DFSMSdfp Utilities, SC26-7414.
Table 4-1 on page 109 lists and describes system utilities. Programs that provide functions
which are better performed by newer applications (such as ISMF, ISPF/PDF or DFSMSrmm
or DFSMSdss) are marked with an asterisk (*) in the table.
IEHPROGM (Access Method Services, PDF 3.2) Build and maintain system control data.
*IFHSTATR (DFSMSrmm, EREP) Select, format, and write information about tape errors from the IFASMFDP tape.
These utilities allow you to manipulate partitioned, sequential or indexed sequential data sets,
or partitioned data sets extended (PDSEs), which are provided as input to the programs. You
can manipulate data ranging from fields within a logical record to entire data sets. The data
set utilities included in this section cannot be used with VSAM data sets. You use the
IDCAMS utility to manipulate VSAM data sets; refer to Invoking the IDCAMS utility program
on page 130.
Table 4-2 lists data set utility programs and their use. Programs that provide functions which
are better performed by newer applications, such as ISMF or DFSMSrmm or DFSMSdss, are
marked with an asterisk (*) in the table.
*IEBCOMPR (SuperC, PDF 3.12) Compare records in sequential or partitioned data sets, or PDSEs.
IEBGENER (or ICEGENER) Copy records from a sequential data set, or convert a data set from sequential organization to partitioned organization.
*IEBIMAGE Modify, print, or link modules for use with the IBM 3800 Printing Subsystem, the IBM 3262 Model 5, or the 4284 printer.
IEBPTPCH (or PDF 3.1 or 3.6) Print or punch records in a sequential or partitioned data set.
Figure 4-2 Comparing the directories of two partitioned data sets (PDSE1 and PDSE2)
Example 1: Directory 1 contains A, B, C, D, G, L; Directory 2 contains A, B, C, D, E, F, G, H, I, J, K, L.
Example 2: Directory 1 contains A, B, C, F, H, I, J; Directory 2 contains A, B, F, G, H, I, J.
IEBCOMPR utility
IEBCOMPR is a data set utility used to compare two sequential data sets, two partitioned
data sets (PDS), or two PDSEs, at the logical record level, to verify a backup copy. Fixed,
variable, or undefined records from blocked or unblocked data sets or members can also be
compared. However, you should not use IEBCOMPR to compare load modules.
Two sequential data sets are considered equal (that is, are considered to be identical) if:
The data sets contain the same number of records
Corresponding records and keys are identical
Two partitioned data sets or two PDSEs are considered equal if:
Corresponding members contain the same number of records
Note lists are in the same position within corresponding members
Corresponding records and keys are identical
Corresponding directory user data fields are identical
If all these conditions are not met for a specific type of data set, those data sets are
considered unequal. If records are unequal, the record and block numbers, the names of the
DD statements that define the data sets, and the unequal records are listed in a message
data set. Ten successive unequal comparisons stop the job step, unless you provide a routine
for handling error conditions.
A partitioned data set or partitioned data set extended can be compared only if all names in
one or both directories have counterpart entries in the other directory. The comparison is
made on members identified by these entries and corresponding user data.
You can run this sample JCL to compare two cataloged, partitioned organized (PO) data sets:
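A minimal sketch of such a job follows; the data set names are placeholders:
//COMPARE  EXEC PGM=IEBCOMPR
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=USER.PDS.LIB1,DISP=SHR
//SYSUT2   DD DSN=USER.PDS.LIB2,DISP=SHR
//SYSIN    DD *
  COMPARE TYPORG=PO
/*
The COMPARE TYPORG=PO statement tells IEBCOMPR that the input data sets are partitioned.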
Figure 4-2 on page 110 shows several examples of the directories of two partitioned data
sets.
In Example 1, Directory 2 contains corresponding entries for all the names in Directory 1;
therefore, the data sets can be compared.
In Example 2, each directory contains a name that has no corresponding entry in the other
directory; therefore, the data sets cannot be compared, and the job step will be ended.
IEBCOPY utility
IEBCOPY is a data set utility used to copy or merge members between one or more
partitioned data sets (PDS), or partitioned data sets extended (PDSE), in full or in part. You
can also use IEBCOPY to create a backup of a partitioned data set into a sequential data set
(called an unload data set or PDSU), and to copy members from the backup into a partitioned
data set.
In addition, IEBCOPY automatically lists the number of unused directory blocks and the
number of unused tracks available for member records in the output partitioned data set.
INDD statement
This statement specifies the names of DD statements that locate the input data sets. When
an INDD= appears in a record by itself (that is, not with a COPY keyword), it functions as a
control statement and begins a new step in the current copy operation.
INDD=[(]{DDname|(DDname,R)}[,...][)]
R specifies that all members to be copied or loaded from this input data set are to replace
any identically named members on the output partitioned data set.
OUTDD statement
This statement specifies the name of a DD statement that locates the output data set.
OUTDD=DDname
SELECT statement
This statement selects specific members to be processed from one or more data sets by
coding a SELECT statement to name the members. Alternatively, all members but a specific
few can be designated by coding an EXCLUDE statement to name members not to be
processed.
Figure: IEBCOPY copy operation. Before the copy, DATA.SET1 (directory A, B, F) contains
members A, B, and F plus unused space; DATA.SET5 (directory A, C) contains members A
and C; DATA.SET6 (directory B, C, D) contains members B, C, and D. After the copy (before a
compress), DATA.SET1 has directory entries A, B, C, D, and F, and the replaced copies of
members B and C remain as unused space.
COPY processing
Processing occurs as follows:
1. Member A is not copied from DATA.SET5 into DATA.SET1 because it already exists on
DATA.SET1 and the replace option was not specified for DATA.SET5.
2. Member C is copied from DATA.SET5 to DATA.SET1, occupying the first available space.
3. All members are copied from DATA.SET6 to DATA.SET1, immediately following the last
member. Members B and C are copied even though the output data set already contains
members with the same names because the replace option is specified on the data set
level.
The pointers in the DATA.SET1 directory are changed to point to the new members B and C.
Thus, the space occupied by the old members B and C is unused.
Figure: IEBCOPY compress-in-place of DATA.SET1. The compress removes the embedded
unused space between members; afterward, members F, A, B, D, and C are contiguous and
the reclaimed space is available at the end of the data set.
The simplest way to request a compress-in-place operation is to specify the same ddname for
both the OUTDD and INDD parameters of a COPY statement.
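For example, the following sketch compresses DATA.SET1 in place by naming the same DD statement on both parameters:
//COMPRESS EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//MYPDS    DD DSN=DATA.SET1,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=MYPDS,INDD=MYPDS
/*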
Example
In our example in 4.4, IEBCOPY: Copy operation on page 114, the pointers in the
DATA.SET1 directory are changed to point to the new members B and C. Thus, the space
occupied by the old members B and C is unused. The members currently on DATA.SET1 are
compressed in place as a result of the copy operation, thereby eliminating embedded unused
space. However, be aware that a compress-in-place operation may bring risk to your data if
something abnormally disrupts the process.
Using IEBGENER
IEBGENER copies records from a sequential data set or converts sequential data sets into
members of PDSs or PDSEs. You can use IEBGENER to:
Create a backup copy of a sequential data set, a member of a partitioned data set or
PDSE, or a UNIX System Services file such as an HFS file.
Produce a partitioned data set or PDSE, or a member of a partitioned data set or PDSE,
from a sequential data set or a UNIX System Services file.
Expand an existing partitioned data set or PDSE by creating partitioned members and
merging them into the existing data set.
Produce an edited sequential or partitioned data set or PDSE.
Manipulate data sets containing double-byte character set data.
Print sequential data sets or members of partitioned data sets or PDSEs or UNIX System
Services files.
Re-block or change the logical record length of a data set.
Copy user labels on sequential output data sets.
Supply editing facilities and exits.
Jobs that call IEBGENER have a system-determined block size used for the output data set if
RECFM and LRECL are specified, but BLKSIZE is not specified. The data set is also
considered to be system-reblockable.
In Figure 4-9, the data set in SYSUT1 is a PDS or PDSE member and the data set in
SYSUT2 is a UNIX file. This job creates a macro library in the UNIX directory.
// JOB ....
// EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=PROJ.BIGPROG.MACLIB(MAC1),DISP=SHR
//SYSUT2 DD PATH='/u/BIGPROG/macros/special/MAC1',PATHOPTS=OCREAT,
// PATHDISP=(KEEP,DELETE),
// PATHMODE=(SIRUSR,SIWUSR,
// SIRGRP,SIROTH),
// FILEDATA=TEXT
//SYSIN DD DUMMY
Note: If you have the DFSORT product installed, you should be using ICEGENER as an
alternative to IEBGENER when making an unedited copy of a data set or member. It may
already be installed in your system under the name IEBGENER. It generally gives better
performance.
Figure 4-10 IEBGENER: adding members from sequential input to a partitioned data set.
Utility control statements divide the sequential input into record groups (delimited by
LASTREC) and name the members (B, D, and F) that are merged with the existing members
(C, E, and G) to form the expanded data set.
Figure 4-10 shows how sequential input is converted into members that are merged into an
existing partitioned data set or PDSE. The left side of the figure shows the sequential input
that is to be merged with the partitioned data set or PDSE shown in the middle of the figure.
Utility control statements are used to divide the sequential data set into record groups and to
provide a member name for each record group. The right side of the figure shows the
expanded partitioned data set or PDSE.
Note that members B, D, and F from the sequential data set were placed in available space
and that they are sequentially ordered in the partitioned directory.
Figure: IEBGENER copying the sequential data set MY.DATA to MY.DATA.OUTPUT.
For further information about IEBGENER, refer to z/OS DFSMSdfp Utilities, SC26-7414.
Using IEHLIST
IEHLIST is a system utility used to list entries in the directory of one or more partitioned data
sets or PDSEs, or entries in an indexed or non-indexed volume table of contents. Any number
of listings can be requested in a single execution of the program.
The directory of a partitioned data set is composed of variable-length records blocked into
256-byte blocks. Each directory block can contain one or more entries that reflect member or
alias names and other attributes of the partitioned members. IEHLIST can list these blocks in
edited and unedited format.
The directory of a PDSE, when listed, will have the same format as the directory of a
partitioned data set.
If you include the keyword FORMAT in the LISTVTOC parameter, you will have more detailed
information about the DASD and about the data sets, and you can also specify the DSNAME
that you want to request information about. If you specify the keyword DUMP instead of
FORMAT, you will get an unformatted VTOC listing.
Note: This information is at the DASD volume level, and does not have any interaction with
the catalog.
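A minimal sketch of a job that produces a formatted VTOC listing follows; the volume serial and unit are placeholders:
//LISTVTOC EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=*
//DD1      DD UNIT=3390,VOL=SER=SMS001,DISP=OLD
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3390=SMS001
/*
Adding the DSNAME parameter limits the listing to specific data sets, and replacing FORMAT with DUMP produces the unformatted listing mentioned above.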
IEHINITT utility
IEHINITT is a system utility used to place standard volume label sets onto any number of
magnetic tapes mounted on one or more tape units. They can be ISO/ANSI Version 3 or
ISO/ANSI Version 4 volume label sets written in American Standard Code for Information
Interchange (ASCII) or IBM standard labels written in EBCDIC.
(Omit REFRESH if you did not have this option active previously.)
To further protect against overwriting the wrong tape, IEHINITT asks the operator to verify
each tape mount.
In Figure 4-17, serial numbers 001234, 001244, 001254, 001264, 001274, and so forth are
placed on eight tape volumes. The labels are written in EBCDIC at 800 bits per inch. Each
volume labeled is mounted, when it is required, on one of four 9-track tape units.
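The general form of such a job is sketched below; the unit name and density (DCB=DEN=2 for 800 bpi on 9-track drives) are installation-dependent assumptions, and one INITT statement is coded per serial number:
//LABELS   EXEC PGM=IEHINITT
//SYSPRINT DD SYSOUT=*
//LABEL    DD DCB=DEN=2,UNIT=(3420,4,DEFER)
//SYSIN    DD *
LABEL INITT SER=001234
LABEL INITT SER=001244
LABEL INITT SER=001254
LABEL INITT SER=001264
LABEL INITT SER=001274
LABEL INITT SER=001284
LABEL INITT SER=001294
LABEL INITT SER=001304
/*
The name field of each INITT statement (LABEL) must match the ddname of the DD statement that allocates the tape units.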
Detailed procedures for using the program are described in z/OS DFSMSrmm
Implementation and Customization Guide, SC26-7405.
Note: DFSMSrmm is an optional priced feature of DFSMS. That means that EDGINERS
can only be used when DFSMSrmm is licensed. If DFSMSrmm is licensed, IBM
recommends that you use EDGINERS for tape initialization instead of using IEHINITT.
IEFBR14 program
IEFBR14 is not a utility program. It is a two-line program that clears register 15, thus passing
a return code of 0. It then branches to the address in register 14, which returns control to the
system. In other words, it is a dummy program. It can be used in a step to force
MVS (specifically, the initiator) to process the JCL and perform functions such as the
following:
Checking all job control statements in the step for syntax
Allocating direct access space for data sets
Performing data set dispositions like creating new data sets or deleting old ones
Note: Although the system allocates space for data sets, it does not initialize the new data
sets. Therefore, any attempt to read from one of these new data sets in a subsequent step
may produce unpredictable results. Also, we do not recommend allocation of multi-volume
data sets while executing IEFBR14.
In the example in Figure 4-18 the first DD statement DD1 deletes old data set DATA.SET1.
The second DD statement creates a new PDS with name DATA.SET2.
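A minimal sketch of such a step follows; the space and DCB values are illustrative only:
//CLEANUP  EXEC PGM=IEFBR14
//DD1      DD DSN=DATA.SET1,DISP=(OLD,DELETE)
//DD2      DD DSN=DATA.SET2,DISP=(NEW,CATLG),UNIT=SYSALLDA,
//            SPACE=(CYL,(5,2,10)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920,DSORG=PO)
The directory quantity in the SPACE parameter (10 blocks here) is what makes DD2 a partitioned data set.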
Access methods
An access method is a friendly program interface between programs and their data. It is in
charge of interfacing with Input Output Supervisor (IOS), the z/OS code that starts the I/O
operation. An access method makes the physical organization of data transparent to you by:
Managing data buffers
Blocking and de-blocking logical records into physical blocks
Synchronizing your task and the I/O operation (wait/post mechanism)
Writing the channel program
Optimizing the performance characteristics of the control unit (such as caching and data
striping)
Compressing and decompressing I/O data
Executing software error recovery
In contrast to other platforms, z/OS supports several types of access methods and data
organizations.
An access method defines the organization by which the data is stored and retrieved. DFSMS
access methods have their own data set structures for organizing data, and their own macros
and utilities to define and process data sets. The choice is up to the application: depending on
the type of access required (sequential or random) and on whether insertions and deletions
must be supported, the application picks the most appropriate access method for its data.
Optionally, BDAM uses hardware keys. Hardware keys are less efficient than the optional
software keys in VSAM KSDS.
Note: Because BDAM tends to require the use of device-dependent code, it is not a
recommended access method. In addition, using keys is much less efficient than in VSAM.
BDAM is supported by DFSMS only to enable compatibility with other IBM operating
systems.
For information about partitioned organized data set, see 4.22, Partitioned organized (PO)
data sets on page 143, and subsequent sections.
VSAM arranges and retrieves logical records by an index key, relative record number, or
relative byte addressing (RBA). A logical record has an RBA, which is the relative byte
address of its first byte in relation to the beginning of the data set. VSAM is used for direct,
sequential or skip sequential processing of fixed-length and variable-length records on DASD.
VSAM data sets (also named clusters) are always cataloged. There are five types of cluster
organization:
Entry-sequenced data set (ESDS)
This contains records in the order in which they were entered. Records are added to the
end of the data set and can be accessed sequentially or randomly through the RBA.
Key-sequenced data set (KSDS)
This contains records in ascending collating sequence of the contents of a logical record
field called key. Records can be accessed by the contents of such key, or by an RBA.
Linear data set (LDS)
This contains data that has no record boundaries. Linear data sets contain none of the
control information that other VSAM data sets do. Data in Virtual (DIV) is an optional
intelligent buffering technique that includes a set of assembler macros that provide
buffering access to VSAM linear data sets. See 4.41, VSAM: Data-in-virtual (DIV) on
page 174.
Relative record data set (RRDS)
This contains logical records in relative record number order; the records can be accessed
sequentially or randomly based on this number. There are two types of relative record data
sets:
Fixed-length RRDS: logical records must be of fixed length.
Variable-length RRDS: logical records can vary in length.
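As an illustration of defining one of these organizations, a KSDS might be created with IDCAMS as in the following sketch; the cluster name, key length and offset, record sizes, and space values are illustrative assumptions:
//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(MY.SAMPLE.KSDS) -
         INDEXED -
         KEYS(8 0) -
         RECORDSIZE(100 200) -
         CYLINDERS(5 1)) -
      DATA(NAME(MY.SAMPLE.KSDS.DATA)) -
      INDEX(NAME(MY.SAMPLE.KSDS.INDEX))
/*
INDEXED requests a KSDS; NONINDEXED, NUMBERED, or LINEAR would request an ESDS, an RRDS, or a linear data set instead.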
A z/OS UNIX file (HFS or zFS) can be accessed as though it were a VSAM entry-sequenced
data set (ESDS). Although UNIX files are not actually stored as entry-sequenced data sets,
the system attempts to simulate the characteristics of such a data set. To identify or access a
UNIX file, specify the path that leads to it.
All access method services commands have the following general structure:
COMMAND parameters ... [terminator]
The command defines the type of service requested; the parameters further describe the
service requested; the terminator indicates the end of the command statement.
Time Sharing Option (TSO) users can use functional commands only. For more information
about modal commands, refer to z/OS DFSMS Access Method Services for Catalogs,
SC26-7394.
You can call the access method services program in the following ways:
As a job or jobstep
From a TSO session
From within your own program
TSO users can run access method services functional commands from a TSO session as
though they were TSO commands.
For more information, refer to Invoking Access Method Services from Your Program in z/OS
DFSMS Access Method Services for Catalogs, SC26-7394.
As a job or jobstep
You can use JCL statements to call access method services. PGM=IDCAMS identifies the
access method services program, as shown in Figure 4-21.
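A minimal sketch of such a job follows; the LISTCAT command and data set name are placeholders for whatever AMS commands you want to run:
//AMSJOB   JOB ...
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES(MY.DATA.SET) ALL
/*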
Each time you enter an access method services command as a TSO command, TSO builds
the appropriate interface information and calls access method services. You can enter one
command at a time. Access method services processes the command completely before
TSO lets you continue processing. Except for ALLOCATE, all the access method services
functional commands are supported in a TSO environment.
To use IDCAMS and certain of its parameters from TSO/E, you must update the IKJTSOxx
member of SYS1.PARMLIB. Add IDCAMS to the list of authorized programs (AUTHPGM). For
more information, see z/OS DFSMS Access Method Services for Catalogs, SC26-7394.
ALTER Alters attributes of data sets, catalogs, tape library entries, and tape
volume entries that have already been defined.
BLDINDEX Builds alternate indexes (AIX) for existing VSAM base clusters.
DCOLLECT Collects data set, volume usage, and migration utility information.
DEFINE ALIAS Defines an alternate name for a user catalog or a non-VSAM data set.
DEFINE ALTERNATEINDEX Defines an alternate index for a KSDS or ESDS VSAM data set.
DEFINE CLUSTER Creates KSDS, ESDS, RRDS, VRRDS and linear VSAM data sets.
DEFINE PATH Defines a path directly over a base cluster or over an alternate index
and its related base cluster.
IMPORT Connects user catalogs, and imports VSAM clusters and their integrated catalog
facility (ICF) catalog information.
PRINT Used to print VSAM data sets, non-VSAM data sets, and catalogs.
VERIFY Causes a catalog to correctly reflect the end of a data set after an error
occurred while closing a VSAM data set. The error might have caused
the catalog to be incorrect.
For a complete description of all AMS commands, see z/OS DFSMS Access Method
Services for Catalogs, SC26-7394.
DCOLLECT functions
Capacity planning
Active data sets
VSAM clusters
Migrated data sets
Backed-up data sets
SMS configuration information
The IDCAMS DCOLLECT command collects DASD performance and space occupancy data in
a sequential file that you can use as input to other programs or applications.
Data is gathered from the VTOC, VVDS, and DFSMShsm control data set for both managed
and non-managed storage. ISMF provides the option to build the JCL necessary to execute
DCOLLECT.
DCOLLECT example
With the sample JCL shown in Figure 4-25 you can gather information about all volumes
belonging to storage group STGGP001.
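The following is a minimal sketch of such a job; the output data set name and DCB attributes are illustrative assumptions:
//DCOLLECT EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//OUTDS    DD DSN=MY.DCOLLECT.OUTPUT,DISP=(NEW,CATLG),
//            UNIT=SYSALLDA,SPACE=(CYL,(5,5)),
//            DCB=(RECFM=VB,LRECL=644,BLKSIZE=0)
//SYSIN    DD *
  DCOLLECT OUTFILE(OUTDS) STORAGEGROUP(STGGP001)
/*
OUTFILE points DCOLLECT at the DD statement for the output file, and STORAGEGROUP limits the collection to the volumes in storage group STGGP001.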
Figure 4-26 A generation data group ABC.GDG with a limit of 5. The GDSs range from the
oldest, ABC.GDG.G0001V00 (-4), to the newest, ABC.GDG.G0005V00 (0), and the
generations are spread across volumes such as VOLABC and VOLDEF.
Within a GDG, the generations can have like or unlike DCB attributes and data set
organizations. If the attributes and organizations of all generations in a group are identical,
the generations can be retrieved together as a single data set.
Generation data sets can be sequential, PDSs, or direct (BDAM). Generation data sets
cannot be PDSEs, UNIX files, or VSAM data sets. The same GDG may contain SMS and
non-SMS data sets.
There are usability benefits to grouping related data sets using a function such as GDS. For
example, the catalog management routines can refer to the information in a special index
called a generation index in the catalog, and as a result:
All data sets in the group can be referred to by a common name.
z/OS is able to keep the generations in chronological order.
Outdated or obsolete generations can be automatically deleted from the catalog by z/OS.
Another benefit is the ability to reference a new generation using the same JCL.
A generation data group (GDG) base is allocated in a catalog before the GDSs are
cataloged. Each GDG is represented by a GDG base entry. Use the access method services
DEFINE command to allocate the GDG base (see also 4.19, Defining a generation data
group on page 139).
The GDG base is a construct that exists only in a user catalog; it does not exist as a data set
on any volume. The GDG base is used to maintain the generation data sets (GDS), which are
the real data sets.
The number of GDSs in a GDG depends on the limit you specify when you create a new
GDG in the catalog.
GDG example
In our example in Figure 4-26 on page 137, the limit is 5. That means, the GDG can hold a
maximum of five GDSs. Our data set name is ABC.GDG. Then, you can access the GDSs by
their relative names; for example, ABC.GDG(0) corresponds to the absolute name
ABC.GDG.G0005V00. ABC.GDG(-1) corresponds to generation ABC.GDG.G0004V00, and
so on. The relative number can also be used to catalog a new generation (+1), which will be
generation number 6 with an absolute name of ABC.GDG.G0006V00. Because the limit is 5,
the oldest generation (G0001V00) is rolled-off if you define a new one.
The parameters you specify on the DEFINE GENERATIONDATAGROUP IDCAMS command determine
what happens to rolled-off GDSs. For example, if you specify the SCRATCH parameter, the GDS
is scratched from VTOC when it is rolled off. If you specify the NOSCRATCH parameter, the
rolled-off generation data set is re-cataloged as rolled off and is disassociated with its
generation data group.
GDSs can be in a deferred roll-in state if the job never reached end-of-step or if they were
allocated as DISP=(NEW,KEEP) and the data set is not system-managed. However, GDSs in
a deferred roll-in state can be referred to by their absolute generation numbers. You can use
the IDCAMS command ALTER ROLLIN to roll in these GDSs.
For further information about generation data groups, see z/OS DFSMS: Using Data Sets,
SC26-7410.
Figure: Defining a GDG base with DEFINE GENERATIONDATAGROUP, specifying the
EMPTY, NOSCRATCH, and LIMIT(255) parameters, together with the VTOC of a volume
containing the generations of ABC.GDG alongside other data sets and available space.
The DEFINE GENERATIONDATAGROUP command defines a GDG base catalog entry GDG01.
Figure 4-29 shows a generation data set defined within the GDG by using JCL statements.
The job DEFGDG2 allocates space and catalogs a GDG data set in the newly-defined GDG.
The job control statement GDGDD1 DD specifies the GDG data set in the GDG.
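A minimal sketch of such a DEFINE follows (the parameter values are those shown in the figure, and GDG01 is the base name used in the text; the step name is a placeholder):
//DEFGDG1  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GENERATIONDATAGROUP -
         (NAME(GDG01) -
          EMPTY -
          NOSCRATCH -
          LIMIT(255))
/*
EMPTY causes all generations to be uncataloged when the limit is exceeded, and NOSCRATCH leaves the rolled-off data sets on the volume.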
Only one model DSCB is necessary for any number of generations. If you plan to use only
one model, do not supply DCB attributes when you create the model. When you subsequently
create and catalog a generation, include necessary DCB attributes in the DD statement
referring to the generation. In this manner, any number of GDGs can refer to the same model.
The catalog and model data set label are always located on a direct access volume, even for
a magnetic tape GDG.
Restriction: You cannot use a model DSCB for system-managed generation data sets.
The generation and version number are in the form GxxxxVyy, where xxxx is an unsigned
four-digit decimal generation number (0001 through 9999) and yy is an unsigned two-digit
decimal version number (00 through 99). For example:
A.B.C.G0001V00 is generation data set 1, version 0, in generation data group A.B.C.
A.B.C.G0009V01 is generation data set 9, version 1, in generation data group A.B.C.
The number of generations and versions is limited by the number of digits in the absolute
generation name; that is, there can be 9,999 generations. Each generation can have 100
versions. The system automatically maintains the generation number.
You can catalog a generation using either absolute or relative numbers. When a generation is
cataloged, a generation and version number is placed as a low-level entry in the generation
data group. To catalog a version number other than V00, you must use an absolute
generation and version number.
Read/update old GDS: A.B.C.G0005V00 = A.B.C(-1) and A.B.C.G0006V00 = A.B.C(0)
Define new GDS: A.B.C.G0007V00 = A.B.C(+1)
The value of the specified integer tells the operating system what generation number to
assign to a new generation data set, or it tells the system the location of an entry representing
a previously cataloged old generation data set.
When you use a relative generation number to catalog a generation, the operating system
assigns an absolute generation number and a version number of V00 to represent that
generation. The absolute generation number assigned depends on the number last assigned
and the value of the relative generation number that you are now specifying. For example, if in
a previous job generation, A.B.C.G0006V00 was the last generation cataloged, and you
specify A.B.C(+1), the generation now cataloged is assigned the number G0007V00.
Though any positive relative generation number can be used, a number greater than 1 can
cause absolute generation numbers to be skipped for a new generation data set. For
example, if you have a single step job and the generation being cataloged is a +2, one
generation number is skipped. However, in a multiple step job, one step might have a +1 and
a second step a +2, in which case no numbers are skipped. The mapping between relative
and absolute numbers is kept until the end of the job.
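For example (a sketch with hypothetical space and DCB attributes; on a non-SMS-managed volume a model DSCB reference may also be required, as described earlier), a step can catalog the next generation by coding the relative number (+1):
//NEWGEN   EXEC PGM=IEFBR14
//GDGDD1   DD DSN=A.B.C(+1),DISP=(NEW,CATLG),
//            UNIT=SYSALLDA,SPACE=(TRK,(15,5)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
At end of job, the new data set is rolled in and becomes A.B.C(0).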
Figure 4-32 A partitioned organized data set, PO.DATA.SET, whose directory entries point to
members A, B, and C.
In a partitioned organized data set, the books are called members, and to locate them, they
are pointed to by entries in a directory, as shown in Figure 4-32.
The members are individual sequential data sets and can be read or written sequentially, after
they have been located by directory. Then, the records of a given member are written or
retrieved sequentially.
Partitioned data sets can only exist on DASD. Each member has a unique name, one to eight
characters in length, and is stored in a directory that is part of the data set.
The main benefit of using a PO data set is that, without searching the entire data set, you can
retrieve any individual member after the data set is opened. For example, in a program library
(always a partitioned data set) each member is a separate program or subroutine. The
individual members can be added or deleted as required.
All these improvements require almost total compatibility, at the program level and the user
level, with the old PDS.
If your data set is large, or if you expect to update it extensively, it might be best to allocate a
large space. A PDS cannot occupy more than 65,535 tracks and cannot extend beyond one
volume. If your data set is small or is seldom changed, let SMS calculate the space
requirements to avoid wasted space or wasted time used for recreating the data set.
Space for the directory is expressed in 256 byte blocks. Each block contains from 3 to 21
entries, depending on the length of the user data field. If you expect 200 directory entries,
request at least 10 blocks. Any unused space on the last track of the directory is wasted
unless there is enough space left to contain a block of the first member.
The system allocates five cylinders to the data set, of which ten 256-byte records are for a
directory. Because the CONTIG subparameter is coded, the system allocates the five
cylinders contiguously on the volume. The secondary allocation is two cylinders, which is used
when the data set needs to expand beyond the five-cylinder primary allocation.
Figure: Ways to create a PDSE: through an SMS data class construct, by coding
DSNTYPE=LIBRARY on the DD statement, or by converting an existing PDS.
New directory pages are added, interleaved with the member pages, as new directory entries
are required. A PDSE always occupies at least five pages of storage.
The directory is like a KSDS index structure (KSDS is covered in 4.34, VSAM key sequenced
cluster (KSDS) on page 166), making a search much faster. It cannot be overwritten by being
opened for sequential output.
If you try to add a member with DCB characteristics that differ from those of the rest of the
members, you will get an error.
Restriction: You cannot use a PDSE for certain system data sets that are opened in the
IPL/NIP time frame.
PDSE enhancements
Recent enhancements have made PDSEs more reliable and available, correcting a few
problems that caused IPLs due to a hang, deadlock, or out-of-storage condition.
Originally, in order to implement PDSE, two system address spaces were introduced:
SMXC, in charge of PDSE serialization.
SYSBMAS, the owner of the data space and hiperspace buffering.
z/OS V1R6 combines SMXC and SYSBMAS to a single address space called SMSPDSE.
This improves overall PDSE usability and reliability by:
Reducing excessive ECSA usage (by moving control blocks into the SMSPDSE address
space)
Reducing re-IPLs due to system hangs in failure or CANCEL situations
Providing storage administrators with tools for monitoring and diagnosis through VARY
SMS,PDSE,ANALYSIS command (for example, determining which systems are using a
particular PDSE)
However, the SMSPDSE address space is usually non-restartable because PDSE data sets
are typically present in the LNKLST concatenation, so any hang condition can cause an
unplanned IPL. To address this, a new, restartable address space, SMSPDSE1, is in charge
of all allocated PDSEs except the ones in the LNKLST.
You can convert the entire data set or individual members, and also back up and restore
PDSEs. By using the DFSMSdss COPY function with the CONVERT and PDS keywords, you
can convert a PDSE back to a PDS. This is especially useful if you need to prepare a PDSE
for migration to a site that does not support PDSEs. When copying members from a PDS load
module library into a PDSE program library, or vice versa, the system invokes the program
management binder component.
Converting PDSs to PDSEs is beneficial, but be aware that certain data sets are unsuitable
for conversion to, or allocation as, PDSEs because the system does not retain the original
block boundaries.
Using DFSMSdss
In Figure 4-36, the DFSMSdss COPY example converts all PDSs with the high-level qualifier
of MYTEST on volume SMS001 to PDSEs with the high-level qualifier of MYTEST2 on
Using IEBCOPY
To copy one or more specific members using IEBCOPY, as shown in Figure 4-36 on
page 150, use the SELECT control statement. In this example, IEBCOPY copies members A,
B, and C from USER.PDS.LIBRARY to USER.PDSE.LIBRARY.
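A sketch consistent with that description follows (the data set dispositions are assumptions):
//COPYSEL  EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//INPDS    DD DSN=USER.PDS.LIBRARY,DISP=SHR
//OUTLIB   DD DSN=USER.PDSE.LIBRARY,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=OUTLIB,INDD=INPDS
  SELECT MEMBER=(A,B,C)
/*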
For more information about DFSMSdss, see z/OS DFSMSdss Storage Administration Guide,
SC35-0423, and z/OS DFSMSdss Storage Administration Reference, SC35-0424.
The binder
The binder is the program that processes the output of language translators and compilers
into an executable program (load module or program object). It replaced the linkage editor
and batch loader.
The program management loader increases the services of the program fetch component by
adding support for loading program objects. The program management loader reads both
program objects and load modules into virtual storage and prepares them for execution. It
relocates any address constants in the program to point to the appropriate areas in virtual
storage and supports 24-bit, 31-bit, and 64-bit addressing ranges. All program objects loaded
from a PDSE are page-mapped into virtual storage. When loading program objects from a
PDSE, the loader selects a loading mode based on the module characteristics and
parameters specified to the binder when you created the program object. You can influence
the mode with the binder FETCHOPT parameter. The FETCHOPT parameter allows you to
select whether the program is completely preloaded and relocated before execution, or
whether pages of the program can be read into virtual storage and relocated only when they
are referenced during execution.
IEWTPORT utility
The transport utility (IEWTPORT) is a program management service with very specific and
limited function. It obtains (through the binder) a program object from a PDSE and converts it
into a transportable program file in a sequential (nonexecutable) format. It also reconstructs
the program object from a transportable program file and stores it back into a PDSE (through
the binder).
Access methods
An access method defines the technique that is used to store and retrieve data. Access
methods have their own data set structures to organize data, macros to define and process
data sets, and utility programs to process data sets. Access methods are identified primarily
by the data set organization. For example, use the basic sequential access method (BSAM)
or queued sequential access method (QSAM) with sequential data sets. However, there are
times when an access method identified with one organization can be used to process a data
set organized in a different manner.
Physical sequential
There are two sequential access methods, basic sequential access method (BSAM) and
queued sequential access method (QSAM) and just one sequential organization. Both
methods access data organized in a physical sequential manner; the physical records
(containing logical records) are stored sequentially in the order in which they are entered.
An important performance item in sequential access is buffering. If you allow enough buffers,
QSAM can minimize the number of SSCHs by packaging the data transfer of many physical
blocks into the same I/O operation (through CCW command chaining). This function
considerably decreases the total I/O connect time. Another key point is the
look-ahead function for reads, that is, reading ahead records that are not yet required by
the application program.
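As an illustrative sketch (the data set name and buffer count are assumptions, not taken from an example in this book), a larger QSAM buffer pool can be requested through the BUFNO DCB subparameter on the DD statement:
//INPUT    DD DSN=MY.SEQ.DATA,DISP=SHR,DCB=BUFNO=30
With 30 buffers available, QSAM can chain the transfer of many blocks into a single I/O operation and read ahead of the application.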
Extended format data sets must be SMS-managed and must reside on DASD. You cannot
use an extended format data set for certain system data sets.
Programs can also access the information in HFS files through the MVS BSAM, QSAM, and
VSAM (Virtual Storage Access Method) access methods. When using BSAM or QSAM, an
HFS file is simulated as a multi-volume sequential data set. When using VSAM, an HFS file is
simulated as an ESDS. Note the following points about HFS data sets:
They are supported by standard DADSM create, rename, and scratch.
They are supported by DFSMShsm for dump/restore and migrate/recall if DFSMSdss is
used as the data mover.
They are not supported by IEBCOPY or the DFSMSdss COPY function.
QSAM arranges records sequentially in the order in which they are entered to form sequential
data sets, which are the same as those data sets that BSAM creates. QSAM anticipates the
need for records based on their order and, to improve performance, reads these records
into storage before they are requested.
This is called queued access. You can use QSAM with the following data types:
Sequential data sets
Basic format sequential data sets before z/OS V1R7, which were known as sequential
data sets or more accurately as non-extended-format sequential data sets
Large format sequential data sets
Extended-format data sets
z/OS UNIX files
The DCBE macro option FIXED=USER must be coded to indicate that the calling program
has done its own page fixing for all BSAM data buffers.
Note: BSAM will never queue or defer more read or write requests than the number of
channel programs (NCP) value set in OPEN.
QSAM support
For QSAM in z/OS V1R9, if you code a nonzero MULTACC value, you are suggesting that the
system queue I/O requests more efficiently, and OPEN calculates a default number of buffers to defer. OPEN
calculates the number of BLKSIZE-length blocks that can fit within 64 KB, then multiplies that
value by the MULTACC value. If the block size exceeds 32 KB, then OPEN uses the
MULTACC value without modification (this can happen only if you are using LBI, the large
block interface). The system then tries to defer starting I/O requests until that number of
buffers has been accumulated for the DCB. QSAM will never queue (defer) more buffers than
the BUFNO value that is in effect.
Note: If you code a nonzero MULTACC value, OPEN will calculate a default number of
buffers that you are suggesting the system queue more efficiently.
For these supported data set types, the system uses MULTSDN to calculate a more efficient
value for BUFNO when the following conditions are true:
The MULTSDN value is not zero.
DCBBUFNO has a value of zero after completion of the DCB OPEN exit routine.
The data set block size is available.
[Figure: VSAM record management and catalog management - VSAM data set types: KSDS, ESDS, RRDS (fixed-length and variable-length), LDS, and catalog]
z/OS UNIX files can be accessed as though they are VSAM entry-sequenced data sets
(ESDS). Although UNIX files are not actually stored as entry-sequenced data sets, the
system attempts to simulate the characteristics of such a data set. To identify or access a
UNIX file, specify the path that leads to it.
Any type of VSAM data set can be in extended format. Extended-format data sets have a
different internal storage format than data sets that are not extended. This storage format
gives extended-format data sets additional usability characteristics and possibly better
performance due to striping. You can choose that an extended-format key-sequenced data
set be in the compressed format. Extended-format data sets must be SMS managed. You
cannot use an extended-format data set for certain system data sets.
The VSAM terminology described in the following sections includes:
Logical record - the unit of application information in a VSAM data set, designed by the
application programmer; it can be of fixed or variable size and is divided into fields, one of
which can be a key
Physical record
Control interval
Control area
Component
Cluster
Sphere
Logical record
A logical record is a unit of application information used to store data in a VSAM cluster. The
logical record is designed by the application programmer from the business model. The
application program, through a GET, requests that a specific logical record be moved from the
I/O device to memory in order to be processed. Through a PUT, the specific logical record is
moved from memory to an I/O device. A logical record can be of a fixed size or a variable size,
depending on the business requirements.
The logical record is divided into fields by the application program, such as the name of the
item, code, and so on. One or more contiguous fields can be defined as a key field to VSAM,
and a specific logical record can be retrieved directly by its key value.
Logical records of VSAM data sets are stored differently from logical records in non-VSAM
data sets.
Physical record
A physical record is device-dependent and is a set of logical records moved during an I/O
operation by just one CCW (Read or Write). VSAM calculates the physical record size in
order to optimize the track space (to avoid many gaps) at the time the data set is defined. All
physical records in VSAM have the same length. A physical record is also referred to as a
physical block or simply a block. VSAM may have control information along with logical
records in a physical record.
Component
A component in systems with VSAM is a named, cataloged collection of stored records, such
as the data component or index component of a key-sequenced file or alternate index. A
component is a set of CAs. It is the VSAM terminology for an MVS data set. A component has
an entry in the VTOC. An example of a component can be the data set containing only data
for a KSDS VSAM organization.
Cluster
A cluster is a named structure consisting of a group of related components. VSAM data sets
can be defined with either the DEFINE CLUSTER command or the ALLOCATE command. The
cluster is a set of components that have a logical binding between them. For example, a
KSDS cluster is composed of the data component and the index component. The concept of
cluster was introduced to make the JCL to access VSAM more flexible. If you want to access
a KSDS normally, just use the cluster's name on a DD card. Otherwise, if you want special
processing with just the data, use the data component name on the DD card.
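As a minimal sketch (the cluster and component names, key length, record sizes, space, and volume are illustrative), a KSDS cluster and its two components might be defined with IDCAMS as follows:
//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(MY.SAMPLE.KSDS) -
                  INDEXED -
                  KEYS(8 0) -
                  RECORDSIZE(100 200) -
                  CYLINDERS(5 1) -
                  VOLUMES(VSM001)) -
         DATA    (NAME(MY.SAMPLE.KSDS.DATA)) -
         INDEX   (NAME(MY.SAMPLE.KSDS.INDEX))
/*
A DD card that refers to MY.SAMPLE.KSDS processes the whole cluster; a DD card that refers to MY.SAMPLE.KSDS.DATA processes only the data component.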
Sphere
A sphere is a VSAM cluster and its associated data sets. The cluster is originally defined with
the access method services ALLOCATE command, the DEFINE CLUSTER command, or through
JCL. The most common use of the sphere is to open a single cluster. The base of the sphere
is the cluster itself.
[Figure: Control interval format - logical records (LR1 to LRn), free space, 3-byte record definition fields (RDFs), and a 4-byte control interval definition field (CIDF); a pair of RDFs can describe a group of contiguous records of the same size]
Based on the CI size, VSAM calculates the best size of the physical block in order to better
use the 3390/3380 logical track. The CI size can be from 512 bytes to 32 KB. The contents of a CI
depend on the cluster organization. A KSDS CI consists of:
Logical records stored from the beginning to the end of the CI.
Free space, for data records to be inserted into or lengthened.
Control information, which is made up of two types of fields:
One control interval definition field (CIDF) per CI. The CIDF is a 4-byte field that contains
information about the amount and location of free space in the CI.
One or more record definition fields (RDFs). Each RDF is a 3-byte field that describes the
length of a logical record; a pair of RDFs can describe a group of contiguous records of the
same length.
The size of CIs can vary from one component to another, but all the CIs within the data or
index component of a particular cluster data set must be of the same length. The CI
components and properties may vary, depending on the data set organization. For example,
an LDS does not contain CIDFs and RDFs in its CI. All of the bytes in the LDS CI are data
bytes.
Spanned records
Spanned records are logical records that are larger than the CI size. They are needed when
the application requires very long logical records. To have spanned records, the file must be
defined with the SPANNED attribute at the time it is created. Spanned records are allowed to
extend across or span control interval boundaries, but not beyond control area limits. The
RDFs describe whether the record is spanned or not.
A spanned record always begins on a control interval boundary, and fills one or more control
intervals within a single control area. A spanned record does not share the CI with any other
records; in other words, the free space at the end of the last segment is not filled with the next
record. This free space is only used to extend the spanned record.
CAs are needed to implement the concept of splits. The size of a VSAM file is always a
multiple of the CA size and VSAM files are extended in units of CAs.
Splits
CI splits and CA splits occur as a result of data record insertions (or increasing the length of
an already existing record) in KSDS and VRRDS organizations. If a logical record is to be
inserted (in key sequence) and there is not enough free space in the CI, the CI is split.
Approximately half the records in the CI are transferred to a free CI provided in the CA, and
the record to be inserted is placed in the original CI.
If there are no free CIs in the CA and a record is to be inserted, a CA split occurs. Half the CIs
are sent to the first available CA at the end of the data component. This movement creates
free CIs in the original CA, then the record to be inserted causes a CI split.
[Figure: KSDS cluster structure - the index component (index set and sequence set) and the data component; sequence set entries point to the data control intervals, whose logical records are ordered by key]
Data component
The data component is the part of a VSAM cluster, alternate index, or catalog that contains
the data records. All VSAM cluster organizations have the data component.
Index component
The index component is a collection of records containing data keys and pointers (relative
byte address, or RBA). The data keys are taken from a fixed defined field in each data logical
record. The keys in the index logical records are compressed (rear and front). The RBA
pointers are compacted. Only KSDS and VRRDS VSAM data set organizations have the
index component.
Using the index, VSAM is able to retrieve a logical record from the data component when a
request is made randomly for a record with a certain key. A VSAM index can consist of more
than one level, forming a tree structure. Each level contains pointers to the next lower level. Because
there are random and sequential types of access, VSAM divides the index component into
two parts: the sequence set, and the index set.
Index set
The records in all levels of the index above the sequence set are called the index set. An entry
in an index set logical record consists of the highest possible key in an index record in the
next lower level, and a pointer to the beginning of that index record. The highest level of the
index always contains a single index CI.
The structure of VSAM prime indexes is built to create a single index record at the lowest level
of the index. If there is more than one sequence-set-level record, VSAM automatically builds
another index level.
Cluster
A cluster is the combination of the data component (data set) and the index component (data
set) for a KSDS. The cluster provides a way to treat index and data components as a single
component with its own name. Use of the word cluster instead of data set is recommended.
Alternate index
The records in the AIX index component contain the alternate key and the RBA pointing to the
alternate index data component. The records in the AIX data component contain the alternate
key value itself and all the primary keys corresponding to the alternate key value (pointers to
data in the base cluster). The primary keys in the logical record are in ascending sequence
within an alternate index value.
Any field in the base cluster record can be used as an alternate key. It can also overlap the
primary key (in a KSDS), or any other alternate key. The same base cluster may have several
alternate indexes varying the alternate key. There may be more than one primary key value
for the same alternate key value. For example, the primary key might be an employee
number and the alternate key might be the department name; obviously, the same
department name may have several employee numbers.
An AIX cluster is created with the IDCAMS DEFINE ALTERNATEINDEX command, and then it is populated by
the BLDINDEX command. Before a base cluster can be accessed through an alternate index, a
path must be defined. A path provides a way to gain access to the base data through a
specific alternate index. To define a path, use the DEFINE PATH command. The utility to issue
this command is discussed in 4.14, Access method services (IDCAMS) on page 129.
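As a hedged sketch (the names, alternate key offset and length, and space values are illustrative), the sequence of defining the AIX, defining a path, and building the index might look like this:
//DEFAIX   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE ALTERNATEINDEX (NAME(MY.SAMPLE.AIX) -
                         RELATE(MY.SAMPLE.KSDS) -
                         KEYS(10 20) -
                         NONUNIQUEKEY -
                         UPGRADE -
                         CYLINDERS(2 1) -
                         VOLUMES(VSM001))
  DEFINE PATH (NAME(MY.SAMPLE.PATH) -
               PATHENTRY(MY.SAMPLE.AIX))
  BLDINDEX INDATASET(MY.SAMPLE.KSDS) -
           OUTDATASET(MY.SAMPLE.AIX)
/*
The application then allocates MY.SAMPLE.PATH on a DD card to read base cluster records through the alternate key.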
Sphere
A sphere is a VSAM base cluster and its associated alternate index (AIX) clusters.
Key sequenced data set (KSDS)
The key field must be contiguous and each key's contents must be unique. After it is
specified, the value of the key cannot be altered, but the entire record may be deleted.
When a new record is added to the data set, it is inserted in its logical collating sequence by
key.
A KSDS has a data component and an index component. The index component keeps track
of the used keys and is used by VSAM to retrieve a record from the data component quickly
when a request is made for a record with a certain key.
A KSDS can be accessed in sequential mode, direct mode, or skip sequential mode (meaning
that you process sequentially, but directly skip portions of the data set).
[Figure: KSDS index and data component spread over control areas - each sequence set record maps the control intervals of one control area, and the index set points to the sequence set records]
When initially loading a KSDS data set, records must be presented to VSAM in key sequence.
This loading can be done through the IDCAMS VSAM utility named REPRO. The index for a
key-sequenced data set is built automatically by VSAM as the data set is loaded with records.
When a data CI is completely loaded with logical records, free space, and control information,
VSAM makes an entry in the index. The entry consists of the highest possible key in the data
control interval and a pointer to the beginning of that control interval.
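A minimal sketch of such an initial load follows (the input data set name is an assumption, and the input records must already be in key sequence):
//LOADKSDS EXEC PGM=IDCAMS
//INPUT    DD DSN=MY.SORTED.INPUT,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REPRO INFILE(INPUT) -
        OUTDATASET(MY.SAMPLE.KSDS)
/*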
When accessing records sequentially, VSAM refers only to the sequence set. It uses a
horizontal pointer to get from one sequence set record to the next record in collating
sequence.
If VSAM does not find a record with the desired key, the application receives a return code
indicating that the record was not found.
[Figure: ESDS structure - control intervals CI 1 to CI 4 holding records 1 to 10 with RDFs and CIDFs, at RBAs 0, 4096, 8192, and 12288, with unused space at the end of each CI]
Records can be accessed sequentially or directly by relative byte address (RBA). When a
record is loaded or added, VSAM indicates its relative byte address (RBA). The RBA is the
offset of the first byte of the logical record from the beginning of the data set. The first record
in a data set has an RBA of 0; the second record has an RBA equal to the length of the first
record, and so on. The RBA of a logical record depends only on the record's position in the
sequence of records. The RBA is always expressed as a full-word binary integer.
Although an entry-sequenced data set does not contain an index component, alternate
indexes are allowed. You can build an alternate index with keys to keep track of these RBAs.
[Figure: an application issues GET NEXT requests that are satisfied in entry sequence from control intervals CI 1 to CI 4 at RBAs 0, 4096, 8192, and 12288]
Figure 4-47 Typical ESDS processing (ESDS)
Existing records can never be deleted. If the application wants to delete a record, it must flag
that record as inactive. As far as VSAM is concerned, the record is not deleted. Records can
be updated, but without length change.
ESDS organization is suited for sequential processing of variable-length records, with
occasional direct (random) access by key (using an AIX cluster).
[Figure: RRDS structure - control areas made up of control intervals containing fixed-length slots (slots 1 to 40), each slot identified by its relative record number]
[Figure: Typical RRDS processing - the application issues GET for record 26, and VSAM locates the corresponding slot directly from the relative record number]
[Figure: LDS structure - control areas made up of control intervals that contain data only, with no control information]
IDCAMS is used to define a linear data set. An LDS has only a data component. An LDS data
set is just a physical sequential VSAM data set comprised of 4 KB physical records, but with a
revolutionary buffer technique called data-in-virtual (DIV).
A linear data set is processed as an entry-sequenced data set, with certain restrictions.
Because a linear data set does not contain control information, it cannot be accessed as
though it contained individual records. You can access a linear data set with the DIV macro. If
using DIV to access the data set, the control interval size must be 4096; otherwise, the data
set will not be processed.
When a linear data set is accessed with the DIV macro, it is referred to as the data-in-virtual
object or the data object.
For information about how to use data-in-virtual, see z/OS MVS Programming: Assembler
Services Guide, SA22-7605.
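As an illustrative sketch (name, size, and volume are assumptions), an LDS is defined with the LINEAR keyword; no index component is created and the CIs contain no CIDFs or RDFs:
//DEFLDS   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(MY.SAMPLE.LDS) -
                  LINEAR -
                  MEGABYTES(10 10) -
                  VOLUMES(VSM001))
/*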
Data-in-virtual (DIV)
You can access a linear data set using these techniques:
VSAM
DIV, if the control interval size is 4096 bytes. The data-in-virtual (DIV) macro provides
access to VSAM linear data sets.
Window services, if the control interval size is 4096 bytes.
Data-in-virtual (DIV) is an optional and unique buffering technique used for LDS data sets.
Application programs can use DIV to map a data set (or a portion of a data set) into an
address space, a data space, or a hiperspace. An LDS cluster is sometimes referred to as a
DIV object. After setting the environment, the LDS cluster looks to the application as a table
in virtual storage with no need of issuing I/O requests.
Data is read into main storage by the paging algorithms only when that block is actually
referenced. During RSM page-steal processing, only changed pages are written to the cluster
in DASD. Unchanged pages are discarded since they can be retrieved again from the
permanent data set.
DIV is designed to improve the performance of applications that process large files
non-sequentially and process them with significant locality of reference. It reduces the
number of I/O operations that are traditionally associated with data retrieval. Likely candidates
are large arrays or table files.
[Figure: Data-in-virtual mapping - a span of LDS blocks, starting at an offset, is mapped into a window in an address space, data space, or hiperspace]
No actual I/O is done until the program references the data in the window. The reference will
result in a page fault which causes data-in-virtual services to read the data from the linear
data set into the window.
DIV SAVE can be used to write out changes to the data object. DIV RESET can be used to
discard changes made in the window since the last SAVE operation.
The objective of a buffer pool is to avoid I/O operations in random accesses (due to re-visiting
data) and to make these I/O operations more efficient in sequential processing, thereby
improving performance.
For more efficient use of virtual storage, buffer pools can be shared among clusters using
locally or globally shared buffer pools. There are four types of resource pool management,
called modes, defined according to the technique used to manage them:
Not shared resources (NSR)
Local shared resources (LSR)
Global shared resources (GSR)
Record-level shared resources (RLS)
These modes can be declared in the ACB macro of the VSAM data set (MACRF keyword)
and are described in the following section.
[Figure: Shared resource pool - a user ACB with MACRF=(LSR,NUB) shares index and data buffers in a local shared resource pool]
NSR is used by high-level languages. Since buffers are managed by a sequential algorithm,
NSR is not the best choice for random processing. For applications using NSR, consider
using system-managed buffering, discussed in 4.45, VSAM: System-managed buffering
(SMB) on page 179.
GSR is not commonly used by applications, so you should consider the use of VSAM RLS
instead.
For more information about NSR, LSR, and GSR, refer to 7.2, Base VSAM buffering on
page 380 and also to the IBM Redbooks publication VSAM Demystified, SG24-6105.
Usually, SMB allocates many more buffers than are allocated without SMB. Performance
improvements can be dramatic with random access (particularly when few buffers were
available). The use of SMB is transparent from the point of view of the application; no
application changes are needed.
SMB is available to a data set when all the following conditions are met:
It is an SMS-managed data set.
It is in extended format (DSNTYPE = EXT in the data class).
The application opens the data set for NSR processing.
SMB needs information about the expected processing technique so that it can maintain an
adequate algorithm for managing the CIs in the resource pool. SMB accepts the
ACB MACRF options when the I/O operation is requested. For this reason, the installation
must accurately specify the processing type through the ACCBIAS options:
Direct Optimized (DO)
SMB optimizes for totally random record access. When this technique is used, VSAM
changes the buffering management from NSR to LSR.
Direct Weighted (DW)
The majority is direct access to records, with some sequential.
Sequential Optimized (SO)
Totally sequential access.
Sequential Weighted (SW)
The majority is sequential access, with some direct access to records.
When SYSTEM is used in JCL or in the data class, SMB chooses the processing technique
based on the MACRF parameter of the ACB.
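As a hedged sketch (the data set name is an assumption), ACCBIAS can also be coded explicitly in the JCL AMP parameter, for example to request Direct Optimized processing:
//VSAMDD   DD DSN=MY.SAMPLE.KSDS,DISP=SHR,AMP=('ACCBIAS=DO')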
For more information about the use of SMB, refer to VSAM Demystified, SG24-6105.
//DS1 DD DSNAME=VSAMDATA,AMP=('BUFSP=200,OPTCD=IL,RECFM=FB',
// 'STRNO=6,MSG=SMBBIAS')
The first IEC161I (return code 001) message indicates the access bias used by SMB. The sfi
field can be:
DO - Direct Optimized
DW - Direct Weighted
SO - Sequential Optimized
SW - Sequential Weighted
CO - Create optimized
CR - Create Recovery
When you code MSG=SMBBIAS in your JCL to request a VSAM open message, the message
indicates which SMB access bias is actually used for the particular component being opened:
15.00.02 SYSTEM1 JOB00028 IEC161I
001(DW)-255,TESTSMB,STEP2,VSAM0001,,,SMB.KSDS,,
IEC161I SYS1.MVSRES.MASTCAT
15.00.02 SYSTEM1 JOB00028 IEC161I 001(0000002B 00000002 00000000
00000000)-255,TESTSMB,STEP2,
IEC161I VSAMDATA,,,SMB.KSDS,,SYS1.MVSRES.MASTCAT
SMB overview
System-managed buffering (SMB), a feature of DFSMSdfp, supports batch application
processing.
SMB uses formulas to calculate the storage and buffer numbers needed for a specific access
type. SMB takes the following actions:
It changes the defaults for processing VSAM data sets. This enables the system to take
better advantage of current and future hardware technology.
It initiates a buffering technique to improve application performance. The technique is one
that the application program does not specify. You can choose or specify any of the four
processing techniques that SMB implements:
Direct Optimized (DO) The DO processing technique optimizes for totally random
record access. This is appropriate for applications that
access records in a data set in totally random order. This
technique overrides the user specification for nonshared
resources (NSR) buffering with a local shared resources
(LSR) implementation of buffering.
Sequential Optimized (SO) The SO technique optimizes processing for record access
that is in sequential order. This is appropriate for backup
and for applications that read the entire data set or a large
percentage of the records in sequential order.
SMB performance
Performance of the System-Managed Buffering (SMB) Direct Optimized access bias was
adversely affected when a VSAM data set continued to grow. This is because the original
allocation for index buffer space becomes increasingly deficient as the data set size
increases. This problem is avoided for data buffer space by using the subparameter SMBVSP
of the JCL AMP parameter. However, for index buffer space, the only way to adjust the index
buffer space to a more appropriate allocation was to close and reopen the data set. Changes
have been made in z/OS V1R11 VSAM to avoid the necessity of closing and reopening the
data set.
SMBVSP parameter
Prior to z/OS V1R11, when SMBVSP was used to specify the amount of storage for SMB
Direct Optimized Access Bias, the value was used by VSAM OPEN to calculate the number of
data buffers (BUFND). The number of index buffers (BUFNI), in contrast, was calculated by
VSAM OPEN based on the current high used CI. That is, it was based upon the data set size
at open time.
With z/OS V1R11, VSAM OPEN calculates the BUFNI to be allocated using 20% of the value
of SMBVSP, or the data set size, if the calculation using it actually yields a higher BUFNI. The
usage in regard to calculating BUFND remains unchanged. Now, as a KSDS grows, provision
can be made for a better storage allocation for both data and index buffers by the use of the
SMBVSP parameter.
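As an illustrative sketch (the data set name and size are assumptions), SMBVSP is coded in the AMP parameter together with the access bias, giving VSAM OPEN a larger base from which to calculate BUFND and, with z/OS V1R11, BUFNI:
//KSDSDD   DD DSN=MY.SAMPLE.KSDS,DISP=SHR,
//            AMP=('ACCBIAS=DO,SMBVSP=100M')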
Note: For further details about SMB, see z/OS DFSMS Using Data Sets, SC26-7410. For
further details about how to invoke SMB and about specifying Direct Optimized (DO) and
SMBVSP values in a DATACLAS construct, see z/OS DFSMS Storage Administration
Reference (for DFSMSdfp, DFSMSdss, DFSMShsm), SC26-7402. For information about
specification with JCL, see z/OS MVS JCL Reference, SA22-7597.
SMBHWT parameter
The value of this parameter is used as a multiplier of the virtual buffer space for Hiperspace
buffers. This can reduce the size required for an application region, but does have
implications related to processor cycle requirements. That is, all application requests must
orient to a virtual buffer address. If the required data is in a Hiperspace buffer, the data must
be moved to a virtual buffer after stealing a virtual buffer and moving that buffer to a least
recently used (LRU) Hiperspace buffer.
VSAM enhancements
The following list presents the major VSAM enhancements since DFSMS V1R2. For the
majority of these functions, extended format is a prerequisite. The enhancements are:
Data compression for KSDS - This is useful for improving I/O mainly for write-once,
read-many clusters.
Extended addressability - This allows data components larger than 4 GB. The limitation
was caused by an RBA field of 4 bytes; RBA now has an 8-byte length.
Record-level sharing (RLS) - This allows VSAM data sharing across z/OS systems in a
Parallel Sysplex.
System-managed buffering (SMB) - This improves the performance of random NSR
processing.
Data striping and multi-layering - This improves sequential access performance due to
parallel I/Os in several volumes (stripes).
DFSMS data set separation - This allows the allocation of clusters in distinct physical
control units.
Free space release - As with non-VSAM data sets, the free space that is not used at the
end of the data component can be released at deallocation.
When allocating new data sets or extending existing data sets to new volumes, SMS volume
selection frequently calls SRM to select the best volumes. Unfortunately, SRM may select the
same set of volumes that currently have the lowest I/O delay. Poor performance or single
points of failure may occur when a set of functional-related critical data sets are allocated onto
the same volumes. SMS provides a function to separate their critical data sets, such as DB2
partitions, onto different volumes to prevent DASD hot spots and reduce I/O contention.
This provides a facility for an installation to separate functional-related critical data sets onto
different extent pools and volumes for better performance and to avoid single points of failure.
Important: Use data set separation only for a small set of mission-critical data.
A data set separation profile contains at least one data set separation group. Each data set
separation group specifies whether separation is at the PCU or volume level and whether it is
required or preferred. It also includes a list of data set names to be separated from each other
during allocation.
Restriction: You cannot use data set separation when allocating non-SMS-managed data
sets or during use of full volume copy utilities such as PPRC.
Separation profile
The syntax for the data set separation profiles is defined as follows:
SEPARATIONGROUP(PCU) This indicates that separation is on the PCU level.
SEPARATIONGROUP(VOLUME) This indicates that separation is on the volume level.
VOLUME may be abbreviated as VOL.
TYPE(REQUIRED) This indicates that separation is required. SMS fails the
allocation if the specified data set or data sets cannot be
separated from other data sets on the specified level.
TYPE(PREFERRED) This indicates that separation is preferred. SMS attempts to
separate the data sets, but does not fail the allocation if they
cannot be separated.
Note: If only one data set name is specified with DSNLIST, the data set name must contain
at least one wildcard character.
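As a hedged sketch (the data set name masks are illustrative, and the exact keyword coding should be verified in the DFSMSdfp Storage Administration Reference), a separation group requiring volume-level separation of two groups of critical data sets might look like this in the separation profile:
SEPARATIONGROUP(VOLUME)
TYPE(REQUIRED)
DSNLIST(DB2P.LOGCOPY1.**,DB2P.LOGCOPY2.**)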
Earlier syntax
The following earlier form of the syntax for SEPARATIONGROUP is tolerated by z/OS V1R11.
It supports separation at the PCU level only.
SEPARATIONGROUP|SEP
FAILLEVEL|FAIL ({PCU|NONE})
DSNLIST|DSNS|DSN (data-set-name[,data-set-name,...])
DFSORT, together with DFSMS and RACF, form the strategic product base for the evolving
system-managed storage environment. DFSORT is designed to optimize the efficiency and
speed with which operations are completed through synergy with processor, device, and
system features (for example, memory objects, Hiperspace, data space, striping,
compression, extended addressing, DASD and tape device architecture, processor memory,
and processor cache).
DFSORT example
The simple example in Figure 4-62 illustrates how DFSORT merges data sets by combining
two or more files of sorted records to form a single data set of sorted records.
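A minimal sketch of such a merge follows (the data set names and key position are assumptions): two already-sorted input files are combined on an ascending 10-byte character key starting in position 1:
//MERGE    EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN01 DD DSN=MY.SORTED.FILE1,DISP=SHR
//SORTIN02 DD DSN=MY.SORTED.FILE2,DISP=SHR
//SORTOUT  DD DSN=MY.MERGED.FILE,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE)
//SYSIN    DD *
  MERGE FIELDS=(1,10,CH,A)
/*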
You can use DFSORT to do simple application tasks such as alphabetizing a list of names, or
you can use it to aid complex tasks such as taking inventory or running a billing system. You
can also use DFSORT's record-level editing capability to perform data management tasks.
For most of the processing done by DFSORT, the whole data set is affected. However, certain
forms of DFSORT processing involve only certain individual records in that data set.
Tip: You can use DFSORT's ICEGENER facility to achieve faster and more efficient
processing for applications that are set up to use the IEBGENER system utility. For more
information, see z/OS DFSORT Application Programming Guide, SC26-7523.
DFSORT customization
Specifying the DFSORT customization parameters is a very important task for z/OS system
programmers. Depending on such parameters, DFSORT may use lots of system resources
such as CPU, I/O, and especially virtual storage. The uncontrolled use of virtual storage may
cause IPLs due to the lack of available slots in page data sets. Plan to use the IEFUSI z/OS
exit to control products such as DFSORT.
For articles, online books, news, tips, techniques, examples, and more, visit the z/OS
DFSORT home page:
http://www-1.ibm.com/servers/storage/support/software/sort/mvs
[Figure: z/OS NFS - the z/OS Network File System server makes MVS data sets and z/OS UNIX hierarchical file system files available over TCP/IP to NFS clients and servers such as AIX, HP/UX, and Sun Solaris; the z/OS NFS client accesses files on other NFS servers]
With the NFS server, you can remotely access z/OS conventional data sets or z/OS UNIX
files from workstations, personal computers, and other systems that run NFS client software.
The z/OS NFS server acts as an intermediary to read, write, create, or delete z/OS UNIX files
and MVS data sets that are maintained on an MVS host system. The remote MVS data sets
or z/OS UNIX files are mounted from the host processor to appear as local directories and
files on the client system.
With the NFS client you can allow basic sequential access method (BSAM), queued
sequential access method (QSAM), virtual storage access method (VSAM), and z/OS UNIX
users and applications transparent access to data on systems that support the Sun NFS
version 2 protocols and the Sun NFS version 3 protocols.
Other client platforms should work as well because NFS version 4 is an industry standard
protocol, but they have not been tested by IBM.
NFS client software for other IBM platforms is available from other vendors. You can also
access the NFS server from non-IBM clients that use the NFS version 2 or version 3 protocol,
including:
DEC stations running DEC ULTRIX version 4.4
HP 9000 workstations running HP/UX version 10.20
Sun PC-NFS version 5
Sun workstations running SunOS or Sun Solaris versions 2.5.3
For further information about NFS, refer to z/OS Network File System Guide and Reference,
SC26-7417, and visit:
http://www-1.ibm.com/servers/eserver/zseries/zos/nfs/
DFSMS Optimizer uses input data from several sources in the system and processes it using
an extract program that merges the data and builds the Optimizer database.
By specifying different filters you can produce reports that help you build a detailed storage
management picture of your enterprise. With the report data, you can use the charting facility
to produce color charts and graphs.
The DFSMS Optimizer provides analysis and simulation information for both SMS and
non-SMS users. The DFSMS Optimizer can help you maximize storage use and minimize
storage costs. It provides methods and facilities for you to:
Monitor and tune DFSMShsm functions such as migration and backup
Create and maintain a historical database of system and data activity
For more information about the DFSMS Optimizer, see DFSMS Optimizer Users Guide and
Reference, SC26-7047, or visit:
http://www-1.ibm.com/servers/storage/software/opt/
[Figure: DFSMSdss DUMP, RESTORE, and tape control operations issued, for example, from TSO]
Figure 4-65 DFSMSdss backing up and restoring volumes and data sets
Note: Like devices have the same track capacity and number of tracks per cylinder (for
example, 3380 Model D, Model E, and Model K). Unlike DASD devices have different
track capacities (for example, 3380 and 3390), a different number of tracks per cylinder,
or both.
During a restore operation, the data is processed the same way it is dumped because
physical and logical dump tapes have different formats. If a data set is dumped logically, it is
restored logically; if it is dumped physically, it is restored physically. A data set restore
operation from a full volume dump is a physical data set restore operation.
[Figure: Logical data set dump - data set ABC.FILE on volume VOLABC, selected through the user catalog, is dumped to a dump data set]
Logical processing
A logical copy, dump, or restore operation treats each data set and its associated information
as a logical entity, and processes an entire data set before beginning the next one.
Each data set is moved by tracks from the source device and is potentially written to the target
device as a set of data records, allowing data movement between devices with different track
and cylinder configurations. Checking of data record consistency is not performed during
dump operation.
Catalogs and VTOCs are used to select data sets for logical processing. If you do not specify
input volumes, the catalogs are used to select data sets for copy and dump operations. If you
specify input volumes using the LOGINDDNAME, LOGINDYNAM, or STORGRP keywords on
the COPY or DUMP command, DFSMSdss uses VTOCs to select data sets for processing.
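As an illustrative sketch (the data set filter and output names are assumptions), a logical data set dump that lets DFSMSdss select the data sets through the catalogs, because no input volumes are specified, might look like this:
//LOGDUMP  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//DUMPOUT  DD DSN=MY.LOGICAL.DUMP,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,50),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(ABC.**)) -
       OUTDDNAME(DUMPOUT)
/*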
[Figure: Full-volume physical dump (DUMP FULL) of volume CACSW3]
Physical processing
Physical processing moves data based on physical track images. Because data movement is
carried out at the track level, only target devices with track sizes equal to those of the source
device are supported. Physical processing operates on volumes, ranges of tracks, or data
sets. For data sets, it relies only on volume information (in the VTOC and VVDS) for data set
selection, and processes only that part of a data set residing on the specified input volumes.
Attention: Take care when invoking the TRACKS keyword with the COPY and RESTORE
commands. The TRACKS keyword should be used only for a data recovery operation.
For example, you can use it to repair a bad track in the VTOC or a data set, or to
retrieve data from a damaged data set. You cannot use it in place of a full-volume or a
logical data set operation. Doing so can destroy a volume or impair data integrity.
You specify the data set keyword on the DUMP command and input volumes with the
INDDNAME or INDYNAM parameter. This produces a physical data set dump.
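As a hedged sketch (the names are assumptions, with the volume serial taken from the earlier figure), specifying the input volume with INDDNAME turns the same DUMP DATASET command into a physical data set dump:
//PHYDUMP  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//DASD     DD UNIT=3390,VOL=SER=VOLABC,DISP=OLD
//DUMPOUT  DD DSN=MY.PHYSICAL.DUMP,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(50,50),RLSE)
//SYSIN    DD *
  DUMP DATASET(INCLUDE(ABC.**)) -
       INDDNAME(DASD) -
       OUTDDNAME(DUMPOUT)
/*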
A physical restore is performed when the RESTORE command is executed and the input was
created by a physical dump operation.
[Figure: DFSMSdss stand-alone restore from a DFSMSdss or DFDSS dump tape]
Figure 4-69 DFSMSdss stand-alone services
Stand-alone services can perform either a full-volume restore or a tracks restore from dump
tapes produced by DFSMSdss or DFDSS and offers the following benefits:
Provides user-friendly commands to replace the previous control statements
Supports IBM 3494 and 3495 Tape Libraries, and 3590 Tape Subsystems
Supports IPLing from a DASD volume, in addition to tape and card readers
Allows you to predefine the operator console to be used during stand-alone services
processing
For detailed information about the stand-alone service and other DFSMSdss information,
refer to z/OS DFSMSdss Storage Administration Reference, SC35-0424, and z/OS
DFSMSdss Storage Administration Guide, SC35-0423, and visit:
http://www-1.ibm.com/servers/storage/software/sms/dss/
[Figure: DFSMShsm functions - availability management (automatic and incremental backup) and space management]
Availability management is used to make data available by automatically copying new and
changed data sets to backup volumes.
Space management is used to manage DASD space by enabling inactive data sets to be
moved off fast-access storage devices, thus creating free space for new allocations.
DFSMShsm also provides for other supporting functions that are essential to your
installation's environment.
For further information about DFSMShsm, refer to z/OS DFSMShsm Storage Administration
Guide, SC35-0421 and z/OS DFSMShsm Storage Administration Reference, SC35-0422,
and visit:
http://www-1.ibm.com/servers/storage/software/sms/hsm/
[Figure: DFSMShsm availability management - data sets on SMS-managed storage groups and on non-SMS primary and secondary volumes are backed up, under control of the DFSMShsm control data sets and user catalogs, to backup data sets on DASD or tape]
Availability management
DFSMShsm backs up your data, automatically or by command, to ensure availability if
accidental loss of the data sets or physical loss of volumes should occur. DFSMShsm also
allows the storage administrator to copy backup and migration tapes, and to specify that
copies be made in parallel with the original. You can store the copies on site as protection
from media damage, or offsite as protection from site damage. DFSMShsm also provides
disaster backup and recovery for user-defined groups of data sets (aggregates) so that you
can restore critical applications at the same location or at an offsite location.
Note: You must also have DFSMSdss to use the DFSMShsm functions.
Availability management ensures that a recent copy of your DASD data set exists. The
purpose of availability management is to ensure that lost or damaged data sets can be
retrieved at the most current possible level. DFSMShsm uses DFSMSdss as a fast data
mover for backups. Availability management automatically and periodically performs functions
that:
1. Copy all the data sets on DASD volumes to tape volumes
2. Copy the changed data sets on DASD volumes (incremental backup) either to other DASD
volumes or to tape volumes
DFSMShsm minimizes the space occupied by the data sets on the backup volume by using
compression and stacking.
The attribute descriptions explain the attributes to be added to the previously defined storage
groups and management classes. Similarly, the descriptions of DFSMShsm commands relate
to commands to be added to the ARCCMDxx member of SYS1.PARMLIB.
Two groups of tasks are performed for availability management: dump tasks and backup
tasks. Availability management comprises the following functions:
Aggregate backup and recovery (ABARS)
Automatic physical full-volume dump
Automatic incremental backup
Automatic control data set backup
Command dump and backup
Command recovery
Disaster backup
Expiration of backup versions
Fast replication backup and recovery
[Figure: DFSMShsm space management - data sets on SMS-managed storage groups and on non-SMS primary volumes are migrated, under control of the DFSMShsm control data sets, to migration level 1 DASD]
Space management
Space management is the function of DFSMShsm that allows you to keep DASD space
available for users in order to meet the service level objectives for your system. The purpose
of space management is to manage your DASD storage efficiently. To do this, space
management automatically and periodically performs functions that:
1. Move low activity data sets (using DFSMSdss) from user-accessible volumes to
DFSMShsm volumes
2. Reduce the space occupied by data on both the user-accessible volumes and the
DFSMShsm volumes
DFSMShsm improves DASD space usage by keeping only active data on fast-access storage
devices. It automatically frees space on user volumes by deleting eligible data sets, releasing
overallocated space, and moving low-activity data to lower cost-per-byte devices, even if the
job did not request tape.
It is possible to have more than one z/OS image sharing the same DFSMShsm policy. In this
case one of the DFSMShsm images is the primary host and the others are secondary. The
primary HSM host is identified by HOST= in the HSM startup and is responsible for:
Hourly space checks
During auto backup: CDS backup, backup of ML1 data sets to tape
During auto dump: Expiration of dump copies and deletion of excess dump VTOC copy
data sets
During secondary space management (SSM): Cleanup of MCDS, migration volumes, and
L1-to-L2 migration
If you are running your z/OS HSM images in sysplex (parallel or basic), you can use
secondary host promotion to allow a secondary image to assume the primary image's tasks if
the primary host fails. Secondary host promotion uses XCF status monitoring to execute the
promotion. To indicate a system as a candidate, issue:
SETSYS PRIMARYHOST(YES)
and
SSM(YES)
[Figure: DFSMShsm migration hierarchy - primary (level 0) volumes, migration level 1 (ML1) DASD, and migration level 2 (ML2) DASD or tape]
Note: If you have a DASD controller that compresses data, you can skip level 1 (ML1)
migration because the data in L0 is already compacted/compressed.
A data set can move back and forth between these two states, and it can move from level 0 to
migration level 2 (and back) without passing through migration level 1. Objects do not migrate.
Movement back to level 0 is known as recall.
ML1 enhancements
Beginning in V1R11, DFSMShsm enables ML1 overflow volumes to be selected for migration
processing, in addition to their current use for data set backup processing. DFSMShsm
enables these ML1 overflow volumes to be selected for migration or backup of large data
sets, with the determining size values specified by a new parameter of the SETSYS command.
Use the new ML1OVERFLOW parameter with the subparameter of DATASETSIZE(dssize) to
specify the minimum size that a data set must be in order for DFSMShsm to prefer ML1
overflow volume selection for migration or backup copies.
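As a hedged sketch (the values are assumptions, and the size is assumed here to be specified in kilobytes), such a SETSYS command could be added to the ARCCMDxx member so that data sets of roughly 2 GB or more prefer ML1 OVERFLOW volumes, and so that secondary space management moves data from OVERFLOW volumes to ML2 when the pool is 80 percent full:
SETSYS ML1OVERFLOW(DATASETSIZE(2000000) THRESHOLD(80))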
In addition, DFSMShsm removes the previous ML1 volume restriction against migrating or
backing up a data set whose expected size after compaction (if active and used) is greater
than 65,536 tracks. The new limit for backed up or migrated copies is equal to the maximum
size limit for the largest volume available.
ML1OVERFLOW option
ML1OVERFLOW is an optional parameter specifying the following:
The minimum data set size for ML1 OVERFLOW volume preference
The threshold for ML1 OVERFLOW volume capacity for automatic secondary space
management migration from ML1 OVERFLOW to ML2 volumes
If the calculated size of the data set is less than the minimum size specified in dssize, then
DFSMShsm prefers the NOOVERFLOW ML1 volume with the maximum amount of free
space and least number of users.
For data sets smaller than 58 K tracks, DFSMShsm allocates a basic sequential format data
set for the backup copy. For data sets larger than 58 K tracks, DFSMShsm allocates a large
format sequential data set for the backup copy.
Basic or large format sequential data sets will prefer OVERFLOW or NOOVERFLOW
volumes based on the SETSYS ML1OVERFLOW(DATASETSIZE(dssize)) value. If there is
not enough free space on the NOOVERFLOW or OVERFLOW volume for a particular backup
copy, then DFSMShsm tries to create the backup on a OVERFLOW or NOOVERFLOW
volume, respectively. If the data set is too large to fit on a single ML1 volume (OVERFLOW or
NOOVERFLOW), then the migration or backup fails.
Therefore, data sets larger than 64 K tracks no longer have to be directed to tape. Such larger data
sets will be allocated as LFS data sets regardless of whether they are on an OVERFLOW
or NOOVERFLOW volume. OVERFLOW volumes can be used as a repository for larger data
sets. You can now customize your installation to exploit the OVERFLOW volume pool
according to a specified environment.
Installation considerations
A coexistence APAR will be required to enable downlevel DFSMShsm to tolerate migrated or
backup copies of LFS format DFSMShsm data sets. APAR OA26330 enables processing of
large format sequential migration and backup copies to be processed on lower level
installations.
Downlevel DFSMShsm installations (pre-V1R11) will be able to recall and recover data sets
from V1R11 DFSMShsm LFS migration or backup copies. For OVERFLOW volumes on
lower level systems, recalls and recovers will be successful. Migrations from lower level
systems to the V1R11 OVERFLOW volumes will not be allowed because the OVERFLOW
volumes will not be included in the volume selection process.
Large data sets can and will migrate and back up to ML1 DASD. These will be large format
sequential HSM migration data sets on ML1. OVERFLOW volumes will now be used for migration
in addition to backup. We anticipate that not many installations used OVERFLOW
volumes before, but if they were used, then migration actions are needed.
If you back up or migrate data sets to ML1 OVERFLOW volumes, you can specify the
percentage of occupied space that must be in the ML1 OVERFLOW volume pool before the
migration of data sets to ML2 volumes occurs during automatic secondary space
management.
ML2 volumes can be either DASD or tape. The TAPEMIGRATION parameter of the SETSYS
command specifies what type of ML2 volume is used. The SETSYS command for DFSMShsm
host 2 specifies ML2 migration to tape.
Note: If you want DFSMShsm to perform automatic migration from ML1 to ML2 volumes,
you must specify the thresholds of occupancy parameter (of the ADDVOL command) for the
ML1 volumes.
[Figure: ML1 volume selection - level 0 data sets migrate to the ML1 NOOVERFLOW volume pool or to ML2; a table relates data set size (2 GB is roughly 36 K tracks) to the ML1 copy format]
If the value of the management class attribute ADMIN OR USER COMMAND BACKUP is BOTH, a
DFSMShsm-authorized user can use either the BACKDS or HBACKDS command, and a
non-DFSMShsm-authorized user can use the HBACKDS command to back up the data set.
If the value of the attribute is ADMIN, a DFSMShsm-authorized user can use either of the
commands to back up the data set, but a non-DFSMShsm-authorized user cannot back up
the data set. If the value of the attribute is NONE, no command backup can be done.
[Figure: DFSMShsm volume types - level 0, daily backup, spill backup, migration level 1, migration level 2, aggregate backup, dump, and fast replication target volumes]
Volume types
DFSMShsm supports the following volume types:
Level 0 (L0) volumes contain data sets that are directly accessible to you and the jobs you
run. DFSMShsm-managed volumes are those L0 volumes that are managed by the
DFSMShsm automatic functions. These volumes must be mounted and online when you
refer to them with DFSMShsm commands.
Also in z/OS V1R7, a new command V SMS,VOLUME is introduced. It allows you to change the
state of the DFSMShsm volumes without having to change and reactivate the SMS
configuration using ISMF.
[Figure: Automatic migration - after 10 days without any access, data set ABC.FILE1 is migrated from level 0 to level 1 and stored as HSM.HMIG.ABC.FILE1.T891008.I9012]
Active copy: a backup version within the number of backup copies specified by the
management class or SETSYS value.
Retained copy: a backup copy that has rolled off from being an active copy but has not yet
met its retention period.
Management class retention period: the maximum number of days to maintain a backup copy.
RETAINDAYS (new with z/OS V1R11): the minimum number of days to maintain a backup copy
(this value takes precedence).
With z/OS V1R11, DFSMShsm can maintain a maximum of 100 active copies. DFSMShsm
can maintain more than enough retained copies for each data set to meet all expected
requirements. Active and retained copies are as follows:
Active copies Active copies are the set of backup copies created that have not yet
rolled off. The number of active copies is determined by the SMS
management class or SETSYS value. The maximum number of active
copies will remain 100.
Retained copies Retained copies are the set of backup copies that have rolled off from
the active copies and have not yet reached their retention periods. A
nearly unlimited number of retained copies for each data set can be
maintained.
The default retention limit is NOLIMIT. If you specify zero (0), then a user-specified or data
class-derived EXPDT or RETPD is ignored. If users specify values that exceed the maximum
period, then the retention limit value overrides not only their values but also the expiration
attribute values. The retention limit value is saved. ISMF primes the Retention Limit field with
what you specified the last time.
RETAINDAYS keyword
To specify a retention period for a copy of a backup data set, using the new RETAINDAYS
keyword, you can use one of the following methods:
(H)BACKDS command
ARCHBACK macro
ARCINBAK program
The RETAINDAYS value must be an integer in the range of 0 to 50000, or 99999 (the never
expire value).
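As an illustrative sketch (the data set name and retention value are assumptions), a TSO user could request a backup copy that is kept for at least 180 days, and a DFSMShsm-authorized user could do the same with the BACKDS command:
HBACKDS 'USER1.PAYROLL.MASTER' RETAINDAYS(180)
HSEND BACKDS USER1.PAYROLL.MASTER RETAINDAYS(180)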
DFSMShsm compares the number of backup versions that exist with the value of the
NUMBER OF BACKUPS (DATA SET EXISTS) attribute in the management class. If there are
more versions than requested, the excess versions are deleted if the versions do not have a
RETAINDAYS value or the RETAINDAYS value has been met, starting with the oldest. The
excess versions are kept as excess active versions if they have an un-met RETAINDAYS
value. These excess versions will then be changed to retained backup versions when a new
version is created.
Starting with the now-oldest backup version and ending with the third-newest version,
DFSMShsm calculates the age of the version to determine if the version should be expired. If
a RETAINDAYS value was specified when the version was created, then the age is compared
to the retain days value. If RETAINDAYS was not specified, then the age is compared to the
value of the RETAIN DAYS EXTRA BACKUPS attribute in the management class. If the age
of the version meets the expiration criteria, then the version is expired.
The EXPIREBV command is used to delete unwanted backup and expired ABARS versions of
SMS-managed and non-SMS-managed data sets from DFSMShsm-owned storage. The
optional parameters of the EXPIREBV command determine the deletion of the backup versions
of non-SMS-managed data sets. The management class attributes determine the deletion of
backup versions of SMS-managed data sets. The management class fields Retain Extra
Versions and Retain Only Version determine which ABARS versions or incremental backup
versions are deleted. The RETAINDAYS parameter specified on the data set backup request
determines how long a data set backup copy is kept for both SMS-managed and
non-SMS-managed data sets.
BACKDS command
The BACKDS command creates a backup version of a specific data set. When you enter the
BACKDS command, DFSMShsm does not check whether the data set has changed or has met
the requirement for frequency of backup. When DFSMShsm processes a BACKDS
command, it stores the backup version on either tape or the ML1 volume with the most
available space.
With z/OS V1R11, the RETAINDAYS keyword is an optional parameter on the BACKDS
command specifying a number of days to retain a specific backup copy of a data set. If you
specify RETAINDAYS, a retain days value is required; it specifies the minimum
number of days (0-50000) that DFSMShsm retains the backup copy. If you specify
99999, the data set backup version never expires. Any value greater than 50000 (and other
than 99999) causes a failure with an ARC1605I error message. A retain days value of 0
indicates that:
The backup version might expire within the same day that it was created if EXPIREBV
processing takes place or when the next backup version is created,
The backup version is kept as an active copy before roll-off occurs,
The backup version is not managed as a retained copy.
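As a sketch, a TSO user might request such a backup as follows (the data set name and
retention value here are hypothetical):
HBACKDS 'PAY.MASTER.DATA' RETAINDAYS(180)
DFSMShsm then keeps this backup copy for at least 180 days, even if newer versions cause it
to become a retained rather than an active copy.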
RETAINDAYS parameters
The value of RETAINDAYS can be in the range of 0 - 50000, which corresponds to a
maximum of about 136 years. If you specify 99999, the data set backup version is treated as
never expiring. Any value greater than 50000 (other than 99999) causes a failure with error
message ARC1605I. A retain days value of 0 indicates that:
The backup version expires when the next backup copy is created.
The backup version might expire within the same day that it was created if EXPIREBV
processing takes place.
The backup version is kept as an active copy before roll-off occurs.
The backup version is not managed as a retained copy.
Note: You can use the RETAINDAYS keyword only with cataloged data sets. If you specify
RETAINDAYS with an uncataloged data set, then BACKDS processing fails with the
ARC1378I error message.
Note: For non-SMS-managed data sets, the RETAINDAYS value takes precedence over
any of the EXPIREBV parameters.
EXPIREBV processing
During EXPIREBV processing, DFSMShsm checks the retention days for each retained
backup copy. The retained copy is identified as an expired version if it has met its retention
period. The EXPIREBV DISPLAY command displays the backup versions that have met their
RETAINDAYS value. The EXPIREBV EXECUTE command deletes the backup versions that have
met their RETAINDAYS value.
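As a sketch, a DFSMShsm-authorized user might issue the command through the TSO
HSENDCMD interface, first to preview and then to delete the expired versions:
HSENDCMD EXPIREBV DISPLAY
HSENDCMD EXPIREBV EXECUTE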
When you enter the EXPIREBV command, DFSMShsm checks the retention days for each
active backup copy for each data set, starting with the oldest backup version and ending with
the third newest version. If the version has a specified retention days value, DFSMShsm
calculates the age of the version, compares the age to the value of the retention days, and
expires the version if it has met its RETAINDAYS.
The second-newest version is treated as though it had been created on the same day as the
newest backup version, and is not expired unless the number of retention days specified by
RETAINDAYS have passed since the creation of the newest backup version. EXPIREBV does
not process the newest backup version until it meets both the management class retention
values, and the RETAINDAYS value.
Figure: recall of a migrated data set, such as HSM.HMIG.ABC.FILE1.T891008.I9012 on a
migration level 1 volume, back to its original name (ABC.FILE1) on a level 0 volume.
Automatic recall
Using an automatic recall process returns a migrated data set from an ML1 or ML2 volume to
a DFSMShsm-managed volume. When a user refers to the data set, DFSMShsm reads the
system catalog for the volume serial number. If the volume serial number is MIGRAT,
DFSMShsm finds the migrated data set, recalls it to a DFSMShsm-managed volume, and
updates the catalog. The result of the recall process is a data set that resides on a user
volume in a user readable format. The recall can also be requested by a DFSMShsm
command. Automatic recall returns your migrated data set to a DFSMShsm-managed
volume when you refer to it. The catalog is updated accordingly with the real volser.
Recall returns a migrated data set to a user L0 volume. The recall is transparent and the
application does not need to know that it happened or where the migrated data set resides. To
provide applications with quick access to their migrated data sets, DFSMShsm allows up to
15 concurrent recall tasks. RMF monitor III shows delays caused by the recall operation.
The MVS allocation routine discovers that the data set is migrated when, while accessing the
catalog, it finds the word MIGRAT instead of the volser.
Command recall
Command recall returns your migrated data set to a user volume when you enter the HRECALL
DFSMShsm command through an ISMF panel or by directly keying in the command.
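As a sketch, a TSO user could key in the command directly for a hypothetical data set:
HRECALL 'ABC.FILE1'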
DFSMSrmm
In your enterprise, you store and manage your removable media in several types of media
libraries. For example, in addition to your traditional tape library (a room with tapes, shelves,
and drives), you might have several automated and manual tape libraries. You probably also
have both onsite libraries and offsite storage locations, also known as vaults or stores.
With the DFSMSrmm functional component of DFSMS, you can manage your removable
media as one enterprise-wide library (single image) across systems. Because of the need for
global control information, these systems must have accessibility to shared DASD volumes.
DFSMSrmm manages your installation's tape volumes and the data sets on those volumes.
DFSMSrmm also manages the shelves where volumes reside in all locations except in
automated tape library data servers.
DFSMSrmm manages all tape media (such as cartridge system tapes and 3420 reels), as
well as other removable media you define to it. For example, DFSMSrmm can record the shelf
location for optical disks and track their vital record status; however, it does not manage the
objects on optical disks.
Library management
DFSMSrmm can manage the following devices:
A removable media library, which incorporates all other libraries, such as:
System-managed manual tape libraries
System-managed automated tape libraries
Examples of automated tape libraries include IBM TotalStorage Enterprise Automated Tape
Library (3494) and IBM TotalStorage Virtual Tape Servers (VTS).
Shelf management
DFSMSrmm groups information about removable media by shelves into a central online
inventory, and keeps track of the volumes residing on those shelves. DFSMSrmm can
manage the shelf space that you define in your removable media library and in your storage
locations.
Volume management
DFSMSrmm manages the movement and retention of tape volumes throughout their life
cycle.
For more information about DFSMSrmm, see z/OS DFSMSrmm Guide and Reference,
SC26-7404 and z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405,
and visit:
http://www-1.ibm.com/servers/storage/software/sms/rmm/
DFSMSrmm automatically records information about data sets on tape volumes so that you
can manage the data sets and volumes more efficiently. When all the data sets on a volume
have expired, the volume can be reclaimed and reused. You can optionally move volumes that
are to be retained to another location.
DFSMSrmm helps you manage your tape volumes and shelves at your primary site and
storage locations by recording information in a DFSMSrmm control data set.
In the removable media library, you store your volumes in shelves, where each volume
occupies a single shelf location. This shelf location is referred to as a rack number in the
DFSMSrmm TSO subcommands and ISPF dialog. A rack number matches the volume's
external label. DFSMSrmm uses the external volume serial number to assign a rack number
when adding a volume, unless you specify otherwise. The format of the volume serial you
define to DFSMSrmm must be one to six alphanumeric characters. The rack number must be
six alphanumeric or national characters.
You can have several automated tape libraries or manual tape libraries. You use an
installation-defined library name to define each automated tape library or manual tape library
to the system. DFSMSrmm treats each system-managed tape library as a separate location
or destination.
Since z/OS 1.6, a new EDGRMMxx parmlib member OPTION command, together with the
VLPOOL command, allows better support for the client/server environment.
z/OS 1.8 DFSMSrmm introduces an option to provide tape data set authorization
independent of the RACF TAPVOL and TAPEDSN. This option allows you to use RACF
generic DATASET profiles for both DASD and tape data sets.
All tape media and drives supported by z/OS are supported in this environment. Using
DFSMSrmm, you can fully manage all types of tapes in a non-system-managed tape library,
including 3420 reels, 3480, 3490, and 3590 cartridge system tapes.
Storage location
Storage locations are not part of the removable media library because the volumes in storage
locations are not generally available for immediate use. A storage location is comprised of
shelf locations that you define to DFSMSrmm. A shelf location in a storage location is
identified by a bin number. Storage locations are typically used to store removable media that
are kept for disaster recovery or vital records. DFSMSrmm manages two types of storage
locations: installation-defined storage locations and DFSMSrmm built-in storage locations.
You can define an unlimited number of installation-defined storage locations, using any
eight-character name for each storage location. Within the installation-defined storage
location, you can define the type or shape of the media in the location. You can also define
the bin numbers that DFSMSrmm assigns to the shelf locations in the storage location. You
can request DFSMSrmm shelf-management when you want DFSMSrmm to assign a specific
shelf location to a volume in the location.
For example, an installation can have the LOCAL storage location onsite as a vault in the
computer room, the DISTANT storage location can be a vault in an adjacent building, and the
REMOTE storage location can be a secure facility across town or in another state.
DFSMSrmm provides shelf-management for storage locations so that storage locations can
be managed at the shelf location level.
DFSMSrmm helps you manage the movement of your volumes and retention of your data
over their full life, from initial use to the time they are retired from service. Among the
functions DFSMSrmm performs for you are:
Automatically initializing and erasing volumes
Recording information about volumes and data sets as they are used
Expiration processing
Identifying volumes with high error levels that require replacement
To make full use of all of the DFSMSrmm functions, you specify installation setup options and
define retention and movement policies. DFSMSrmm provides you with utilities to implement
the policies you define. Since z/OS 1.7, we have DFSMSrmm enterprise enablement that
allows high-level languages to issue DFSMSrmm commands through Web services.
You can define shelf space in storage locations. When you move volumes to a storage
location where you have defined shelf space, DFSMSrmm checks for available shelf space
and then assigns each volume a place on the shelf if you request it. You can also set up
DFSMSrmm to reuse shelf space in storage locations.
If your business depends on transaction systems, the batch window can also be a significant cost.
Additionally, you must pay for staff to install, monitor, and operate your storage hardware
devices, for electrical power to keep each piece of storage hardware cool and running, and for
floor space to house the hardware. Removable media, such as optical and tape storage, cost
less per gigabyte (GB) than online storage, but they require additional time and resources to
locate, retrieve, and mount.
To allow your business to grow efficiently and profitably, you need to find ways to control the
growth of your information systems and use your current storage more effectively.
Figure: DFSMS (dfp, dss, rmm, hsm, tvs) with ISMF and tape hardware such as the IBM 3494
and VTS, addressing availability, space, security, and performance.
Storage management
Storage management involves data set allocation, placement, monitoring, migration, backup,
recall, recovery, and deletion. These activities can be done either manually or by using
automated processes.
The DFSMS software product, together with hardware products and installation-specific
requirements for data and resource management, comprises the key to system-managed
storage in a z/OS environment.
The heart of DFSMS is the Storage Management Subsystem (SMS). Using SMS, the storage
administrator defines policies that automate the management of storage and hardware
devices. These policies describe data allocation characteristics, performance and availability
goals, backup and retention requirements, and storage requirements for the system. SMS
governs these policies for the system and the Interactive Storage Management Facility
(ISMF) provides the user interface for defining and maintaining the policies.
Figure: the DFSMS environment, consisting of DFSMS and z/OS complemented by RACF and DFSORT.
DFSMS environment
The DFSMS environment consists of a set of hardware and IBM software products which
together provide a system-managed storage solution for z/OS installations.
DFSMS uses a set of constructs, user interfaces, and routines (using the DFSMS products)
that allow the storage administrator to better manage the storage system. The core logic of
DFSMS, such as the Automatic Class Selection (ACS) routines, ISMF code, and constructs,
is located in DFSMSdfp. DFSMShsm and DFSMSdss are involved in the management class
construct.
In this environment, the Resource Access Control Facility (RACF) and Data Facility Sort
(DFSORT) products complement the functions of the base operating system. RACF provides
resource security functions, and DFSORT adds the capability for faster and more efficient
sorting, merging, copying, reporting, and analyzing of business information.
With system-managed storage, users can allow the system to select the specific unit and
volume for the allocation. They can also specify size requirements in terms of megabytes or
kilobytes. This means the user does not need to know anything about the physical
characteristics of the devices in the installation.
System-managed storage lets you exploit the device technology of new devices without
having to change the JCL UNIT parameter. In a multi-library environment, you can select the
drive based on the library where the cartridge or volume resides. You can use the IBM
TotalStorage Enterprise Automated Tape Library (3494 or 3495) to automatically mount tape
volumes and manage the inventory in an automated tape library. Similar functionality is
available in a system-managed manual tape library. If you are not using SMS for tape
management, you can still access the IBM TotalStorage Enterprise Automated Tape Library
(3494 or 3495) using Basic Tape Library Storage (BTLS) software.
You can use DFSMShsm to automatically back up your various types of data sets and use
point-in-time copy to maintain access to critical data sets while they are being backed up.
Concurrent copy, virtual concurrent copy, SnapShot, and FlashCopy, along with
backup-while-open, have an added advantage in that they avoid invalidating a backup of a
CICS VSAM KSDS due to a control area or control interval split.
You can also create a logical group of data sets, so that the group is backed up at the same
time to allow recovery of the application defined by the group. This is done with the aggregate
backup and recovery support (ABARS) provided by DFSMShsm.
You can also use system-determined block sizes to automatically reblock physical sequential
and partitioned data sets that can be reblocked.
The policies defined in your installation represent decisions about your resources, such as:
What performance objectives are required by the applications accessing the data
Based on these objectives, you can try to better exploit cache and data striping. By tracking
data set I/O activities, you can make better decisions about data set caching policies and
improve overall system performance. For object data, you can track transaction activities
to monitor and improve OAM's performance.
When and how to back up data - incremental or total
Determine the backup frequency, the number of backup versions, and the retention period
by consulting user group representatives. Be sure to consider whether certain data
backups need to be synchronized. For example, if the output data from application A is
used as input for application B, you must coordinate the backups of both applications to
prevent logical errors in the data when they are recovered.
The purpose of a backup plan is to ensure the prompt and complete recovery of data. A
well-documented plan identifies data that requires backup, the levels required, responsibilities
for backing up the data, and methods to be used.
The figure shows a data set surrounded by the four SMS constructs assigned by ACS routines:
data class (what does it look like?), storage class (what is the service level?), management
class (which are the services?), and storage group (where is it placed?).
Figure 5-5 Creating SMS policies
For example, the administrator can define one storage class for data entities requiring high
performance, and another for those requiring standard performance. Then, the administrator
writes Automatic Class Selection (ACS) routines that use naming conventions or other criteria
of your choice to automatically assign the classes that have been defined to data as that data
is created. These ACS routines can then be validated and tested.
DFSMS facilitates all of these tasks by providing menu-driven panels with the Interactive
Storage Management Facility (ISMF). ISMF panels make it easy to define classes, test and
validate ACS routines, and perform other tasks to analyze and manage your storage. Note
that many of these functions are available in batch through the NaviQuest tool.
Figure 5-7 How data sets, objects, and volumes become system-managed
How to be system-managed
Using SMS, you can automate storage management for individual data sets and objects, and
for DASD, optical, and tape volumes. Figure 5-7 shows how a data set, object, DASD volume,
tape volume, or optical volume becomes system-managed. The numbers shown in
parentheses are associated with the following notes:
1. A DASD data set is system-managed if you assign it a storage class. If you do not assign a
storage class, the data set is directed to a non-system-managed DASD or tape volume,
one that is not assigned to a storage group.
2. You can assign a storage class to a tape data set to direct it to a system-managed tape
volume. However, only the tape volume is considered system-managed, not the data set.
3. Objects are also known as byte-stream data, and this data is used in specialized
applications such as image processing, scanned correspondence, and seismic
measurements. Object data typically has no internal record or field structure and, after it is
written, the data is not changed or updated. However, the data can be referenced many
times during its lifetime. Objects are processed by OAM. Each object has a storage class;
therefore, objects are system-managed. The optical or tape volume on which the object
resides is also system-managed.
4. Tape volumes are added to tape storage groups in tape libraries when the tape data set is
created.
Data class attributes define space and data characteristics that are normally specified on JCL
DD statements, TSO/E ALLOCATE command, IDCAMS DEFINE commands, and dynamic
allocation requests. For tape data sets, data class attributes can also specify the type of
cartridge and recording method, and if the data is to be compacted. Users then need only
specify the appropriate data classes to create standardized data sets.
You can override various data set attributes assigned in the data class, but you cannot
change the data class name assigned through an ACS routine.
Note: The data class name is not saved for non-system-managed data sets, although the
allocation attributes in the data class are used to allocate the data set.
For objects on tape, we recommend that you do not assign a data class through the ACS
routines. To assign a data class, specify the name of that data class on the SETOAM command.
If you change a data class definition, the changes only affect new allocations. Existing data
sets allocated with the data class are not changed.
Some of the availability requirements that you specify to storage classes (such as cache and
dual copy) can only be met by DASD volumes attached through one of the following storage
control units or a similar device:
3990-3 or 3990-6
RAMAC Array Subsystem
Enterprise Storage Server (ESS)
DS6000 or DS8000
Figure 5-9 shows storage control unit configurations and their storage class attribute values.
With a storage class, you can assign a data set to dual copy volumes to ensure continuous
availability for the data set. With dual copy, two current copies of the data set are kept on
separate DASD volumes (by the control unit). If the volume containing the primary copy of the
data set is damaged, the companion volume is automatically brought online and the data set
continues to be available and current. Remote copy is the same, with the two volumes in
distinct control units (generally remote).
You can specify an I/O response time objective with storage class by using the millisecond
response time (MSR) parameter. During data set allocation, the system attempts to select the
closest available volume to the specified performance objective. Also, throughout the life of the
data set, DFSMS uses the MSR value to dynamically manage cache algorithms such as DASD
Fast Write (DFW) and Inhibit Cache Load (ICL) in order to reach the MSR target I/O response
time. This DFSMS function is called dynamic cache management.
For objects, the system uses the performance goals you set in the storage class to place the
object on DASD, optical, or tape volumes. The storage class is assigned to an object when it
is stored or when the object is moved. The ACS routines can override this assignment.
Note: If you change a storage class definition, the changes affect the performance service
levels of existing data sets that are assigned to that class when the data sets are
subsequently opened. However, the definition changes do not affect the location or
allocation characteristics of existing data sets.
Figure 5-11 Management class functions: space, expiration, backup, migration/object
transition, and GDG management requirements, carried out through the Storage Management
Subsystem by DFSMShsm and DFSMSdss.
Management classes let you define management requirements for individual data sets, rather
than defining the requirements for entire volumes. All the data set functions described in the
management class are executed by DFSMShsm and DFSMSdss programs. Figure 5-11 on
page 257 shows the sort of functions an installation can define in a management class.
The ACS routine can override the management class specified in JCL, or in the ALLOCATE or
DEFINE command. You cannot override management class attributes through JCL or command
parameters.
Note: If you change a management class definition, the changes affect the management
requirements of existing data sets and objects that are assigned that class. You can
reassign management classes when data sets are renamed.
Figure 5-12 Storage group examples: SMS-managed storage groups (VIO, PRIMARY, LARGE,
OBJECT, OBJECT BACKUP, TAPE, and separate groups for DB2, IMS, and CICS data),
DFSMShsm-owned volumes (migration level 1 and 2, backup, and dump), and
non-system-managed volumes (SYSTEM, UNMOVABLE, TAPE).
Storage groups
A storage group is a collection of storage volumes and attributes that you define. The
collection can be a group of:
System paging volumes
DASD volumes
Tape volumes
Optical volumes
Combination of DASD and optical volumes that look alike
DASD, tape, and optical volumes treated as a single object storage hierarchy
Storage groups, along with storage classes, help reduce the requirement for users to
understand the physical characteristics of the storage devices which contain their data.
In a tape environment, you can also use tape storage groups to direct a new tape data set to
an automated or manual tape library.
DFSMShsm uses various storage group attributes to determine whether the volumes in the
storage group are eligible for automatic space or availability management.
Figure 5-12 shows an example of how an installation can group storage volumes according to
their objective. In this example:
SMS-managed DASD volumes are grouped into storage groups so that primary data sets,
large data sets, DB2 data, IMS data, and CICS data are all separated.
Note: A storage group is assigned to a data set only through the storage group ACS
routine. Users cannot specify a storage group when they allocate a data set, although they
can specify a unit and volume.
Whether or not to honor a user's unit and volume request is an installation decision, but we
recommend that you discourage users from directly requesting specific devices. It is more
effective for users to specify the logical storage requirements of their data by storage and
management class, which the installation can then verify in the ACS routines.
For objects, there are two types of storage groups, OBJECT and OBJECT BACKUP. An
OBJECT storage group is assigned by OAM when the object is stored; the storage group
ACS routine can override this assignment. There is only one OBJECT BACKUP storage
group, and all backup copies of all objects are assigned to this storage group.
SMS volume selection
SMS determines which volumes are used for data set allocation by developing a list of all
volumes from the storage groups assigned by the storage group ACS routine. Volumes are
then either removed from further consideration or flagged as the following:
Primary Volumes online, below threshold, that meet all the specified criteria in the
storage class.
Secondary Volumes that do not meet all the criteria for primary volumes.
Tertiary When the number of volumes in the storage group is less than the number of
volumes that are requested.
Rejected Volumes that do not meet the required specifications. They are not
candidates for selection.
SMS starts volume selection from the primary list; if no volumes are available, SMS selects
from the secondary; and, if no secondary volumes are available, SMS selects from the
tertiary list.
SMS interfaces with the system resource manager (SRM) to select from the eligible volumes
in the primary list. SRM uses device delays as one of the criteria for selection, and does not
prefer a volume if it is already allocated in the jobstep. This is useful for batch processing
when the data set is accessed immediately after creation.
SMS does not use SRM to select volumes from the secondary or tertiary volume lists. It uses
a form of randomization to prevent skewed allocations in instances such as when new
volumes are added to a storage group, or when the free space statistics are not current on
volumes.
For a striped data set, when multiple storage groups are assigned to an allocation, SMS
examines each storage group and selects the one that offers the largest number of volumes
attached to unique control units. This is called control unit separation. After a storage group
has been selected, SMS selects the volumes based on available space, control unit
separation, and performance characteristics if they are specified in the assigned storage
class.
The user-defined group of data sets can be those belonging to an application, or any
combination of data sets that you want treated as a separate entity. Aggregate processing
enables you to:
Back up and recover data sets by application, to enable business to resume at a remote
site if necessary
Move applications in a non-emergency situation in conjunction with personnel moves or
workload balancing
Duplicate a problem at another site
You can use aggregate groups as a supplement to using management class for applications
that are critical to your business. You can associate an aggregate group with a management
class. The management class specifies backup attributes for the aggregate group, such as
the copy technique for backing up DASD data sets on primary volumes, the number of
aggregate versions to retain, and how long to retain them.
Although SMS must be used on the system where the backups are performed, you can
recover aggregate groups to systems that are not using SMS, provided that the groups do not
contain data that requires that SMS be active, such as PDSEs. You can use aggregate groups
to transfer applications to other data processing installations, or to migrate applications to
newly-installed DASD volumes. You can transfer the application's migrated data, along with its
active data, without recalling the migrated data.
Figure: ACS routine processing. New allocations, and existing data sets converted by
DFSMSdss or DFSMShsm, pass through the storage class ACS routine; if a storage class is
assigned, the management class and storage group ACS routines run and the data set is
placed on a system-managed volume; if no storage class is assigned, the data set is not
system-managed.
The ACS language contains a number of read-only variables, which you can use to analyze
new data allocations. For example, you can use the read-only variable &DSN to make class
and group assignments based on data set or object collection name, or &LLQ to make
assignments based on the low-level qualifier of the data set or object collection name.
With z/OS V1R6, you can use a new ACS routine read-only security label variable,
&SECLABL, as input to the ACS routine. A security label is a name used to represent an
association between a particular security level and a set of security categories. It indicates
the minimum level of security required to access a data set protected by this profile.
You use the four read-write variables to assign the class or storage group you determine for
the data set or object, based on the routine you are writing. For example, you use the
&STORCLAS variable to assign a storage class to a data set or object.
For each SMS configuration, you can write as many as four routines: one each for data class,
storage class, management class, and storage group. Use ISMF to create, translate, validate,
and test the routines.
Because data allocations, whether dynamic or through JCL, are processed through ACS
routines, you can enforce installation standards for data allocation on system-managed and
non-system-managed volumes. ACS routines also enable you to override user specifications
for data, storage, and management class, and requests for specific storage volumes.
You can use the ACS routines to determine the SMS classes for data sets created by the
Distributed FileManager/MVS. If a remote user does not specify a storage class, and if the
ACS routines decide that the data set is not to be system-managed, then the Distributed
FileManager/MVS terminates the creation process immediately and returns an error reply
message to the source. Therefore, when you construct your ACS routines, consider the
potential data set creation requests of remote users.
SMS configuration
An SMS configuration is composed of:
A set of data class, management class, storage class, and storage group definitions
ACS routines to assign the classes and groups
Optical library and drive definitions
Tape library definitions
Aggregate group definitions
An SMS base configuration, which contains information such as:
Default management class
Default device geometry
The systems in the installation for which the subsystem manages storage
The SMS configuration is stored in SMS control data sets, which are VSAM linear data sets.
You must define the control data sets before activating SMS. SMS uses the following types of
control data sets:
Source Control Data Set (SCDS)
Active Control Data Set (ACDS)
Communications Data Set (COMMDS)
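As a minimal sketch, the control data sets can be allocated with IDCAMS as VSAM linear data
sets; the data set name, volume, and space values here are hypothetical, and
SHAREOPTIONS(2,3) follows the recommendation given later in this section for the SCDS:
//DEFSCDS  JOB (ACCT),'DEFINE SCDS'
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER(NAME(SYS1.SMS.SCDS) -
         LINEAR -
         VOLUMES(SMSV01) -
         TRACKS(15 5) -
         SHAREOPTIONS(2,3))
/*
The ACDS and COMMDS can be allocated in the same way, on separate shared volumes, as
described below.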
You use the SCDS to develop and test SMS configurations. Before activating a new
configuration, retain at least one prior configuration in case you need to regress to it because
of an error. The SCDS is never used to manage allocations.
We recommend that you have extra ACDSs in case a hardware failure causes the loss of your
primary ACDS. It must reside on a shared device, accessible to all systems, to ensure that
they share a common view of the active configuration. Do not have the ACDS reside on the
same device as the COMMDS or SCDS. Both the ACDS and COMMDS are needed for SMS
operation across the complex. Separation protects against hardware failure. Also create a
backup ACDS in case of hardware failure or accidental data loss or corruption.
The COMMDS must reside on a shared device accessible to all systems. However, do not
allocate it on the same device as the ACDS. Create a spare COMMDS in case of a hardware
failure or accidental data loss or corruption. SMS activation fails if the COMMDS is
unavailable.
Figure: DFSMS implementation phases, such as managing temporary data and managing tape volumes.
Implementing DFSMS
You can implement SMS to fit your specific needs. You do not have to implement and use all
of the SMS functions. Rather, you can implement the functions you are most interested in
first. For example, you can:
Set up a storage group to only exploit the functions provided by extended format data sets,
such as striping, system-managed buffering (SMB), partial release, and so on.
Put data in a pool of one or more storage groups and assign them policies at the storage
group level to implement DFSMShsm operations in stages.
Exploit VSAM record level sharing (RLS).
In this book, we present an overview of the steps needed to activate, and manage data with,
a minimal SMS configuration, without affecting your JCL or data set allocations. To implement
DFSMS in your installation, however, see z/OS DFSMS Implementing System-Managed
Storage, SC26-7407.
All of these elements are required for a valid SMS configuration, except for the storage class
ACS routine.
The steps needed to activate the minimal configuration are presented in Figure 5-18. When
implementing DFSMS, beginning with a minimal configuration allows you to:
Gain experience with ISMF applications for the storage administrator, because you use
ISMF applications to define and activate the SMS configuration.
Specify SHAREOPTIONS(2,3) only for the SCDS. This lets one update-mode user operate
simultaneously with other read-mode users between regions.
Define GRS resource names for active SMS control data sets
If you plan to share SMS control data sets between systems, consider the effects of multiple
systems sharing these data sets. Access is serialized by the use of RESERVE, which locks
out access to the entire device volume from other systems until the RELEASE is issued by the
task using the resource. This is undesirable, especially when there are other data sets on the
volume.
Place the resource name IGDCDSXS in the RESERVE conversion RNL as a generic entry to
convert the RESERVE/RELEASE to an ENQueue/DEQueue. This minimizes delays due to
contention for resources and prevents deadlocks associated with the VARY SMS command.
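A sketch of the corresponding GRSRNLxx entry (the member suffix and placement within the
member are installation choices):
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(IGDCDSXS)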
Important: If there are multiple SMS complexes within a global resource serialization
complex, be sure to use unique COMMDS and ACDS data set names to prevent false
contention.
For information about allocating COMMDS and ACDS data set names, see z/OS DFSMS
Implementing System-Managed Storage, SC26-7407.
Figure: building the minimal configuration: security definitions, storage class and storage group
definitions, ACS routine creation, translation, and validation, and SCDS validation.
For more information, see z/OS DFSMSdfp Storage Administration Reference, SC26-7402.
Defining a data class, a management class, and creating their respective ACS routines are
not required for a valid SCDS. However, because of the importance of the default
management class, we recommend that you include it in your minimal configuration.
For a detailed description of SMS classes and groups, see z/OS DFSMS Implementing
System-Managed Storage, SC26-7407.
The DFSMS product tape contains a set of sample ACS routines. The appendix of z/OS
DFSMSdfp Storage Administration Reference, SC26-7402, contains sample definitions of the
SMS classes and groups that are used in the sample ACS routines. The starter set
configuration can be used as a model for your own SCDS. For a detailed description of base
configuration attributes and how to use ISMF to define its contents, see z/OS DFSMSdfp
Storage Administration Reference, SC26-7402.
In the storage class ACS routine, the &STORCLAS variable is set to a null value to prevent
users from coding a storage class in JCL before you want to have system-managed data sets.
You define the class using ISMF. Select Storage Class in the primary menu. Then you can
define the class, NONSMS, in your configuration in one of two ways:
Select option 3 Define in the Storage Class Application Selection panel. The CDS Name
field must point to the SCDS you are building.
Select option 1 Display in the Storage Class Application Selection panel. The CDS Name
field must point to the starter set SCDS. Then, in the displayed panel, use the COPY line
operator to copy the definition of NONSMS from the starter set SCDS to your own SCDS.
Define a storage group (for example, NOVOLS) in your SCDS. A name like NOVOLS is useful
because you know it does not contain valid volumes.
No management classes are assigned when the minimal configuration is active. Definition of
this default is done here to prepare for the managing permanent data implementation phase.
The management class, STANDEF, is defined in the starter set SCDS. You can copy its
definition to your own SCDS in the same way as the storage class, NONSMS.
The storage group ACS routine will never run if a null storage class is assigned. Therefore, no
data sets are allocated as system-managed by the minimal configuration. However, you must
code a trivial one to satisfy the SMS requirements for a valid SCDS. After you have written the
ACS routines, use ISMF to translate them into executable form.
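As a minimal sketch, the two required routines can be as simple as the following; NOVOLS is
the storage group defined earlier, and the null storage class assignment keeps data sets from
becoming system-managed:
PROC STORCLAS
  /* Assign a null storage class: nothing is system-managed yet */
  SET &STORCLAS = ''
END
PROC STORGRP
  /* Trivial routine, required for a valid SCDS; it never runs  */
  /* while the storage class ACS routine assigns a null value   */
  SET &STORGRP = 'NOVOLS'
END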
Follow these steps to create a data set that contains your ACS routines:
1. If you do not have the starter set, allocate a fixed-block PDS or PDSE with LRECL=80 to
contain your ACS routines. Otherwise, start with the next step.
2. On the ISMF Primary Option Menu, select Automatic Class Selection to display the ACS
Application Selection panel.
3. Select option 1 Edit. When the next panel is shown, enter in the Edit panel the name of
the PDS or PDSE data set you want to create to contain your source ACS routines.
For more information, see z/OS DFSMS: Using the Interactive Storage Management Facility,
SC26-7411.
Every SMS system must have an IGDSMSzz member in SYS1.PARMLIB that specifies a
required ACDS and COMMDS control data set pair. This ACDS and COMMDS pair is used if
the COMMDS of the pair does not point to another COMMDS.
If the COMMDS of the pair refers to another COMMDS during IPL, it means a more recent
COMMDS has been used. SMS uses the most recent COMMDS to ensure that you cannot
IPL with a down-level configuration.
The data sets that you specify for the ACDS and COMMDS pair must be the same for every
system in an SMS complex. Whenever you change the ACDS or COMMDS, update the
IGDSMSzz for every system in the SMS complex so that it specifies the same data sets.
IGDSMSzz has many parameters. For a complete description of the SMS parameters, see
z/OS MVS Initialization and Tuning Reference, SA22-7592, and z/OS DFSMSdfp Storage
Administration Reference, SC26-7402.
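As a minimal sketch of such a member (the control data set names are hypothetical, and
INTERVAL is only one of the many optional parameters):
SMS ACDS(SYS1.SMS.ACDS)
    COMMDS(SYS1.SMS.COMMDS)
    INTERVAL(15)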
Activating a new SMS configuration
Starting SMS
To start SMS, which starts the SMS address space, use either of these methods:
With SMS=xx defined in IEASYSxx and SMS defined as a valid subsystem, IPL the
system. This starts SMS automatically.
Or, with SMS defined as a valid subsystem to z/OS, IPL the system. Start SMS later, using
the SET SMS=yy MVS operator command.
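As a sketch, assuming a hypothetical suffix of 00: IEASYSxx would contain SMS=00 to start
SMS at IPL, or the operator could start it later with:
SET SMS=00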
You can manually activate a new SMS configuration in two ways. Note that SMS must be
active before you use one of these methods:
1. Activating an SMS configuration from ISMF:
From the ISMF Primary Option Menu panel, select Control Data Set.
In the CDS Application Selection panel, enter your SCDS data set name and select 5
Activate, or enter the ACTIVATE command on the command line.
The ACTIVATE command, which runs from the ISMF CDS application, is equivalent to the
SETSMS operator command with the SCDS keyword specified.
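2. Activating an SMS configuration from the operator console: enter the SETSMS operator
command with the SCDS keyword. A sketch, with a hypothetical SCDS name:
SETSMS SCDS(SYS1.SMS.SCDS)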
If you use RACF, you can enable storage administrators to activate SMS configurations from
ISMF by defining the facility STGADMIN.IGD.ACTIVATE.CONFIGURATION and issuing
permit commands for each storage administrator.
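A sketch of the corresponding RACF definitions, where STGADM1 is a hypothetical storage
administrator user ID:
RDEFINE FACILITY STGADMIN.IGD.ACTIVATE.CONFIGURATION UACC(NONE)
PERMIT STGADMIN.IGD.ACTIVATE.CONFIGURATION CLASS(FACILITY) ID(STGADM1) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH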
The SET SMS=xx and SETSMS operator commands differ as follows:
When and how to use the command: SET SMS=xx initializes SMS parameters and starts SMS
if SMS is defined but not started at IPL; it also changes SMS parameters when SMS is
running. SETSMS changes SMS parameters only when SMS is running.
What default values are available: with SET SMS=xx, default values are used for non-specified
parameters. With SETSMS, there are no default values; non-specified parameters remain
unchanged.
For more information about operator commands, see z/OS MVS System Commands,
SA22-7627.
D SMS,SG(STRIPE),LISTVOL
IGD002I 16:02:30 DISPLAY SMS 581
The DISPLAY SMS command can be used in various variations. To learn about the full
functionality of this command, see z/OS MVS System Commands, SA22-7627.
Inefficient space usage and poor data allocation cause problems with space and performance
management. In a DFSMS environment, you can enforce good allocation practices to help
reduce a variety of these problems. The following section highlights how to exploit SMS
capabilities.
Data classes can be determined from the user-specified value on the DATACLAS parameter
(DD card, TSO Alloc, Dynalloc macro), from a RACF default, or by ACS routines. ACS
routines can also override user-specified or RACF default data classes.
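As a sketch, a user could request a data class explicitly on a DD statement (both names here
are hypothetical), keeping in mind that the ACS routines can still override the choice:
//SEQDATA  DD DSN=PROJ.TEST.SEQ,DISP=(,CATLG),DATACLAS=DCSEQ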
You can override a data class attribute (not the data class itself) using JCL or dynamic
allocation parameters. DFSMS usually does not change values that are explicitly specified,
because doing so alters the original meaning and intent of the allocation. There is an
exception: users cannot override the data class attributes of dynamically allocated data sets if you use
the IEFDB401 user exit.
For additional information about data classes see also 5.8, Using data classes on page 251.
For sample data classes, descriptions, and ACS routines, see z/OS DFSMS Implementing
System-Managed Storage, SC26-7407.
You take full advantage of system-managed storage when you allow the system to place data
on the most appropriate device in the most efficient way, that is, when you use system-managed
data sets.
When converting data sets for use in DFSMS, users do not have to remove the UNIT and
VOL=SER parameters from existing JCL, because volume and unit information can be ignored
by the ACS routines.
(However, you should work with users to evaluate UNIT and VOL=SER dependencies before
conversion).
If you keep the VOL=SER parameter for a non-SMS volume, but you are trying to access a
system-managed data set, then SMS might not find the data set. All SMS data sets (the ones
with a storage class) must reside on a system-managed volume.
You should implement a naming convention for your data sets. Although a naming convention is
not a prerequisite for DFSMS conversion, it makes more efficient use of DFSMS. You can
also reduce the cost of storage management significantly by grouping data that shares
common management requirements. Naming conventions are an effective way of grouping
data. They also:
Simplify service-level assignments to data
Facilitate writing and maintaining ACS routines
Allow data to be mixed in a system-managed environment while retaining separate
management criteria
Provide a filtering technique useful with many storage management products
Simplify the data definition step of aggregate backup and recovery support
Most naming conventions are based on the HLQ and LLQ of the data name. Other levels of
qualifiers can be used to identify generation data sets and database data. They can also be
used to help users to identify their own data.
Do not embed information that is subject to frequent change in the HLQ, such as department
number, application location, output device type, job name, or access method. Set a standard
within the HLQ. Figure 5-28 on page 289 shows examples of naming standards.
Figure 5-29 shows examples of how you can use LLQ naming standards to indicate the
storage management processing criteria.
The first column lists the LLQ of a data name. An asterisk indicates where a partial qualifier
can be used. For example, LIST* indicates that only the first four characters of the LLQ must
be LIST; valid qualifiers include LIST1, LISTING, and LISTOUT. The remaining columns show
the storage management processing information for the data listed.
Negotiate with your user group representatives to agree on the specific policies for the
installation, how soon you can implement them, and how strongly you enforce them.
You can simplify storage management by limiting the number of data sets and volumes that
cannot be system-managed.
Figure: data class templates (DC A, DC B, DC C). Data class attributes include data set type,
record length, block size, space requirements, expiration date, and VSAM attributes.
Have data class names indicate the type of data to which they are assigned, which makes it
easier for users to identify the template they need to use for allocation.
You define data classes using the ISMF data class application. Users can access the Data
Class List panel to determine which data classes are available and the allocation values that
each data class contains.
Figure 5-32 on page 294 contains information that can help in this task. For more information
about planning and defining data classes, see z/OS DFSMSdfp Storage Administration
Reference, SC26-7402.
For detailed information about specifying data class attributes, see z/OS DFSMSdfp Storage
Administration Reference, SC26-7402.
Figure 5-33 Using data class (DC) ACS routine to enforce standards
The data class ACS routine provides an automatic method for enforcing standards because it
is called for system-managed and non-system-managed data set allocations. Standards are
enforced automatically at allocation time, rather than through manual techniques after
allocation.
Enforcing standards optimizes data processing resources, improves service to users, and
positions you for implementing system-managed storage. You can fail requests or issue
warning messages to users who do not conform to standards. Consider enforcing the
following standards in your DFSMS environment:
Prevent extended retention or expiration periods.
Prevent specific volume allocations, unless authorized. For example, you can control
allocations to spare, system, database, or other volumes.
Require valid naming conventions before implementing DFSMS system management for
permanent data sets.
For example, with the use of data classes, you have less use for the JCL keywords UNIT,
DCB, and AMP. When you start using system-managed data sets, you do not need to use the
JCL VOL keyword.
In the following sections, we present sample jobs exemplifying the use of JCL keywords
when:
Creating a sequential data set
Creating a VSAM cluster
Specifying a retention period
Specifying an expiration date
//NEWDATA DD DSN=FILE.SEQ1,
// DISP=(,CATLG),
// SPACE=(50,(5,5)),AVGREC=M,
// RECFM=VB,LRECL=80
Figure 5-35 shows an example of JCL used to create a data set in a system-managed
environment.
Table 5-2 lists the attributes a user can override with JCL.
For more information about data classes refer to 5.8, Using data classes on page 251 and
5.32, Data class attributes on page 294.
As previously mentioned, in order to use a data class, the data set does not have to be
system-managed. An installation can take advantages of a minimal SMS configuration to
simplify JCL use and manage data set allocation.
For information about managing data allocation, see z/OS DFSMS: Using Data Sets,
SC26-7410.
//VSAM DD DSN=NEW.VSAM,
// DISP=(,CATLG),
// SPACE=(1,(2,2)),AVGREC=M,
// RECORG=KS,KEYLEN=17,KEYOFF=6,
// LRECL=80
This allocation creates the KSDS cluster NEW.VSAM with its NEW.VSAM.DATA and
NEW.VSAM.INDEX components.
You can use JCL DD statement parameters to override various data class attributes; see
Table 5-2 on page 298 for those related to VSAM data sets.
A data set with a disposition of MOD is treated as a NEW allocation if it does not already
exist; otherwise, it is treated as an OLD allocation.
In a non-SMS environment, a VSAM cluster can be created only through IDCAMS. In
Figure 5-36, NEW.VSAM refers to a KSDS VSAM cluster.
You cannot use certain parameters in JCL when allocating VSAM data sets, although you can
use them in the IDCAMS DEFINE command.
//RETAIN DD DSN=DEPTM86.RETPD.DATA,
// DISP=(,CATLG),RETPD=365
//RETAIN DD DSN=DEPTM86.EXPDT.DATA,
// DISP=(,CATLG),EXPDT=2006/013
The VTOC entry for non-VSAM and VSAM data sets contains the expiration date as declared
in the JCL, the TSO ALLOCATE command, the IDCAMS DEFINE command, or in the data class
definition. The expiration date is placed in the VTOC either directly from the date
specification, or after it is calculated from the retention period specification. The expiration
date in the catalog entry exists for information purposes only. If you specify the current date or
an earlier date, the data set is immediately eligible for replacement.
You can use a management class to limit or ignore the RETPD and EXPDT parameters given
by a user. If a user specifies values that exceed the maximum allowed by the management
class definition, the retention period is reset to the allowed maximum. For an expiration date
beyond year 1999 use the following format: YYYY/DDD. For more information about using
management class to control retention period and expiration date, see z/OS DFSMShsm
Storage Administration Guide, SC35-0421.
Important: Expiration dates 99365, 99366, 1999/365, and 1999/366 are special values that
mean the data set never expires.
If you have DFSMS installed, you can extend PDSE sharing to enable multiple users on
multiple systems to concurrently create new PDSE members and read existing members.
Using the PDSESHARING keyword in the SYS1.PARMLIB member, IGDSMSxx, you can
specify:
NORMAL. This allows multiple users to read any member of a PDSE.
EXTENDED. This allows multiple users to read any member or create new members of a
PDSE.
All systems sharing PDSEs need to be upgraded to DFSMS to use the extended PDSE
sharing capability.
After updating the IGDSMSxx member of SYS1.PARMLIB, you need to issue the SET SMS=xx
command on every system in the complex to activate the sharing capability. See also
z/OS DFSMS: Using Data Sets, SC26-7410, for information about PDSE sharing.
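A sketch of the relevant IGDSMSxx entries, reusing the hypothetical control data set names
from the earlier example:
SMS ACDS(SYS1.SMS.ACDS)
    COMMDS(SYS1.SMS.COMMDS)
    PDSESHARING(EXTENDED)
The SET SMS=xx command then activates the updated member on each system.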
Although SMS supports PDSs, consider converting these to the PDSE format. Refer to 4.26,
PDSE: Conversion on page 150 for more information about PDSE conversion.
By using the &DSNTYPE read-only variable in the ACS routine for data-class selection, you
can control which PDSs are to be allocated as PDSEs. The following values are valid for
DSNTYPE in the data class ACS routines:
&DSNTYPE = 'LIBRARY' for PDSEs.
&DSNTYPE = 'PDS' for PDSs.
&DSNTYPE is not specified. This indicates that the allocation request is provided by the
user through JCL, the TSO/E ALLOCATE command, or dynamic allocation.
If you specify a DSNTYPE value in the JCL, and a separate DSNTYPE value is also specified
in the data class selected by ACS routines for the allocation, the value specified in the data
class is ignored.
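As a sketch, a data class ACS routine could route allocations by &DSNTYPE to hypothetical
data classes (DCPDSE and DCPDS) defined by the installation:
PROC DATACLAS
  /* Route library and PDS requests to installation data classes */
  IF &DSNTYPE = 'LIBRARY' THEN
    SET &DATACLAS = 'DCPDSE'
  IF &DSNTYPE = 'PDS' THEN
    SET &DATACLAS = 'DCPDS'
END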
Figure: common types of data that can be system-managed: temporary data, permanent data,
object data, database data, and system data.
These are common types of data that can be system-managed. For details on how these data
types can be system-managed using SMS storage groups, see z/OS DFSMS Implementing
System-Managed Storage, SC26-7407.
Temporary data Data sets used only for the duration of a job, job step, or terminal
session, and then deleted. These data sets can be cataloged or
uncataloged, and can range in size from small to very large.
Permanent data Data sets consisting of:
Interactive data
TSO user data sets
ISPF/PDF libraries you use during a terminal session
Data sets classified in this category are typically small, and are
frequently accessed and updated.
Batch data Data that is classified as either online-initiated, production, or test.
Data accessed as online-initiated are background jobs that an
online facility (such as TSO) generates.
Uncataloged data
When data sets are cataloged, users do not need to know which volumes the data sets reside
on when they reference them; they do not need to specify unit type or volume serial number.
This is essential in an environment with storage groups, where users do not have private
volumes.
Figure: ISMF Primary Option Menu panel (z/OS DFSMS V1R6).
ISMF provides interactive access to the space management, backup, and recovery services
of the DFSMShsm and DFSMSdss functional components of DFSMS, to the tape
management services of the DFSMSrmm functional component, as well as to other products.
DFSMS introduces the ability to use ISMF to define attributes of tape storage groups and
libraries.
A storage administrator uses ISMF to define the installation's policy for managing storage by
defining and managing SMS classes, groups, and ACS routines. ISMF then places the
configuration in an SCDS. You can activate an SCDS through ISMF or an operator command.
ISMF is menu-driven, with fast paths for many of its functions. ISMF uses the ISPF data-tag
language (DTL) to give its functional panels on workstations the look of common user access
(CUA) panels and a graphical user interface (GUI).
ISMF generates a data list based on your selection criteria. Once the list is built, you can use
ISMF entry panels to perform space management or backup and recovery tasks against the
entries in the list.
As a user performing data management tasks against individual data sets or against lists of
data sets or volumes, you can use ISMF to:
Edit, browse, and sort data set records
Delete data sets and backup copies
Protect data sets by limiting their access
Recover unused space from data sets and consolidate free space on DASD volumes
Copy data sets or DASD volumes to the same device or another device
Migrate data sets to another migration level
You cannot allocate data sets from ISMF. Data sets are allocated from ISPF, from TSO, or
with JCL statements. ISMF provides the DSUTIL command, which enables users to get to
ISPF and toggle back to ISMF.
Figure 5-46 ISMF Primary Option Menu panel for storage administrator mode
Accessing ISMF
How you access ISMF depends on your site.
You can create an option on the ISPF Primary Option Menu to access ISMF. Then access
ISMF by typing the appropriate option after the arrow on the Option field, in the ISPF
Primary Option Menu. This starts an ISMF session from the ISPF/PDF Primary Option
Menu.
To access ISMF directly from TSO, use the command:
ISPSTART PGM(DGTFMD01) NEWAPPL(DGT)
There are two Primary Option Menus, one for storage administrators, and another for end
users. Figure 5-46 shows the menu available to storage administrators; it includes additional
applications not available to end users.
Option 0 controls the user mode or the type of Primary Option Menu to be displayed. Refer to
5.47, ISMF: Profile option on page 315 for information about how to change the user mode.
The ISMF Primary Option Menu example assumes installation of DFSMS at the current
release level. For information about adding the DFSORT option to your Primary Option Menu,
see DFSORT Installation and Customization Release 14, SC33-4034.
Figure: ISMF Profile Option Menu panel, from which you can also select ISMF or ISPF JCL
statements for processing batch jobs.
Figure 5-48 shows the panel you reach when you press the Help PF key with the cursor in the
Line Operator field of the panel shown in Figure 5-49 on page 317 where the arrow points to
the data set. The Data Set List Line Operators panel shows the commands available to enter
in that field. If you want an explanation about a specific command, type the option
corresponding to the desired command and a panel is displayed showing information about
the command function.
You can exploit the Help PF key, when defining classes, to obtain information about what you
have to enter in the fields. Place the cursor in the field and press the Help PF key.
To see and change the assigned functions to the PF keys, enter the KEYS command in the
Command field.
Figure 5-50 shows the data set list generated for the generic data set name MHLRES2.**.
If ISMF is unable to get certain information required to check whether a data set meets the
specified selection criteria, that data set is also included in the list. Missing information is
indicated by dashes in the corresponding column.
The Data Fields field shows how many fields you have in the list. You can navigate throughout
these fields using Right and Left PF keys. The figure also shows the use of the actions bar.
Volume option
Selecting option 2 (Volume) from the ISMF Primary Option Menu takes you to the Volume List
Selection Menu panel, as follows:
Selecting option 1 (DASD) displays the Volume Selection Entry Panel, shown in part (1) of
Figure 5-51. Using filters, you can select a Volume List Panel, shown in part (2) of the figure.
To view the commands you can use in the LINE OPERATOR field (marked with a circle in the
figure), place the cursor in the field and press the Help PF key.
Data class attributes are assigned to a data set when the data set is created. They apply to
both SMS-managed and non-SMS-managed data sets. Attributes specified in JCL or
equivalent allocation statements override those specified in a data class. Individual attributes
in a data class can be overridden by JCL, TSO, IDCAMS, and dynamic allocation statements.
Entering the DISPLAY line command in the LINE OPERATOR field, in front of a data class name,
displays the information about that data class, without requiring you to navigate using the
right and left PF keys.
The Storage Class Application Selection panel lets the storage administrator specify
performance objectives and availability attributes that characterize a collection of data sets.
For objects, the storage administrator can define the performance attribute Initial Access
Response Seconds. A data set or object must be assigned to a storage class in order to be
managed by DFSMS.
You can specify the DISPLAY line operator next to any class name on a class list to generate a
panel that displays values associated with that particular class. This information can help you
decide whether you need to assign a new DFSMS class to your data set or object.
If you determine that a data set you own should be associated with a different management class or storage class, and if you have the authorization, you can use the ALTER line operator against a data set list entry to specify another storage class or management class.
ISMF lists
After obtaining a list (data set, data class, or storage class), you can save the list by typing
SAVE listname in the Command panel field. To see the saved lists, use the option L (List) in
the ISMF Primary Option Menu.
The List Application panel displays a list of all lists saved from ISMF applications. Each entry
in the list represents a list that was saved. If there are no saved lists to be found, the ISMF
Primary Option Menu panel is redisplayed with the message that the list is empty.
You can reuse and delete saved lists. From the List Application, you can reuse lists as though
they were created from the corresponding application. You can then use line operators and
commands to tailor and manage the information in the saved lists.
To learn more about the ISMF panel, see z/OS DFSMS: Using the Interactive Storage
Management Facility, SC26-7411.
Chapter 6. Catalogs
A catalog is a data set that contains information about other data sets. It provides users with
the ability to locate a data set by name, without knowing where the data set resides. By
cataloging data sets, your users will need to know less about your storage setup. Thus, data
can be moved from one device to another, without requiring a change in JCL DD statements
that refer to an existing data set.
Cataloging data sets also simplifies backup and recovery procedures. Catalogs are the central information point for VSAM data sets; all VSAM data sets must be cataloged. In addition, all SMS-managed data sets must be cataloged.
DFSMS allows you to use catalogs for any type of data set or object. Many advanced
functions require the use of catalogs, for example, the storage management subsystem.
Multiple user catalogs contain information about user data sets, and a single master catalog
contains entries for system data sets and user catalogs.
In z/OS, the component that controls catalogs is embedded in DFSMSdfp and is called
Catalog Management. Catalog Management has its own address space, named the Catalog Address Space (CAS). This address space is used to buffer catalog records and to hold control blocks and code.
The modern catalog structure in z/OS is called the integrated catalog facility (ICF). All data
sets managed by the storage management subsystem (SMS) must be cataloged in an ICF
catalog.
Most installations depend on the availability of catalog facilities to run production job streams
and to support online users. For maximum reliability and efficiency, catalog all permanent
data sets and create catalog recovery procedures to guarantee continuous availability in
z/OS.
Catalogs
Catalogs, as mentioned, are data sets containing information about other data sets, and they
provide users with the ability to locate a data set by name, without knowing the volume where
the data set resides. This means that data sets can be moved from one device to another,
without requiring a change in JCL DD statements that refer to an existing data set.
Cataloging data sets also simplifies backup and recovery procedures. Catalogs are the
central information point for VSAM data sets; all VSAM data sets must be cataloged. In
addition, all SMS-managed data sets must be cataloged. Activity towards the catalog is much
more intense in a batch/TSO workload than in a CICS/DB2 workload, where the majority of
data sets are allocated at CICS/DB2 initialization time.
The VVDS can be considered an extension of the volume table of contents (VTOC). The
VVDS is volume-specific, whereas the complexity of the BCS depends on your definitions.
The relationship between the BCS and the VVDS is many-to-many. That is, a BCS can point
to multiple VVDSs and a VVDS can point to multiple BCSs.
The VVDS contains VSAM volume records (VVRs) that hold information about VSAM data
sets residing on the volume. The VVDS also contains non-VSAM volume records (NVRs) for
SMS-managed non-VSAM data sets on the volume. If an SMS-managed non-VSAM data set
spans volumes, then only the first volume contains an NVR for that data set.
The system automatically defines a VVDS with 10 tracks primary and 10 tracks secondary
space, unless you explicitly define it.
In other words, the BCS portion of the ICF catalog contains the static information about the
data set, the information that rarely changes.
Every catalog consists of one BCS and one or more VVDSs. A BCS does not own a VVDS;
that is, more than one BCS can have entries for a single VVDS. Every VVDS that is
connected to a BCS has an entry in the BCS. For example, Figure 6-2 shows a possible
relationship between a BCS and three VVDSs on three disk volumes.
For non-VSAM data sets that are not SMS-managed, all catalog information is contained
within the BCS. For other types of data sets, there is other information available in the VVDS.
BCS structure
The BCS contains the information about where a data set resides. That can be a DASD
volume, tape, or other storage medium. Related information in the BCS is grouped into
logical, variable-length, spanned records related by key. The BCS uses keys that are the data
set names (plus one character for extensions).
A catalog can have data sets cataloged on any number of volumes. The BCS can have as
many as 123 extents on one volume. One volume can have multiple catalogs on it. All the
necessary control information is recorded in the VVDS residing on that volume.
Master catalog
A configuration of catalogs depends on a master catalog. A master catalog has the same
structure as any other catalog. What makes it a master catalog is that all BCSs are cataloged
in it, as well as certain data sets called system data sets (for instance, SYS1.LINKLIB and
other SYS1 data sets). Master catalogs are discussed in The master catalog on page 332.
VVDS characteristics
The VVDS is a VSAM entry-sequenced data set (ESDS) that has a 4 KB control interval size.
The hexadecimal RBA of a record is used as its key or identifier.
The VVDS data set name is SYS1.VVDS.Vvolser, where volser is the volume serial number of the volume on which the VVDS resides.
You can explicitly define the VVDS using IDCAMS, or it is implicitly created after you define
the first VSAM or SMS-managed data set on the volume.
VVDSSPACE keyword
Prior to z/OS V1R7, the default space parameter was TRACKS(10,10), which could be too small for sites that use custom 3390 volumes (volumes larger than a 3390-9). With z/OS V1R7, there is a new VVDSSPACE keyword of the F CATALOG command, as follows:
F CATALOG,VVDSSPACE(primary,secondary)
An explicitly defined VVDS is not related to any BCS until a data set or catalog object is
defined on the volume. As data sets are allocated on the VVDS volume, each BCS with
VSAM data sets or SMS-managed data sets residing on that volume is related to the VVDS.
VVDSSPACE indicates that the Catalog Address Space is to use the values specified as the primary and secondary allocation amounts, in tracks, for an implicitly defined VVDS. The default value is ten tracks for both the primary and secondary values. The specified values are preserved across a Catalog Address Space restart, but are not preserved across an IPL.
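For example, to make implicitly defined VVDSs allocate 30 tracks of primary and secondary space (the values here are illustrative only):
F CATALOG,VVDSSPACE(30,30)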
Catalogs by function
By function, the catalogs (BCSs) can be classified as master catalog and user catalog. A
particular case of a user catalog is the volume catalog, which is a user catalog containing only
tape library and tape volume entries.
There is no structural difference between a master catalog and a user catalog. What sets a master catalog apart is how it is used, and which data sets are cataloged in it. For example, the same catalog can be the master catalog for one z/OS system and a user catalog for another.
The master catalog for a system must contain entries for all user catalogs and their aliases
that the system uses. Also, all SYS1 data sets must be cataloged in the master catalog for
proper system initialization.
During a system initialization, the master catalog is read so that system data sets and
catalogs can be located.
For more information see z/OS MVS Initialization and Tuning Reference, SA22-7592.
For information about the IDCAMS LISTCAT command, see also 6.10, Listing a catalog on
page 345.
If you do not want to run an IDCAMS job, you can run LISTCAT as a line command in ISPF option 3.4. List SYS1.PARMLIB and type listc ent(/) next to it, as shown in Figure 6-5.
Note: The forward slash (/) specifies to use the data set name on the line where the
command is entered.
Cataloging data sets for two unrelated applications in the same catalog creates a single point of failure for them that otherwise might not exist. Assessing the impact of an outage of a given catalog can help you determine whether it is too large or affects too many applications.
Using aliases
Aliases are used to tell catalog management which user catalog your data set is cataloged in. First, you place a pointer to a user catalog in the master catalog through the IDCAMS DEFINE USERCATALOG command. Next, you define an appropriate alias name for the user catalog in the master catalog. Then, match the high-level qualifier (HLQ) of your data set with the alias. This identifies the appropriate user catalog to be used to satisfy the request.
In Figure 6-6, all data sets with an HLQ of PAY have their information in the user catalog
UCAT1 because in the master catalog there is an alias PAY pointing to UCAT1.
The data sets with an HLQ of DEPT1 and DEPT2, respectively, have their information in the
user catalog UCAT2 because in the master catalog there are aliases DEPT1 and DEPT2
pointing to UCAT2.
Note: Aliases can also be used with non-VSAM data sets in order to create alternate
names to the same data set. Those aliases are not related to a user catalog.
To define an alias, use the IDCAMS command DEFINE ALIAS. An example is shown in 6.7,
Defining a catalog and its aliases on page 339.
However, the multilevel alias facility is only to be used when a better solution cannot be found.
The need for the multilevel alias facility can indicate data set naming conventions problems.
For more information about the multilevel alias facility, see z/OS DFSMS: Managing Catalogs,
SC26-7409.
The standard search order for a LOCATE request first searches the STEPCAT, if one is present, and then the JOBCAT, if one is present, before continuing with the standard catalog search order.
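One way to direct a catalog request explicitly is to code the CATALOG parameter on the DEFINE request. A hedged sketch of such a request (the device type and volume are hypothetical):
DEFINE NONVSAM -
  (NAME(PROD.PAYROLL) -
   DEVICETYPES(3390) -
   VOLUMES(VOL001)) -
  CATALOG(SYS1.MASTER.ICFCAT)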
This command defines the data set PROD.PAYROLL in catalog SYS1.MASTER.ICFCAT. You
can use RACF to prevent the use of the CATALOG parameter and restrict the ability to define
data sets in the master catalog.
However, alternatives to catalog aliases are available for directing a catalog request,
specifically the CATALOG parameter of access method services and the name of the catalog.
The following search order is used to locate the catalog for an already cataloged data set:
1. Use the catalog named in IDCAMS CATALOG parameter, if coded. If the data set is not
found, fail the job.
2. If the data set is a generation data set, the catalog containing the GDG base definition is
used for the new GDS entry.
3. If not found, and the high-level qualifier is an alias for a catalog, search the catalog or if the
high-level qualifier is the name of a catalog, search the catalog. If the data set is not found,
fail the job.
4. Otherwise, search the master catalog.
Note: For SMS-managed data sets, JOBCAT and STEPCAT DD statements are not
allowed and cause a job failure. Also, they are not suggested even for non-SMS data sets,
because they can cause conflicted information. Therefore, do not use them and keep in
mind that they have been phased out starting with z/OS V1R7.
To use an alias to identify the catalog to be searched, the data set must have more than one
data set qualifier.
For information about the catalog standard search order also refer to z/OS DFSMS:
Managing Catalogs, SC26-7409.
Defining a catalog
You can use the IDCAMS to define and maintain catalogs. See also 4.14, Access method
services (IDCAMS) on page 129. Defining a master catalog or user catalog is basically the
same.
Use the access method services command DEFINE USERCATALOG ICFCATALOG to define the
basic catalog structure (BCS) of an ICF catalog. Using this command you do not specify
whether you want to create a user or a master catalog. How to identify the master catalog to
the system is described in 6.4, Catalogs by function on page 332.
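A minimal sketch of such a definition; the catalog name matches the alias examples later in this chapter, and the volume and space values are placeholders:
DEFINE USERCATALOG -
  (NAME(OTTO.CATALOG.TEST) -
   ICFCATALOG -
   VOLUME(VOL001) -
   CYLINDERS(1 1))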
A connector entry to this user catalog is created in the master catalog, as the listing in
Figure 6-10 shows.
The attributes of the user catalog are not defined in the master catalog. They are described in
the user catalog itself and its VVDS entry. This is called the self-describing record. The
self-describing record is given a key of binary zeros to ensure it is the first record in the
catalog. There are no associations (aliases) yet for this user catalog. To create associations,
you need to define aliases.
To define a volume catalog (for tapes), use the parameter VOLCATALOG instead of ICFCATALOG.
See z/OS DFSMS Access Method Services for Catalogs, SC26-7394, for more detail.
If you do not want to change or add any attributes, you need only supply the entry name of the
object being defined and the MODEL parameter. When you define a BCS, you must also
specify the volume and space information for the BCS.
For further information about using a model, see z/OS DFSMS: Managing Catalogs,
SC26-7409.
Defining aliases
To use a catalog, the system must be able to determine which data sets are to be defined in
that catalog. The simplest way to accomplish this is to define aliases in the master catalog for
the user catalog. Before defining an alias, carefully consider the effect the new alias has on
old data sets. A poorly chosen alias can make other data sets inaccessible.
You can define aliases for the user catalog in the same job in which you define the catalog by
including DEFINE ALIAS commands after the DEFINE USERCATALOG command. You can use
conditional operators to ensure the aliases are only defined if the catalog is successfully
defined. After the catalog is defined, you can add new aliases or delete old aliases.
You cannot define an alias if a data set cataloged in the master catalog has the same
high-level qualifier as the alias. The DEFINE ALIAS command fails with a Duplicate data set
name error. For example, if a catalog is named TESTE.TESTSYS.ICFCAT, you cannot define
the alias TESTE for any catalog.
Use the sample SYSIN for an IDCAMS job in Figure 6-11 to define aliases TEST1 and
TEST2.
DEFINE ALIAS -
(NAME(TEST1) -
RELATE(OTTO.CATALOG.TEST))
DEFINE ALIAS -
(NAME(TEST2) -
RELATE(OTTO.CATALOG.TEST))
These definitions result in the following entries in the master catalog (Figure 6-12).
Both aliases have an association to the newly defined user catalog. If you now create a new
data set with an HLQ of TEST1 or TEST2, its entry will be directed to the new user catalog.
Also, the listing of the user catalog connector now shows both aliases; see Figure 6-13.
Tip: Convert all intra-sysplex RESERVEs into global ENQs through the RESERVE conversion RNL.
Independent of the number of catalogs, use the virtual lookaside facility (VLF) for buffering the
user catalog CIs. The master catalog CIs are naturally buffered in the catalog address space
(CAS). Multiple catalogs can reduce the impact of the loss of a catalog by:
Reducing the time necessary to recreate any given catalog
Allowing multiple catalog recovery jobs to be in process at the same time
Recovery from a pack failure is dependent on the total amount of catalog information about a
volume, regardless of whether this information is stored in one catalog or in many catalogs.
When using multiple user catalogs, consider grouping data sets under different high-level
qualifiers. You can then spread them over multiple catalogs by defining aliases for the various
catalogs.
Note: The device must be defined as shared to all systems that access it.
If several systems have the device defined as shared and other systems do not, then catalog
corruption will occur. Check with your system programmer to determine shared volumes.
Note that it is not necessary to have the catalog actually be shared between systems; the
catalog address space assumes it is shared if it meets the criteria stated. All VVDSs are
defined as shared. Tape volume catalogs can be shared in the same way as other catalogs.
By default, catalogs are defined with SHAREOPTIONS(3 4). You can specify that a catalog is
not to be shared by defining the catalog with SHAREOPTIONS(3 3). Only define a catalog as
unshared if you are certain it will not be shared. Place unshared catalogs on volumes that
have been initialized as unshared. Catalogs that are defined as unshared and that reside on
shared volumes will become damaged if referred to by another system.
Attention: To avoid catalog corruption, define a catalog volume on a shared UCB and set
catalog SHAREOPTIONS to (3 4) on all systems sharing a catalog.
Using SHAREOPTIONS 3 means that VSAM does not issue the ENQ SYSVSAM SYSTEMS
for the catalog; SHAREOPTIONS 4 means that the VSAM buffers need to be refreshed.
You can check whether a catalog is shared by running the operator command:
MODIFY CATALOG,ALLOCATED
If a catalog is not really shared with another system, move the catalog to an unshared device
or alter its SHAREOPTIONS to (3 3). To prevent potential catalog damage, never place a
catalog with SHAREOPTIONS (3 3) on a shared device.
There is one VVR in a shared catalog that is used as a log by every catalog address space that accesses the catalog. This log is used to guarantee the coherency of the catalog buffers in each z/OS system.
The checking also affects performance because, to maintain integrity, for every catalog
access a special VVR in the shared catalog must be read before using the cached version of
the BCS record. This access implies a DASD reserve and I/O operations.
To avoid having I/O operations to read the VVR, you can use enhanced catalog sharing
(ECS). For information about ECS, see 6.24, Enhanced catalog sharing on page 375.
Checking also ensures that the control blocks for the catalog in the CAS are updated. This
occurs if the catalog has been extended or otherwise altered from another system. This
checking maintains data integrity.
You can use the LISTCAT output to monitor VSAM data sets including catalogs. The statistics
and attributes listed can be used to help determine if you reorganize, recreate, or otherwise
alter a VSAM data set to improve performance or avoid problems.
The LISTCAT command can be used in many variations to extract information about a
particular entry in the catalog. It extracts the data from the BCS and VVDS.
LISTCAT examples
LISTCAT examples for monitoring catalogs include:
List all ALIAS entries in the master catalog:
LISTCAT ALIAS CAT(master.catalog.name)
This command provides a list of all aliases that are currently defined in your master
catalog. If you need information only about one specific alias, use the keyword
ENTRY(aliasname) and specify ALL to get detailed information. For sample output of this
command, see Figure 6-12 on page 341.
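Another variation (the entry name is a placeholder) lists detailed information for a specific entry, such as a user catalog connector or a data set:
LISTCAT ENTRIES(ucat.name) ALL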
Since z/OS V1R7, an attempt to define a page data set in a catalog that is not pointed to by the running master catalog causes an IDCAMS error message, instead of being executed and causing problems later.
The default of the DELETE command is scratch, which means the BCS, VTOC, and VVDS data
set entries are erased. By doing that, the reserved space for this data set on the volume is
released. The data set itself is not overwritten until the freed space is reused by another data
set. You can use the parameter ERASE for an IDCAMS DELETE if you want the data set to be
overwritten with binary zeros for security reasons.
Delete aliases
To simply delete an alias, use the IDCAMS DELETE ALIAS command, specifying the alias you
are deleting. To delete all the aliases for a catalog, use EXPORT DISCONNECT to disconnect the
catalog. The aliases are deleted when the catalog is disconnected. When you again connect
the catalog (using IMPORT CONNECT), the aliases remain deleted.
Figure 6-20 Delete the VVDS entry for a non-VSAM data set
Important: When deleting a VSAM KSDS with DELETE VVR, you must issue a DELETE VVR for each of its components: the data component and the index component.
The DELETE command with keyword RECOVERY removes the GDG base catalog entry from the
catalog.
Delete an ICF catalog
When deleting an ICF catalog, take care to specify whether you want to delete only the catalog, or whether you want to delete all associated data. The following examples show how to delete a catalog.
Delete with recovery
In Figure 6-22, a user catalog is deleted in preparation for replacing it with an imported
backup copy. The VVDS and VTOC entries for objects defined in the catalog are not
deleted and the data sets are not scratched, as shown in the JCL.
RECOVERY specifies that only the catalog data set is deleted, without deleting the objects
defined in the catalog.
Delete an empty user catalog
In Figure 6-23 on page 350, a user catalog is deleted. A user catalog can be deleted when
it is empty; that is, when there are no objects cataloged in it other than the catalog's
volume. If the catalog is not empty, it cannot be deleted unless the FORCE parameter is
specified.
Important: The FORCE parameter deletes all data sets in the catalog. The DELETE command
deletes both the catalog and the catalog's user catalog connector entry in the master
catalog.
Where:
SCRATCH This means that the non-VSAM data set being deleted from the catalog is to
be removed from the VTOC of the volume on which it resides. When
SCRATCH is specified for a cluster, alternate index, page space, or data
space, the VTOC entries for the volumes involved are updated to reflect the
deletion of the object.
NOSCRATCH This means that the non-VSAM data set being deleted from the catalog is to
remain in the VTOC of the volume on which it resides, or that it has already
been scratched from the VTOC. When NOSCRATCH is specified for a
cluster, page space, alternate index, or data space, the VTOC entries for the
volumes involved are not updated.
To execute the DELETE command against a migrated data set, you must have the RACF group ARCCATGP defined. In general, to allow certain authorized users to perform these operations on migrated data sets without recalling them, perform the following steps:
1. Define a RACF catalog maintenance group named ARCCATGP.
ADDGROUP (ARCCATGP)
2. Connect the desired users to that group.
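For example, assuming a TSO user ID of USERA (a placeholder):
CONNECT (USERA) GROUP(ARCCATGP)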
Only when such a user is logged on under group ARCCATGP does DFSMShsm bypass the
automatic recall for UNCATALOG, RECATALOG, and DELETE/NOSCRATCH requests for
migrated data sets. For example, the following LOGON command demonstrates starting a
TSO session under ARCCATGP. For further information about ARCCATGP group, see z/OS
DFSMShsm Implementation and Customization Guide, SC35-0418.
LOGON userid | password GROUP(ARCCATGP)
To delete a migrated data set that is not recorded in the DFSMShsm control data sets, execute a DELETE NOSCRATCH command for the data set to clean up the ICF catalog entry.
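A hedged example, assuming a migrated data set named USER1.OLD.DATA:
DELETE USER1.OLD.DATA NOSCRATCH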
DELETE command
The DELETE command deletes catalogs, VSAM data sets, non-VSAM data sets, and objects.
With z/OS V1R11, the IDCAMS DELETE command is enhanced with a new function called DELETE MASK. It allows users to specify the data set name selection criteria with a mask-entry-name (also called a filter key) and the keyword MASK. A mask-entry-name can contain two consecutive asterisks (**) or one or more percent signs (%).
The two consecutive asterisks represent zero or more characters and are not limited to a number of levels. For example, A.B.** means all data set names with two or more levels that have A and B as their first and second qualifiers, respectively. The percent sign is a replacement for any single character in the same relative position. For example, ABCDE matches the mask-entry A%%DE, but not A%DE.
The MASK keyword is the keyword to turn on the new feature; for example:
DELETE A.B.** MASK
DELETE A.BC.M%%K MASK
NOMASK is the keyword to turn the new function off. The default is NOMASK.
If more than one entry is to be deleted, the list of entrynames must be enclosed in
parentheses. The maximum number of entrynames that can be deleted is 100. If the MASK
keyword is specified, then only one entryname can be specified. This entryname is also
known as the mask filter key.
Note: When a generic level name is specified, an asterisk (*) can represent only one qualifier of a data set name. When a filter key of double asterisks (**) is specified with the MASK parameter, the key can represent multiple qualifiers within a data set name. The double asterisks (**) can precede or follow a period, and must be preceded or followed by either a period or a blank.
A masking filter key allows one to eight percent signs (%) in each qualifier. A data set name ABCDE matches the mask-entry-name 'A%%DE', but does not match 'A%DE'.
MASK keyword
The DELETE MASK command allows you to specify many variations of a data set name on a single deletion, using wildcard characters and rules that give more flexibility in selecting the data sets to be deleted. A mask can contain an asterisk (*), two consecutive asterisks (**), or a percent sign (%).
Backup procedures
The two parts of an ICF catalog, the BCS and the VVDS, require separate backup
techniques. The BCS can be backed up like any other data set. Only back up the VVDS as
part of a volume dump. The entries in the VVDS and VTOC are backed up when the data sets
they describe are:
Exported with IDCAMS
Logically dumped with DFSMSdss
Backed up with DFSMShsm
Important: Because catalogs are essential system data sets, it is important that you
maintain backup copies. The more recent and accurate a backup copy, the less impact a
catalog outage will have on your installation.
Backing up a BCS
To back up a BCS you can use one of the following methods:
The access method services EXPORT command
The DFSMSdss logical DUMP command
The DFSMShsm BACKDS command
The copy created by these utilities is a portable sequential data set that can be stored on a
tape or direct access device, which can be of another device type than the one containing the
source catalog.
When these commands are used to back up a BCS, the aliases of the catalog are saved in
the backup copy. The source catalog is not deleted, and remains as a fully functional catalog.
The relationships between the BCS and VVDSs are unchanged.
You cannot permanently export a catalog by using the PERMANENT parameter of EXPORT.
The TEMPORARY option is used even if you specify PERMANENT or allow it to default.
Figure 6-26 shows you an example for an IDCAMS EXPORT.
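A sketch along those lines; the catalog name and the backup data set name, space, and unit are placeholders:
//JOB ...
//S1       EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//BACKUP   DD DSN=OTTO.CATALOG.TEST.BACKUP,DISP=(NEW,CATLG),
//            SPACE=(CYL,(10,5)),UNIT=SYSDA
//SYSIN    DD *
    EXPORT OTTO.CATALOG.TEST -
           OUTFILE(BACKUP) -
           TEMPORARY
/*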
Note: You cannot use IDCAMS REPRO or other copying commands to create and recover
BCS backups.
Also make periodic volume dumps of the master catalog's volume. This dump can later be
used by the stand-alone version of DFSMSdss to restore the master catalog if you cannot
access the volume from another system.
Backing up a VVDS
Do not back up the VVDS as a data set to provide for recovery. To back up the VVDS, back up
the volume containing the VVDS, or back up all data sets described in the VVDS (all VSAM
and SMS-managed data sets). If the VVDS ever needs to be recovered, recover the entire
volume, or all the data sets described in the VVDS.
You can use either DFSMSdss or DFSMShsm to back up and recover a volume or individual
data sets on the volume.
Recovery procedures
Before you run the recovery procedures mentioned in this section, see 6.23, Fixing
temporary catalog problems on page 373.
Normally, a BCS is recovered separately from a VVDS. A VVDS usually does not need to be
recovered, even if an associated BCS is recovered. However, if you need to recover a VVDS,
and a BCS resides on the VVDS's volume, you must recover the BCS as well. If possible,
export the BCS before recovering the volume, and then recover the BCS from the exported
copy. This ensures a current BCS.
Before recovering a BCS or VVDS, try to recover single damaged records. If damaged
records can be rebuilt, you can avoid a full recovery.
Single BCS records can be recovered using the IDCAMS DELETE and DEFINE commands as
described in 6.11, Defining and deleting data sets on page 347. Single VVDS and VTOC
records can be recovered using the IDCAMS DELETE command and by recovering the data
sets on the volume.
The way you recover a BCS depends on how it was saved (see 6.13, Backup procedures on
page 353). When you recover a BCS, you do not need to delete and redefine the target
catalog unless you want to change the catalog's size or other characteristics, or unless the
BCS is damaged in such a way as to prevent the usual recovery.
Lock the BCS before you start recovery so that no one else has access to it while you recover
the BCS. If you do not restrict access to the catalog, users might be able to update the
catalog during recovery or maintenance and create a data integrity exposure. The catalog
also will be unavailable to any system that shares the catalog. You cannot lock a master
catalog.
After you recover the catalog, update the BCS with any changes which have occurred since
the last backup, for example, by running IDCAMS DEFINE RECATALOG for all missing entries.
You can use the access method services DIAGNOSE command to identify certain
unsynchronized entries.
For further information about recovery procedures, see z/OS DFSMS: Managing Catalogs,
SC26-7409. For information about the IDCAMS facility, see z/OS DFSMS Access Method
Services for Catalogs, SC26-7394.
VSAM errors
Two kinds of VSAM errors can occur with your BCS or VVDS: physical errors and logical errors.
Logical errors
The records on the DASD volume still have valid physical characteristics, such as record size or CI size, but the VSAM information in those records is wrong, such as pointers from one record to another or the end-of-file information.
When errors in the VSAM structure occur, they are in most cases logical errors in the BCS. Because the VVDS is an entry-sequenced data set (ESDS), it has no index component. Logical errors for an ESDS are unlikely.
You can use the IDCAMS EXAMINE command to analyze the structure of the BCS. As
explained previously, the BCS is a VSAM key-sequenced data set (KSDS). Before running the
EXAMINE, run an IDCAMS VERIFY to make sure that the VSAM information is current, and
ALTER LOCK the catalog to prevent update from others while you are inspecting it.
With the parameter INDEXTEST, you analyze the integrity of the index. With parameter
DATATEST, you analyze the data component. If only the index test shows errors, you might
have the chance to recover the BCS by just running an EXPORT/IMPORT to rebuild the index. If
there is an error in the data component, you probably have to recover the BCS as described
in 6.14, Recovery procedures on page 355.
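A hedged sketch of such a check; the catalog name is a placeholder:
ALTER OTTO.CATALOG.TEST LOCK
EXAMINE NAME(OTTO.CATALOG.TEST) -
  INDEXTEST DATATEST
ALTER OTTO.CATALOG.TEST UNLOCK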
Catalog errors
By catalog errors we mean errors in the catalog information of a BCS or VVDS, or
unsynchronized information between the BCS and VVDS. The VSAM structure of the BCS is
still valid, that is, an EXAMINE returns no errors.
Catalog errors can make a data set inaccessible. Sometimes it is sufficient to delete the
affected entries, sometimes the catalog needs to be recovered (see 6.14, Recovery
procedures on page 355).
You can use the IDCAMS DIAGNOSE command to validate the contents of a BCS or VVDS. You
can use this command to check a single BCS or VVDS and to compare the information
between a BCS and multiple VVDSs.
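Hedged examples, with the catalog and volume names as placeholders: the first checks a BCS, and the second checks a VVDS and compares it against a BCS:
DIAGNOSE ICFCATALOG INDATASET(OTTO.CATALOG.TEST)
DIAGNOSE VVDS INDATASET(SYS1.VVDS.VVOL001) -
  COMPAREDS(OTTO.CATALOG.TEST)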
For various DIAGNOSE examples, see z/OS DFSMS Access Method Services for Catalogs,
SC26-7394.
Protecting catalogs
The protection of data includes:
Data security: the safety of data from theft or intentional destruction
Data integrity: the safety of data from accidental loss or destruction
Data can be protected either indirectly, by preventing access to programs that can be used to
modify data, or directly, by preventing access to the data itself. Catalogs and cataloged data
sets can be protected in both ways.
To protect your catalogs and cataloged data, use the Resource Access Control Facility
(RACF) or a similar product.
For information about using APF for program authorization, see z/OS MVS Programming:
Authorized Assembler Services Guide, SA22-7608.
All IDCAMS load modules are contained in SYS1.LINKLIB, and the root segment load
module (IDCAMS) is link-edited with the SETCODE AC(1) attribute. These two characteristics
ensure that access method services executes with APF authorization.
To open a catalog as a data set, you must have ALTER authority and APF authorization.
When defining an SMS-managed data set, the system only checks to make sure the user has
authority to the data set name and SMS classes and groups. The system selects the
appropriate catalog, without checking the user's authority to the catalog. You can define a
data set if you have ALTER or OPERATIONS authority to the applicable data set profile.
Deleting any type of RACF-protected entry from a RACF-protected catalog requires ALTER
authorization to the catalog or to the data set profile protecting the entry being deleted. If a
non-VSAM data set is SMS-managed, RACF does not check for DASDVOL authority. If a
non-VSAM, non-SMS-managed data set is being scratched, DASDVOL authority is also
checked.
For ALTER RENAME, the user is required to have the following two types of authority:
ALTER authority to either the data set or the catalog
ALTER authority to the new name (generic profile) or CREATE authority to the group
Be sure that RACF profiles are correct after you use REPRO MERGECAT or CNVTCAT on a
catalog that uses RACF profiles. If the target and source catalogs are on the same volume,
the RACF profiles remain unchanged.
Tape data sets defined in an integrated catalog facility catalog can be protected by:
Controlling access to the tape volumes
Controlling access to the individual data sets on the tape volumes
Profiles
To control the ability to perform functions associated with storage management, define
profiles in the FACILITY class whose profile names begin with STGADMIN (storage
administration). For a complete list of STGADMIN profiles, see z/OS DFSMSdfp Storage
Administration Reference, SC26-7402. Examples of profiles include:
STGADMIN.IDC.DIAGNOSE.CATALOG
STGADMIN.IDC.DIAGNOSE.VVDS
STGADMIN.IDC.EXAMINE.DATASET
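A hedged RACF sketch that protects one of these functions; the group name STGADMIN used on the PERMIT is an assumption for your storage administration group:
RDEFINE FACILITY STGADMIN.IDC.DIAGNOSE.CATALOG UACC(NONE)
PERMIT STGADMIN.IDC.DIAGNOSE.CATALOG CLASS(FACILITY) ID(STGADMIN) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH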
Merging catalogs
You might find it beneficial to merge catalogs if you have many small or seldom-used
catalogs. An excessive number of catalogs can complicate recovery procedures and waste
resources such as CAS storage, tape mounts for backups, and system time performing
backups.
Merging catalogs is accomplished in much the same way as splitting catalogs (see 6.18,
Splitting a catalog on page 363). The only difference between splitting catalogs and merging
them is that in merging, you want all the entries in a catalog to be moved to another catalog,
so that you can delete the obsolete catalog.
Use the following steps to merge two integrated catalog facility catalogs:
1. Use ALTER LOCK to lock both catalogs.
2. Use LISTCAT to list the aliases for the catalog you intend to delete after the merger:
//JOB ...
//S1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//DD1 DD DSN=listcat.output,DISP=(NEW,CATLG),
// SPACE=(TRK,(10,10)),
// DCB=(RECFM=VBA,LRECL=125,BLKSIZE=629)
//SYSIN DD *
LISTC ENT(catalog.name) ALL -
OUTFILE(DD1)
/*
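The merge step itself, which the following note refers to, uses REPRO with the MERGECAT parameter. A minimal sketch, assuming you are merging UCAT.OLD into UCAT.NEW; the FROMKEY/TOKEY parameters are shown only to illustrate the key-range capability described after the note:
REPRO INDATASET(UCAT.OLD) -
      OUTDATASET(UCAT.NEW) -
      MERGECAT -
      FROMKEY(AAA) TOKEY(ZZZ)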
Important: This step can take a long time to complete. If the MERGECAT job is cancelled,
then all merged entries so far remain in the target catalog. They are not backed out in
case the job fails. See Recovering from a REPRO MERGECAT Failure in z/OS
DFSMS: Managing Catalogs, SC26-7409, for more information about this topic.
Since z/OS V1R7, REPRO MERGECAT provides the capability to copy a range of records from
one user catalog to another. It allows recovery of a broken catalog by enabling you to copy
from one specific key to another specific key just before where the break occurred and
then recover data beginning after the break. Refer to the parameters FROMKEY/TOKEY in the
previous example.
5. Use the listing created in step 2 to create a sequence of DELETE ALIAS and DEFINE ALIAS
commands to delete the aliases of the obsolete catalog, and to redefine the aliases as
aliases of the catalog you are keeping.
The DELETE ALIAS/DEFINE ALIAS sequence must be run on each system that shares the
changed catalogs and uses a separate master catalog.
6. Use DELETE USERCATALOG to delete the obsolete catalog. Specify RECOVERY on the
DELETE command.
7. If your catalog is shared, run the EXPORT DISCONNECT command on each shared system to remove unwanted user catalog connector entries.
8. Use ALTER UNLOCK to unlock the remaining catalog.
You can also merge entries from one tape volume catalog to another using REPRO MERGECAT.
REPRO retrieves tape library or tape volume entries and redefines them in a target tape volume
catalog. In this case, VOLUMEENTRIES needs to be used to correctly filter the appropriate
entries. The LEVEL parameter is not allowed when merging tape volume catalogs.
Splitting catalogs
You can split a catalog to create two catalogs or to move a group of catalog entries if you
determine that a catalog is either unacceptably large or that it contains too many entries for
critical data sets.
If the catalog is unacceptably large (that is, a catalog failure leaving too many entries
inaccessible), then you can split the catalog into two catalogs. If the catalog is of an
acceptable size but contains entries for too many critical data sets, then you can simply move
entries from one catalog to another.
To split a catalog or move a group of entries, use the access method services REPRO MERGECAT
command. Use the following steps to split a catalog or to move a group of entries:
1. Use ALTER LOCK to lock the catalog. If you are moving entries to an existing catalog, lock it
as well.
2. If you are splitting a catalog, define a new catalog with DEFINE USERCATALOG LOCK (see also
Defining a catalog and its aliases on page 339).
3. Use LISTCAT to obtain a listing of the catalog aliases you are moving to the new catalog.
Use the OUTFILE parameter to define a data set to contain the output listing (see also
Merging catalogs on page 361).
4. Use EXAMINE and DIAGNOSE to ensure that the catalogs are error-free. Fix any errors
indicated (see also Checking the integrity on an ICF structure on page 357).
Important: This step can take a long time to complete. If the MERGECAT job is cancelled,
all merged entries so far will remain in the target catalog. They are not backed out in
case the job fails. See Recovering from a REPRO MERGECAT Failure in z/OS
DFSMS: Managing Catalogs, SC26-7409, for more information about this topic.
6. Use the listing created in step 3 to create a sequence of DELETE ALIAS and DEFINE ALIAS
commands for each alias. These commands delete the alias from the original catalog, and
redefine them as aliases for the catalog which now contains entries belonging to that alias
name.
The DELETE ALIAS/DEFINE ALIAS sequence must be run on each system that shares the
changed catalogs and uses a separate master catalog.
7. Unlock both catalogs using ALTER UNLOCK.
Catalog performance
Performance is not the main consideration when defining catalogs. It is more important to
create a catalog configuration that allows easy recovery of damaged catalogs with the least
amount of system disruption. However, there are several options you can choose to improve
catalog performance without affecting the recoverability of a catalog. Remember that in an
online environment, such as CICS/DB2, the number of data set allocations is minimal and
consequently the catalog activity is low.
Buffering catalogs
The simplest method of improving catalog performance is to use a buffer to maintain catalog records within the CAS private area or within a VLF data space. Two types of buffer are
available exclusively for catalogs. The in-storage catalog (ISC) buffer is contained within the
catalog address space (CAS). The catalog data space buffer (CDSC) is separate from CAS
and uses the z/OS VLF component, which stores the buffered records in a data space. Both
types of buffer are optional, and each can be cancelled and restarted without an IPL.
Master catalog
If the master catalog only contains entries for catalogs, catalog aliases, and system data sets,
the entire master catalog is read into main storage during system initialization. Because the
master catalog, if properly used, is rarely updated, the performance of the master catalog is
not appreciably affected by I/O requirements. For that reason, keep the master catalog small
and do not define user data sets into it.
For more information about these values, see z/OS DFSMS Access Method Services for
Catalogs, SC26-7394.
Since z/OS V1R7, a catalog auto-tuning function (run every 10 minutes) automatically and temporarily modifies the number of data buffers, index buffers, and VSAM strings for catalogs. When any modification occurs, message IEC391I is issued, indicating the new values. This function is enabled by default, but it can be disabled through the F CATALOG,DISABLE(AUTOTUNING) command.
If the catalog is shared only within one GRSplex, convert the SYSIGGV2 resource to a global
enqueue to avoid reserves on the volume on which the catalog resides. If you are not
converting SYSIGGV2, you can have ENQ contentions on those volumes and even run into
deadlock situations.
Important: If you share a catalog with a system that is not in the same GRS complex, do
not convert the SYSIGGV2 resource for this catalog. Sharing a catalog outside the
complex requires reserves for the volume on which the catalog resides. Otherwise, you will
break the catalog. For more information, see z/OS MVS Planning: Global Resource
Serialization, SA22-7600.
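A hedged sketch of the GRSRNLxx entry that converts the catalog reserve, assuming all sharing systems are in the same GRS complex:
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSIGGV2)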
F CATALOG,REPORT,PERFORMANCE command
You can use the F CATALOG command to list information about catalogs currently allocated to
the catalog address space. Sometimes you need this information so that you can use another
MODIFY command to close or otherwise manipulate a catalog in cache.
The command displays information about the performance of specific events that catalog
processing invokes. Each line shows the number of times (nnn) that event has occurred since
IPL or the last reset of the statistics by using the F CATALOG,REPORT,PERFORMANCE(RESET), and
the average time for each occurrence (nnn.nnn). The unit of measure of the average time
(unit) is either milliseconds (MSEC), seconds (SEC), or the average shown as hours, minutes,
and seconds (hh:mm:ss.th).
Note: Other forms of the REPORT command provide information about various aspects of
the catalog address space.
The F CATALOG,REPORT,CACHE command also provides rich information about the use of
catalog buffering. This command causes general information about catalog cache status for
all catalogs currently active in the catalog address space to be listed. The report generated
shows information useful in evaluating the catalog cache performance for the listed catalogs.
F CATALOG,REPORT,PERFORMANCE
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE
IEC359I CATALOG PERFORMANCE REPORT
*CAS***************************************************
* Statistics since 23:04:21.14 on 01/25/2010 *
* -----CATALOG EVENT---- --COUNT-- ---AVERAGE--- *
* Entries to Catalog 607,333 3.632 MSEC *
* BCS ENQ Shr Sys 651,445 0.055 MSEC *
* BCS ENQ Excl Sys 302 0.062 MSEC *
* BCS DEQ 1,051K 0.031 MSEC *
* VVDS RESERVE CI 294,503 0.038 MSEC *
* VVDS DEQ CI 294,503 0.042 MSEC *
* VVDS RESERVE Shr 1,482K 0.045 MSEC *
* VVDS RESERVE Excl 113 0.108 MSEC *
* VVDS DEQ 1,482K 0.040 MSEC *
* SPHERE ENQ Excl Sys 49 0.045 MSEC *
* SPHERE DEQ 49 0.033 MSEC *
* CAXWA ENQ Shr 144 0.006 MSEC *
* CAXWA DEQ 144 0.531 MSEC *
* VDSPM ENQ 651,816 0.005 MSEC *
* VDSPM DEQ 651,816 0.005 MSEC *
* BCS Get 63,848 0.095 MSEC *
* BCS Put 24 0.597 MSEC *
* BCS Erase 11 0.553 MSEC *
* VVDS I/O 1,769K 0.625 MSEC *
* VLF Delete Minor 1 0.019 MSEC *
* VLF Define Major 84 0.003 MSEC *
* VLF Identify 8,172 0.001 MSEC *
* RMM Tape Exit 24 0.000 MSEC *
* OEM Tape Exit 24 0.000 MSEC *
* BCS Allocate 142 15.751 MSEC *
* SMF Write 106,367 0.043 MSEC *
* IXLCONN 2 107.868 MSEC *
* IXLCACHE Read 2 0.035 MSEC *
* MVS Allocate 116 19.159 MSEC *
* Capture UCB 39 0.008 MSEC *
* SMS Active Config 2 0.448 MSEC *
* RACROUTE Auth 24,793 0.080 MSEC *
* RACROUTE Define 7 0.066 MSEC *
* Obtain QuiesceLatch 606,919 0.001 MSEC *
* ENQ SYSZPCCB 27,980 0.005 MSEC *
* DEQ SYSZPCCB 27,980 0.003 MSEC *
* Release QuiesceLatch 606,919 0.000 MSEC *
* Capture to Actual 149 0.014 MSEC *
*CAS***************************************************
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED
As soon as a user requests a catalog function (for example, to locate or define a data set), the
CAS gets control to handle the request. When it has finished, it returns the requested data to
the user. A catalog task which handles a single user request is called a service task. To each
user request a service task is assigned. The minimum number of available service tasks is
specified in the SYSCATxx member of SYS1.NUCLEUS (or the LOADxx member of
SYS1.PARMLIB). A table called the CRT keeps track of these service tasks.
The CAS contains all information necessary to handle a catalog request, like control block
information about all open catalogs, alias tables, and buffered BCS records.
During the initialization of an MVS system, all user catalog names identified in the master
catalog, their aliases, and their associated volume serial numbers are placed in tables in
CAS.
You can use the MODIFY CATALOG operator command to work with the catalog address space.
See also 6.22, Working with the catalog address space on page 371.
Since z/OS 1.8, the maximum number of parallel catalog requests is 999, as defined in the
SYSCAT parmlib member. Previously it was 180.
Never use RESTART to refresh catalog or VVDS control blocks or to change catalog
characteristics. Restarting CAS is a drastic procedure, and if CAS cannot restart, you will
have to IPL the system.
When you issue MODIFY CATALOG,RESTART, the CAS mother task is abended with abend code
81A, and any catalog requests in process at the time are redriven.
The restart of CAS in a new address space is transparent to all users. However, even when all
requests are redriven successfully and receive a return code of zero (0), the system might
produce indicative dumps. There is no way to suppress these indicative dumps.
For a discussion about the entire functionality of the MODIFY CATALOG command, see z/OS
DFSMS: Managing Catalogs, SC26-7409.
Use the following commands to close or unallocate a BCS or VVDS in the catalog address
space. The next access to the BCS or VVDS reopens it and rebuilds the control blocks.
MODIFY CATALOG,CLOSE(catalogname) - Closes the specified catalog but leaves it
allocated.
MODIFY CATALOG,UNALLOCATE(catalogname) - Unallocates a catalog; if you do not specify a
catalog name, then all catalogs are unallocated.
MODIFY CATALOG,VCLOSE(volser) - Closes the VVDS for the specified volser.
MODIFY CATALOG,VUNALLOCATE - Unallocates all VVDSs; you cannot specify a volser, so try
to use VCLOSE first.
Delays or hangs can occur if the catalog needs one of these resources and it is held already
by someone else, for example by a CAS of another system. You can use the following
commands to display global resource serialization (GRS) data:
D GRS,C - Displays GRS contention data for all resources, who is holding a resource, and
who is waiting.
D GRS,RES=(resourcename) - Displays information for a specific resource.
D GRS,DEV=devicenumber - Displays information about a specific device, such as whether it
is reserved by the system.
Route these commands to all systems in the sysplex to get an overview about hang
situations.
When you have identified a catalog address space holding a resource for a long time, or the
GRS outputs do not show you anything but you have still catalog problems, you can use the
following command to get detailed information about the catalog services task:
MODIFY CATALOG,LIST - Lists the currently active service tasks, their task IDs, duration, and
the job name for which the task is handling the request.
Watch for tasks with long duration time. You can obtain detailed information about a specific
task by running the following command for a specific task ID:
MODIFY CATALOG,LISTJ(taskid),DETAIL - Shows detailed information about a service task,
for example if it is waiting for the completion of an ENQ.
If you identify a long-running task that is in a deadlock situation with another task (on another
system), you can end and redrive the task to resolve the lockout. The following commands
help you to end a catalog service task:
MODIFY CATALOG,END(taskid),REDRIVE - End a service task and redrive it.
MODIFY CATALOG,END(taskid),NOREDRIVE - Permanently end the task without redriving.
MODIFY CATALOG,ABEND(taskid) - Abnormally end a task which cannot be stopped by
using the END parameter.
You can use the FORCE parameter for these commands if the address space that the service
task is operating on behalf of has ended abnormally. Use this parameter only in this case.
You can also try to end the job for which the catalog task is processing a request.
For more information about the MODIFY CATALOG command and fixing temporary catalog
problems, see z/OS DFSMS: Managing Catalogs, SC26-7409.
MODIFY CATALOG,ECSHR(AUTOADD)
Most of the overhead associated with shared catalog is eliminated if you use enhanced
catalog sharing (ECS). ECS uses a cache Coupling Facility structure to keep the special
VVR. In addition, the Coupling Facility structure (as defined in CFRM) keeps a copy of
updated records. There is no I/O necessary to read the catalog VVR to verify the updates. In
addition, the eventual modifications are also kept in the Coupling Facility structure, thereby
avoiding more I/O. ECS saves about 50% in elapsed time and provides an enormous
reduction in ENQ/Reserves.
Only catalogs that were added are shared in ECS mode. The command MODIFY
CATALOG,ECSHR(STATUS) shows you the ECS status for each catalog, as well as whether it is
eligible and already activated.
Important: If you attempt to use a catalog that is currently ECS-active from a system
outside the sysplex, the request might break the catalog.
No more than 1024 catalogs can currently be shared using ECS from a single system.
All systems sharing the catalog in ECS mode must have connectivity to the same
Coupling Facility, and must be in the same global resource serialization (GRS) complex.
When you use catalogs in ECS mode, convert the resource SYSIGGV2 to a SYSTEMS
enqueue. Otherwise, the catalogs in ECS mode will be damaged.
For more information about ECS, see z/OS DFSMS: Managing Catalogs, SC26-7409. For
information about defining Coupling Facility structures, see z/OS MVS Setting Up a Sysplex,
SA22-7625.
As an extension of VSAM RLS, DFSMStvs enables any job or application that is designed for
data sharing to read-share or write-share VSAM recoverable data sets. VSAM RLS provides
a server for sharing VSAM data sets in a sysplex. VSAM RLS uses Coupling Facility-based
locking and data caching to provide sysplex-scope locking and data access integrity.
DFSMStvs adds logging, commit, and backout processing.
To understand DFSMStvs, it is necessary to first review base VSAM information and VSAM
record-level sharing (RLS).
SHAREOPTIONS (crossregion,crosssystem)
The cross-region share options specify the amount of sharing allowed among regions within
the same system or multiple systems. Cross-system share options specify how the data set is
shared among systems. Use global resource serialization (GRS) or a similar product to
perform the serialization.
SHAREOPTIONS (1,x)
The data set can be shared by any number of users for read access (open for input), or it can
be accessed by only one user for read/write access (open for output). If the data set is open
for output by one user, a read or read/write request by another user will fail. With this option,
VSAM ensures complete data integrity for the data set. When the data set is already open for
RLS processing, any request to open the data set for non-RLS access will fail.
SHAREOPTIONS (2,x)
The data set can be shared by one user for read/write access, and by any number of users for read access. If the data set is open for output by one user, another open for output request will fail, but a request for read access will succeed. With this option, VSAM ensures write integrity.
SHAREOPTIONS (3,x)
The data set can be opened by any number of users for read and write requests. VSAM does not ensure any data integrity. It is the responsibility of the users to maintain data integrity by using enqueue and dequeue macros. This setting does not allow any type of non-RLS access while the data set is open for RLS processing.
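A hedged IDCAMS sketch showing where the share options are specified; the names, keys, and space values are placeholders:
DEFINE CLUSTER -
  (NAME(USER1.SHARED.KSDS) -
   INDEXED -
   KEYS(8 0) -
   RECORDSIZE(80 80) -
   TRACKS(10 10) -
   VOLUMES(VOL001) -
   SHAREOPTIONS(2 3))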
For more information about VSAM share options, see z/OS DFSMS: Using Data Sets,
SC26-7410.
For more information about VSAM buffering techniques refer to 4.44, VSAM: Buffering
modes on page 177.
MACRF=(NSR/LSR/GSR)
The Access Method Control block (ACB) describes an open VSAM data set. A subparameter
for the ACB macro is MACRF, in which you can specify the buffering technique to be used by
VSAM. For LSR and GSR, you need to run the BLDVRP macro before opening the data set to
create the resource pool.
For information about VSAM macros, see z/OS DFSMS: Macro Instructions for Data Sets,
SC26-7408.
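A hedged assembler sketch of LSR usage; the buffer sizes, key length, string number, and names are placeholders:
*        Build an LSR resource pool, then open an ACB against it
         BLDVRP BUFFERS=(4096(20)),KEYLEN=8,STRNO=4,TYPE=LSR
VSAMACB  ACB   DDNAME=VSAMDD,MACRF=(KEY,DIR,OUT,LSR)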
Problems
There are several problems with a CICS configuration in which application-owning regions (AORs) function-ship VSAM requests to a file-owning region (FOR):
The CICS FOR is a single point of failure.
Performance across multiple systems is not acceptable.
The configuration does not scale well.
Over time the FORs became a bottleneck because CICS environments became increasingly
complex. CICS required a solution to have direct shared access to VSAM data sets from
multiple CICSs.
[Figure: CICS AORs on System 1 through System n sharing a VSAM data set directly through the coupling facility (VSAM RLS)]
VSAM record-level sharing (RLS) is a method of access to your existing VSAM files that
provides full read and write integrity at the record level to any number of users in your Parallel
Sysplex.
With VSAM RLS, multiple CICS systems can directly access a shared VSAM data set,
eliminating the need to ship functions between the application-owning regions and file-owning
regions. CICS provides the logging, commit, and backout functions for VSAM recoverable
data sets. VSAM RLS provides record-level serialization and cross-system caching. CICSVR
provides a forward recovery utility.
Level of sharing
The level of sharing that is allowed between applications is determined by whether or not a
data set is recoverable; for example:
Both CICS and non-CICS jobs can have concurrent read or write access to
nonrecoverable data sets. There is no coordination between CICS and non-CICS, so data
integrity can be compromised.
VSAM RLS uses a Coupling Facility to perform data-set-level locking, record locking, and
data caching. VSAM RLS uses the conditional write and cross-invalidate functions of the
Coupling Facility cache structure, thereby avoiding the need for control interval (CI) level
locking.
VSAM RLS uses the Coupling Facility caches as store-through caches. When a control
interval of data is written, it is written to both the Coupling Facility cache and the direct access
storage device (DASD). This ensures that problems occurring with a Coupling Facility cache
do not result in the loss of VSAM data.
VSAM RLS also supports access to a data set through an alternate index, but it does not
support opening an alternate index directly in RLS mode. Also, VSAM RLS does not support
access through an alternate index to data stored under z/OS UNIX System Services.
Extended format, extended addressability, and spanned data sets are supported with VSAM
RLS. Compression is also supported.
Keyrange data sets and the IMBED attribute for a KSDS are obsolete. You cannot define new
data sets as keyrange or with an imbedded index anymore. However, there still might be old
data sets with these attributes in your installation.
Exception: SHAREOPTIONS(2,x)
For non-RLS access, SHAREOPTIONS(2,x) is handled as before: one user can have the
data set open for read/write access and multiple users can have it open for read access only.
VSAM does not provide data integrity for the readers.
If the data set is open for RLS access, non-RLS opens for read are possible. This is the
only share option setting for which a non-RLS request to open the data set does not fail if the
data set is already open for RLS processing. VSAM does not provide data integrity for the
non-RLS readers.
Non-CICS access
RLS access from batch jobs to data sets that are open by CICS depends on whether the data
set is recoverable or not. For recoverable data sets, non-CICS access from other applications
(that do not act as a recoverable resource manager) is not allowed.
See 7.10, VSAM RLS/CICS data set recovery on page 392 for details.
[Figure 7-8: Buffering under VSAM RLS - CICS (R/W) and batch (R/O) jobs on System 1 and System 2 access a VSAM data set through the SMSVSAM data space buffer pools and the coupling facility, with CIs 1 through 4 cached]
MACRF=RLS
The first request for a record after data set open for RLS processing will cause an I/O
operation to read in the CI that contains this record. A copy of the CI is stored into the cache
structure of the Coupling Facility and in the buffer pool in the data space.
Buffer coherency
Buffer coherency is maintained through the use of Coupling Facility (CF) cache structures
and the XCF cross-invalidation function. For the example in Figure 7-8, that means:
1. System 1 opens the VSAM data set for read/write processing.
2. System 1 reads in CI1 and CI3 from DASD; both CIs are stored in the cache structure in
the Coupling Facility.
3. System 2 opens the data set for read processing.
For further information about cross-invalidation, see z/OS MVS Programming: Sysplex
Services Guide, SA22-7617.
The VSAM RLS Coupling Facility structures are discussed in more detail in 7.14, Coupling
Facility structures for RLS sharing on page 397.
[Figure 7-9: VSAM RLS record locking within one control interval (CI) - GET UPD RPL_1 (Record B) and GET UPD RPL_2 (Record E) hold exclusive locks for CICS1.Tran1 and CICS2.Tran2; GET CR RPL_3 (Record B) from CICS3.Tran3 waits for a share lock; GET NRI RPL_4 (Record B) reads without a lock]
VSAM RLS locks
The type of read integrity is specified either in the ACB macro or in the JCL DD statement:
ACB RLSREAD=NRI/CR/CRE
//dd1 DD dsn=datasetname,RLS=NRI/CR/CRE
Example situation
In our example in Figure 7-9 on page 390 we have the following situation:
1. CICS transaction Tran1 obtains an exclusive lock on Record B for update processing.
2. Transaction Tran2 obtains an exclusive lock for update processing on Record E, which is in
the same CI.
3. Transaction Tran3 needs a shared lock also on Record B for consistent read; it has to wait
until the exclusive lock by Tran1 is released.
4. Transaction Tran4 does a dirty read (NRI); it does not have to wait because in that case,
no lock is necessary.
With NRI, Tran4 can read the record even though it is held exclusively by Tran1. There is no
read integrity for Tran4.
CF lock structure
RLS locking is performed in the Coupling Facility through the use of a CF lock structure
(IGWLOCK00) and the XES locking services.
Contention
When contention occurs on a VSAM record, the request that encountered the contention
waits for the contention to be removed. The lock manager provides deadlock detection. When
a lock request is in deadlock, the request is rejected, resulting in the VSAM record
management request completing with a deadlock error response.
A data set is considered recoverable if the LOG attribute has one of the following values:
UNDO
The data set is backward recoverable. Changes made by a transaction that does not
succeed (no commit was done) are backed out. CICS provides the transactional recovery.
See also 7.11, Transactional recovery on page 394.
ALL
The data set is both backward and forward recoverable. In addition to the logging and
recovery functions provided for backout (transactional recovery), CICS records the image
of changes to the data set, after they were made. The forward recovery log records are
used by forward recovery programs and products such as CICS VSAM Recovery
(CICSVR) to reconstruct the data set in the event of hardware or software damage to the
data set. This is referred to as data set recovery. For LOG(ALL) data sets, both types of
recovery are provided, transactional recovery and data set recovery.
Non-CICS read/write access for recoverable data sets that are open by CICS is not allowed.
The recoverable attribute means that when the file is accessed in RLS mode, transactional
recovery is provided. With RLS, the recovery is only provided when the access is through
CICS file control, so RLS does not permit a batch (non-CICS) job to open a recoverable file
for OUTPUT.
Exclusive locks that VSAM RLS holds on the modified records cause other transactions that
have read-with-integrity requests and write requests for these records to wait. After the
modifying transaction is committed or backed out, VSAM RLS releases the locks and the
other transactions can access the records.
If the transaction fails, its changes are backed out. This capability is called transactional
recovery.
The CICS backout function removes changes made to the recoverable data sets by a
transaction. When a transaction abnormally ends, CICS performs a backout implicitly.
Example
In our example in 7.11, Transactional recovery on page 394, transaction Trans1 is complete
(committed) after Record 1 and Record 2 are updated. Transactional recovery ensures that
either both changes are made or no change is made. When the application requests commit,
both changes are made atomically. In the case of a failure after updating Record 1, the
change to this record is backed out. This applies only for recoverable data sets, not for
non-recoverable ones.
Batch window
The batch window is a period of time in which online access to recoverable data sets must be
disabled. During this time, no transaction processing can be done. This is normally done
because it is necessary to run batch jobs or other utilities that do not properly support
recoverable data, even if those utilities also use RLS access. Therefore, to allow these jobs or
utilities to safely update the data, it is first necessary to make a copy of the data. In the event
that the batch job or utility fails or encounters an error, this copy can be safely restored and
online access can be re-enabled. If the batch job completes successfully, the updated copy of
the data set can be safely used because only the batch job had access to the data while it
was being updated. Therefore, the data cannot have been corrupted by interference from
online transaction processing.
See 7.20, Interacting with VSAM RLS on page 412 for information about how to quiesce and
unquiesce a data set.
Lock structure
In a Parallel Sysplex, you need only one lock structure for VSAM RLS because only one
VSAM sharing group is permitted. The required name is IGWLOCK00.
Ensure that the Coupling Facility lock structure has universal connectivity so that it is
accessible from all systems in the Parallel Sysplex that support VSAM RLS.
Tip: For high-availability environments, use a nonvolatile Coupling Facility for the lock
structure. If you maintain the lock structure in a volatile Coupling Facility, a power outage
can cause a failure and loss of information in the Coupling Facility lock structure.
The Coupling Facility cache structures are also used as a system-level buffer pool, with
cross-invalidation being performed (see 7.8, Buffering under VSAM RLS on page 388).
Each Coupling Facility cache structure is contained in a single Coupling Facility. You may
have multiple Coupling Facilities and multiple cache structures.
A sizing tool known as CFSIZER is also available on the IBM Web site at:
http://www-1.ibm.com/servers/eserver/zseries/cfsizer/vsamrls.html
ACTIVE STRUCTURE
----------------
ALLOCATION TIME: 02/24/2005 14:22:56
CFNAME : CF1
COUPLING FACILITY: 002084.IBM.02.000000026A3A
PARTITION: 1F CPCID: 00
ACTUAL SIZE : 14336 K
STORAGE INCREMENT SIZE: 256 K
ENTRIES: IN-USE: 0 TOTAL: 33331, 0% FULL
LOCKS: TOTAL: 2097152
PHYSICAL VERSION: BC9F02FD EDC963AC
LOGICAL VERSION: BC9F02FD EDC963AC
SYSTEM-MANAGED PROCESS LEVEL: 8
XCF GRPNAME : IXCLO001
DISPOSITION : KEEP
ACCESS TIME : 0
NUMBER OF RECORD DATA LISTS PER CONNECTION: 16
MAX CONNECTIONS: 4
# CONNECTIONS : 4
For more information about VSAM RLS parameters, see z/OS DFSMSdfp Storage
Administration Reference, SC26-7402.
[Figure: CICS regions with read/write access on System 1 and System 2 sharing data through the coupling facility structures IGWLOCK00 and CACHE01]
Both the primary and secondary SHCDS contain the same data. With this duplexing of the
data, VSAM RLS ensures that processing can continue if VSAM RLS loses the connection
to one SHCDS or if one of the control data sets is damaged. In that case, you can switch the
spare SHCDS to active.
To calculate the size of the sharing control data sets, follow the guidelines provided in z/OS
DFSMSdfp Storage Administration Reference, SC26-7402.
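As an illustration only, you can define an SHCDS with an IDCAMS job similar to the following sketch. The SHCDS must be a VSAM linear data set named SYS1.DFPSHCDS.qual1.qual2 with SHAREOPTIONS(3,3); the last two name qualifiers, the size, and the storage class shown here are assumptions for this example:
//DEFSHCDS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(SYS1.DFPSHCDS.WTSCPLX2.VSBOX48)  -
         LINEAR                                         -
         SHAREOPTIONS(3 3)                              -
         STORAGECLASS(SCSHCDS)                          -
         MEGABYTES(10 10))
/*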
SHCDS operations
Use the following command to activate your newly defined SHCDS for use by VSAM RLS.
For the primary and secondary SHCDS, use:
VARY SMS,SHCDS(SHCDS_name),NEW
For the spare SHCDS, use:
VARY SMS,SHCDS(SHCDS_name),NEWSPARE
D SMS,SHCDS
IEE932I 539
IGW612I 17:10:12 DISPLAY SMS,SHCDS
Name Size %UTIL Status Type
WTSCPLX2.VSBOX48 10800Kb 4% GOOD ACTIVE
WTSCPLX2.VSBOX52 10800Kb 4% GOOD ACTIVE
WTSCPLX2.VSBOX49 10800Kb 4% GOOD SPARE
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
----------------- 0Kb 0% N/A N/A
Figure 7-20 Example of SHCDS display
Note: In the VARY SMS,SHCDS commands, the SHCDS name is not fully qualified.
SMSVSAM takes as a default the first two qualifiers, which must always be
SYS1.DFPSHCDS. You must specify only the last two qualifiers as the SHCDS names.
[Figure: storage class definitions - SC=CICS1 with CF cache set name PAYSTRUC assigned; SC=NORLS with a blank cache set name]
The following steps describe how to define a cache set and how to associate the cache
structures to the cache set:
1. From the ISMF primary option menu for storage administrators, select option 8, Control
Data Set.
2. Select option 7, Cache Update, and make sure that you specified the right SCDS name
(the SMS source control data set; do not confuse it with the SHCDS).
3. Define your CF cache sets (see Figure 7-22 on page 406).
Guaranteed Space . . . . . . . . . N (Y or N)
Guaranteed Synchronous Write . . . N (Y or N)
Multi-Tiered SG . . . . . . . . . . (Y, N, or blank)
Parallel Access Volume Capability N (R, P, S, or N)
CF Cache Set Name . . . . . . . . . PUBLIC1 (up to 8 chars or blank)
CF Direct Weight . . . . . . . . . 6 (1 to 11 or blank)
CF Sequential Weight . . . . . . . 4 (1 to 11 or blank)
Note: Be sure to change your Storage Class ACS routines so that RLS data sets are
assigned the appropriate storage class.
More detailed information about setting up SMS for VSAM RLS is in z/OS DFSMSdfp Storage
Administration Reference, SC26-7402.
LOGSTREAMID(logstreamname)
Specifies the name of the CICS forward recovery
logstream for data sets with LOG(ALL)
Another way to assign the LOG attribute and a LOGSTREAMID is to use a data class that has
those values already defined.
The LOG parameter is described in detail in 7.10, VSAM RLS/CICS data set recovery on
page 392.
Use the LOGSTREAMID parameter to assign a CICS forward recovery log stream to a data
set which is forward recoverable.
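As an illustration only, the following IDCAMS ALTER sketch (the data set and log stream names are hypothetical) makes an existing cluster forward recoverable and assigns its forward recovery log stream:
//ALTLOG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER PROD.PAYROLL.KSDS  -
        LOG(ALL)           -
        LOGSTREAMID(CICSUSER.PAYROLL.FWDLOG)
/*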
For more information about the IDCAMS DEFINE and ALTER commands, see z/OS DFSMS
Access Method Services for Catalogs, SC26-7394.
For information about the IXCMIAPU utility, see z/OS MVS Setting Up a Sysplex, SA22-7625.
[Figure: VSAM RLS environment - SMSVSAM address spaces on System 1 through System n connect CICS subsystems (CICS2), batch jobs, and HSM to the coupling facility structures IGWLOCK00, CACHE01, CACHE02, and MMFSTUFF]
The SMSVSAM address space needs to be started on each system where you want to exploit
VSAM RLS. It is responsible for centralizing all processing necessary for cross-system
sharing, which includes one connect per system to XCF lock, cache, and VSAM control block
structures.
Terminology
We use the following terms to describe an RLS environment:
RLS server
The SMSVSAM address space is also referred to as the RLS server.
SETSMS command
Use the SETSMS command to overwrite the PARMLIB specifications for IGDSMSxx. The syntax
is:
SETSMS CF_TIME(nnn|3600)
DEADLOCK_DETECTION(iiii,kkkk)
RLSINIT
RLS_MAXCFFEATURELEVEL({A|Z})
RLS_MAX_POOL_SIZE(nnnn|100)
SMF_TIME(YES|NO)
For information about these PARMLIB values refer to 7.15, Update PARMLIB with VSAM
RLS parameters on page 400.
Display commands
There are several display commands available to provide RLS-related information.
Display the status of the SMSVSAM address space:
DISPLAY SMS,SMSVSAM{,ALL}
Specify ALL to see the status of all the SMSVSAM servers in the sysplex.
Display information about the Coupling Facility cache structure:
DISPLAY SMS,CFCACHE(CF_cache_structure_name|*)
Display information about the Coupling Facility lock structure IGWLOCK00:
DISPLAY SMS,CFLS
This information includes the lock rate, lock contention rate, false contention rate, and
average number of requests waiting for locks.
Display XCF information for a CF structure:
DISPLAY XCF,STR,STRNAME=structurename
This provides information such as status, type, and policy size for a CF structure.
The quiesce status of a data set is set in the catalog and is shown in an IDCAMS LISTCAT
output for the data set. See 7.22, Interpreting RLSDATA in an IDCAMS LISTCAT output on
page 417 for information about interpreting LISTCAT outputs.
This new size can be larger or smaller than the size of the current CF cache structure, but it
cannot be larger than the maximum size specified in the CFRM policy. The SETXCF
START,ALTER command will not work unless the structure's ALLOW ALTER indicator is set to
YES.
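For example, assuming a cache structure named CACHE01 and a new target size of 16384 KB (both values are only illustrative), the command would look like this:
SETXCF START,ALTER,STRNAME=CACHE01,SIZE=16384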
Important: This section simply provides an overview of commands that are useful for
working with VSAM RLS. Before using these commands (other than the DISPLAY
command), read the official z/OS manuals carefully.
When you dump data sets that are designated by CICS as eligible for backup-while-open
processing, data integrity is maintained through serialization interactions between:
CICS (database control program)
VSAM RLS
VSAM record management
DFSMSdfp
DFSMSdss
Backup-while-open
In order to allow DFSMSdss to take a backup while your data set is open by CICS, you need
to define the data set with the BWO attribute TYPECICS or assign a data class with this
attribute.
TYPECICS
Use TYPECICS to specify BWO in a CICS environment. For RLS processing, this
activates BWO processing for CICS. For non-RLS processing, CICS determines whether to use this specification.
For information about BWO processing, see z/OS DFSMSdss Storage Administration
Reference, SC35-0424.
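As an illustration only, the following IDCAMS ALTER sketch (the data set name is hypothetical) assigns the BWO attribute to an existing cluster:
//ALTBWO   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER PROD.PAYROLL.KSDS BWO(TYPECICS)
/*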
You can use the sample JCL in Figure 7-31 to run an IDCAMS LISTCAT job.
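A minimal sketch along the lines of that figure (the data set name is hypothetical) is:
//LISTC    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT ENTRIES(PROD.PAYROLL.KSDS) ALL
/*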
RLSDATA
RLSDATA contains the following information:
LOG
This field shows you the type of logging used for this data set. It can be NONE, UNDO or
ALL.
Note: If the RLS-IN-USE indicator is on, it does not mean that the data set is currently
in use by VSAM RLS. It simply means that the last successful open was for RLS
processing.
Non-RLS open will always attempt to call VSAM RLS if the RLS-IN-USE bit is on in the
catalog. This bit is a safety net to prevent non-RLS users from accessing a data set
which can have retained or lost locks associated with it.
The RLS-IN-USE bit is set on by RLS open and is left on after close. This bit is only
turned off by a successful non-RLS open or by the IDCAMS SHCDS CFRESET command.
LOGSTREAMID
This value tells you the forward recovery log stream name for this data set if the LOG
attribute has the value of ALL.
RECOVERY TIMESTAMP
The recovery time stamp gives the time the most recent backup was taken when the data
set was accessed by CICS using VSAM RLS.
All LISTCAT keywords are described in Appendix B of z/OS DFSMS Access Method Services
for Catalogs, SC26-7394.
Objective of DFSMStvs
The objective of DFSMStvs is to provide transactional recovery directly within VSAM. It is an
extension to VSAM RLS. It allows any job or application that is designed for data sharing to
read/write share VSAM recoverable files.
DFSMStvs adds logging and commit/backout support to VSAM RLS. DFSMStvs requires
and supports the RRMS (recoverable resource management services) component as the
commit or sync point manager.
DFSMStvs provides a level of data sharing with built-in transactional recovery for VSAM
recoverable files that is comparable with the data sharing and transactional recovery support
for databases provided by DB2 and IMS DB.
Before DFSMStvs, those two types of recovery were only supported by CICS.
CICS performs the transactional recovery for data sets defined with a LOG parameter UNDO
or ALL.
For forward recoverable data sets (LOG(ALL)) CICS also records updates in a log stream for
forward recovery. CICS itself does not perform forward recovery, it performs only logging. For
forward recovery you need a utility like CICS VSAM recovery (CICSVR).
Without DFSMStvs, batch jobs cannot perform transactional recovery and logging. That is the
reason batch jobs were granted only read access to a data set that was opened by CICS in
RLS mode. A batch window was necessary to run batch updates for CICS VSAM data sets.
With DFSMStvs, batch jobs can perform transactional recovery and logging concurrently with
CICS processing. Batch jobs can now update data sets while they are in use by CICS. No
batch window is necessary any more.
Peer recovery
Peer recovery allows DFSMStvs to recover on behalf of a failed DFSMStvs instance, cleaning
up any work that was left in an incomplete state and clearing retained locks that resulted from
the failure.
For more information about peer recovery, see z/OS DFSMStvs Planning and Operation
Guide, SC26-7348.
[Figure: z/OS RRMS - registration services, context services, and resource recovery services (RRS) coordinate prepare/commit and rollback between DFSMStvs and other recoverable resource managers]
When an application issues a commit request directly to z/OS or indirectly through a sync
point manager that interfaces with the z/OS sync point manager, DFSMStvs is invoked to
participate in the two-phase commit process.
Other resource managers (like DB2) whose recoverable resources were modified by the
transaction are also invoked by the z/OS sync point manager, thus providing a commit scope
across the multiple resource managers.
Two-phase commit
The two-phase commit protocol is a set of actions used to make sure that an application
program either makes all changes to the resources represented by a single unit of recovery
(UR), or it makes no changes at all. This protocol verifies that either all changes or no
changes are applied even if one of the elements (such as the application, the system, or the
resource manager) fails. The protocol allows for restart and recovery processing to take place
after system or subsystem failure.
For a discussion of the term unit of recovery, see 7.27, Unit of work and unit of recovery on
page 426.
[Figure: a transaction transfers $100 between two accounts ($700 and $800); an incomplete transaction would leave only one of the two accounts updated]
Atomic updates
A transaction is known as atomic when an application changes data in multiple resource
managers as a single transaction, and all of those changes are accomplished through a
single commit request by a sync point manager. If the transaction is successful, all the
changes are committed. If any piece of the transaction is not successful, then all changes are
backed out. An atomic instant occurs when the sync point manager in a two-phase commit
process logs a commit record for the transaction.
Also see 7.11, Transactional recovery on page 394 for information about recovering an
uncompleted transaction.
[Figure 7-36 Unit of recovery example: updates 1 and 2 followed by an explicit commit form unit of recovery A; updates 3, 4, and 5 form unit of recovery B; update 6 up to the implicit synchronization point at the end of the program forms unit of recovery C]
RRS uses unit of recovery (UR) to mean much the same thing. Thus, a unit of recovery is the
set of updates between synchronization points. There are implicit synchronization points at
the start and at the end of a transaction. Explicit synchronization points are requested by an
application within a transaction or batch job. It is preferable to use explicit synchronization for
greater control of the number of updates in a unit of recovery.
Changes to data are durable after a synchronization point. That means that the changes
survive any subsequent failure.
In Figure 7-36 there are three units of recovery, noted as A, B and C. The synchronization
points between the units of recovery are either:
Implicit - At the start and end of the program
Explicit - When requested by commit
[Figure 7-37: DFSMStvs logging - System 1 through System n write to per-image undo log streams (for example, the CICSA undo log) and to a merged CICS/DFSMStvs forward recovery log stream; the log streams and the lock structures reside in the coupling facility]
DFSMStvs logging
DFSMStvs logging uses the z/OS system logger. The design of DFSMStvs logging is similar
to the design of CICS logging. Forward recovery logstreams for VSAM recoverable files will
be shared across CICS and DFSMStvs. CICS will log changes made by CICS transactions;
DFSMStvs will log changes made by its callers.
Types of logs
There are various types of logs involved in DFSMStvs (and CICS) logging. They are:
Undo logs (mandatory, one per image) - tvsname.IGWLOG.SYSLOG
The backout or undo log contains images of changed records for recoverable data sets as
they existed prior to being changed. It is used for transactional recovery to back out
uncommitted changes if a transaction failed.
Shunt logs (mandatory, one per image) - tvsname.IGWSHUNT.SHUNTLOG
The shunt log is used when backout requests fail and for long running units of recovery.
The system logger writes log data to log streams. The log streams are put in list structures in
the Coupling Facility (except for DASDONLY log streams).
As Figure 7-37 on page 427 shows, you can merge forward recovery logs for use by CICS
and DFSMStvs. A forward recovery log stream can also be shared by multiple VSAM data
sets. You cannot share an undo log between CICS and DFSMStvs; you need one per image.
For information about how to define log streams and list structures refer to 7.32, Prepare for
logging on page 433.
You can modify an application to use DFSMStvs by specifying RLS in the JCL or the ACB and
having the application access a recoverable data set using either open for input with CRE or
open for output from a batch job.
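For example, a batch job step similar to the following sketch (the program and data set names are hypothetical) opens a recoverable data set for update under DFSMStvs by specifying RLS=CRE on the DD statement:
//UPDATE   EXEC PGM=PAYBATCH
//PAYFILE  DD DSN=PROD.PAYROLL.KSDS,DISP=SHR,RLS=CRE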
Application considerations
For an application to participate in transactional recovery, it must first understand the concept
of a transaction. It is not a good idea simply to modify an existing batch job to use DFSMStvs
with no further change, as this causes the entire job to be seen as a single transaction. As a
result, locks would be held and log records would need to exist for the entire life of the job.
This can cause a tremendous amount of contention for the locked resources. It can also
cause performance degradation as the undo log becomes exceedingly large.
RLS and DFSMStvs provide isolation until commit/backout. Consider the following rules:
Share locks on records accessed with repeatable read.
Hold write locks on changed records until the end of a transaction.
Use commit to apply all changes and release all locks.
Information extracted from shared files must not be used across commit/backout; the
records must be re-accessed after the synchronization point.
Handle all work that is part of one UR under the same context.
For information about units of recovery, see 7.27, Unit of work and unit of recovery on
page 426. Reconsider your application to handle work that is part of one unit of recovery
under the same context.
Instead, the batch application must have a built-in method of tracking its processing position
within a series of transactions. One potential method of doing this is to use a VSAM
recoverable file to track the job's commit position. When the application fails, any
uncommitted changes are backed out.
The already-committed changes cannot be backed out, because they are already visible to
other jobs or transactions. In fact, it is possible that the records that were changed by
previously-committed UR were changed again by other jobs or transactions. Therefore, when
the job is rerun, it is important that it determines its restart point and not attempt to redo any
changes it had committed before the failure.
For this reason, it is important that jobs and applications using DFSMStvs be written to
execute as a series of transactions and use a commit point tracking mechanism for restart.
Sample JCL that you can use to define a new CFRM policy is shown in Figure 7-42 on
page 434.
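As an illustration only, a CFRM policy definition for the lock structure could look like the following sketch. The policy name, dump space, and structure size are assumptions for this example; the Coupling Facility identification values are those shown in the structure display earlier in this chapter:
//DEFCFRM  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(POLICY1) REPLACE(YES)
    CF NAME(CF1) TYPE(002084) MFG(IBM) PLANT(02)
       SEQUENCE(000000026A3A) PARTITION(1F) CPCID(00)
       DUMPSPACE(2048)
    STRUCTURE NAME(IGWLOCK00)
       SIZE(14336)
       PREFLIST(CF1)
/*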
Multiple log streams can write data to a single Coupling Facility structure. This does not mean
that the log data is merged; the log data stays segregated according to log stream.
Figure 7-43 shows how to define the structures in the LOGR policy.
For the various types of log streams that are used by DFSMStvs refer to 7.28, DFSMStvs
logging on page 427. A log stream is a VSAM linear data set which simply contains a
collection of data. To define log streams, you can use the example JCL in Figure 7-44.
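As an illustration only, an IXCMIAPU sketch for a DFSMStvs undo log stream might look like the following. The structure name, sizes, and duplexing options are assumptions for this example; the undo log stream name follows the tvsname.IGWLOG.SYSLOG convention:
//DEFLOGR  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(YES)
  DEFINE STRUCTURE NAME(LOG_IGWLOG_001)
         LOGSNUM(10) MAXBUFSIZE(64000) AVGBUFSIZE(4096)
  DEFINE LOGSTREAM NAME(IGWTV001.IGWLOG.SYSLOG)
         STRUCTNAME(LOG_IGWLOG_001)
         LS_SIZE(1180)
         STG_DUPLEX(YES) DUPLEXMODE(COND)
/*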
Attention: Log streams are single-extent VSAM linear data sets and need
SHAREOPTIONS(3,3). The default is SHAREOPTIONS(1,3) so you must alter the share
options explicitly by running IDCAMS ALTER.
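For example, an ALTER sketch like the following changes the share options (the data set name is only illustrative; the actual log stream data set names depend on the high-level qualifier and the sequence number that the system logger assigns):
//ALTSHR   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER IXGLOGR.IGWTV001.IGWLOG.SYSLOG.A0000001 SHAREOPTIONS(3 3)
/*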
For information about these PARMLIB parameters, see z/OS MVS Initialization and Tuning
Reference, SA22-7592.
[Figure: DFSMStvs instances IGWTV001 and IGWTV002 with SMSVSAM on System 1 and System 2 sharing a recoverable data set through the coupling facility]
As soon as an application that does not act as a recoverable resource manager has RLS
access to a recoverable data set, DFSMStvs is invoked (see also 7.29, Accessing a data set
with DFSMStvs on page 429). DFSMStvs calls VSAM RLS (SMSVSAM) for record locking
and buffering. With DFSMStvs built on top of VSAM RLS, full sharing of recoverable files
becomes possible. Batch jobs can now update the recoverable files without first quiescing
CICS' access to them.
SETSMS command
Use the SETSMS command to overwrite the PARMLIB specifications for IGDSMSxx. The syntax
is:
SETSMS AKP(nnn|1000)
QTIMEOUT(nnn|300)
MAXLOCKS(max|0,incr|0)
These are the only DFSMStvs PARMLIB specifications you can overwrite using the SETSMS
command. For information about these parameters, see 7.33, Update PARMLIB with
DFSMStvs parameters on page 436.
Display command
There are a few display commands you can use to get information about DFSMStvs.
To display common DFSMStvs information:
DISPLAY SMS,TRANVSAM{,ALL}
This command lists information about the DFSMStvs instance on the system where it was
issued. To get information from all systems use ALL. This information includes name and
state of the DFSMStvs instance, values for AKP, start type, and qtimeout, and also the
names, types, and states of the used log streams.
To display information about a particular job that uses DFSMStvs:
DISPLAY SMS,JOB(jobname)
The information about the particular job includes the current job step, the current ID, and
status of the unit of recovery used by this job.
To display information about a particular unit of recovery currently active within the
sysplex:
DISPLAY SMS,URID(urid|ALL)
This command provides information about a particular UR in the sysplex or about all URs
of the system on which this command was issued. If ALL is specified, you do not obtain information about units of recovery on the other systems in the sysplex.
Important: This section simply provides an overview of operator commands that are useful
for working with DFSMStvs. Before using these commands (other than the DISPLAY
command), read the official z/OS manuals carefully.
Summary
In this chapter we showed the limitations of base VSAM that made it necessary to develop
VSAM RLS. Further, we described the limitations of VSAM RLS that led to enhancing
VSAM RLS with the functions provided by DFSMStvs.
Base VSAM
VSAM does not provide read or read/write integrity for share options other than 1.
User needs to use enqueue/dequeue macros for serialization.
The granularity of sharing on a VSAM cluster is at the control interval level.
Buffers reside in the address space.
Base VSAM does not support CICS as a recoverable resource manager; a CICS file
owning region is necessary to ensure recovery.
VSAM RLS
Enhancement of base VSAM.
User does not need to serialize; this is done by RLS locking.
Granularity of sharing is record level, not CI level.
Buffers reside in the data space and Coupling Facility.
Supports CICS as a recoverable resource manager (CICS logging for recoverable data
sets); no CICS file owning region is necessary.
For many years, DASDs have been the most used storage devices on IBM eServer zSeries
systems and their predecessors, delivering the fast random access to data and high
availability that customers have come to expect.
The era of tapes began before DASD was introduced. During that time, tapes were used as
the primary application storage medium. Today customers use tapes for such purposes as
backup, archiving, or data transfer between companies.
[Figure: traditional DASD - 3380 Models J, E, K and 3390 Models 1, 2, 3, 9]
Traditional DASD
In the era of traditional DASD, the hardware consisted of controllers like 3880 and 3990,
which contained the necessary intelligent functions to operate a storage subsystem. The
controllers were connected to S/390 systems through parallel or ESCON channels. Behind a
controller there were several model groups of the 3390 that contained the disk drives. Based
on the models, these disk drives had various capacities per device. Within each model group,
the various models provide either four, eight, or twelve devices. All A-units come with four
controllers, providing a total of four paths to the 3990 Storage Control. At that time, you were
not able to change the characteristics of a given DASD device.
The more modern IBM DASD products, such as Enterprise Storage Server (ESS), DS6000,
DS8000, and DASD from other vendors, emulate IBM 3380 and 3390 volumes in geometry,
capacity of tracks, and number of tracks per cylinder. This emulation makes all the other
entities think they are dealing with real 3380s or 3390s. These entities include data
processing staff who do not work directly with storage, as well as JCL, MVS commands, open
routines, access methods, IOS, and channels. One benefit of this emulation is that it allows
DASD manufacturers to implement changes in the real disks, including the geometry of tracks
and cylinders, without affecting the way those components interface with DASD.
ESS technology
The IBM TotalStorage Enterprise Storage Server (ESS) is the IBM disk storage server,
developed using IBM Seascape architecture. The ESS provides functionality to the family of
e-business servers, and also to non-IBM (that is, Intel-based and UNIX-based) families of
servers. Across all of these environments, the ESS features unique capabilities that allow it to
meet the most demanding requirements of performance, capacity, and data availability that
the computing business requires. See 8.4, Enterprise Storage Server (ESS) on page 453 for
more information about this topic.
Seascape architecture
The Seascape architecture is the key to the development of the IBM storage products.
Seascape allows IBM to take the best of the technologies developed by the many IBM
laboratories and integrate them, thereby producing flexible and upgradeable storage
solutions. This Seascape architecture design has allowed the IBM TotalStorage Enterprise
Storage Server to evolve from the initial E models to the succeeding F models, and to the
later 800 models, each featuring new, more powerful hardware and functional enhancements,
and always integrated under the same successful architecture with which the ESS was
originally conceived. See 8.3, Seascape architecture on page 450 for more information.
Note: In this publication, we use the terms disk or head disk assembly (HDA) for the real
devices, and the terms DASD volumes or DASD devices for the logical 3380/3390s.
[Figure: RAID-1 - Record X and data blocks ABCDEF mirrored on a primary and an alternate disk]
RAID architecture
Redundant array of independent disks (RAID) is a direct access storage architecture where
data is recorded across multiple physical disks with parity separately recorded, so that no loss
of access to data results from the loss of any one disk in the array.
RAID breaks the one-to-one association of volumes with devices. A logical volume is now the
addressable entity presented by the controller to the attached systems. The RAID unit maps
the logical volume across multiple physical devices. Similarly, blocks of storage on a single
physical device may be associated with multiple logical volumes. Because a logical volume is
mapped by the RAID unit across multiple physical devices, it is now possible to overlap
processing for multiple cache misses to the same logical volume because cache misses can
be satisfied by separate physical devices.
The RAID concept involves many small computer system interface (SCSI) disks replacing a
big one. The major RAID advantages are:
Performance (due to parallelism)
Cost (SCSI disks are commodities)
zSeries compatibility
Environment (space and energy)
However, RAID increased the chances of malfunction due to media and disk failures and the
fact that the logical device now resides on many physical disks. The solution was to add redundancy, in the form of mirroring or separately recorded parity, so that access to the data is preserved when any one disk in the array fails.
Note: The ESS storage controllers use the RAID architecture that enables multiple logical
volumes to be mapped on a single physical RAID group. If required, you can still separate
data sets on a physical controller boundary for the purpose of availability.
RAID implementations
Except for RAID-1, each manufacturer sets the number of disks in an array. An array is a set
of logically related disks, where a parity applies.
Note: Data striping (striping sequential physical blocks across separate disks) is sometimes
called RAID-0, but it is not a real RAID because there is no redundancy, that is, no parity
bits.
Seascape architecture
The IBM Enterprise Storage Servers architecture for e-business design is based on the IBM
storage enterprise architecture, Seascape. The Seascape architecture defines
next-generation concepts for storage by integrating modular building block technologies from
IBM, including disk, tape, and optical storage media, powerful processors, and rich software.
Integrated Seascape solutions are highly reliable, scalable, and versatile, and support
specialized applications on servers ranging from PCs to super computers. Virtually all types
of servers can concurrently attach to the ESS, including iSeries and AS/400 systems. As a
result, ESS can be the external disk storage system of choice for AS/400 as well as iSeries
systems in heterogeneous SAN environments.
DFSMS provides device support for the IBM 2105 Enterprise Storage Server (ESS), a
high-end storage subsystem. The ESS storage subsystem succeeded the 3880, 3990, and
9340 subsystem families. Designed for mid-range and high-end environments, the ESS gives
you large capacity, high performance, continuous availability, and storage expandability. You
can read more about ESS in 8.4, Enterprise Storage Server (ESS) on page 453.
Cache
Cache is used to store both read and write data to improve ESS performance to the attached
host systems. There is the choice of 8, 16, 24, 32, or 64 GB of cache. This cache is divided
between the two clusters of the ESS, giving the clusters their own non-shared cache. The
ESS cache uses ECC (error checking and correcting) memory technology to enhance
reliability and error correction of the cache. ECC technology can detect single- and double-bit
errors and correct all single-bit errors. Memory scrubbing, a built-in hardware function, is also
performed and is a continuous background read of data from memory to check for correctable
errors. Correctable errors are corrected and rewritten to cache. To protect against loss of data
on a write operation, the ESS stores two copies of written data, one in cache and the other in
NVS.
The ESS 750 has capabilities similar to the ESS 800. The ESS Model 750 consists of two
clusters, each with a two-way processor and 4 or 8 GB cache. It can have two to six Fibre
Channel/FICON or ESCON host adapters. The storage capacity ranges from a minimum of
1.1 TB up to a maximum of 4 TB. A key feature is that the ESS 750 is upgradeable,
non-disruptively, to the ESS Model 800, which can grow to more than 55 TB of physical
capacity.
Note: Effective April 28, 2006, IBM withdrew from marketing the following products:
IBM TotalStorage Enterprise Storage Server (ESS) Models 750 and 800
IBM Standby Capacity on Demand for ESS offering
For information about replacement products, see 8.16, IBM TotalStorage DS6000 on
page 474 and 8.17, IBM TotalStorage DS8000 on page 477.
SCSI protocol
Although we do not cover other platforms in this publication, we provide here a brief overview
of the SCSI protocol. The SCSI adapter is a card on the host. It connects to a SCSI bus
through a SCSI port. There are two types of SCSI supported by ESS:
SCSI Fast Wide with 20 MBps
Ultra SCSI Wide with 40 MBps
[Figure: ESS support highlights - storage consolidation, StorWatch support, and PPRC support]
IBM includes a Web browser interface called TotalStorage Enterprise Storage Server (ESS)
Copy Services. The interface is part of the ESS subsystem and can be used to perform
FlashCopy and PPRC functions.
Many of the ESS features are now available to non-zSeries platforms, such as PPRC for
Windows XP and UNIX, where the control is through a Web interface.
StorWatch support
On the software side, there is StorWatch, a range of products in UNIX/XP that does what
DFSMS and automation do for System z. The TotalStorage Expert, formerly marketed as
StorWatch Expert, is a member of the IBM and Tivoli Systems family of solutions for
Enterprise Storage Resource Management (ESRM). These are offerings that are designed to
complement one another, and provide a total storage management solution.
TotalStorage Expert is an innovative software tool that gives administrators powerful, yet
flexible storage asset, capacity, and performance management capabilities to centrally
manage Enterprise Storage Servers located anywhere in the enterprise.
[Figure: ESS frame components - host adapters, main power supplies, and batteries]
At the top of each cluster is an ESS cage. Each cage provides slots for up to 64 disk drives,
32 in front and 32 at the back.
Each host adapter can communicate with either cluster. To install a new host adapter card,
the bay must be powered off. For the highest path availability, it is important to spread the host
connections across all the adapter bays. For example, if you have four ESCON links to a host,
each connected to a separate bay, then the loss of a bay for upgrade only impacts one of the
four connections to the server. The same is also valid for a host with FICON connections to
the ESS.
Similar considerations apply for servers connecting to the ESS by means of SCSI or Fibre
Channel links. For open system servers, the Subsystem Device Driver (SDD) program that
comes standard with the ESS can be installed on the connecting host servers to provide
multiple paths or connections to handle errors (path failover) and balance the I/O load to the
ESS.
The ESS connects to a large number of servers, operating systems, host adapters, and SAN
fabrics. A complete and current list is available at the following Web site:
http://www.storage.ibm.com/hardsoft/products/ess/supserver.htm
These characteristics allow simpler and more powerful configurations. The ESS supports up
to 16 host adapters, which allows for a maximum of 16 Fibre Channel/FICON ports per
machine, as shown in Figure 8-8 on page 458.
Each Fibre Channel/FICON host adapter provides one port with an LC connector type. The
adapter is a 2 Gb card and provides a nominal 200 MBps full-duplex data rate. The adapter
will auto-negotiate between 1 Gb and 2 Gb, depending upon the speed of the connection at
the other end of the link. For example, from the ESS to a switch/director, the FICON adapter
can negotiate to 2 Gb if the switch/director also has 2 Gb support. The switch/director to host
link can then negotiate at 1 Gb.
Eight-packs
Set of 8 similar capacity/rpm disk drives packed together
Installed in the ESS cages
Initial minimum configuration is 4 eight-packs
Upgrades are available in increments of 2 eight-packs
Maximum of 48 eight-packs per ESS with expansion
Disk drives
18.2 GB 15,000 rpm or 10,000 rpm
36.4 GB 15,000 rpm or 10,000 rpm
72.8 GB 10,000 rpm
145.6 GB 10,000 rpm
Eight-pack conversions
Capacity and/or RPMs
ESS disks
With a number of disk drive sizes and speeds available, including intermix support, the ESS
provides a great number of capacity configuration options.
The maximum number of disk drives supported within the IBM TotalStorage Enterprise
Storage Server Model 800 is 384, with 128 disk drives in the base enclosure and 256 disk
drives in the expansion rack. When configured with 145.6 GB disk drives, this gives a total
physical disk capacity of approximately 55.9 TB (see Table 8-1 on page 461 for more details).
Disk drives
The minimum available configuration of the ESS Model 800 is 582 GB. This capacity can be
configured with 32 disk drives of 18.2 GB contained in four eight-packs, using one ESS cage.
All incremental upgrades are ordered and installed in pairs of eight-packs; thus the minimum
capacity increment is a pair of similar eight-packs of either 18.2 GB, 36.4 GB, 72.8 GB, or
145.6 GB capacity.
The ESS is designed to deliver substantial protection against data corruption, not just relying
on the RAID implementation alone. The disk drives installed in the ESS are the latest
state-of-the-art magneto resistive head technology disk drives that support advanced disk
functions such as disk error correction codes (ECC), Metadata checks, disk scrubbing, and
predictive failure analysis.
The IBM TotalStorage ESS Specialist will configure the eight-packs on a loop with spare
DDMs as required. Configurations that include drive size intermixing can result in the creation
of additional DDM spares on a loop as compared to non-intermixed configurations. Currently
there is the choice of four new-generation disk drive capacities for use within an eight-pack:
18.2 GB/15,000 rpm disks
36.4 GB/15,000 rpm disks
72.8 GB/10,000 rpm disks
145.6 GB/10,000 rpm disks
The eight disk drives assembled in each eight-pack are all of the same capacity. Each disk
drive uses the 40 MBps SSA interface on each of the four connections to the loop.
It is possible to mix eight-packs of various capacity disks and speeds (rpm) within an ESS, as
described in the following sections.
Use Table 8-1 as a guide for determining the capacity of a given eight-pack. This table shows
the capacities of the disk eight-packs when configured as RAID ranks. These capacities are
the effective capacities available for user data.
[Figure: a disk eight-pack pair configured as RAID 5 ranks - A, B, C, and D represent RAID 5 rank drives (user data and distributed parity); S represents spare drives]
The ESS Storage Server Model 800 uses the latest SSA160 technology in its device adapters
(DAs). With SSA 160, each of the four links operates at 40 MBps, giving a total nominal
bandwidth of 160 MBps for each of the two connections to the loop. This amounts to a total of
320 MBps across each loop. Also, each device adapter card supports two independent SSA
loops, giving a total bandwidth of 320 MBps per adapter card. There are eight adapter cards,
giving a total nominal bandwidth capability of 2,560 MBps. See 8.11, SSA loops on
page 464 for more information about this topic.
SSA loops
One adapter from each pair of adapters is installed in each cluster, as shown in Figure 8-10.
The SSA loops are between adapter pairs, which means that all the disks can be accessed by
both clusters. During the configuration process, each RAID array is configured by the IBM
TotalStorage ESS Specialist to be normally accessed by only one of the clusters. If a cluster
failure occurs, the remaining cluster can take over all the disk drives on the loop.
Figure 8-10 on page 462 shows a logical representation of a single loop with 48 disk drives
(RAID ranks are actually split across two eight-packs for optimum performance). In the figure
you can see there are six RAID arrays: four RAID 5 designated A to D, and two RAID 10 (one
3+3+2 spare and one 4+4).
[Figure: SSA loop characteristics - SSA operation: 4 links per loop, 2 read and 2 write transfers simultaneously in each direction, 40 MBps on each link; loop availability: the loop reconfigures itself dynamically; spatial reuse: up to 8 simultaneous operations to local groups of disks (domains) per loop]
SSA operation
SSA is a high performance, serial connection technology for disk drives. SSA is a full-duplex
loop-based architecture, with two physical read paths and two physical write paths to every
disk attached to the loop. Data is sent from the adapter card to the first disk on the loop and
then passed around the loop by the disks until it arrives at the target disk. Unlike bus-based
designs, which reserve the whole bus for data transfer, SSA only uses the part of the loop
between adjacent disks for data transfer. This means that many simultaneous data transfers
can take place on an SSA loop, and it is one of the main reasons that SSA performs so much
better than SCSI. This simultaneous transfer capability is known as spatial reuse.
Each read or write path on the loop operates at 40 MBps, providing a total loop bandwidth of
160 MBps.
Loop availability
The loop is a self-configuring, self-repairing design that allows genuine hot-plugging. If the
loop breaks for any reason, then the adapter card will automatically reconfigure the loop into
two single loops. In the ESS, the most likely scenario for a broken loop is if the actual disk
drive interface electronics fails. If this happens, the adapter card dynamically reconfigures the
loop into two single loops, effectively isolating the failed disk. If the disk is part of a RAID
array, the adapter card will automatically regenerate the missing disk using the remaining
data and parity disks to the spare disk. After the failed disk is replaced, the loop is
automatically reconfigured into full duplex operation and the replaced disk becomes a new
spare.
If a cluster fails, then the remaining cluster device adapter owns all the domains on the loop,
thus allowing full data access to continue.
[Figure: RAID-10 rank layout across an eight-pack pair - the first RAID-10 rank is configured as 3+3+2 spares; additional RAID-10 ranks configured in the loop are 4+4. For a loop with intermixed capacities, the ESS assigns two spares for each capacity, so there is one 3+3+2S array per capacity]
RAID-10
RAID-10 is also known as RAID 0+1 because it is a combination of RAID 0 (striping) and
RAID 1 (mirroring). The striping optimizes the performance by striping volumes across
several disk drives (in the ESS Model 800 implementation, three or four DDMs). RAID 1 is the
protection against a disk failure provided by having a mirrored copy of each disk. By
combining the two, RAID 10 provides data protection and I/O performance.
Array
A disk array is a group of disk drive modules (DDMs) that are arranged in a relationship, for
example, a RAID 5 or a RAID 10 array. For the ESS, the arrays are built upon the disks of the
disk eight-packs.
Disk eight-pack
The physical storage capacity of the ESS is materialized by means of the disk eight-packs.
These are sets of eight DDMs that are installed in pairs in the ESS. Two disk eight-packs
provide for two disk groups, with four DDMs from each disk eight-pack. These disk groups
can be configured as either RAID-5 or RAID-10 ranks.
Spare disks
The ESS requires that a loop have a minimum of two spare disks to enable sparing to occur.
The sparing function of the ESS is automatically initiated whenever a DDM failure is detected
on a loop and enables regeneration of data from the failed DDM onto a hot spare DDM.
[Figure: RAID-10 arrays on Loop A - Cluster 1 (LSS 0, SSA 01) with a 3+3+2S array and Cluster 2 (LSS 1, SSA 11) with a 4+4 array]
[Figure: Advanced Copy Services - Concurrent Copy (local point-in-time copy using a sidefile and the data mover), XRC (asynchronous remote copy over unlimited distances), PPRC (synchronous remote copy up to 103 km), PPRC-XD (non-synchronous remote copy over continental distances), and FlashCopy (local point-in-time copy)]
Remote copy provides two options that enable you to maintain a current copy of your data at
a remote site. These two options are used for disaster recovery and workload migration:
Extended remote copy (XRC)
Peer-to-peer remote copy (PPRC)
Note: Fibre Channel Protocol is supported only on ESS Model 800 with the appropriate
licensed internal code (LIC) level and the PPRC Version 2 feature enabled.
PPRC provides a synchronous volume copy across ESS controllers. The copy is done from
one controller (the one having the primary logical device) to the other (having the secondary
logical device). It is synchronous because the task doing the I/O receives the CPU back with
the guarantee that the copy was executed. There is a performance penalty for distances
longer than 10 km. PPRC is used for disaster recovery, device migration, and workload
migration; for example, it enables you to switch to a recovery system in the event of a disaster
in an application system.
You can issue the CQUERY command to query the status of one volume of a PPRC volume pair
or to collect information about a volume in the simplex state. The CQUERY command is
modified and enabled to report on the status of S/390-attached CKD devices.
See z/OS DFSMS Advanced Copy Services, SC35-0428, for further information about the
PPRC service and the CQUERY command.
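As an illustration only, CQUERY can be run in a batch TSO step similar to the following sketch (the device number is hypothetical; see SC35-0428 for the complete command syntax):
//CQUERYJ  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  CQUERY DEVN(X'8000')
/*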
If you are trying to decide whether to use synchronous or asynchronous PPRC, consider the
differences between the two modes:
When you use synchronous PPRC, no data loss occurs between the last update at the
primary system and the recovery site, but it increases the impact to applications and uses
more resources for copying data.
Asynchronous PPRC using the extended distance feature reduces impact to applications
that write to primary volumes and uses less resources for copying data, but data might be
lost if a disaster occurs. To use PPRC-XD as a disaster recovery solution, customers need
to periodically synchronize the recovery volumes with the primary site and make backups
to other DASD volumes or tapes.
PPRC-XD can operate at very long distances (such as continental distances), well beyond
the 103 km supported for PPRC synchronous transmissions, and with minimal impact on the
application. The distance is limited only by the network and channel extender technology
capabilities.
XRC relies on the IBM TotalStorage Enterprise Storage Server, IBM 3990, RAMAC Storage
Subsystems, and DFSMSdfp. The 9393 RAMAC Virtual Array (RVA) does not support XRC
for source volume capability.
XRC relies on the system data mover, which is part of DFSMSdfp. The system data mover is
a high-speed data movement program that efficiently and reliably moves large amounts of
data between storage devices. XRC is a continuous copy operation, and it is capable of
operating over long distances (with channel extenders). It runs unattended, without
involvement from the application users. If an unrecoverable error occurs at your primary site,
the only data that is lost is data that is in transit between the time when the primary system
fails and the recovery at the recovery site.
You can implement XRC with one or two systems. Let us suppose that you have two systems:
an application system at one location, and a recovery system at another. With these two
systems in place, XRC can automatically update your data on the remote disk storage
subsystem as you make changes to it on your application system. You can use the XRC
suspend/resume service for planned outages. You can still use this standard XRC service on
systems attached to the ESS if these systems are installed with the toleration or transparency
support.
Coupled Extended Remote Copy (CXRC) allows XRC sessions to be coupled together to
guarantee that all volumes are consistent across all coupled XRC sessions. CXRC can
manage thousands of volumes. IBM TotalStorage XRC Performance Monitor provides the
ability to monitor and evaluate the performance of a running XRC configuration.
Concurrent copy
Concurrent copy is an extended function that enables data center operations staff to generate
a copy or a dump of data while applications are updating that data. Concurrent copy delivers
a copy of the data, in a consistent form, as it existed before the updates took place.
FlashCopy service
FlashCopy is a point-in-time copy services function that can quickly copy data from a source
location to a target location. FlashCopy enables you to make copies of a set of tracks, with the
copies immediately available for read or write access. This set of tracks can consist of an
entire volume, a data set, or just a selected set of tracks. The primary objective of FlashCopy
is to create a copy of a source volume on the target volume. This copy is called a
point-in-time copy. Access to the point-in-time copy of the data on the source volume is
through reading the data from the target volume. The actual point-in-time data that is read
from the target volume might or might not be physically stored on the target volume. The ESS
FlashCopy service is compatible with the existing service provided by DFSMSdss. Therefore,
you can invoke the FlashCopy service on the ESS with DFSMSdss.
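As an illustration only, a DFSMSdss full-volume FlashCopy could be requested with JCL similar to the following sketch (the volume serials are hypothetical; FASTREPLICATION(REQUIRED) asks DFSMSdss to fail the copy rather than fall back to slower copy methods):
//FCOPY    EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY FULL INDYNAM(SRC001) OUTDYNAM(TGT001) -
       COPYVOLID FASTREPLICATION(REQUIRED)
/*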
Figure 8-15 ESS performance features
With ESS, it is possible to have this queue concept internally; I/O Priority Queueing in ESS
has the following properties:
I/O can be queued with the ESS in priority order.
WLM sets the I/O priority when running in goal mode.
There is I/O priority for systems in a sysplex.
Each system gets a fair share.
Custom volumes
Custom volumes provides the possibility of defining small size 3390 or 3380 volumes. This
causes less contention on a volume. Custom volumes is designed for high activity data sets.
Careful size planning is required.
The ESS manages its cache in 4 KB segments, so for small data blocks (4 KB and 8 KB are
common database block sizes) minimal cache is wasted. In contrast, a subsystem with large
cache segments can exhaust its cache capacity when it fills up with small random reads; a
4 KB read into a 16 KB segment, for example, leaves 12 KB of that segment unused. The
smaller ESS cache segments therefore avoid wasting cache space with the small record sizes
that are common in interactive applications.
This efficient cache management works together with the powerful back-end implementation
of the ESS Model 800, which integrates new (optional) 15,000 rpm drives, enhanced SSA
device adapters, twice the bandwidth of previous models for access to the larger NVS (2 GB),
and the larger cache option (64 GB). Together, these features provide greater throughput
while sustaining cache-speed response times.
The DS6000 series offers high scalability and excellent performance. With the DS6800
(Model 1750-511), you can install up to 16 disk drive modules (DDMs). The minimum storage
capability with 8 DDMs is 584 GB. The maximum storage capability with 16 DDMs for the
DS6800 model is 4.8 TB. If you want to connect more than 16 disks, you can use up to 13
DS6000 expansion units (Model 1750-EX1) that allow a maximum of 224 DDMs per storage
system and provide a maximum storage capability of 67 TB.
DS6000 specifications
Table 8-2 summarizes the DS6000 features; among them:
Maximum cache: 4 GB
RAID levels: 5 and 10
Modular scalability
The DS6000 is modularly scalable, with an optional expansion enclosure, to add capacity to
help meet your growing business needs. Its scalability includes:
Flexible design to accommodate on demand business environments
Ability to make dynamic configuration changes
Ability to add disk drives in increments of four
Ability to add storage expansion units
Ability to scale capacity to over 67 TB
The current physical storage capacity of the DS8000 series system can range from 1.1 TB to
192 TB of physical capacity, and it has an architecture designed to scale to over 96 petabytes.
DS8000 models
The DS8000 series offers various choices of base and expansion models, so you can
configure storage units that meet your performance and configuration needs.
DS8100
The DS8100 (Model 921) features a dual two-way processor complex and support for one
expansion frame.
DS8300
The DS8300 (Models 922 and 9A2) features a dual four-way processor complex and
The DS8000 expansion frames (Models 92E and 9AE) expand the capabilities of the base
models. You can attach the Model 92E to either the Model 921 or the Model 922 to expand
their capabilities. You can attach the Model 9AE to expand the Model 9A2.
The DS8300 Model 9A2 exploits LPAR technology, allowing you to run two separate storage
server images within a single DS8000; each logical partition serves its own workloads and
LUNs through its own RAID ranks and adapters over the switched fabric.
LPAR overview
A logical partition (LPAR) is a subset of logical resources that is capable of supporting an
operating system. It consists of CPUs, memory, and I/O slots that are a subset of the pool of
available resources within a system. These resources are assigned to the logical partition.
Isolation between LPARs is provided to prevent unauthorized access between partition
boundaries.
With these separate resources, each Storage System LPAR can run the same or different
versions of microcode, and can be used for completely separate production, test, or other
unique storage environments within this single physical system. This can enable storage
consolidation where separate physical storage subsystems were previously required.
Copy services
FlashCopy
Mirroring:
Metro Mirror (Synchronous PPRC)
Global Mirror (Asynchronous PPRC)
Metro/Global Copy (two- or three-site Asynchronous Cascading PPRC)
Global Copy (PPRC Extended Distance)
Global Mirror for zSeries (XRC); the DS6000 can be configured as an XRC target only
Metro/Global Mirror for zSeries (three-site solution using Synchronous PPRC and XRC); the
DS6000 can be configured as an XRC target only
These hardware and software features, products, and services are available on the IBM
TotalStorage DS6000 and DS8000 series and IBM TotalStorage ESS Models 750 and 800. In
addition, a number of advanced Copy Services features that are part of the IBM TotalStorage
Resiliency family are available for the DS6000 and DS8000 series. The IBM TotalStorage DS
Family also offers systems to support enterprise-class data backup and disaster recovery
capabilities. As part of the IBM TotalStorage Resiliency Family of software, IBM TotalStorage
FlashCopy point-in-time copy capabilities back up data in the background and allow users
nearly instant access to information about both source and target volumes. Metro and Global
Mirror capabilities create duplicate copies of application data at remote sites. High-speed
data transfers help to back up data for rapid retrieval.
Copy Services
Copy Services is a collection of functions that provides disaster recovery, data migration, and
data duplication functions. Copy Services runs on the DS6000 and DS8000 series and
supports open systems and zSeries environments.
Copy Services functions also are supported on the previous generation of storage systems,
the IBM TotalStorage Enterprise Storage Server.
For information about copy services, see 8.14, ESS copy services on page 469.
(Figure: TotalStorage Expert, running under Windows XP or AIX and accessed from a Web
browser such as Netscape or Internet Explorer, centrally manages ESS/DS8000 disk
subsystems, VTS and Peer-to-Peer VTS, and the 3494 Library Manager over ESCON and
FICON attachments serving z/OS, UNIX, and AS/400 hosts.)
TotalStorage Expert
TotalStorage Expert is an innovative software tool that gives administrators powerful but
flexible storage asset, capacity, and performance management capabilities to centrally
manage Enterprise Storage Servers located anywhere in the enterprise.
The ESS and ETL features are licensed separately. There are also upgrade features for users
of StorWatch Expert V1 with either the ESS or the ETL feature, or both, who want to migrate to
TotalStorage Expert V2.1.1.
TotalStorage Expert is designed to augment commonly used IBM performance tools such as
Resource Measurement Facility (RMF), DFSMS Optimizer, AIX Performance Toolkit, and
similar host-based performance monitors. While these tools provide performance statistics
from the host system's perspective, TotalStorage Expert provides statistics from the ESS and
ETL system perspective.
The ESS is ideal for businesses with multiple heterogeneous servers, including zSeries,
UNIX, Windows NT, Windows 2000, Novell NetWare, HP/UX, Sun Solaris, and AS/400
servers.
With Version 2.1.1, the TotalStorage ESS Expert is packaged with the TotalStorage ETL
Expert. The ETL Expert provides performance, asset, and capacity management for the three
IBM ETL solutions:
IBM TotalStorage Enterprise Automated Tape Library, described in IBM TotalStorage
Enterprise Automated Tape Library 3494 on page 495.
IBM TotalStorage Virtual Tape Server, described in Introduction to Virtual Tape Server
(VTS) on page 497.
IBM TotalStorage Peer-to-Peer Virtual Tapeserver, described in IBM TotalStorage
Peer-to-Peer VTS on page 499.
Both tools can run on the same server, share a common database, efficiently monitor storage
resources from any location within the enterprise, and provide a similar look and feel through
a Web browser user interface. Together they provide a complete solution that helps optimize
the potential of IBM disk and tape subsystems.
Tape volumes
The term tape refers to volumes that can be physically moved. You can store only sequential
data sets on tape. Tape volumes can be sent to a safe location, such as a vault, or to other
data processing centers.
Internal labels are used to identify magnetic tape volumes and the data sets on those
volumes. You can process tape volumes with:
IBM standard labels
Labels that follow standards published by:
International Organization for Standardization (ISO)
American National Standards Institute (ANSI)
Federal Information Processing Standard (FIPS)
Nonstandard labels
No labels
Note: Your installation can install a bypass for any type of label processing; however, the
use of labels is recommended as a basis for efficient control of your data.
IBM standard tape labels consist of volume labels and groups of data set labels. The volume
label, identifying the volume and its owner, is the first record on the tape. The data set labels,
which precede and follow each data set on the volume, identify and describe that data set.
Usually, the formats of ISO and ANSI labels, which are defined by the respective
organizations, are similar to the formats of IBM standard labels.
Nonstandard tape labels can have any format and are processed by routines you provide.
Unlabeled tapes contain only data sets and tape marks.
(Figure: tape organization. An IBM standard-labeled tape contains the IBM standard volume
label and data set header labels, a tape mark (TM), the data set, a TM, the IBM standard data
set trailer labels, and closing tape marks. An unlabeled tape contains only the data set
followed by tape marks.)
Other parameters of the DD statement identify the data set, give volume and unit information
and volume disposition, and describe the data set's physical attributes. You can use a data
class to specify all of your data set's attributes (such as record length and record format), but
not data set name and disposition. Specify the name of the data class using the JCL keyword
DATACLAS. If you do not specify a data class, the automatic class selection (ACS) routines
assign a data class based on the defaults defined by your storage administrator.
An example of allocating a tape data set using DATACLAS in the DD statement of the JCL
statements follows. In this example, TAPE01 is the name of the data class.
//NEW      DD  DSN=DATASET.NAME,UNIT=TAPE,DISP=(,CATLG,DELETE),
//             DATACLAS=TAPE01,LABEL=(1,SL)
AL     ISO/ANSI/FIPS labels.
BLP    Bypass label processing. The data is treated in the same manner as though NL had been
       specified, except that the system does not check for an existing volume label. The user is
       responsible for positioning. If your installation does not allow BLP, the data is treated
       exactly as though NL had been specified. Your job can use BLP only if the Job Entry
       Subsystem (JES), through job class, RACF, through the TAPEVOL class, or DFSMSrmm
       allows it.
LTM    Bypass a leading tape mark, if encountered, on unlabeled tapes from VSE.
Note: If you do not specify the label type, the operating system assumes that the data set
has IBM standard labels.
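As a hedged illustration (not from this book), the following DD statements show one possible coding of the BLP and NL label types; the DD names, data set names, and volume serial are hypothetical.
//* Read the first file of a tape, bypassing label processing (BLP)
//OLDTAPE  DD  DSN=ANY.NAME,UNIT=TAPE,DISP=OLD,
//             VOL=SER=T00001,LABEL=(1,BLP)
//* Write a data set to an unlabeled (NL) scratch tape
//NLOUT    DD  DSN=OUT.DATA,UNIT=TAPE,DISP=(NEW,KEEP),
//             LABEL=(1,NL)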
(Figure: native cartridge capacities: 3480 = 200 MB, 3490 = 800 MB, 3590 = 10,000 MB,
3592 = 300,000 MB.)
Tape capacity
The capacity of a tape depends on the device type that records it. 3480 and 3490 tapes are
physically the same cartridges. The IBM 3590 and 3592 high performance cartridge tapes are
not compatible with 3480, 3490, or 3490E drives. 3490 units can read 3480 cartridges, but
cannot record as a 3480, and 3480 units cannot read or write as a 3490.
Tape mount management allows you to efficiently fill a tape cartridge to its capacity and gain
full benefit from improved data recording capability (IDRC) compaction, 3490E Enhanced
Capability Magnetic Tape Subsystem, 36-track enhanced recording format, and Enhanced
Capacity Cartridge System Tape. By filling your tape cartridges, you reduce your tape mounts
and even the number of tape volumes you need.
With an effective tape cartridge capacity of 2.4 GB using 3490E and the Enhanced Capacity
Cartridge System Tape, DFSMS can intercept all but extremely large data sets and manage
them with tape mount management. By implementing tape mount management with DFSMS,
you might reduce your tape mounts by 60% to 70% with little or no additional hardware.
Tape mount management also improves job throughput because jobs are no longer queued
up on tape drives. Approximately 70% of all tape data sets queued up on drives are less than
10 MB. With tape mount management, these data sets reside on DASD while in use. This
frees up the tape drives for other allocations.
Tape mount management recommends that you use DFSMShsm to do interval migration to
SMS storage groups. You can use ACS routines to redirect your tape data sets to a tape
mount management DASD buffer storage group. DFSMShsm scans this buffer on a regular
basis and migrates the data sets to migration level 1 DASD or migration level 2 tape as soon
as possible, based on the management class and storage group specifications.
Table 8-5 lists all IBM tape capacities supported since 1952.
For further information about tape processing, see z/OS DFSMS Using Magnetic Tapes,
SC26-7412.
The IBM 3592 tape drive can be used as a standalone solution or as an automated solution
within a 3494 tape library.
Improved environmentals
Because the 3592 has a smaller form factor than the 3590 Magstar drives, you can put two
3592 drives in place of one 3590 drive in the 3494. In a stand-alone solution, you can put a
maximum of 12 drives into one 19-inch rack, managed by one controller.
Various solutions providing tape automation, including the following, are available:
The Automatic Cartridge Loader on IBM 3480 and 3490E tape subsystems provides quick
mounts of scratch volumes (volumes with no valued data, used for output).
The Automated Cartridge Facility on the Magstar 3590 tape subsystem, working with
application software, can provide a 10-cartridge mini-tape library.
The IBM 3494, an automated tape library dataserver, is a device consisting of robotics
components, cartridge storage areas (or shelves), tape subsystems, and controlling
hardware and software, together with the set of tape volumes that reside in the library and
can be mounted on the library tape drives.
The Magstar Virtual Tape Server (VTS) provides volume stacking capability and exploits
the capacity and bandwidth of Magstar 3590 technology.
VTS models:
Model B10 VTS
Model B20 VTS
Peer-to-Peer (PtP) VTS (up to twenty-four 3590
tape drives)
VTS design (single VTS)
32, 64, 128 or 256 3490E virtual devices
Tape volume cache:
Analogous to DASD cache
Data access through the cache
Dynamic space management
Cache hits eliminate tape mounts
Up to twelve 3590 tape drives (the real 3590 volume
contains up to 250,000 virtual volumes per VTS)
Stacked 3590 tape volumes managed by the 3494
VTS introduction
The IBM Magstar Virtual Tape Server (VTS), integrated with the IBM Tape Library
Dataservers (3494), delivers an increased level of storage capability beyond the traditional
storage products hierarchy. The host software sees VTS as a 3490 Enhanced Capability
(3490E) Tape Subsystem with associated standard (CST) or Enhanced Capacity Cartridge
System Tapes (ECCST). This virtualization of both the tape devices and the storage media to
the host allows for transparent utilization of the capabilities of the IBM 3590 tape technology.
Along with the introduction of the IBM Magstar VTS, IBM introduced new views of volumes
and devices, because the host system and the hardware now have different knowledge of
volumes and devices. Using a VTS subsystem, the host application writes tape data to virtual
devices. The volumes created by the hosts are called virtual volumes; they are physically
stored in a tape volume cache that is built from RAID DASD.
VTS models
These are the IBM 3590 drives you can choose:
For the Model B10 VTS, four, five, or six 3590-B1A/E1A/H1A can be associated with VTS.
For the Model B20 VTS, six to twelve 3590-B1A/E1A/H1A can be associated with VTS.
Each ESCON channel in the VTS is capable of supporting 64 logical paths, providing up to
1024 logical paths for Model B20 VTS with sixteen ESCON channels, and 256 logical paths
for Model B10 VTS with four ESCON channels. Each logical path can address any of the 32,
64, 128, or 256 virtual devices in the Model B20 VTS.
VTS design
VTS looks like an automatic tape library with thirty-two 3490E drives and 50,000 volumes in
37 square feet. Its major components are:
Magstar 3590 (three or six tape drives) with two ESCON channels
Magstar 3494 Tape Library
Fault-tolerant RAID-1 disks (36 GB or 72 GB)
RISC Processor
VTS functions
VTS provides the following functions:
Thirty-two 3490E virtual devices.
Tape volume cache (implemented in a RAID-1 disk) that contains virtual volumes.
The tape volume cache consists of a high performance array of DASD and storage
management software. Virtual volumes are held in the tape volume cache when they are
being used by the host system. Outboard storage management software manages which
virtual volumes are in the tape volume cache and the movement of data between the tape
volume cache and physical devices. The size of the DASD is made large enough so that
more virtual volumes can be retained in it than just the ones currently associated with the
virtual devices.
After an application modifies and closes a virtual volume, the storage management
software in the system makes a copy of it onto a physical tape. The virtual volume remains
available on the DASD until the space it occupies reaches a predetermined threshold.
Leaving the virtual volume in the DASD allows for fast access to it during subsequent
requests. The DASD and the management of the space used to keep closed volumes
available is called tape volume cache. Performance for mounting a volume that is in tape
volume cache is quicker than if a real physical volume is mounted.
Up to six 3590 tape drives; the real 3590 volumes contain the logical volumes. The
installation sees up to 50,000 volumes.
Stacked 3590 tape volumes managed by the 3494. The VTS fills each tape cartridge up to
100%: by putting multiple virtual volumes into a stacked volume, it uses all of the available
space on the cartridge. VTS uses IBM 3590 cartridges when stacking volumes.
VTS is expected to provide a ratio of 59:1 in volume reduction, with dramatic savings in all
tape hardware items (drives, controllers, and robots).
(Figure: Peer-to-Peer VTS configuration, showing two interconnected VTSs, a master VTS and
an I/O VTS, with virtual tape controllers (VTCs), FICON/ESCON attachment to zSeries hosts,
the distributed libraries, and the UI library.)
Peer-to-Peer VTS
IBM TotalStorage Peer-to-Peer Virtual Tape Server, an extension of IBM TotalStorage Virtual
Tape Server, is specifically designed to enhance data availability. It accomplishes this by
providing dual volume copy, remote functionality, and automatic recovery and switchover
capabilities. With a design that reduces single points of failure (including the physical media
where logical volumes are stored), IBM TotalStorage Peer-to-Peer Virtual Tape Server
improves system reliability and availability, as well as data access. To help protect current
hardware investments, existing IBM TotalStorage Virtual Tape Servers can be upgraded for
use in this new configuration.
IBM TotalStorage Peer-to-Peer Virtual Tape Server consists of new models and features of
the 3494 Tape Library that are used to join two separate Virtual Tape Servers into a single,
interconnected system. The two virtual tape systems can be located at the same site or at
separate sites that are geographically remote. This provides a remote copy capability for
remote vaulting applications.
IBM TotalStorage Peer-to-Peer Virtual Tape Server appears to the host IBM eServer zSeries
processor as a single automated tape library with 64, 128, or 256 virtual tape drives and up to
500,000 virtual volumes. The configuration of this system has up to 3.5 TB of Tape Volume
Cache native (10.4 TB with 3:1 compression), up to 24 IBM TotalStorage 3590 tape drives,
and up to 16 host ESCON or FICON channels.
Figure 8-29 Storage area network (SAN)
SANs today are usually built using Fibre Channel technology, but the concept of a SAN is
independent of the underlying type of network.
Today, zSeries has 2 Gbps link data rate support. The 2 Gbps links are available for native
FICON, FICON CTC, cascaded directors, and Fibre Channel (FCP) channels on the FICON
Express cards of the z800, z900, and z990 only.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
Other publications
These publications are also relevant as further information sources:
z/OS DFSMStvs Administration Guide, GC26-7483
Device Support Facilities User's Guide and Reference, Release 17, GC35-0033
z/OS MVS Programming: Assembler Services Guide, SA22-7605
z/OS MVS System Commands, SA22-7627
z/OS MVS System Messages, Volume 1 (ABA-AOM), SA22-7631
DFSMS Optimizer User's Guide and Reference, SC26-7047
z/OS DFSMStvs Planning and Operating Guide, SC26-7348
z/OS DFSMS Access Method Services for Catalogs, SC26-7394
z/OS DFSMSdfp Storage Administration Reference, SC26-7402
z/OS DFSMSrmm Guide and Reference, SC26-7404
z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405
z/OS DFSMS Implementing System-Managed Storage, SC26-7407
z/OS DFSMS: Managing Catalogs, SC26-7409
z/OS DFSMS: Using Data Sets, SC26-7410
z/OS DFSMS: Using the Interactive Storage Management Facility, SC26-7411
z/OS DFSMS: Using Magnetic Tapes, SC26-7412
z/OS DFSMSdfp Utilities, SC26-7414
z/OS Network File System Guide and Reference, SC26-7417
DFSORT Getting Started with DFSORT R14, SC26-4109
DFSORT Installation and Customization Release 14, SC33-4034
z/OS DFSMShsm Storage Administration Guide, SC35-0421
Online resources
These Web sites and URLs are also relevant as further information sources:
For articles, online books, news, tips, techniques, examples, and more, visit the z/OS
DFSORT home page:
http://www-1.ibm.com/servers/storage/support/software/sort/mvs
DFSMS
Data set basics
SMS
Storage management software and hardware
The ABCs of z/OS System Programming is a thirteen-volume collection that provides an
introduction to the z/OS operating system and the hardware architecture. Whether you are a
beginner or an experienced system programmer, the ABCs collection provides the information
that you need to start your research into z/OS and related subjects. The ABCs collection
serves as a powerful technical tool to help you become more familiar with z/OS in your current
environment, or to help you evaluate platforms to consolidate your e-business applications.
This edition is updated to z/OS Version 1 Release 11.