IBM z14 (3906) Technical Guide
Octavian Lascu
Hervey Kamga
Esra Ufacik
Bo Xu
John Troy
Frank Packheiser
Michal Kordyzon
Redbooks
International Technical Support Organization
October 2018
SG24-8451-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page xiii.
This edition applies to IBM Z®: IBM z14™, IBM z13™, IBM z13s™, IBM zEnterprise EC12 (zEC12), and IBM zEnterprise BC12 (zBC12).
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
3.10.3 VFM administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.1.4 Temporary upgrades. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
8.2 Concurrent upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
8.2.1 Model upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
8.2.2 Customer Initiated Upgrade facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
8.2.3 Concurrent upgrade functions summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
8.3 Miscellaneous equipment specification upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
8.3.1 MES upgrade for processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
8.3.2 MES upgrades for memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.3.3 MES upgrades for I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
8.3.4 Feature on Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
8.3.5 Summary of plan-ahead features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
8.4 Permanent upgrade through the CIU facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
8.4.1 Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
8.4.2 Retrieval and activation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
8.5 On/Off Capacity on Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8.5.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
8.5.2 Capacity Provisioning Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
8.5.3 Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
8.5.4 On/Off CoD testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
8.5.5 Activation and deactivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8.5.6 Termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
8.5.7 z/OS capacity provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
8.6 Capacity for Planned Event. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.7 Capacity Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
8.7.1 Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
8.7.2 CBU activation and deactivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
8.7.3 Automatic CBU enablement for GDPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
8.8 Nondisruptive upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
8.8.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
8.8.2 Concurrent upgrade considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
8.9 Summary of Capacity on-Demand offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
11.5.14 Cryptographic support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
11.5.15 Installation support for z/VM that uses the HMC. . . . . . . . . . . . . . . . . . . . . . . 445
11.5.16 Dynamic Partition Manager. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
Trademarks

The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Bluemix®, CICS®, Cognitive Era®, Cognos®, DB2®, Db2®, Distributed Relational Database Architecture™, DS8000®, ECKD™, FICON®, FlashCopy®, GDPS®, Geographically Dispersed Parallel Sysplex™, Global Technology Services®, HiperSockets™, HyperSwap®, IA®, IBM®, IBM Cloud™, IBM LinuxONE™, IBM Z®, IBM z Systems®, IBM z13®, IBM z13s®, IBM z14™, Interconnect®, Language Environment®, MVS™, OMEGAMON®, Parallel Sysplex®, Passport Advantage®, PowerPC®, PR/SM™, Processor Resource/Systems Manager™, RACF®, Redbooks®, Redbooks (logo)®, Resource Link®, Resource Measurement Facility™, RMF™, S/390®, System Storage®, System z®, System z10®, System z9®, VIA®, VTAM®, WebSphere®, z Systems®, z/Architecture®, z/OS®, z/VM®, z/VSE®, z10™, z13®, z13s®, z9®, zEnterprise®
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface

This IBM® Redbooks® publication describes the new member of the IBM Z® family, IBM z14™. IBM z14 is the trusted enterprise platform for pervasive encryption, integrating data, transactions, and insights.
IBM z14 servers are designed with improved scalability, performance, security, resiliency, availability, and virtualization. The superscalar design allows z14 servers to deliver a record level of capacity over prior IBM Z platforms. In its maximum configuration, z14 is powered by up to 170 client-characterizable microprocessors (cores) running at 5.2 GHz. This configuration can deliver more than 146,000 million instructions per second (MIPS) and is supported by up to 32 TB of client memory. The IBM z14 Model M05 is estimated to provide up to 35% more total system capacity than the IBM z13® Model NE1.
This Redbooks publication provides information about IBM z14 and its functions, features, and associated software support. More information is offered in areas that are relevant to technical planning. It is intended for systems engineers, consultants, planners, and anyone who wants to understand the functions of IBM Z servers and plan for their use. It is not intended as an introduction to mainframes. Readers are expected to be generally familiar with existing IBM Z technology and terminology.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, Poughkeepsie Center.
Octavian Lascu is a Senior IT Consultant for IBM Romania with over 25 years of experience.
He specializes in designing, implementing, and supporting complex IT infrastructure
environments (systems, storage, and networking), including high availability and disaster
recovery solutions and high-performance computing deployments. He has developed
materials for and taught workshops for technical audiences around the world for the past 19
years. He has written several IBM publications.
Hervey Kamga is an IBM Z Product Engineer with the EMEA I/O Connectivity Team in Montpellier, France. Before serving in his current role, he was a Support Engineer and Engineer On Site for 13 years with Sun Microsystems and Oracle in EMEA. Hervey’s areas of expertise include Oracle Solaris (operating system and hardware products), virtualization (VMware, VirtualBox), Linux (Ubuntu), and IBM Z I/O features and protocols (IBM FICON® and OSA).
John Troy is an IBM Z and storage hardware National Top Gun in the northeast area of the
United States. He has 35 years of experience in the service field. His areas of expertise
include IBM Z servers and high-end storage systems technical and customer support. John
has been an IBM Z hardware technical support course designer, developer, and instructor for
the last six generations of IBM high-end servers.
Frank Packheiser is a Senior zIT Specialist at the Field Technical Sales Support office in Germany. He has 27 years of experience with the IBM Z platform. Frank has worked for 10 years in
the IBM Education Center in Germany, developing and providing professional training. He
also provides professional services to IBM Z and mainframe clients. In 2008 and 2009, Frank
supported clients in Middle East/North Africa (MENA) as a zIT Architect. In addition to
co-authoring several IBM Redbooks publications since 1999, he has been an official ITSO
presenter at ITSO workshops for the last four years.
Michal Kordyzon is an IBM Z Client Technical Specialist at IBM Poland with 12 years of experience with the IBM Z platform. He also has expertise in LinuxONE. His other areas of expertise include mainframe architecture, Linux systems, Hyperledger Fabric, Node.js, and machine learning algorithms.
Thanks to the following people for their contributions to this project:

Patty Driever
Dave Surman
Harry Yudenfriend
Ellen Carbarnes
Diana Henderson
Robert Haimowitz
Anthony Saporito
Garth Godfrey
Darelle Gent
Parwez Hamid
Gary King
Jeff Kubala
Philip Sciuto
Rhonda Sundlof
Barbara Weiler
IBM Poughkeepsie
Barbara Sannerud
IBM White Plains
Monika Zimmermann
Walter Niklaus
Carl Mayer
Angel Nunes Mencias
IBM Germany
Now you can become a published author, too!

Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Information Technology (IT) today is experiencing a time of exponential growth in data and
transaction volumes that are driven by digital transformation. New dynamics in the market
provide opportunities for businesses to grab market share and win. As companies struggle to
meet the demands of the new IT, which is dominated by cloud, analytics, mobile, social, and
IoT, IT leaders are now being challenged to add value by opening their enterprises to new
ways of doing business.
In this challenging landscape, businesses must manage, store, protect, and, most importantly, use their information to gain competitive advantage. This need creates the demand to apply intelligence and insight to the data to build new services that are wrapped in a customized user experience.
By creating applications that provide intelligent user experiences, companies can provide value to their partners and clients, which ultimately preserves loyalty. Most of all, to succeed, organizations must provide their users with the peace of mind that, no matter what device is being used, their data is protected.
Organizations in every industry and sector must secure that growing data and comply with
increasingly intricate regulations. This operational environment increases the pressure on IT
to securely deliver services and support on time and on budget. By encrypting as much of
your data and transactional pipeline as possible, you can reduce potential data breach risks
and financial losses, and comply with complex regulatory mandates.
Cryptography has always been in the DNA of the IBM Z family. The IBM z14 continues that tradition with pervasive encryption to defend and protect your critical assets with unrivaled encryption and intelligent data monitoring without compromising transactional throughput or response times.
Most importantly, this pervasive encryption requires no application changes. Pervasive encryption can dramatically simplify data protection and reduce the costs of regulatory compliance. By using simple policy controls, z14 pervasive encryption streamlines data protection for mission-critical IBM Db2® for z/OS, IBM IMS, and Virtual Storage Access Method (VSAM) data sets.
The Central Processor Assist for Cryptographic Function (CPACF), which is standard on
every core, supports pervasive encryption and provides hardware acceleration for encryption
operations. The new Crypto Express6S gets a performance boost on z14. Combined, these
enhancements perform encryption more efficiently on the z14 than on earlier IBM Z servers.
The IBM z14 was designed specifically to meet the demand for new services and customer experiences, while securing the growing amounts of data and complying with increasingly intricate regulations. With up to 170 configurable cores, z14 has a performance and scaling advantage over prior generations and 31% more capacity than the 141-way z13.
The new FICON Express16S+ delivers an increase in I/O rates and link bandwidth, and a reduction in single-stream latency, which gives the system the ability to absorb large application and transaction spikes driven by unpredictable mobile and IoT devices.
1. For more information, see the Technology in Action page of the IBM website.
With up to 32 TB of memory, z14 can open opportunities, such as in-memory data marts and in-memory analytics, while giving you the necessary room to tune applications for optimal performance. The Vector Packed Decimal Facility allows packed decimal operations to be performed in registers rather than in memory, and new fast mathematical computations are available. Compilers and optimizers, such as Enterprise COBOL for z/OS V6.2, Enterprise PL/I for z/OS V5.2, z/OS V2.3 XL C/C++, the COBOL optimizer, Automatic Binary Optimizer for z/OS V1.3, and Java, are optimized on z14 to use these capabilities. These compilers and optimizers are designed to improve application performance, reduce CPU usage, and reduce operating costs. Java improvements and the use of crypto acceleration deliver more improvements in throughput per core, which gives a natural boost to z/OS Connect EE, IBM WebSphere® Liberty in IBM CICS®, Spark for z/OS, and IBM Java for Linux on Z.
Smoothly handling the data tsunami requires a robust infrastructure that is designed specifically for high-volume data transactions. To take advantage of new unstructured data, businesses on IBM Z can use application programming interfaces (APIs) that can help with creating and delivering innovative new services.
Linux on IBM Z, which is optimized for open source software, brings more value to the platform. Linux on IBM Z supports a wealth of new products that are familiar to application developers, such as Python, Scala, Spark, MongoDB, PostgreSQL, and MariaDB. Access to data that was previously unavailable, without the need for extract, transform, and load (ETL) processing, allows for the development of intelligent transactions and intuitive business processes.
As your business technology needs evolve to compete in today’s digital economy, IBM stands
ready to help with intelligent, robust, and comprehensive technology solutions. The IBM
approach integrates server, software, and storage solutions to ensure that each member of
the stack is designed and optimized to work together. The new IBM z14™ leads that
approach by delivering the power and speed users demand, the security users and regulators
require, and the operational efficiency that maximizes your bottom line.
Terminology: The remainder of this book uses the designation CPC to refer to the central
processor complex.
2. This data accounts for 80% of all data that is generated today and is expected to grow to over 93% by 2020.
The superscalar processor implements second-generation simultaneous multithreading (SMT), which is now enabled for System Assist Processors (SAPs). It also implements redesigned caches and a redesigned translation lookaside buffer (TLB), an optimized pipeline, and better branch prediction. Also featured is an expanded instruction set with the Vector Packed Decimal Facility, Guarded Storage Facility, Vector Facility enhancements, Semaphore Assist Facility, Order Preserving Compression, and Entropy Encoding for co-processor compression, which improve performance in several different areas.
Depending on the model, the z14 server can support 256 GB - 32 TB of usable memory, with
up to 8 TB of usable memory per CPC drawer. In addition, a fixed amount of 192 GB is
reserved for the hardware system area (HSA) and is not part of customer-purchased memory.
Memory is implemented as a redundant array of independent memory (RAIM) and uses extra
physical memory as spare memory. The RAIM function accounts for 20% of the physical
installed memory in each CPC drawer.
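As a rough worked relationship implied only by the figures quoted above (the 20% RAIM overhead and the fixed 192 GB HSA):

addressable memory ≈ physical installed memory × (1 - 0.20)
customer-usable memory ≈ addressable memory - 192 GB (HSA)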
New with z14, the Virtual Flash Memory (VFM) feature is allocated from the main memory capacity in 1.5 TB units and replaces the Flash Express adapters, which were available on the zEC12 and z13. VFM provides much simpler management and better performance by eliminating the I/O to the adapters in the PCIe I/O drawers. VFM does not require any application changes when moving from IBM Flash Express.
The increased performance and the total system capacity that is available (with possible energy savings) allow consolidating diverse applications on a single platform with significant financial savings. The introduction of new technologies and an expanded and enhanced instruction set ensure that the z14 server is a high-performance, reliable, and highly secure platform. The z14 server is designed to maximize the use of resources and allows you to integrate and consolidate applications and data across the enterprise IT infrastructure.
z14 servers are offered in five models, with 1 - 170 configurable PUs. Models M01, M02, M03, and M04 have up to 41 PUs per CPC drawer. The high-capacity model (the M05) has four processor (CPC) drawers with 49 PUs per drawer. Model M05 is estimated to provide up to 35% more total system capacity than the z13 Model NE1, with the same memory and power requirements. With up to 32 TB of main storage and enhanced SMT, the z14 processors deliver considerably improved performance. Uniprocessor performance also increased significantly. A z14 Model 701 offers an average performance improvement of 10% over the z13 Model 701.
3. FinFET is the industry solution; SOI is the IBM solution for SER.
4. Simultaneous multithreading is two threads per core.
5. Observed performance increases vary depending on the workload type.
The z14 server expands the subcapacity settings, offering three subcapacity levels (in models
4xx, 5xx and 6xx) for up to 33 processors that are characterized as CPs (compared to up to
30 for z13). This configuration gives a total of 269 distinct capacity settings. The z14 servers
deliver scalability and granularity to meet the needs of medium-sized enterprises, while also
satisfying the requirements of large enterprises that have demanding, mission-critical
transaction and data processing requirements.
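The count of 269 settings follows directly from the numbers quoted in this paragraph:

170 full-capacity settings + (3 subcapacity levels × 33 CPs) = 170 + 99 = 269 distinct capacity settings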
These capacity comparisons are based on the Large System Performance Reference (LSPR) mixed workload analysis. For more information about performance and workload variation on z14 servers, see Chapter 12, “Performance” on page 447.
z14 servers continue to offer all the specialty engines that are available on z13.
Workload variability
Consult the LSPR when considering performance on z14 servers. The range of performance ratings across the individual LSPR workloads is likely to show a large spread. Greater performance variation among individual logical partitions (LPARs) can be expected when an increased number of partitions and more PUs are available. For more information, see Chapter 12, “Performance” on page 447.
For more information about millions of service units (MSUs) ratings, see the IBM Z Software
Contracts website.
Capacity on demand
Capacity on demand (CoD) enhancements enable clients to have more flexibility in managing
and administering their temporary capacity requirements. The z14 server supports the same
architectural approach for CoD offerings as the z13 (temporary or permanent). Within the z14
server, one or more flexible configuration definitions can be available to solve multiple
temporary situations, and multiple capacity configurations can be active simultaneously.
Up to 200 staged records can be created to handle many scenarios. Up to eight of these
records can be installed on the server at any time. After the records are installed, the
activation of the records can be done manually, or the z/OS Capacity Provisioning Manager
can automatically start the activation when Workload Manager (WLM) policy thresholds are
reached. Tokens are available that can be purchased for On/Off CoD before or after workload
execution (pre- or post-paid).
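The interplay between WLM policy thresholds and record activation can be pictured with a small, purely illustrative sketch. The class, method, and threshold names below are hypothetical stand-ins for the real Capacity Provisioning Manager and WLM interfaces, which are not described here; the sketch only shows the shape of the control loop (activate an installed temporary record when a policy threshold is exceeded, and deactivate it when demand subsides):

// Illustrative only: a toy control loop in the spirit of the z/OS Capacity
// Provisioning Manager activating an installed On/Off CoD record when a
// (hypothetical) WLM policy threshold is exceeded. Not a real CPM API.
public class CapacityProvisioningSketch {

    static final double ACTIVATE_THRESHOLD = 0.90;   // assumed policy value
    static final double DEACTIVATE_THRESHOLD = 0.60; // assumed policy value

    private boolean temporaryRecordActive = false;

    /** Called periodically with an observed utilization value (0.0 - 1.0). */
    public void evaluate(double cpuUtilization) {
        if (!temporaryRecordActive && cpuUtilization > ACTIVATE_THRESHOLD) {
            activateInstalledRecord();      // one of the up-to-eight installed records
            temporaryRecordActive = true;
        } else if (temporaryRecordActive && cpuUtilization < DEACTIVATE_THRESHOLD) {
            deactivateRecord();             // capacity is returned when no longer needed
            temporaryRecordActive = false;
        }
    }

    private void activateInstalledRecord() {
        System.out.println("Activating temporary capacity record (On/Off CoD)");
    }

    private void deactivateRecord() {
        System.out.println("Deactivating temporary capacity record");
    }

    public static void main(String[] args) {
        CapacityProvisioningSketch cpm = new CapacityProvisioningSketch();
        for (double utilization : new double[] {0.70, 0.95, 0.92, 0.55}) {
            cpm.evaluate(utilization);
        }
    }
}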
LPAR capping
IBM Processor Resource/Systems Manager™ (IBM PR/SM™) offers different options to limit
the amount of capacity that is assigned to and used by an LPAR or a group of LPARs. By
using the Hardware Management Console (HMC), a user can define an absolute or a relative
capping value for LPARs that are running on the system.
z14 servers support z/Architecture mode only, which can be initialized in LPAR mode (also
known as PR/SM) or Dynamic Partition Manager (DPM) mode.
PR/SM mode
PR/SM is Licensed Internal Code (LIC) that manages and virtualizes all the installed and
enabled system resources as a single large symmetric multiprocessor (SMP) system. This
virtualization enables full sharing of the installed resources with high security and efficiency.
LPAR configurations can be dynamically adjusted to optimize the virtual servers’ workloads.
z14 servers provide improvements to the PR/SM HiperDispatch function. HiperDispatch
provides alignment of logical processors to physical processors that ultimately improves
cache utilization, minimizes inter-CPC drawer communication, and optimizes operating
system work dispatching, which combined results in increased throughput. For more
information, see “HiperDispatch” on page 93.
HiperSockets
z14 servers support defining up to 32 IBM HiperSockets™. HiperSockets provide for
memory-to-memory communication across LPARs without the need for any I/O adapters and
have virtual LAN (VLAN) capability.
IBM Z servers also offer other virtual appliance-based solutions and support the following hypervisors and container technology:
IBM GDPS® Virtual Appliance
KVM for IBM Z
Docker Enterprise Edition for Linux on IBM Systems
For example, in a z/VM-mode LPAR, z/VM can manage Linux on IBM Z guests that are
running on IFL processors while also managing z/VSE and z/OS guests on CPs. It also
allows z/OS to fully use zIIPs.
SSC can be used to create isolated partitions for protecting data and applications
automatically, which helps keep them safe from insider threats and external cyber criminals.
SSC offers the following benefits:
Streamline the IBM Z Application experience so it is comparable to installing an
application on a mobile device.
Deploy an appliance in minutes instead of days.
Protect the workload from being accessed by a sysadmin or external attacker.
8. IBM HSBN is a cloud service plan that is available on IBM Bluemix® for Blockchain.
9. For more information, see the IBM to Deliver Docker Enterprise Edition for Linux on IBM Systems topic of the IBM News releases website.
The z/VSE Network Appliance is an extension of the z/VSE - z/VM IP Assist (IBM VIA®)
function that was introduced on z114 and z196 servers. VIA provides network access for
TCP/IP socket applications that run on z/VSE as a z/VM guest. With the new z/VSE Network
Appliance, this function is available for z/VSE systems that are running in an LPAR. The
z/VSE Network Appliance is provided as a downloadable package that can then be deployed
with the SSC Installer and Loader.
The VIA function is available for z/VSE systems that run as z/VM guests. The z/VSE Network Appliance is available for z/VSE systems that run without z/VM in LPARs. Both functions provide network access for TCP/IP socket applications that use the Linux Fast Path (LFP) without requiring a TCP/IP stack on the z/VSE system or installing Linux on IBM Z.
IBM zAware: With Announcement Letter 916-201, dated November 1, 2016, IBM changed how IBM System z Advanced Workload Analysis Reporter (IBM zAware) is delivered. IBM zAware was available as a firmware feature on zEC12, zBC12, z13, and z13s. It is now offered as a software feature with IBM Operations Analytics for Z.
IBM Operations Analytics for Z brings new capabilities and functions to the product, and maintenance can be applied on the user's schedule rather than being tied to IBM Z firmware updates. Integration with the IBM Operations Analytics for Z Problem Insights dashboard eliminates the need for tedious searching through volumes of operational data, which puts key operational issues at your fingertips. New functions, such as proactive outage avoidance with email alerts, improve the users' ability to respond to identified anomalies.
Enterprises should migrate from HCA3-O and HCA3-O LR adapters to ICA SR or Coupling Express Long Reach (CE LR) adapters on z14, z13, and z13s. For high-speed, short-range coupling connectivity, enterprises should migrate to the Integrated Coupling Adapter (ICA SR).
For a four CPC drawer system, up to 40 PCIe and 16 InfiniBand fanout slots can be
configured for data communications between the CPC drawers and the I/O infrastructure, and
for coupling. The multiple channel subsystem (CSS) architecture allows up to six CSSs, each
with 256 channels.
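An upper bound on channel definitions follows from these two numbers:

6 CSSs × 256 channels per CSS = 1,536 channels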
For I/O constraint relief, four subchannel sets are available per CSS, which allows access to many logical volumes. The fourth subchannel set allows extending the amount of addressable external storage for Parallel Access Volumes (PAVs), Peer-to-Peer Remote Copy (PPRC) secondary devices, and IBM FlashCopy® devices. In addition to subchannel set 0 (SS0), z14 supports Initial Program Load (IPL) from subchannel set 1 (SS1), subchannel set 2 (SS2), or subchannel set 3 (SS3). For more information, see “Initial program load from an alternative subchannel set” on page 200.
The system I/O buses use the Peripheral Component Interconnect® Express (PCIe)
technology and the InfiniBand technology, which are also used in coupling links.
IBM Z servers are designed to deliver the highest availability and lowest downtime. These qualities are recognized by various IT analysts, such as ITIC and IDC. The comprehensive, multi-layered strategy includes the following features:
Error Prevention
Error Detection and Correction
Error Recovery
With a properly configured z14 server, further reduction of outages can be attained through First Failure Data Capture (FFDC), which is designed to reduce service times, avoid subsequent errors, and improve nondisruptive replace, repair, and upgrade functions for memory, drawers, and I/O adapters. In addition, z14 servers extend the nondisruptive capability to download and install LIC updates.
IBM z14™ RAS features provide unique high-availability and nondisruptive operational capabilities that differentiate IBM Z servers in the marketplace. z14 RAS enhancements are made on many components of the CPC (processor chip, memory subsystem, I/O, and service) in areas such as error checking, error protection, failure handling, faster repair capabilities, sparing, and cooling.
The ability to cluster multiple systems in a Parallel Sysplex takes the commercial strengths of
the z/OS platform to higher levels of system management, scalable growth, and continuous
availability.
11. For more information, see the ITIC Global Server Hardware, Server OS Reliability Report.
12. For more information, see Quantifying the Business Value of IBM Z.
1.3.1 Models
The IBM z14 server has a machine type of 3906. Five models are offered: M01, M02, M03,
M04, and M05. The model name indicates the number of CPC drawers for models
M01 - M04. Model M05 also has four CPC drawers, but with more PUs per drawer than models
M01 - M04. A PU is the generic term for the IBM z/Architecture processor unit (processor
core) on the CP SCM.
On z14 servers, some PUs are part of the system base; that is, they are not part of the PUs that can be purchased by clients. The system base includes the following PUs:
System assist processor (SAP) that is used by the channel subsystem. The number of
predefined SAPs depends on the z14 model.
One integrated firmware processor (IFP). The IFP is used in support of select features,
such as zEDC and RoCE Express.
Two spare PUs that can transparently assume any characterization during a permanent
failure of another PU.
The PUs that clients can purchase can assume any of the following characteristics:
CP for general-purpose use.
Integrated Facility for Linux (IFL) for the use of Linux on Z.
IBM Z Integrated Information Processor (zIIP) is designed to help free up general computing capacity and lower the overall total cost of computing for select data and transaction processing workloads.
zIIPs: At least one CP must be purchased with, or before, a zIIP can be purchased.
Clients can purchase up to two zIIPs for each purchased CP (assigned or unassigned)
on the system (2:1). However, for migrations from zEC12 with zAAPs, the ratio
(CP:zIIP) can go up to 4:1.
13. IBM zEnterprise Application Assist Processors (zAAPs) are not available on z14 servers. The zAAP workload is run on zIIPs.
The multi-CPC drawer system design provides the capability to concurrently increase the
capacity of the system in the following ways:
Add capacity by concurrently activating more CPs, IFLs, ICFs, or zIIPs on a CPC drawer.
Add a CPC drawer concurrently and activate more CPs, IFLs, ICFs, or zIIPs.
Add a CPC drawer to provide more memory, or one or more adapters to support a larger
number of I/O features.
1.3.3 Frames
z14 servers have two frames that are bolted together and are known as the A frame and the Z
frame. The frames contain the following components:
Up to four CPC drawers in Frame A
Up to five PCIe I/O drawers (up to one in Frame A and up to four in Frame Z) that hold I/O
features and special purpose features
Power supplies in Frame Z
Optional Internal Battery Feature (IBF)
Cooling units for either air or water cooling
Two System Control Hubs (SCHs) to interconnect the CPC components through Ethernet
Two 1U rack-mounted Support Elements (mounted in A frame) with their keyboards,
pointing devices, and displays mounted on a tray in the Z frame
The SCM provides a significant increase in system scalability and an extra opportunity for
server consolidation. All CPC drawers are fully interconnected by using high-speed
communication links through the L4 cache (in the SC SCM). This configuration allows the z14
server to be controlled by the PR/SM facility as a memory-coherent and cache-coherent SMP
system.
The SCMs are cooled by a cold plate that is connected to an internal water cooling loop. In an
air-cooled system, the radiator units (RUs) exchange the heat from the internal water loop
with air. The RU has N+1 availability for pumps and blowers.
The z14 server offers also a water-cooling option for increased system and data center
energy efficiency. The water cooling units (WCUs) are fully redundant in an N+1 arrangement.
Processor features
The processor core operates at 5.2 GHz. Depending on the z14 model, 41 - 196 active PUs
are available on 1 - 4 CPC drawers.
Each core on the CP SCM includes an enhanced dedicated coprocessor for data compression and cryptographic functions, which is known as the Central Processor Assist for Cryptographic Functions (CPACF). Having standard clear key cryptographic coprocessors that are integrated with the processor provides high-speed cryptography for protecting data.
Hardware data compression can play a significant role in improving performance and saving costs over performing compression in software. z14 is the fourth IBM Z generation with CMPSC, the on-chip compression coprocessor. A new compression ratio with Entropy Encoding (which uses Huffman coding) and Order Preserving Compression in z14 results in fewer CPU cycles and further compression of data, including Db2 indexes, which improves memory, transfer, and disk efficiency.
The zEDC Express feature complements the functionality of the coprocessor (CPACF). Their
functions are not interchangeable.
The micro-architecture of the core was improved in several ways to increase parallelism and pipeline efficiency. z14 cores have twice the on-chip cache per core compared to z13, to minimize memory waits while maximizing the throughput of concurrent workloads, which makes the design well suited for data serving.
z14 includes a new translation lookaside buffer (TLB2) design with four hardware-implemented translation engines, which reduces latency when compared with the single pico-coded engine on z13. Pipeline optimization brings enhancements that include improved instruction delivery, faster branch wake-up, reduced execution latency, and improved Operand Store Compare (OSC) prediction.
14. Feature code (FC) 3863 must be ordered to enable CPACF. This feature code is available for no extra fee.
15. Based on preliminary internal IBM lab measurements on a stand-alone, dedicated system in a controlled environment, compared to the z13. Results might vary.
z14 has a new decimal architecture with the Vector Enhancements Facility and the Vector Packed Decimal Facility for Data Access Accelerator. The Vector Packed Decimal Facility introduces a set of instructions that perform operations on decimal data in vector registers to improve performance. z14 offers up to 2x the throughput of vector Binary Floating Point double precision operations and RSA/ECC acceleration (Long Multiply Support).
The Vector Packed Decimal Facility allows packed decimal operations to be performed in registers rather than in memory (by using new fast mathematical computations). Compilers and optimizers, such as Enterprise COBOL for z/OS V6.2, Enterprise PL/I for z/OS V5.2, z/OS V2.3 XL C/C++, the COBOL optimizer, Automatic Binary Optimizer for z/OS V1.3, and Java, are optimized on z14.
Much of today’s commercial computing uses decimal floating point calculations, so on-core hardware decimal floating point units meet the requirements of business and user applications. This capability provides greater floating point execution throughput with improved performance and precision.
Simultaneous multithreading
z/Architecture introduced SMT support with z13. SMT allows two threads to run simultaneously in the same zIIP or IFL core, dynamically sharing processor resources, such as execution units and caches. SMT in z13 allowed a more efficient use of the core and increased capacity because, while one of the threads is waiting for a storage access (cache miss), the other thread that is running simultaneously in the core can use the shared resources rather than remain idle.
z14 introduces a new decimal architecture and a new SIMD (vector) instruction set, which are designed to boost performance for traditional workloads that use COBOL and for new applications, such as analytics. The SIMD unit in z14 now supports 32-bit floating point operations. The use of enhanced mathematical libraries, such as OpenBLAS, provides performance improvements for analytical workloads.
Out-of-order execution
As with its predecessor z13, z14 has an enhanced superscalar microprocessor with
Out-of-Order execution to achieve faster throughput. With Out-of-Order, instructions might not
run in the original program order, although results are presented in the original order. For
example, Out-of-Order allows a few instructions to complete while another instruction is
waiting. Up to six instructions can be decoded per system cycle, and up to 10 instructions can
be in execution.
z14 servers offer PCIe I/O drawers that host PCIe features. I/O drawers that were used in
previous IBM Z servers are not supported on z14.
With z14, the number of resource groups is increased from two to four to add granularity,
which helps mitigate the effect of the disruptive Resource Group Microcode Change Level
(MCL) installations. This firmware management enhancement contributes to the RAS of the
server.
During the ordering process of the native PCIe features, features of the same type are evenly
spread across the four resource groups (RG1, RG2, RG3, and RG4) for availability and
serviceability reasons. Resource groups are automatically activated when these features are
present in the CPC.
In addition to the zEDC and 10GbE RoCE Express features, the z14 introduces the following
native PCIe I/O features:
Coupling Express Long Reach (CE LR)
zHyperLink Express
25GbE and 10GbE RoCE Express2
1.3.7 I/O and special purpose features in the PCIe I/O drawer
The z14 server (new build) supports the following PCIe features that are installed in the PCIe
I/O drawers:
Storage connectivity:
– FICON Express16S+ Short Wave (SX)
– FICON Express16S+ Long Wave (LX) 10 km (6.2 miles)
– zHyperLink Express
Network connectivity:
– OSA-Express7S 25GbE Short Reach (SR)
– OSA-Express6S 10GbE Long Reach (LR)
– OSA-Express6S 10GbE Short Reach (SR)
– OSA-Express6S GbE LX
– OSA-Express6S GbE SX
– OSA-Express6S 1000BASE-T
– 25GbE RoCE Express2
– 10GbE RoCE Express2
Coupling and Server Time Protocol connectivity: Coupling Express LR
Cryptography:
– Crypto Express6S
– Regional Crypto Enablement
zEDC Express
Although they are used for coupling connectivity, the IBM Integrated Coupling Adapter (ICA
SR) and the InfiniBand coupling links HCA3-O and HCA3-O LR are other z14 supported
features that are not listed here because they are attached directly to the CPC drawer.
zHyperLink Express feature directly connects the z14 Central Processor Complex (CPC) to
the I/O Bay of the DS8880 (R8.3). This short distance (up to 150 m) direct connection is
intended to reduce I/O latency and improve storage I/O throughput.
The improved performance of zHyperLink Express allows the z14 PU to make a synchronous request for data that is in the DS8880 cache. This feature eliminates the undispatching of the running task, the queuing delays to resume the request, and the PU cache disruption.
The IBM zHyperLink Express is a two-port feature in the PCIe I/O drawer. Up to 16 features
with up to 32 zHyperLink Express ports are supported in a z14 CPC. The zHyperLink Express
feature uses PCIe Gen3 technology, with x16 lanes that are bifurcated into x8 lanes for
storage connectivity. It is designed to support a link data rate of 8 GigaBytes per second
(GBps).
16. The link data rates do not represent the performance of the links. The actual performance is dependent upon many factors, including latency through the adapters, cable lengths, and the type of workload.
FICON channels
Up to 160 features with up to 320 FICON Express16S+ channels are supported on a new-build z14. FICON Express16S+ and FICON Express16S (carry forward only) support link data rates of 4, 8, or 16 Gbps. FICON Express8S features (carry forward only) support link data rates of 2, 4, or 8 Gbps.
FICON Express16S+ offers increased performance compared to FICON Express16S, with a new IBM I/O ASIC that supports up to 3x the I/O start rate of previous FICON/FCP solutions. Another distinction for FICON Express16S+ is that both ports of a feature must be defined as the same CHPID type (no mixing of FC and FCP CHPIDs on the same feature).
OSA-Express features provide important benefits for TCP/IP traffic by reducing latency and improving throughput for standard and jumbo frames. The data router function that is present in all OSA-Express features enables further performance enhancements.
On z14, an OSA feature that is configured as an integrated console controller CHPID type
(OSC) supports the configuration and enablement of secure connections by using the
Transport Layer Security (TLS) protocol versions 1.0, 1.1, and 1.2.
Important: In fulfillment of the related Statement of Direction, support for configuring OSN CHPID types is removed from z14.
For more information about the OSA features, see 4.7, “Connectivity” on page 163.
HiperSockets
The HiperSockets function (also known as internal queued direct input/output or internal
QDIO or iQDIO) is an integrated function of the z14 server that provides users with
attachments to up to 32 high-speed virtual LANs with minimal system and network processor
usage.
For communications between LPARs in the same z14 server, HiperSockets eliminate the
need to use I/O subsystem features to traverse an external network. Connection to
HiperSockets offers significant value in server consolidation by connecting many virtual
servers.
HiperSockets can also be used for Dynamic cross-system coupling, which is a z/OS
Communications Server feature that creates trusted, internal links to other stacks within a
Parallel Sysplex.
RoCE Express features reduce CPU consumption for applications that use the TCP/IP stack
(sockets communication), such as IBM WebSphere Application Server that accesses a Db2
database. It is transparent to applications and also might help to reduce network latency with
memory-to-memory transfers that use SMC-R in supported z/OS releases.
The 10GbE RoCE Express2 and 10GbE RoCE Express features use SR optics and support
the use of a multimode fiber optic cable that ends with an LC Duplex connector. Both support
point-to-point and switched connections with an enterprise-class 10 GbE switch. A maximum
of eight RoCE Express features can be installed in PCIe I/O drawers of z14.
The new 25GbE RoCE Express2 also features SR optics and supports the use of 50-micron multimode fiber optic cable that ends with an LC Duplex connector. It supports point-to-point and switched connections with a 25 GbE-capable switch (25 Gbps only; no down-negotiation to 10 Gbps).
For more information, see Appendix D, “Shared Memory Communications” on page 475.
Introduced with z13 GA2 and z13s, SMC-D enables high-bandwidth LPAR-to-LPAR TCP/IP
traffic (sockets communication) by using the direct memory access software protocols over
virtual Internal Shared Memory PCIe devices (vPCIe). SMC-D maintains the socket-API
transparency aspect of SMC-R so that applications that use TCP/IP communications can
benefit immediately without requiring any application software or IP topology changes.
z14 continues to support SMC-D with its lightweight design that improves throughput, latency,
and CPU consumption and complements HiperSockets, OSA, or RoCE without sacrificing
quality of service.
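Because SMC-D (like SMC-R) preserves socket-API transparency, an ordinary TCP sockets application needs no code changes to benefit when the stacks negotiate SMC underneath. The short Java client below is only a generic sockets example to illustrate that point; the host name and port are placeholders, and whether SMC is actually used depends entirely on the operating system and network configuration, not on this code:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// A plain TCP client. If both peers run in LPARs on the same CPC and the
// stacks negotiate SMC-D, the transfer uses shared memory transparently;
// the application code is identical either way.
public class PlainTcpClient {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "partner-lpar.example.com"; // placeholder
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 8080;        // placeholder

        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("ping");                 // ordinary socket write
            System.out.println(in.readLine());   // ordinary socket read
        }
    }
}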
Notes: SMC-D does not support coupling facilities (z/OS to z/OS communication only).
z14 is the last high-end IBM Z server to support Parallel Sysplex InfiniBand (PSIFB) coupling links, and z13s is the last midrange IBM Z server to support them. IBM Z enterprises should plan to migrate off PSIFB links.a
a. All statements regarding IBM plans, directions, and intent are subject to change or withdrawal without notice. Any reliance on these statements of general direction is at the relying party’s sole risk and will not create liability or obligation for IBM.
The ICA SR is designed to drive distances up to 150 m and support a link data rate of
8 GBps. The ICA SR fanout takes one PCIe I/O fanout slot in the z14 CPC drawer. It is used
for coupling connectivity between z14, z13, and z13s CPCs, and cannot be connected to
HCA3-O or HCA3-O LR coupling fanouts. The ICA SR is compatible with another ICA SR
only.
Compared to the HCA3-O LR feature, which has 4-port and 4-link increments, the CE LR link allows for more granularity when scaling up or completing maintenance. Link performance is similar to the InfiniBand 1x coupling link, and the same single mode fiber is used. The CE LR link provides point-to-point coupling connectivity at distances of 10 km unrepeated and 100 km with a qualified dense wavelength division multiplexing (DWDM) device.
CFCC Level 23
CFCC level 23 is delivered on the z14 with driver level 36. CFCC Level 23 introduces the
following enhancements:
Asynchronous cross-invalidate (XI) of CF cache structures. This function requires PTF support for z/OS and explicit data manager support (IBM Db2® V12 with PTFs).
Coupling Facility hang detect enhancements, which provide a significant reduction in failure scope and client disruption (CF-level to structure-level), with no loss of FFDC collection capability.
Coupling Facility ECR granular latching.
z14 servers with CFCC Level 23 require z/OS V1R13 or later, and z/VM V6R4 or later for
virtual guest coupling.
CFCC Level 22
CFCC level 22 is delivered on the z14 with driver level 32. CFCC Level 22 introduces the
following enhancements:
Support for up to 170 ICF processors per z14. The maximum number of logical processors
in a Coupling Facility Partition remains 16.
Support for new Coupling Express LR links.
CF Processor Scalability: CF work management and dispatching changes for z14 allow
improved efficiency and scalability for coupling facility images.
CF List Notification Enhancements: Significant enhancements were made to CF
notifications that inform users about the status of shared objects within a CF.
Coupling Link Constraint Relief: z14 provides more physical coupling link connectivity
compared to z13.
CF Encryption: z/OS 2.3 supports end-to-end encryption for CF data in flight and data at
rest in CF structures (as a part of the Pervasive Encryption solution). Host-based CPACF
encryption is used for high performance and low latency.
z14 servers with CFCC Level 22 require z/OS V1R13 or later, and z/VM V6R3 or later for
virtual guest coupling.
CF LPARs that are running on different server generations, and therefore different levels of CFCC, can coexist in the same sysplex, which enables upgrades from one CFCC level to the next. CF LPARs that are running on the same server share a single CFCC level.
A CF running on a z14 server (CFCC level 22) can coexist in a sysplex with CFCC levels 21
and 19. For more information about determining the CF LPAR size by using the CFSizer tool,
see the System z Coupling Facility Structure Sizer Tool page of the IBM Systems support
website.
Network Time Protocol (NTP) client support is available to the STP code on the z14, z13, z13s, zEC12, and zBC12 servers. By using this function, these servers can be configured to use an NTP server as an External Time Source (ETS). This implementation fulfills the need for a single time source across the heterogeneous platforms in the enterprise, including IBM Z servers and other systems that are running Linux, UNIX, and Microsoft Windows operating systems.
The time accuracy of an STP-only CTN can be improved by using an NTP server with the
pulse per second (PPS) output signal as ETS. This type of ETS is available from various
vendors that offer network timing solutions.
The HMC can be configured as an NTP client or an NTP server. To ensure secure connectivity, HMC NTP broadband authentication can be enabled on z14, z13, and zEC12 servers.
Older systems allowed synchronization up to only level 3, or up to two levels from the CTS.
This extra stratum level is not intended for long-term use; rather, it is intended for short-term
use during configuration changes for large timing networks to avoid some of the cost and
complexity that is caused by being constrained to a three-level STP stratum configuration.
z14 also introduces a new graphical user display for the STP network and configuration. The user interface was revamped for a quick, intuitive view and management of the various pieces of the CTN, including the status of the components of the timing network. The new level of HMC allows the management of older systems by using the same new interface.
Attention: As with its predecessor z13, a z14 server cannot be connected to a Sysplex
Timer and cannot be a member in a Mixed CTN. An STP-only CTN is required for the z14
and z13 servers.
If a current configuration consists of a Mixed CTN or a Sysplex Timer (M/T 9037), the
configuration must be changed to an STP-only CTN before z14 integration. The z14 server
can coexist only with IBM Z CPCs that do not have the external time reference (ETR) port
capability.
The new Enhanced Console Assisted Recovery (ECAR) support is faster than the original support; almost no delay occurs between the system checkstop and the start of CAR processing. ECAR is available on z14, z13 GA2, and z13s servers only. In a mixed environment with previous-generation machines, ECAR-supported servers should be defined as the PTS and CTS.
1.3.11 Cryptography
A strong synergy exists between cryptography and security. Cryptography provides the
primitives to support security functions. Similarly, security functions help to ensure authorized
use of key material and cryptographic functions.
Cryptography on IBM Z is built into the platform with integrity. The IBM Z platform offers hardware-based cryptography features that are used by the following environments and functions:
Java
Db2/IMS encryption tool
Db2 built-in encryption
z/OS Communication Server (IPsec/IKE/AT-TLS)
z/OS System SSL
z/OS
z/OS Encryption Facility
Linux on Z
The following hardware cryptographic features are available:
CP Assist for Cryptographic Functions
Crypto Express6S
Regional Crypto Enablement
Trusted Key Entry workstation
z/OS Integrated Cryptographic Service Facility (ICSF) callable services and the z90crypt
device driver that is running on Linux on Z also start CPACF functions. ICSF is a base
element of z/OS. It uses the available cryptographic functions, CPACF, or PCIe cryptographic
features to balance the workload and help address the bandwidth requirements of your
applications.
With z14, CPACF is enhanced to support pervasive encryption to provide faster encryption
and decryption than previous servers. For every Processor Unit that is defined as a CP or an
IFL, it offers the following enhancements over z13:
Reduced overhead on short data (hashing and encryption)
Up to 4x throughput for AES
Special instructions for elliptic curve cryptography (ECC)/RSA
New hashing algorithms (for example, SHA-3)
The z13 CPACF (also supported by z14) provides the following features:
For data privacy and confidentiality: DES, Triple Data Encryption Standard (TDES), and AES for 128-bit, 192-bit, and 256-bit keys.
For data integrity: Secure Hash Algorithm-1 (SHA-1) 160-bit, and SHA-2 for 224-, 256-, 384-, and 512-bit support. SHA-1 and SHA-2 are shipped enabled on all z14 servers and do not require the no-charge enablement feature.
For key generation: Pseudo Random Number Generation (PRNG), Random Number Generation Long (RNGL) (1 - 8192 bytes), and Random Number Generation (RNG), with up to 4096-bit key RSA support for message authentication.
CPACF must be explicitly enabled by using a no-charge enablement feature (FC 3863). This
requirement excludes the SHAs, which are enabled by default with each server.
The enhancements to CPACF are exclusive to the IBM Z servers and are supported by z/OS,
z/VM, z/VSE, z/TPF, and Linux on Z.
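Because CPACF acceleration is transparent to software that uses standard cryptographic APIs (exploitation is handled by the operating system, ICSF, or the JVM rather than by the application), ordinary code such as the following standard Java JCE sketch can benefit without change. Whether and how the hardware is used depends on the z/OS or Linux on Z configuration and the Java runtime, which this example does not control:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Standard JCE AES encryption and decryption. On IBM Z, a suitably configured
// Java runtime can drive these operations through CPACF transparently;
// nothing in the application code refers to the hardware.
public class AesCbcExample {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);                         // AES-256, supported by CPACF
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal("sensitive record".getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] plaintext = cipher.doFinal(ciphertext);

        System.out.println("Round trip OK: "
                + Arrays.equals(plaintext, "sensitive record".getBytes("UTF-8")));
    }
}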
Crypto Express6S
Crypto Express6S represents the newest generation of cryptographic features. Cryptographic
performance improvements with new Crypto Express6S (FC #0893) allow more data to be
securely transferred across the internet. Crypto Express6S is designed to complement the
cryptographic capabilities of the CPACF. It is an optional feature of the z14 server generation.
The Crypto Express6S feature is designed to provide granularity for increased flexibility with
one PCIe adapter per feature. Although installed in the PCIe I/O drawer, Crypto Express6S
features do not perform I/O operations. That is, no data is moved between the CPC and any
externally attached devices. For availability reasons, a minimum of two features is required.
z14 servers allow sharing of a cryptographic coprocessor across 85 domains (the maximum
number of LPARs on the system for z14 is 85).
The Crypto Express6S is designed to meet the following cryptographic standards, among
others:
FIPS 140-2 Level 4
Common Criteria EP11 EAL4
ANSI X9.97
Federal Information Processing Standard (FIPS) 140-2 certification is supported only when
Crypto Express6S is configured as a CCA or an EP11 coprocessor.
Crypto Express6S supports several ciphers and standards that are described next. For more
information about cryptographic algorithms and standards, see Chapter 6, “Cryptographic
features” on page 207.
Regional Crypto Enablement (RCE) is a framework that is used to enable the integration of
IBM certified third-party cryptographic hardware for regional or industry encryption
requirements. It also supports the use of cryptography algorithms and equipment from
selected providers with IBM Z in specific countries. Support for the use of international
algorithms (AES, DES, RSA, and ECC) with regional crypto devices (supporting regional
algorithms, such as SMx) is added to the ICSF PKCS#11 services.
When ordered, the RCE support reserves the I/O slots for the IBM approved vendor-supplied
cryptographic cards. Clients must contact the IBM approved vendor directly for purchasing
information.
The TKE workstation offers a security-rich solution for basic local and remote key
management. It provides authorized personnel with a method for key identification, exchange,
separation, update, and backup, and a secure hardware-based key loading mechanism for
operational and master keys. TKE also provides secure management of host cryptographic
module and host capabilities.
Support for an optional smart card reader that is attached to the TKE workstation allows the
use of smart cards that contain an embedded microprocessor and associated memory for
data storage. Access to and the use of confidential data on the smart cards are protected by
a user-defined personal identification number (PIN).
TKE workstation and the most recent TKE 9.1 LIC are optional features on the z14. TKE
workstation is offered in two types: TKE Tower (FC #0086) and TKE Rack Mount (FC #0085).
TKE 9.x requires the crypto adapter FC 4768. You can use an older TKE version to collect
data from previous generations of cryptographic modules and apply the data to Crypto
Express6S coprocessors.
17 Federal Information Processing Standard (FIPS) 140-2 Security Requirements for Cryptographic Modules
18 TKE 9.0 LIC and TKE 9.1 LIC have the same hardware requirements. TKE 9.0 LIC can be upgraded to 9.1 LIC.
For more information about the cryptographic features, see Chapter 6, “Cryptographic
features” on page 207.
For more information about the most current ICSF updates that are available, see the Web
Deliverables download website.
Although installed in the PCIe I/O drawer, zEDC Express features do not perform I/O
operations. That is, no data is moved between the CPC and externally attached devices. One
PCIe adapter or compression coprocessor is available per feature. The zEDC Express feature
can be shared by up to 15 LPARs. Up to 16 features can be installed on z14.
For more information about the IBM System z Batch Network Analyzer (zBNA) tool, which
reports on potential zEDC usage for QSAM/BSAM data sets, see the IBM System z Batch
Network Analyzer (zBNA) Tool page.
For more information, see Appendix F, “IBM zEnterprise Data Compression Express” on
page 511.
The initial focus is on preventing failures from occurring. This goal is accomplished by using
Hi-Rel (highest reliability) components that use screening, sorting, burn-in, and run-in, and by
taking advantage of technology integration.
For LIC and hardware design, failures are reduced through rigorous design rules; design
walk-through; peer reviews; element, subsystem, and system simulation; and extensive
engineering and manufacturing testing.
The RAS strategy is focused on a recovery design to mask errors and make them transparent
to client operations. An extensive hardware recovery design is implemented to detect and
correct memory array faults. In cases where transparency cannot be achieved, you can
restart the server with the maximum capacity possible.
For more information, see Chapter 9, “Reliability, availability, and serviceability” on page 363.
HMC is offered as a Tower (FC #0082) and a Rack Mount (FC #0083) feature. Rack Mount
HMC can be placed in a customer-supplied 19-inch rack and occupies 1U rack space. z14
includes driver level 32 and HMC application Version 2.14.0.
For more information, see Chapter 11, “Hardware Management Console and Support
Elements” on page 407.
For more information about supported Linux on Z distribution levels, see the Tested platforms
for Linux page of the IBM Z website.
For more information about features and functions that are supported on z14 by operating
system, see Chapter 7, “Operating system support” on page 243.
z/VM support
z/VM 7.1 (Available as of Sept. 2018) increases the level of engagement with the z/VM user
community. z/VM 7.1 includes the following new features:
Single System Image and Live Guest Relocation included in the base (no extra charge).
Enhances the dump process to reduce the time that is required to create and process
dumps.
Upgrades to a new Architecture Level Set (requires an IBM zEnterprise EC12 or BC12, or
later).
Provides the base for more functionality to be delivered as service after general
availability.
Enhances the dynamic configuration capabilities of a running z/VM system with Dynamic
Memory Downgrade* support. For more information, see this web page.
Includes SPEs shipped for z/VM 6.4, including Virtual Switch Enhanced Load Balancing,
DS8K z-Thin Provisioning, and Encrypted Paging.
To support new functionality that was announced October 2018, z/VM requires fixes for the
following APARs:
PI99085
VM66130
VM65598
VM66179
VM66180
With the PTF for APAR VM65942, z/VM V6.4 provides support for z14.
19 Customers should monitor for new distribution releases supported.
20 Small Program Enhancements, part of the continuous delivery model, see http://www.vm.ibm.com/newfunction/
For more information about the features and functions that are supported on z14 by operating
system, see Chapter 7, “Operating system support” on page 243.
z/OS support
z/OS uses many of the following new functions and features of z14 (depending on version
and release; PTFs might be required to support new functions):
Up to 170 processors per LPAR or up to 128 physical processors per LPAR in SMT mode
(SMT for zIIP)
Up to 16 TB of real memory per LPAR (dependent on z/OS version)
Two-way simultaneous multithreading (SMT) optimization and support of SAPs (SAP SMT
enabled by default) in addition to zIIP engines
XL C/C++ ARCH(12) and TUNE(12) compiler options
Use of faster CPACF
Pervasive Encryption:
– Coupling Facility Encryption
– Dataset and network encryption
HiperDispatch Enhancements
z14 Hardware Instrumentation Services (HIS)
Entropy-Encoding Compression Enhancements
Guarded Storage Facility (GSF)
For more information about the features and functions that are supported on z14 by operating
system, see Chapter 7, “Operating system support” on page 243.
The compilers increase the return on your investment in IBM Z hardware by maximizing
application performance by using the compilers’ advanced optimization technology for
z/Architecture. Through their support of web services, XML, and Java, they allow for the
modernization of assets in web-based applications. They also support the latest IBM
middleware products (CICS, Db2, and IMS), which allows applications to use their latest
capabilities.
To fully use the capabilities of z14 servers, applications must be compiled by using the
minimum level of each compiler. To obtain the best performance, you must specify an
architecture level of 12 by using the ARCH(12) option.
For more information, see 7.5.4, “z/OS XL C/C++ considerations” on page 308.
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
The objective of this chapter is to explain the z14 hardware building blocks and how these
components interconnect from a physical point of view. This information is useful for planning
purposes and can help in defining configurations that fit your requirements.
The z14 server and its predecessor, the z13, have the option of ordering the infrastructure to
support the top exit of fiber optic cables (FICON, OSA, 12x InfiniBand, 1x InfiniBand, ICA,
zHyperLink Express, Coupling Express LR, and RoCE) and copper cables for the
1000BASE-T Ethernet features. On the z14 server, the top exit capability is designed to
provide an option for overhead power cabling.
Figure 2-1 z14 internal front view of an air-cooled CPC (Machine type 3906, models M04 or M05)
Figure 2-2 z14 internal front view of a water-cooled CPC (Machine type 3906, models M04 or M05)
The z14 CPC drawer is packaged slightly differently than z13 and contains the following
components:
Two Drawer Sizes with each containing two logical CP clusters:
– 5 PU SCMs + 1 SC SCM / drawer (41 PUs) – for models M01, M02, M03, M04
– 6 PU SCMs + 1 SC SCM / drawer (49 PUs) – for model M05
PU SCM: 14 nm SOI technology, 17 layers of metal, core running at 5.2 GHz; 10 PUs per
SCM (7, 8, 9, or 10 active cores per SCM).
One System Controller (SC) SCM, with a 672 MB L4 cache.
Five DDR3 dual in-line module (DIMM) slots per memory controller, for a total of up to 25
DIMMs per drawer.
DIMMs plugged in to 15, 20 or 25 DIMM slots, providing 640 - 40960 GB of physical
memory (includes RAIM) and 512 - 32576 GB of addressable memory in a four-drawer
system.
10 PCIe Gen3 x16 fanouts (16 GBps bandwidth) per CPC Drawer:
– PCIe Gen3 I/O fanout for PCIe I/O Drawer
– ICA SR, PCIe fanout
Four GX++ slots for IFB fanouts (6 GBps bandwidth): HCA3-O, HCA3-O LR.
Two flexible service processor (FSP) cards for system control.
Figure 2-6 shows the front view of a CPC drawer, with fanouts slots and connector for water
cooling, and the rear view of drawer, with the DIMM slots and DCA connector.
Figure 2-7 shows the front view of a fully populated processor drawer. Redundant FSP
adapters (2) always are installed, and PCIe I/O fanouts are plugged in specific slots for best
performance and availability. SMP interconnect cables are present when multiple drawers are
present. Also present are the ICA SR and InfiniBand fanouts for coupling connectivity.
Memory is connected to the SCMs through memory control units (MCUs). Five MCUs can be
placed in a drawer (one per PU SCM) that provide the interface to the controller on memory
DIMM. A memory controller drives five DIMM slots.
The CPC drawers are in the Frame A and are populated from bottom to top.
The order of CPC drawer installation and position in the Frame A is listed in Table 2-1.
CPC drawer installation is concurrent, except for the upgrade to the model M05, which can be
obtained only from manufacturing with the model M05 processor drawers that are installed.
Concurrent drawer repair requires a minimum of two drawers.
2.2.2 Oscillator
The z14 server has two oscillator cards (OSCs): One primary and one backup. If the primary
OSC fails, the secondary detects the failure, takes over transparently, and continues to
provide the clock signal to the CPC. The two oscillators have Bayonet Neill-Concelman (BNC)
connectors that provide pulse per second signal (PPS) synchronization to an external time
source with PPS output.
The accuracy of an STP-only CTN is improved by using an NTP server with the PPS output
signal as the External Time Source (ETS). NTP server devices with PPS output are available
from several vendors that offer network timing solutions. A cable connection from the PPS
port on the OSC to the PPS output of the NTP server is required when z14 uses STP and is
configured in an STP-only CTN that uses NTP with PPS as the external time source. The z14
server cannot participate in a mixed CTN; it can participate only in an STP-only CTN.
STP tracks the highly stable and accurate PPS signal from the NTP server and maintains an
accuracy of 10 µs to the ETS, as measured at the PPS input of the z14 server.
If STP uses an NTP server without PPS, a time accuracy of 100 ms to the ETS is maintained.
Although not part of the CPC drawer design, the OSCs cards are next to the CPC drawers
and connected to the same backplane to which the drawers are connected. All four drawers
connect to the OSC backplane.
Figure 2-10 shows the location of the two OSC cards with BNC connectors for PPS on the
CPC, which is next to the drawer 2 and drawer 3 locations.
Tip: STP is available as FC 1021. It is implemented in the Licensed Internal Code (LIC),
and allows multiple servers to maintain time synchronization with each other and
synchronization to an ETS. For more information, see the following publications:
Server Time Protocol Planning Guide, SG24-7280
Server Time Protocol Implementation Guide, SG24-7281
Server Time Protocol Recovery Guide, SG24-7380
Note: The maximum configuration has four CPC drawers and five PCIe I/O drawers for
z14. The various supported FSP connections are referenced in Figure 2-11.
A typical FSP operation is to control a power supply. An SE sends a command to the FSP to
start the power supply. The FSP (by using SSI connections) cycles the various components of
the power supply, monitors the success of each step and the resulting voltages, and reports
this status to the SE.
Most system elements are duplexed (n+1), and each element has at least one FSP. Two
internal Ethernet LANs and two SEs, for redundancy, and crossover capability between the
LANs, are available so that both SEs can operate on both LANs.
The Hardware Management Consoles (HMCs) and SEs are connected directly to one or two
Ethernet Customer LANs. One or more HMCs can be used.
Figure 2-12 shows the location of DCAs on the backplane of the A frame.
[Figure 2-12: DCA power supplies (two per position) located behind the N+1 blower fans]
The SCMs are plugged into a socket that is part of the CPC drawer packaging. The
interconnectivity between the CPC drawers is accomplished through SMP connectors and
cables. Three inter-drawer connections are available on each CPC drawer. This configuration
allows a multidrawer system to act as a symmetric multiprocessor (SMP) system.
Each PU chip includes up to 10 active cores that run at 5.2 GHz, which means that the cycle
time is 0.192 ns. The PU chips come in four versions: 7, 8, 9, or 10 active cores. For models
M01, M02, M03, and M04, the processor units in each drawer are implemented with 41 active
cores per drawer. This configuration means that model M01 has 41, model M02 has 82,
model M03 has 123, and model M04 has 164 active cores.
The maximum number of characterized PUs depends on the z14 model. Some PUs are
characterized by the system as standard system assist processors (SAPs) to run the I/O
processing. By default, at least two spare PUs per system are available that are used to assume
the function of a failed PU. The remaining installed PUs can be characterized for client use. A
z14 model nomenclature includes a number that represents the maximum number of PUs
that can be characterized for client use, as listed in Table 2-2.
Model M01: 0 - 33 CPs, 0 - 33 IFLs, 0 - 32 uIFLs, 0 - 22 zIIPs, 0 - 33 ICFs, 1 IFP, 5 standard SAPs, 0 - 4 optional SAPs, 2 spares
Model M02: 0 - 69 CPs, 0 - 69 IFLs, 0 - 68 uIFLs, 0 - 46 zIIPs, 0 - 69 ICFs, 1 IFP, 10 standard SAPs, 0 - 8 optional SAPs, 2 spares
Figure 2-17 on page 53 shows a schematic representation of the SC chip. Consider the
following points:
X-Bus (CP-CP and CP-SC): Significant changes to allow SC to fit more X-Bus
connections
A Bus (SC-SC off drawer): Minor changes to reflect protocol improvements and new
system topology
672 MB shared eDRAM L4 Cache
L4 Directory is built with eDRAM
New L4 Cache Management:
– Ratio of L3 to L4 cache capacity is increasing
– New on-drawer Cluster-to-Cluster (topology change) management
The maximum and minimum memory sizes that you can order for each z14 model are listed
in Table 2-3.
The minimum physical installed memory is 512 GB per CPC drawer. The minimum initial
amount of memory that can be ordered is 256 GB for all z14 models. The maximum customer
memory size is based on the physical installed memory minus the RAIM and minus the
hardware system area (HSA) memory, which has a fixed amount of 192 GB.
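As a worked example of this arithmetic, the helper below simply restates the figures from this chapter (the 4 + 1 RAIM design and the fixed 192 GB HSA); it is an illustration, not an IBM configuration tool.

# Worked example of the z14 memory arithmetic described in this section:
# RAIM consumes one of the five memory channels (20% of the installed DIMM
# capacity), and the hardware system area (HSA) takes a fixed 192 GB of the rest.

RAIM_USABLE_FRACTION = 4 / 5   # 4 + 1 RAIM design: one channel in five is redundancy
HSA_GB = 192                   # fixed HSA size on z14

def customer_memory_gb(physical_gb: float) -> float:
    """Memory left for customer use from the physically installed DIMM capacity."""
    return physical_gb * RAIM_USABLE_FRACTION - HSA_GB

print(customer_memory_gb(40960))   # largest four-drawer configuration -> 32576.0 GB
print(customer_memory_gb(640))     # smallest configuration -> 320.0 GB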
The memory granularity, which is based on the installed customer memory, is listed in
Table 2-4.
Granularity of 64 GB: customer memory sizes of 256 - 576 GB
With the z14, the memory granularity varies from 64 GB (for customer memory sizes
256 - 576 GB) up to 512 GB (for CPCs having 4416 - 32576 GB of customer memory).
Figure 2-19 shows an overview of the CPC drawer memory topology of a z14 server.
Each CPC drawer includes 15, 20, or 25 DIMMs. DIMMs are connected to each PU chip
through the memory control units (MCU). Each PU SCM has one MCU, which uses five
channels (one for each DIMM and one for RAIM implementation) in a 4 +1 design. Each CPC
drawer can have three, four, or five populated MCUs.
DIMMs are used in 32, 64, 128, 256, and 512 GB sizes with five DIMMs of the same size
included in a memory feature. (160, 320, 640, 1280, and 2560 GB RAIM array size).
The RAIM design requires the addition of one memory channel that is dedicated for reliability,
availability, and serviceability (RAS), as shown in Figure 2-20.
[Figure 2-20: data and check memory channels between the Level 4 cache and the DIMMs, with ECC and RAIM parity; the extra (fifth) column provides the RAIM function]
The fifth channel in each MCU enables memory to be implemented as a Redundant Array of
Independent Memory (RAIM). This technology has significant error detection and correction
capabilities. Bit, lane, DRAM, DIMM, socket, and complete memory channel failures can be
detected and corrected, including many types of multiple failures. Therefore, RAIM takes 20%
of DIMM capacity (a non-RAIM option is not available).
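The details of the RAIM encoding are beyond the scope of this overview, but the value of dedicating a fifth channel to redundancy can be illustrated with a much simpler scheme: if the check channel holds the XOR of the four data channels, any single failed channel can be rebuilt from the survivors. The Python sketch below is only an analogy for why a complete channel failure is recoverable; the actual RAIM and ECC design is considerably more sophisticated.

# Conceptual analogy for the 4 + 1 RAIM design: four data channels plus one
# check channel. A simple byte-wise XOR parity already allows a whole failed
# channel to be rebuilt; the real RAIM implementation uses stronger codes.

from functools import reduce

def xor_channels(channels: list[bytes]) -> bytes:
    """Byte-wise XOR across a list of equally sized channels."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*channels))

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # four data channels
check = xor_channels(data)                     # the fifth (redundancy) channel

lost = data.pop(1)                             # a complete channel fails
rebuilt = xor_channels(data + [check])         # survivors + check recover it
assert rebuilt == lost
print("reconstructed channel:", rebuilt)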
1 64 32 32 512
2 64 64 64 768
3 10 64 64 1024
The support element View Hardware Configuration task can be used to determine the size
and quantity of the memory plugged in each drawer. Figure 2-21 shows an example of the
location and description of the installed memory modules.
Table 2-7 lists the physical memory DIMM plugging configurations by feature code from
manufacturing when the system is ordered. The drawer columns for the specific model
contain the memory configuration number for the specific drawer. Available unused memory
can be enabled by LIC when required. Consider the following points:
If more storage is ordered by using other feature codes, such as Virtual Flash Memory,
Flexible Memory, or Preplanned memory, the extra storage is installed and plugged as
necessary.
For a model upgrade that results in the addition of a CPC drawer, the minimum memory
increment is added to the system. Each CPC drawer has a minimum physical memory size of
320 GB.
During a model upgrade, adding a CPC drawer is a concurrent operation. Adding physical
memory to the added drawer is also concurrent. If all or part of the added memory is enabled
for use, it might become available to an active LPAR if the partition includes defined reserved
storage. (For more information, see 3.7.3, “Reserved storage” on page 134.) Alternatively, the
added memory can be used by an already-defined LPAR that is activated after the memory
addition.
Note: Memory downgrades are always disruptive. Model downgrades (removal of a CPC
drawer) are not supported.
Removing a CPC drawer often results in removing active memory. With the flexible memory
option, removing the affected memory and reallocating its use elsewhere in the system are
possible. For more information, see 2.4.7, “Flexible Memory Option” on page 63. This process
requires more available memory to compensate for the memory that is lost with the removal
of the drawer.
VFM is designed to help improve availability and handling of paging workload spikes when
z/OS V2.1, V2.2, or V2.3, or z/OS V1.13 is run. With this support, z/OS is designed to
help improve system availability and responsiveness by using VFM across transitional
workload events, such as market openings and diagnostic data collection. z/OS is also
designed to help improve processor performance by supporting middleware use of pageable
large (1 MB) pages.
VFM can also be used in coupling facility images to provide extended capacity and availability
for workloads that use IBM WebSphere MQ Shared Queues structures. The use of VFM
can help availability by reducing latency from paging delays that can occur at the start of the
workday or during other transitional periods. It is also designed to help eliminate delays that
can occur when collecting diagnostic data during failures.
1 z/OS V1.13 has additional requirements. See the Software Requirements section.
When you order memory, you can request extra flexible memory. The extra physical memory,
if required, is calculated by the configurator and priced accordingly.
Flexible memory is available on the M02, M03, M04, and M05 models only. The flexible
memory sizes that are available for the z14 are listed in Table 2-8.
Note: Although flexible memory can be purchased, it cannot be used for normal everyday
use. For that reason, a different purchase price for flexible memory is offered to increase
the overall availability of the system.
The installation and activation of any pre-planned memory requires the purchase of the
required feature codes (FCs), as listed in Table 2-9.
FC 1894/1940 (256 GB memory capacity increments); FC 1894 applies when main memory is less than 1 TB, and FC 1940 when main memory is greater than 1 TB.
The payment for plan-ahead memory is a two-phase process. One charge occurs when the
plan-ahead memory is ordered. Another charge occurs when the prepaid memory is activated
for use. For more information about the exact terms and conditions, contact your IBM
representative.
Pre-planned memory is installed by ordering FC 1990 (32 GB) or FC 1991 (64 GB). The
ordered amount of plan-ahead memory is charged at a reduced price compared to the normal
price for memory. One FC 1990 is needed for each 32 GB of usable memory (40 GB RAIM),
or one FC 1991 is needed for each 64 GB of usable memory (80 GB RAIM).
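Assuming only the feature sizes that are quoted above, the number of plan-ahead memory features for a given amount of usable memory is a simple division. The helper below illustrates that arithmetic; it is not an ordering tool, and the configurator determines the actual features.

# Illustration of the plan-ahead memory arithmetic quoted above:
# one FC 1990 per 32 GB of usable memory (40 GB including RAIM), or
# one FC 1991 per 64 GB of usable memory (80 GB including RAIM).

import math

def preplanned_features(usable_gb: int, feature_gb: int = 32) -> int:
    """Number of plan-ahead memory features needed for the requested usable memory."""
    if feature_gb not in (32, 64):
        raise ValueError("plan-ahead features come in 32 GB (FC 1990) or 64 GB (FC 1991)")
    return math.ceil(usable_gb / feature_gb)

print(preplanned_features(256))                  # 8 x FC 1990 for 256 GB of usable memory
print(preplanned_features(256, feature_gb=64))   # 4 x FC 1991 for the same amount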
If VFM is present, it is included in the Flexible memory calculations. Normal plan ahead
memory increments are used first with the normal “feature conversion” action. When normal
plan ahead features are used, the VFM 1.5 TB plan ahead increment is deleted and normal
32 GB memory increments are added. No feature conversions are allowed from the 1.5 TB
VFM plan ahead increment to regular memory, plan ahead memory, or VFM memory.
The IBM Z hardware has decades of intense engineering behind it, which results in a robust
and reliable platform. The hardware has many RAS features that are built into it. For more
information, see Chapter 9, “Reliability, availability, and serviceability” on page 363.
DIMM level failures, including components, such as the memory controller application-specific
integrated circuit (ASIC), power regulators, clocks, and system board, can be corrected.
Memory channel failures, such as signal lines, control lines, and drivers and receivers on the
MCM, can be corrected.
Upstream and downstream data signals can be spared by using two spare wires on the
upstream and downstream paths. One of these signals can be used to spare a clock signal
line (one upstream and one downstream). The following improvements were also added in
the z14 server:
No cascading of memory DIMMs
Independent channel recovery
Double tabs for clock lanes
Separate replay buffer per channel
Hardware driven lane soft error rate (SER) and sparing
IBM z14 servers continue to deliver robust server designs through exciting new technologies,
hardening both new and classic redundancy.
For more information, see Chapter 9, “Reliability, availability, and serviceability” on page 363.
2 The air density sensor measures air pressure and is used to control blower speed.
Figure 2-23 shows the location of the fanouts for a four CPC drawer system. In all, 10 PCIe
fanout slots and 4 IFB fanout slots are available per CPC drawer. Each CPC drawer has two
FSPs for system control; the location code is LGXX.
Up to 10 PCIe fanouts (LG03 - LG12) and four IFB fanouts (LG13 - LG16) can be installed in
each CPC drawer.
A fanout can be repaired concurrently with the use of redundant I/O interconnect. For more
information, see 2.6.1, “Redundant I/O interconnect” on page 68.
When you are configuring for availability, balance the channels, coupling links, and OSAs
across drawers. In a system that is configured for maximum availability, alternative paths
maintain access to critical I/O devices, such as disks and networks. The CHPID Mapping Tool
can be used to assist with configuring a system for high availability.
Enhanced (CPC) drawer availability (EDA) allows a single CPC drawer in a multidrawer CPC
to be removed and reinstalled concurrently for an upgrade or a repair. Removing a CPC
drawer means that the connectivity to the I/O devices that are connected to that CPC drawer
is lost. To prevent connectivity loss, the redundant I/O interconnect feature allows you to
maintain connection to critical devices, except for ICA and PSIFB coupling, when a CPC
drawer is removed.
The PCIe I/O drawer supports up to 32 PCIe features, which are organized in four hardware
domains per drawer, as shown in Figure 2-24.
To support Redundant I/O Interconnect (RII) between front to back domain pairs 0, 1 and 2, 3,
the two interconnects to each pair must be driven from two different PCIe fanouts. Normally,
each PCIe interconnect in a pair supports the eight features in its domain. In backup
operation mode, one PCIe interconnect supports all 16 features in the domain pair.
Note: The PCIe Gen3 Interconnect (switch) adapter must be installed in the PCIe Drawer
to maintain the interconnect across I/O domains. If the adapter is removed, the I/O cards in
that domain (up to eight) become unavailable.
Before removing the CPC drawer, the contents of the PUs and memory of the drawer must be
relocated. PUs must be available on the remaining CPC drawers to replace the deactivated
drawer. Also, sufficient redundant memory must be available if no degradation of applications
is allowed. To ensure that the CPC configuration supports removal of a CPC drawer with
minimal effect on the workload, consider the flexible memory option. Any CPC drawer can be
replaced, including the first CPC drawer that initially contains the HSA.
If the enhanced drawer availability and flexible memory options are not used when a CPC
drawer must be replaced, the memory in the failing drawer is also removed. This process
might be necessary during an upgrade or a repair action. Until the removed CPC drawer is
replaced, a power-on reset of the system with the remaining CPC drawers is supported. The
CPC drawer can then be replaced and added back into the configuration concurrently.
A minimum of one PU that is characterized as a CP, IFL, or ICF is required per system. The
maximum number of CPs, IFLs, and ICFs is 170. The maximum number of zIIPs is always up
to twice the number of PUs that are characterized as CPs.
The z14 model nomenclature is based on the number of PUs that are available for client use
in each configuration. The models are listed in Table 2-10.
A capacity marker identifies the number of CPs that were purchased. This number of
purchased CPs is higher than or equal to the number of CPs that is actively used. The
capacity marker marks the availability of purchased but unused capacity that is intended to be
used as CPs in the future. They often have this status for software-charging reasons. Unused
CPs are not a factor when establishing the millions of service units (MSU) value that is used
for charging monthly license charge (MLC) software, or when charged on a per-processor
basis.
2.7.1 Upgrades
Concurrent upgrades of CPs, IFLs, ICFs, zIIPs, or SAPs are available for the z14 server.
However, concurrent PU upgrades require that more PUs be installed but not activated.
Spare PUs are used to replace defective PUs. Two spare PUs always are on a z14 server. In
the rare event of a PU failure, a spare PU is activated concurrently and transparently and is
assigned the characteristics of the failing PU.
Although upgrades from one z14 model to another z14 model are concurrent (meaning that
one or more CPC drawers can be added), one exception exists: upgrades from any z14
server (model M01, M02, M03, or M04) to a model M05 are not supported. The M05 model is
available from the factory only.
You can also upgrade an IBM zEnterprise EC12 (zEC12, 2827) or an IBM z13 (2964) to a z14 server
and preserve the CPC serial number (S/N). The I/O cards can also be carried forward (with
certain restrictions) to the z14 server.
Important: Upgrades from zEnterprise EC12 (zEC12) and IBM z13 are always disruptive.
Upgrade paths from any zEC12 to any z14 server are supported, as
listed in Table 2-12.
Upgrades from any IBM z13 to any z14 server are supported, as listed in Table 2-13.
Most conversions are nondisruptive. In exceptional cases, the conversion might be disruptive;
for example, when a model z14 with 30 CPs is converted to an all IFL system. In addition, an
LPAR might be disrupted when PUs must be freed before they can be converted. Conversion
information is listed in Table 2-14.
The following distinct model capacity identifier ranges are recognized (one for full capacity
and three for granular capacity):
For full-capacity engines, model capacity identifiers 701 - 7H0 are used. They express
capacity settings for 1 - 170 characterized CPs.
Three model capacity identifier ranges offer a unique level of granular capacity at the low
end. They are available when no more than 33 CPs are characterized. These three
subcapacity settings are applied to up to 33 CPs, which combined offer 99 more capacity
settings. For more information, see “Granular capacity”.
Granular capacity
The z14 server offers 99 capacity settings at the low end of the processor. Only 33 CPs can
have granular capacity. When subcapacity settings are used, other PUs beyond 33 can be
characterized only as specialty engines.
The three defined ranges of subcapacity settings have model capacity identifiers numbered
401- 433, 501 - 533, and 601 - 633.
Consideration: Within a z14 server, all CPs have the same capacity identifier. Specialty
engines (IFLs, zIIPs, and ICFs) operate at full speed.
Model M01 701 - 733, 601 - 633, 501 - 533, and 401 - 433
Model M02 701 - 769, 601 - 633, 501 - 533, and 401 - 433
Model M03 701 - 7A5, 601 - 633, 501 - 533, and 401 - 433
Model M04 701 - 7E5, 601 - 633, 501 - 533, and 401 - 433
Model M05 701 - 7H0, 601 - 633, 501 - 533, and 401 - 433
Important: On z14 servers, model capacity identifier 400 is used for ICF-only configurations,
and 400 or 401 for IFL-only configurations.
When CBU for CP is added within the same capacity setting range (indicated by the model
capacity indicator) as the currently assigned PUs, the total number of active PUs (the sum of
all assigned CPs, IFLs, ICFs, zIIPs, and optional SAPs) plus the number of CBUs cannot
exceed the total number of PUs available in the system.
When CBU for CP capacity is acquired by switching from one capacity setting to another, no
more CBUs can be requested than the total number of PUs available for that capacity setting.
You can test the CBU. The number of CBU test activations that you can run for no extra fee in
each CBU record is now determined by the number of years that are purchased with the CBU
record. For example, a three-year CBU record has three test activations, as compared to a
one-year CBU record that has one test activation.
You can increase the number of tests up to a maximum of 15 for each CBU record. The real
activation of CBU lasts up to 90 days with a grace period of two days to prevent sudden
deactivation when the 90-day period expires. The contract duration can be set 1 - 5 years.
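The relationship between contract years, test activations, and the activation window can be summarized in a small model. The Python sketch below only restates the rules that are described in this section; it is not an interface to the actual CBU record.

# Sketch of the CBU rules described in this section: the number of included test
# activations equals the number of contract years (1 - 5), the total can be raised
# to at most 15, and a real activation lasts up to 90 days plus a 2-day grace period.

from dataclasses import dataclass

MAX_TESTS = 15
REAL_ACTIVATION_DAYS = 90
GRACE_DAYS = 2

@dataclass
class CbuRecord:
    contract_years: int      # 1 - 5 years
    extra_tests: int = 0     # additional purchased test activations

    def __post_init__(self) -> None:
        if not 1 <= self.contract_years <= 5:
            raise ValueError("CBU contract duration is 1 - 5 years")

    @property
    def test_activations(self) -> int:
        """Included tests equal the contract years, capped at 15 tests in total."""
        return min(self.contract_years + self.extra_tests, MAX_TESTS)

    @property
    def max_activation_days(self) -> int:
        return REAL_ACTIVATION_DAYS + GRACE_DAYS

print(CbuRecord(contract_years=3).test_activations)                  # 3
print(CbuRecord(contract_years=5, extra_tests=5).test_activations)   # 10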
The CBU record describes the following properties that are related to the CBU:
Number of CP CBUs allowed to be activated
Number of IFL CBUs allowed to be activated
Number of ICF CBUs allowed to be activated
Number of zIIP CBUs allowed to be activated
Number of SAP CBUs allowed to be activated
Number of additional CBU tests that are allowed for this CBU record
Number of total CBU years ordered (duration of the contract)
Expiration date of the CBU contract
The record content of the CBU configuration is documented in IBM configurator output, which
is shown in Example 2-1. In the example, one CBU record is made for a five-year CBU
contract without more CBU tests for the activation of one CP CBU.
In Example 2-2, a second CBU record is added to the configuration for two CP CBUs, two IFL
CBUs, and two zIIP CBUs, with five more tests and a five-year CBU contract. The result is
that a total number of 10 years of CBU ordered: Five years in the first record and five years in
the second record. The two CBU records are independent and can be activated individually.
Five more CBU tests were requested. Because a total of five years are contracted for a total
of three CP CBUs (two IFL CBUs and two zIIP CBUs), they are shown as 15, 10, 10, and 10
CBU years for their respective types.
Remember: CBU for CPs, IFLs, ICFs, zIIPs, and SAPs can be activated together with
On/Off Capacity on Demand temporary upgrades. Both facilities can be on a single
system, and can be activated simultaneously.
Unassigned IFLs are ignored because they are considered spares and are available for use
as CBU. When an unassigned IFL is converted to an assigned IFL, or when more PUs are
characterized as IFLs, the number of CBUs of any type that can be activated is decreased.
When the addition of temporary capacity that is requested by On/Off CoD for CPs results in a
cross-over from one capacity identifier range to another, the total number of CPs active when
the temporary CPs are activated is equal to the number of temporary CPs ordered. For
example, when a CPC with model capacity identifier 504 specifies six CP6 temporary CPs
through On/Off CoD, the result is a CPC with model capacity identifier 606. A cross-over does
not necessarily mean that the CP count for the extra temporary capacity increases. The same
504 might temporarily be upgraded to a CPC with model capacity identifier 704. In this case,
the number of CPs does not increase, but more temporary capacity is achieved.
For more information about temporary capacity increases, see Chapter 8, “System upgrades”
on page 315.
The water-cooled system is still an option for the z14 server. The Top Exit Power feature is
available for the z14 server. Combined with the Top Exit I/O Cabling feature, it gives you more
options when you are planning your computer room cabling. For more information about the
z14 Top Exit features, see 10.3, “Physical planning” on page 398.
Larger systems that have a minimum of four BPR pairs installed must have four
power cords installed. Systems that specify four power cords can be started with two power
cords on the same side, with sufficient power to keep the system running.
Power cords attach to a three-phase, 50/60 Hz, 200 - 480 V AC power source, or a 380 - 520
V DC power source.
A Balanced Power Plan Ahead feature is available for future growth, which helps to ensure
adequate and balanced power for all possible configurations. With this feature, system
downtime for upgrading a server is eliminated by including the maximum power requirements
in terms of BPRs and power cords to your installation.
For ancillary equipment, such as the Hardware Management Console, its display, and its
switch, more single phase outlets are required.
The power requirements depend on the cooling facility that is installed, and on the number of
CPC drawers and I/O units that are installed. For more information about the requirements
that are related to the number of installed I/O units, see 10.1.2, “Power requirements and
consumption” on page 391.
[Figure: typical power distribution options - conventional AC power (480 V AC through transfer switch, UPS, and PDU step-down to the server power supplies) with losses of approximately 13%, high-voltage DC power (520 V DC) with losses of approximately 4%, and direct 520 V DC input]
The z14 bulk power supplies were modified to support HV DC, so the only difference in the
shipped hardware to implement this option is the DC power cords. Because HV DC is a new
technology, multiple proposed standards are available.
The IBF further enhances the robustness of the power design, which increases power line
disturbance immunity. It provides battery power to preserve processor data during a loss of
power on all power feeds from the computer room. The IBF can hold power briefly during a
brownout, or for orderly shutdown for a longer outage. For information about the hold times,
which depend on the I/O configuration and amount of CPC drawers, see 10.1.4, “Internal
Battery Feature” on page 396.
Tip: The exact power consumption for your system varies. The object of the tool is to
estimate the power requirements to aid you in planning for your system installation. Actual
power consumption after installation can be confirmed by using the HMC Monitors
Dashboard task.
2.8.5 Cooling
The PU SCMs are cooled by a cold plate that is connected to the internal water-cooling loop.
The SC SCMs are air-cooled. In an air-cooled system, the radiator unit dissipates the heat
from the internal water loop with air. The radiator unit provides improved availability with N+1
pumps and blowers. The WCUs are fully redundant in an N+1 arrangement.
Air-cooled models
In z14 servers, the CPC drawer, SC SCMs, PCIe I/O drawers, I/O drawers, and power
enclosures are all cooled by forced air with blowers that are controlled by the Move Device
Assembly (MDA).
The PU SCMs in the CPC drawers are cooled by water. The internal closed water loop
removes heat from PU SCMs by circulating water between the radiator heat exchanger and
the cold plate that is mounted on the PU SCMs. For more information, see 2.8.6, “Radiator
Unit” on page 80.
Although the PU SCMs are cooled by water, the heat is exhausted into the room from the
radiator heat exchanger by forced air with blowers. At the system level, z14 servers are still
air-cooled systems.
Unlike the radiator in air-cooled models, a WCU has two water loops: An internal closed water
loop and an external (chilled) water loop. The external water loop connects to the
client-supplied building’s chilled water. The internal water loop circulates between the WCU
heat exchanger and the PU SCMs cold plates. The loop takes heat away from the PU SCMs
and transfers it to the external water loop in the WCU’s heat exchanger. For more information,
see 2.8.7, “Water-cooling unit” on page 82.
In addition to the PU SCMs, the internal water loop circulates through two heat exchangers
that are in the path of the exhaust air in the rear of the frames. These heat exchangers
remove approximately 60% - 65% of the residual heat from the I/O drawers, PCIe I/O
drawers, the air-cooled logic in the CPC drawers, and the power enclosures. Almost
two-thirds of the total heat that is generated can be removed from the room by the chilled
water.
The selection of air-cooled models or water-cooled models is done when ordering, and the
appropriate equipment is factory-installed. An MES (conversion) from an air-cooled model to
a water-cooled model and vice versa is not allowed.
The water pumps, manifold assembly, radiator assembly (which includes the heat exchanger),
and blowers are the main components of the z14 RU, as shown in Figure 2-28.
[Figure: water-cooling unit (WCU) loops - the customer chilled water system connects to the WCU heat exchanger, and an internal closed water loop with pump circulates between the heat exchanger and the PU SCM cold plates]
z14 servers operate with two fully redundant WCUs. These water-cooling units have each
their own facility feed and return water connections. If water is interrupted to one of the units,
the other unit picks up the entire load, and the server continues to operate without
interruption. You must provide independent redundant water loops to the water-cooling units
to obtain full redundancy.
The internal circulating water is conditioned water that is added to the radiator unit during
system installation with the Fill and Drain Tool (FC 3380). The FDT is included with new z14
servers. However, if you have an FDT from a zEC12 (FC 3378) in the data center, you can
order an upgrade kit (FC 3379) to have the same equipment as in the FC 3380, and it can be
used for the zEC12, z13, and z14 servers. The FDT is used to provide the internal water at
the installation and for maintenance, and to remove it at discontinuance. The FDT is shown in
Figure 2-27 on page 80.
In addition to the PU SCMs cold plates, the internal water loop circulates through these two
heat exchangers. These exchangers are in the path of the exhaust air in the rear of the
frames. These heat exchangers remove approximately 65% of the residual heat from the I/O
drawers, PCIe I/O drawer, the air-cooled logic in the CPC drawer, and the power enclosures.
The goal is for two-thirds of the total heat that is generated to be removed from the room by
the chilled water.
If one client water supply or one WCU fails, the remaining feed maintains PU SCM cooling.
The WCUs and the associated drive card are concurrently replaceable. In addition, the heat
exchangers can be disconnected and removed from the system concurrently.
[Table: heat load transferred to water, as a percentage of the total system heat load, by inlet air temperature]
The water-cooling option cannot be installed in the field. Therefore, you must carefully
consider the present and future computer room and CPC configuration options before you
decide which cooling option to order. For more information, see 10.1.3, “Cooling
requirements” on page 393.
2.9 Summary
All aspects of the z14 structure are listed in Table 2-18.
(Values are listed for models M01 / M02 / M03 / M04 / M05.)
Number of SCMs: 6 / 12 / 18 / 24 / 28
Standard SAPs: 5 / 10 / 15 / 20 / 23
Number of IFPs: 1 for all models
Enabled memory sizes (GB): 320 - 8000 / 320 - 16192 / 320 - 24384 / 320 - 32576 / 320 - 32576
Flexible memory sizes (GB): N/A / 320 - 8000 / 320 - 16192 / 320 - 24384 / 320 - 24384
L2 cache per PU: 2/4 MB (I/D) for all models
Clock frequency: 5.2 GHz for all models
I/O interface per IFB cable: 6 GBps for all models
I/O interface per PCIe cable: 16 GBps for all models
Number of support elements: 2 for all models
Optional external DC power: 520 V / 380 V for all models
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
z14 servers offer high levels of reliability, availability, serviceability (RAS), resilience, and
security. They fit into the IBM strategy in which mainframes play a central role in creating an
infrastructure for cloud, analytics, and mobile, underpinned by security. The z14 server is
designed so that everything around it, such as operating systems, middleware, storage,
security, and network technologies that support open standards, helps you achieve your
business goals.
The modular CPC drawer design aims to reduce, or in some cases even eliminate, planned
and unplanned outages. The design does so by offering concurrent repair, replace, and
upgrade functions for processors, memory, and I/O. For more information about the z14 RAS
features, see Chapter 9, “Reliability, availability, and serviceability” on page 363.
z14 servers continue the line of mainframe processors that are compatible with earlier
generations. This evolution brings the following processor design enhancements:
A total of 10 cores per CP chip
Pipeline optimization
Improved SMT and SIMD
Better branch prediction
Improved co-processor functionality
It uses 24-bit, 31-bit, and 64-bit addressing modes, multiple arithmetic formats, and multiple
address spaces for robust interprocess security.
1 Federal Information Processing Standard (FIPS) 140-2 Security Requirements for Cryptographic Modules
The z14 has up to 20 memory controller units (MCUs) (five MCUs per CPC drawer). The
configuration uses five-channel redundant array of independent memory (RAIM) protection,
with dual inline memory modules (DIMM) bus cyclic redundancy check (CRC) error retry.
The cache hierarchy (L1, L2, L3, and L4) is implemented with embedded dynamic random
access memory (eDRAM) caches. Until recently, eDRAM was considered to be too slow for
this use. However, a breakthrough in technology that was made by IBM eliminated that
limitation. In addition, eDRAM offers higher density, less power utilization, fewer soft errors,
and better performance. Concurrent maintenance allows dynamic central processing complex
(CPC) drawer add and repair.
z14 servers use CMOS Silicon-on-Insulator (SOI) 14 nm chip technology, with advanced low
latency pipeline design, which creates high-speed yet power-efficient circuit designs. The PU
SCM has a dense packaging, which allows closed water loop cooling. The heat exchange
from the closed loop is air-cooled by a radiator unit (RU) or optionally, water-cooled by a
water-cooling unit (WCU). The water-cooling option can lower the total power consumption of
the system. This benefit is significant for larger configurations. For more information, see
2.8.1, “Power and cooling” on page 77.
The z14 cache levels and memory hierarchy are shown in Figure 3-1.
While L1, L2, and L3 caches are implemented on the CP SCM, the fourth cache level (L4) is
implemented within the system controller (SC) SCM. One L4 cache is present in each CPC
drawer, which is shared by all CP SCMs. The cache structure of the z14 has the following
characteristics:
Larger L1, L2, and L3 caches (more data closer to the core).
L1 and L2 caches use eDRAM, and are private for each PU core.
The L2-L3 interface has a new fetch-cancel protocol and revised L2 Least Recently Used
(LRU) demote handling.
L3 cache also uses eDRAM and is shared by all 10 cores within the PU chip. Each CPC
drawer has five (M01 - M04) or six L3 caches (M05). Therefore, a four-CPC drawer system
Model M05 has 24 caches, which results in 3072 MB (24 x 128 MB) of this shared PU
chip-level cache. For availability and reliability, L3 cache now implements symbol ECC.
L4 cache also uses eDRAM, and is shared by all PU chips on the CPC drawer. Each L4
cache has 672 MB inclusive of L3’s, 42w Set Associative and 256 bytes cache line size. A
four-CPC drawer system has 2688 MB (4 x 672 MB) of shared L4 cache.
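The drawer and system totals in the preceding list follow directly from the per-chip and per-drawer figures. The short Python sketch below merely reproduces that arithmetic for a fully configured model M05 and is illustrative only.

# Reproduces the shared-cache arithmetic quoted above: each PU chip carries a
# 128 MB shared L3 cache, each CPC drawer carries one 672 MB L4 cache, and a
# model M05 has six PU chips per drawer and four drawers.

L3_PER_CHIP_MB = 128
L4_PER_DRAWER_MB = 672

def shared_cache_totals(drawers: int, pu_chips_per_drawer: int) -> tuple[int, int]:
    """Return (total shared L3 in MB, total shared L4 in MB) for a configuration."""
    total_l3 = drawers * pu_chips_per_drawer * L3_PER_CHIP_MB
    total_l4 = drawers * L4_PER_DRAWER_MB
    return total_l3, total_l4

print(shared_cache_totals(drawers=4, pu_chips_per_drawer=6))   # (3072, 2688) for model M05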
Considerations
Cache sizes are being limited by ever-diminishing cycle times because they must respond
quickly without creating bottlenecks. Access to large caches costs more cycles. Instruction
and data cache (L1) sizes must be limited because larger distances must be traveled to reach
long cache lines. This L1 access time generally occurs in one cycle, which prevents increased
latency.
Also, the distance to remote caches as seen from the microprocessor becomes a significant
factor. An example is an L4 cache that is not on the microprocessor (and might not even be in
the same CPC drawer). Although the L4 cache is rather large, several cycles are needed to
travel the distance to the cache. The node-cache topology of z14 servers is shown in
Figure 3-2.
Although large caches mean increased access latency, the new technology of CMOS 14S0
(14 nm chip lithography) and the lower cycle time allows z14 servers to increase the size of
cache levels (L1, L2, and L3) within the PU chip by using denser packaging. This design
reduces traffic to and from the shared L4 cache, which is on another chip (SC chip). Only
when a cache miss occurs in L1, L2, or L3 is a request sent to L4. L4 is the coherence
manager, which means that all memory fetches must be in the L4 cache before that data can
be used by the processor. However, in the z14 cache design, some lines of the L3 cache are
not included in the L4 cache.
To overcome the delays that are inherent in the SMP CPC drawer design and save cycles to
access the remote L4 content, keep instructions and data as close to the processors as
possible. This configuration can be managed by directing as much work of a particular LPAR
workload to the processors in the same CPC drawer as the L4 cache. This configuration is
achieved by having the IBM Processor Resource/Systems Manager (PR/SM) scheduler and
the z/OS WLM and dispatcher work together. Have them keep as much work as possible
within the boundaries of as few processors and L4 cache space (which is best within a CPC
drawer boundary) without affecting throughput and response times.
The cache structures of z14 servers are compared with the previous generation of IBM Z
servers (z13) in Figure 3-3.
Compared to z13, the z14 cache design has larger L1, L2, and L3 cache sizes. In z14
servers, more affinity exists between the memory of a partition, the L4 cache in the SC,
accessed by the two logical clusters in the same CPC drawer, and the cores in the PU. The
access time of the private cache usually occurs in one cycle. The z14 cache level structure is
focused on keeping more data closer to the PU. This design can improve system performance
on many production workloads.
HiperDispatch
To help avoid latency in a high-frequency processor design, PR/SM and the dispatcher must
be prevented from scheduling and dispatching a workload on any processor available, which
keeps the workload in as small a portion of the system as possible. The cooperation between
z/OS and PR/SM is bundled in a function called HiperDispatch. HiperDispatch uses the z14
cache topology, which features reduced cross-cluster “help” and better locality for multi-task
address spaces.
The IBM System z10® EC introduced a dramatic PU cycle time improvement. Its succeeding
generations reduced the cycle time even further, with the z196 reaching 0.192 ns (5.2 GHz)
and the zEC12 reaching 0.178 ns (5.5 GHz). Although chip lithography drove higher on chip
core and cache density, the thermal design and cache sizes added some challenges to chip
frequency evolution.
Through innovative processor design (pipeline and cache management redesigns), the IBM Z
processor performance continues to evolve. With the introduction of out-of-order execution,
ever improving branch prediction mechanism, and simultaneous multi-threading, the
processing performance was enhanced despite processor frequency variations (z13 core
runs at 5.0 GHz).
z13 servers introduced architectural extensions with instructions that reduce processor
quiesce effects, cache misses, and pipeline disruption, and increase parallelism with
instructions that process several operands in a single instruction (SIMD). The processor
architecture was further developed for z14 and includes the following features:
Optimized second-generation SMT
Enhanced SIMD instructions set
Improved Out-of-Order core execution
Improvements in branch prediction and handling
Pipeline optimization
Enhanced branch prediction structure and sequential instruction fetching
The z14 enhanced Instruction Set Architecture (ISA) includes a set of instructions that are
added to improve compiled code efficiency. These instructions optimize PUs to meet the
demands of various business and analytics workload types without compromising the
performance characteristics of traditional workloads.
SMT is supported only for Integrated Facility for Linux (IFL) and IBM Z Integrated Information
Processor (zIIP) specialty engines on z14 servers, and it requires operating system support.
An operating system with SMT support can be configured to dispatch work to a thread on a
zIIP (for eligible workloads in z/OS) or an IFL (for z/VM and Linux on Z) core in single thread
or SMT mode so that HiperDispatch cache optimization can be considered. For more
information about operating system support, see Chapter 7, “Operating system support” on
page 243.
SMT technology allows instructions from more than one thread to run in any pipeline stage at
a time. SMT can handle up to four pending translations.
Each thread has its own unique state information, such as the program status word (PSW)
and registers. The simultaneous threads cannot necessarily run
instructions instantly and must at times compete to use certain core resources that are
shared between the threads. In some cases, threads can use shared resources that are not
experiencing competition.
3 In addition to optional SMT support for zIIPs and IFLs, z14 introduced SMT as default for SAPs (not user controllable).
Figure 3-6 Two threads running simultaneously on the same processor core
The use of SMT provides more efficient use of the processors’ resources and helps address
memory latency, which results in overall throughput gains. The active threads share core
resources in space (such as data and instruction caches, TLBs, and branch history tables) and
in time (pipeline slots, execution units, and address translators).
Although SMT increases the processing capacity, the performance in some cases might be
superior if a single thread is used. Enhanced hardware monitoring supports measurement
through CPUMF for thread usage and capacity.
For workloads that need maximum thread speed, the partition’s SMT mode can be turned off.
For workloads that need more throughput to decrease the dispatch queue size, the partition’s
SMT mode can be turned on.
SMT use is functionally transparent to middleware and applications, and no changes are
required to run them in an SMT-enabled partition.
SIMD provides the next phase of enhancements of IBM Z analytics capability. The set of
SIMD instructions are a type of data parallel computing and vector processing that can
decrease the amount of code and accelerate code that handles integer, string, character, and
floating point data types. The SIMD instructions improve performance of complex
mathematical models and allow integration of business transactions and analytic workloads
on IBM Z servers.
The collection of elements in a register is called a vector. A single instruction operates on all
of the elements in the register. Instructions include a non-destructive operand encoding that
allows the addition of register vector A and register vector B and stores the result in a third
register vector, T (T = A + B).
A schematic representation of a SIMD instruction with 16-byte size elements in each vector
operand is shown in Figure 3-7.
Figure 3-7 Schematic representation of add SIMD instruction with 16 elements in each vector
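A plain scalar loop makes it easier to see what the single SIMD instruction in Figure 3-7 replaces: 16 independent element additions performed as one operation, with the result written to a target vector. The following sketch is only a Python illustration of that element-wise behavior; it is not the z/Architecture vector instruction itself.

# Plain illustration of the element-wise add that one SIMD vector instruction
# performs: the 16 one-byte elements of VA are added to the 16 elements of VB,
# and the results are written to the target vector VT in a single operation.

ELEMENTS = 16

def vector_add(va: list[int], vb: list[int]) -> list[int]:
    """Element-wise add of two 16-element vectors (modulo 256 for byte elements)."""
    assert len(va) == len(vb) == ELEMENTS
    return [(a + b) % 256 for a, b in zip(va, vb)]

va = list(range(ELEMENTS))    # 0, 1, ..., 15
vb = [10] * ELEMENTS          # add 10 to every element
vt = vector_add(va, vb)
print(vt)                     # [10, 11, ..., 25]: all 16 lanes computed as one operation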
The vector register file overlays the floating-point registers (FPRs), as shown in Figure 3-8.
The FPRs use the first 64 bits of the first 16 vector registers, which saves hardware area and
power, and makes it easier to mix scalar and SIMD codes. Effectively, the core gets 64 FPRs,
which can further improve FP code efficiency.
Figure 3-8 Floating point registers overlaid by vector registers
For most operations, the condition code is not set. A summary condition code is used only for
a few instructions.
Program results
The Out-of-Order execution does not change any program results. Execution can occur out of
(program) order, but all program dependencies are honored, ending up with the same results
as in-order (program) execution.
This implementation requires special circuitry to make execution and memory accesses
display in order to the software. The logical diagram of a z14 core is shown in Figure 3-9 on
page 101.
Memory address generation and memory accesses can occur out of (program) order. This
capability can provide a greater use of the z14 superscalar core, and can improve system
performance.
The z14 processor unit core is a superscalar, out-of-order, SMT processor with 10 execution
units. Up to six instructions can be decoded per cycle, and up to 10 instructions or operations
can be started to run per clock cycle (<0.192 ns). The execution of the instructions can occur
out of program order, and memory address generation and memory accesses can also occur
out of program order. Each core has special circuitry to display execution and memory
accesses in order to the software.
The z14 superscalar PU core can have up to 10 instructions or operations that are running
per cycle. This technology results in shorter workload runtime.
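As a rough check of the quoted cycle time and peak rates, the following sketch derives them
from the z14 clock frequency of 5.2 GHz. The 5.2 GHz figure is an assumption here (it is the
published z14 frequency but is not restated in this paragraph); the per-cycle decode and
issue counts are the values quoted above, so the results are theoretical peaks only.

```python
# Rough arithmetic behind the "<0.192 ns" cycle time quoted above,
# assuming a 5.2 GHz core clock (published z14 frequency, an assumption here).
clock_hz = 5.2e9
cycle_ns = 1.0 / clock_hz * 1e9
print(f"cycle time ~ {cycle_ns:.3f} ns")                       # ~0.192 ns

# Theoretical peak per-cycle rates quoted for the core:
decoded_per_cycle = 6
issued_per_cycle = 10
print(f"peak decode rate ~ {decoded_per_cycle * clock_hz / 1e9:.1f} G instr/s")
print(f"peak issue rate  ~ {issued_per_cycle * clock_hz / 1e9:.1f} G ops/s")
```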
Branch prediction
If the branch prediction logic of the microprocessor makes the wrong prediction, all
instructions in the parallel pipelines are removed. The wrong branch prediction is expensive in
a high-frequency processor design. Therefore, the branch prediction techniques that are used
are important to prevent as many wrong branches as possible.
The z14 microprocessor improves the branch prediction throughput by using the new branch
prediction and instruction fetch front end.
On z14 servers, up to six instructions can be decoded per cycle and up to 10 instructions or
operations can be in execution per cycle. Execution can occur out of (program) order. These
improvements also make possible the simultaneous execution of two threads in the same
processor.
Many challenges exist in creating an efficient superscalar processor. The superscalar design
of the PU made significant strides in avoiding address generation interlock (AGI) situations.
Instructions that require information from memory locations can suffer multi-cycle delays to
get the needed memory content. Because high-frequency processors wait “faster” (spend
processor cycles more quickly while idle), the cost of getting the information might become
prohibitive.
Coprocessor units
One coprocessor unit is available for compression and cryptography on each core in the chip.
The compression engine uses static dictionary compression and expansion. The
compression dictionary uses the L1-cache (instruction cache).
The cryptography engine is used for the CPACF, which offers a set of symmetric
cryptographic functions for encrypting and decrypting of clear key operations.
Compression enhancements
The compression features the following enhancements:
Huffman compression on top of CMPSC compression (embedded in dictionary, reuse of
generators)
Order Preserving compression in B-Trees and other index structures
Faster expansion algorithms
Reduced overhead on short data
CPACF
CPACF accelerates the encrypting and decrypting of SSL/TLS transactions, virtual private
network (VPN)-encrypted data transfers, and data-storing applications that do not require
FIPS 140-2 level 4 security. The assist function uses a special instruction set for symmetrical
clear key cryptographic encryption and decryption, and for hash operations. This group of
instructions is known as the Message-Security Assist (MSA). For more information about
these instructions, see z/Architecture Principles of Operation, SA22-7832.
Decimal floating point accelerator
Base 10 arithmetic is used for most business and financial computation. Floating point
computation that is used for work that is typically done in decimal arithmetic involves frequent
data conversions and approximation to represent decimal numbers. This process makes
floating point arithmetic complex and error-prone for programmers who use it for applications
in which the data is typically decimal.
z14 servers have two DFP accelerator units per core, which improve the decimal floating
point execution bandwidth. The floating point instructions operate on newly designed vector
registers (32 new 128-bit registers).
z14 servers include new decimal floating point packed conversion facility support with the
following benefits:
Reduces code path length because extra instructions to format conversion are no longer
needed.
Packed data can be operated on in memory by all decimal instructions without using
general-purpose registers, which previously were needed only to stage data for the decimal
floating point packed conversion instructions.
Converting from packed can now force the input packed value to positive instead of
requiring a separate OI, OILL, or load positive instruction.
Converting to packed can now force a positive zero result instead of requiring ZAP
instruction.
The z14 core implements two other execution subunits for 2x throughput on BFP
(single/double precision) operations (see Figure 3-9 on page 101).
The key point is that Java and C/C++ applications tend to use IEEE BFP operations more
frequently than earlier applications. Therefore, the better the hardware implementation of this
set of instructions, the better the performance of applications.
The success rate of branch prediction contributes significantly to the superscalar aspects of
z14 servers. This success is because the architecture rules prescribe that, for successful
parallel execution of an instruction stream, the correctly predicted result of the branch is
essential.
The z14 branch prediction includes the following enhancements over z13:
Branch prediction search pipeline extended from five to six cycles to accommodate new
predictors for increased accuracy/performance.
New predictors:
– Perceptron (neural network direction predictor)
– SSCRS (hardware-based super simple call-return stack)
Capacity increases:
– Level 1 Branch Target Buffer (BTB1): 1 K rows x 6 sets → 2 K rows x 4 sets
– Level 2 Branch Target Buffer (BTB2): 16 K rows x 6 sets → 32 K rows x 4 sets
Better power efficiency: Several structures were redesigned to maintain their accuracy
while less power is used through smart access algorithms.
New static IBM IA® regions expanded from four to eight. To conserve space, prediction
structures do not store full target addresses. Instead, they use the locality and limited
ranges of 4 GB regions of virtual instruction addresses, IA(0:31).
With the wild branch hardware facility, the last address from which a successful branch
instruction was run is kept. z/OS uses this information with debugging aids, such as the SLIP
command, to determine from where a wild branch came. It can also collect data from that
storage location. This approach decreases the number of debugging steps that are
necessary when you want to know from where the branch came.
The size of the TLB is kept as small as possible because of its short access time
requirements and hardware space limitations. Because memory sizes recently increased
significantly as a result of the introduction of 64-bit addressing, a smaller working set is
represented by the TLB.
To increase the working set representation in the TLB without enlarging the TLB, large (1 MB)
page and giant page (2 GB) support is available and can be used when appropriate. For more
information, see “Large page support” on page 123.
With the enhanced DAT-2 (EDAT-2) improvements, the IBM Z servers support 2 GB page
frames.
The new translation engine allows up to four translations pending concurrently. Each
translation step is ~2x faster, which helps level 2 guests.
Instruction fetching
Instruction fetching normally tries to get as far ahead of instruction decoding and execution as
possible because of the relatively large instruction buffers available. In the microprocessor,
smaller instruction buffers are used. The operation code is fetched from the I-cache and put in
instruction buffers that hold prefetched data that is awaiting decoding.
Instruction decoding
The processor can decode up to six instructions per cycle. The result of the decoding process
is queued and later used to form a group.
Instruction grouping
From the instruction queue, up to 10 instructions can be completed on every cycle. A
complete description of the rules is beyond the scope of this publication.
The compilers and JVMs are responsible for selecting instructions that best fit with the
superscalar microprocessor. They abide by the rules to create code that best uses the
superscalar implementation. All IBM Z compilers and JVMs are constantly updated to benefit
from new instructions and advances in microprocessor designs.
The Transaction Execution Facility provides instructions, including declaring the beginning
and end of a transaction, and canceling the transaction. TX is expected to provide significant
performance benefits and scalability by avoiding most locks. This benefit is especially
important for heavily threaded applications, such as Java.
3.5.1 Overview
All PUs on a z14 server are physically identical. When the system is initialized, one integrated
firmware processor (IFP) is allocated from the pool of PUs that is available for the entire
system. The other PUs can be characterized to specific functions (CP, IFL, ICF, zIIP, or SAP).
The function that is assigned to a PU is set by the Licensed Internal Code (LIC). The LIC is
loaded when the system is initialized at power-on reset (POR) and the PUs are characterized.
This design brings outstanding flexibility to z14 servers because any PU can assume any
available characterization. The design also plays an essential role in system availability
because PU characterization can be done dynamically, with no system outage.
For more information about software level support of functions and features, see Chapter 7,
“Operating system support” on page 243.
Concurrent upgrades
Except on a fully configured model, concurrent upgrades can be done by the LIC, which
assigns a PU function to a previously non-characterized PU. Within the CPC drawer
boundary or boundary of multiple CPC drawers, no hardware changes are required. The
upgrade can be done concurrently through the following facilities:
Customer Initiated Upgrade (CIU) for permanent upgrades
On/Off Capacity on Demand (On/Off CoD) for temporary upgrades
Capacity BackUp (CBU) for temporary upgrades
Capacity for Planned Event (CPE) for temporary upgrades
If the PU chips in the installed CPC drawers have no available remaining PUs, an upgrade
results in a model upgrade and the installation of an extra CPC drawer. However, the number
of available CPC drawers is limited to four. CPC drawer installation is nondisruptive, but takes
more time than a simple LIC upgrade.
For more information about Capacity on Demand, see Chapter 8, “System upgrades” on
page 315.
PU sparing
In the rare event of a PU failure, the failed PU’s characterization is dynamically and
transparently reassigned to a spare PU. z14 servers have two spare PUs. PUs that are not
characterized on a CPC configuration can also be used as extra spare PUs. For more
information about PU sparing, see 3.5.10, “Sparing rules” on page 120.
PU pools
PUs that are defined as CPs, IFLs, ICFs, and zIIPs are grouped in their own pools from where
they can be managed separately. This configuration significantly simplifies capacity planning
and management for LPARs. The separation also affects weight management because CP
and zIIP weights can be managed separately. For more information, see “PU weighting” on
page 110.
PUs are removed from their pools when a concurrent downgrade occurs as the result of the
removal of a CBU. They are also removed through the On/Off CoD process and the
conversion of a PU. When a dedicated LPAR is activated, its PUs are taken from the correct
pools. This process is also the case when an LPAR logically configures a PU as on, if the
width of the pool allows for it.
For an LPAR, logical PUs are dispatched from the supporting pool only. The logical CPs are
dispatched from the CP pool, logical zIIPs from the zIIP pool, logical IFLs from the IFL pool,
and the logical ICFs from the ICF pool.
PU weighting
Because CPs, zIIPs, IFLs, and ICFs have their own pools from where they are dispatched,
they can be given their own weights. For more information about PU pools and processing
weights, see the IBM Z Processor Resource/Systems Manager Planning Guide, SB10-7169.
The z14 server can be initialized in LPAR (PR/SM) mode or in Dynamic Partition Manager
(DPM) mode. For more information, see Appendix E, “IBM Dynamic Partition Manager” on
page 501.
CPs are defined as dedicated or shared. Reserved CPs can be defined to an LPAR to allow
for nondisruptive image upgrades. If the operating system in the LPAR supports the logical
processor add function, reserved processors are no longer needed. Regardless of the
installed model, an LPAR can have up to 170 logical CPs that are defined (the sum of active
and reserved logical CPs). In practice, define no more CPs than the operating system
supports.
All PUs that are characterized as CPs within a configuration are grouped into the CP pool.
The CP pool can be seen on the Hardware Management Console (HMC) workplace. Any
z/Architecture operating systems, CFCCs, and IBM zAware can run on CPs that are assigned
from the CP pool.
Granular capacity adds 90 subcapacity settings to the 170 capacity settings that are available
with full capacity CPs (CP7). Each of the 90 subcapacity settings applies to up to 33 CPs
only, independent of the model installed.
Information about CPs in the remainder of this chapter applies to all CP capacity settings,
unless indicated otherwise. For more information about granular capacity, see 2.7, “Model
configurations” on page 70.
IFL pool
All PUs that are characterized as IFLs within a configuration are grouped into the IFL pool.
The IFL pool can be seen on the HMC workplace.
IFLs do not change the model capacity identifier of the z14 server. Software product license
charges that are based on the model capacity identifier are not affected by the addition of
IFLs.
Unassigned IFLs
An IFL that is purchased but not activated is registered as an unassigned IFL (FC 1937).
When the system is later upgraded with another IFL, the system recognizes that an IFL was
purchased and is present.
ICFs exclusively run CFCC. ICFs do not change the model capacity identifier of the z14
server. Software product license charges that are based on the model capacity identifier are
not affected by the addition of ICFs.
All ICFs within a configuration are grouped into the ICF pool. The ICF pool can be seen on the
HMC workplace.
After the image is dispatched, “poll for work” logic in CFCC and z/OS can be used largely as
is to locate and process the work. The new interrupt expedites the redispatching of the
partition.
LPAR presents these Coupling Thin Interrupts to the guest partition, so CFCC and z/OS both
require interrupt handler support that can deal with them. CFCC also changes to relinquish
control of the processor when all available pending work is exhausted, or when the LPAR
undispatches it off the shared processor, whichever comes first.
CF processor combinations
A CF image can have one of the following combinations that are defined in the image profile:
Dedicated ICFs
Shared ICFs
Dedicated CPs
Shared CPs
Shared ICFs add flexibility. However, running only with shared coupling facility PUs (ICFs or
CPs) is not a preferable production configuration. It is preferable for a production CF to
operate by using dedicated ICFs. With CFCC Level 19 (and later; z14 servers run CFCC level
22), Coupling Thin Interrupts are available, and dedicated engines continue to be
recommended to obtain the best coupling facility performance.
The LPAR processing weights are used to define how much processor capacity each CF
image can have. The capped option can also be set for a test CF image to protect the
production environment.
Connections between these z/OS and CF images can use internal coupling links to avoid the
use of real (external) coupling links, and get the best link bandwidth available.
Dynamic CF dispatching
The dynamic coupling facility dispatching function has a dispatching algorithm that you can
use to define a backup CF in an LPAR on the system. When this LPAR is in backup mode, it
uses few processor resources. When the backup CF becomes active, only the resources that
are necessary to provide coupling are allocated.
CFCC Level 19 introduced Coupling Thin Interrupts and the new DYNDISP specification. It
allows more environments with multiple CF images to coexist in a server, and to share CF
engines with reasonable performance. For more information, see 3.9.3, “Dynamic CF
dispatching” on page 141.
To improve CF processor scaling for the customer’s CF images and to make effective use of
more processors as the sysplex workload increases, CF work management and dispatcher
provide the following improvements (z14):
Non-prioritized (FIFO-based) work queues, which avoid the overhead of maintaining ordered
queues in the CF.
A zIIP enables eligible z/OS workloads to have a portion of their work directed to the zIIP.
The zIIPs do not increase the MSU value of the processor and therefore do not affect IBM
software license charges.
z14 is the second generation of IBM Z processors to support SMT. z14 servers implement two
threads per core on IFLs and zIIPs. SMT must be enabled at the LPAR level and supported by
the z/OS operating system. SMT was enhanced for z14 and it is enabled for SAPs by default
(no customer intervention required).
This process reduces the CP time that is needed to run Java WebSphere applications, which
frees that capacity for other workloads.
4 IBM z Systems® Application Assist Processors (zAAPs) are not available on z14 servers. A zAAP workload is
dispatched to available zIIPs (zAAP on zIIP capability).
Figure: Logical flow of running Java application code on a zIIP. The z/OS dispatcher
dispatches the WebSphere JVM task on a standard logical processor; the JVM switches the
work to a zIIP logical processor to execute the Java application code, and switches back to a
standard logical processor (suspending the JVM task on z/OS) when the Java code
completes.
A zIIP runs only IBM authorized code. This IBM authorized code includes the z/OS JVM in
association with parts of system code, such as the z/OS dispatcher and supervisor services.
A zIIP cannot process I/O or clock comparator interruptions, and it does not support operator
controls, such as IPL.
Java application code can run on a CP or a zIIP. The installation can manage the use of CPs
so that Java application code runs only on CPs, only on zIIPs, or on both.
If zIIPs are defined to the LPAR but are not online, the zIIP-eligible work units are processed
by CPs in order of priority. The system ignores the IIPHONORPRIORITY parameter in this
case and handles the work as though it had no eligibility to zIIPs.
The following Db2 UDB for z/OS V8 or later workloads are eligible to run in Service Request
Block (SRB) mode:
Query processing of network-connected applications that access the Db2 database over a
TCP/IP connection by using IBM Distributed Relational Database Architecture™ (DRDA).
DRDA enables relational data to be distributed among multiple systems. It is native to Db2
for z/OS, which reduces the need for more gateway products that can affect performance
and availability. The application uses the DRDA requester or server to access a remote
database. IBM Db2 Connect is an example of a DRDA application requester.
Star schema query processing, which is mostly used in Business Intelligence (BI) work. A
star schema is a relational database schema for representing multidimensional data. It
stores data in a central fact table and is surrounded by more dimension tables that hold
information about each perspective of the data. For example, a star schema query joins
various dimensions of a star schema data set.
Db2 utilities that are used for index maintenance, such as LOAD, REORG, and REBUILD.
Indexes allow quick access to table rows, but over time, the databases become less
efficient and must be maintained as data in large databases is manipulated.
On a z14 server, the following workloads can also benefit from zIIPs:
z/OS Communications Server uses the zIIP for eligible Internet Protocol Security (IPSec)
network encryption workloads. This configuration requires z/OS V1R10 or later. Portions
of IPSec processing take advantage of the zIIPs, specifically end-to-end encryption with
IPSec. The IPSec function moves a portion of the processing from the general-purpose
processors to the zIIPs. In addition, to run the encryption processing, the zIIP also handles
the cryptographic validation of message integrity and IPSec header processing.
z/OS Global Mirror, formerly known as Extended Remote Copy (XRC), also uses the zIIP.
Most z/OS Data Facility Storage Management Subsystem (DFSMS) system data mover
(SDM) processing that is associated with z/OS Global Mirror can run on the zIIP. This
configuration requires z/OS V1R10 or later releases.
The first IBM user of z/OS XML system services is Db2 V9. For Db2 V9 before the z/OS
XML System Services enhancement, z/OS XML System Services non-validating parsing
was partially directed to zIIPs when used as part of a distributed Db2 request through
DRDA. This enhancement benefits Db2 V9 by making all z/OS XML System Services
non-validating parsing eligible to zIIPs. This configuration is possible when processing is
used as part of any workload that is running in enclave SRB mode.
z/OS Communications Server also allows the HiperSockets Multiple Write operation for
outbound large messages (originating from z/OS) to be run by a zIIP. Application
workloads that are based on XML, HTTP, SOAP, and Java, and traditional file transfer can
benefit.
For BI, IBM Scalable Architecture for Financial Reporting provides a high-volume,
high-performance reporting solution by running many diverse queries in z/OS batch. It can
also be eligible for zIIP.
For more information about zIIP and eligible workloads, see the IBM zIIP website.
zIIP installation
One CP must be installed with or before any zIIP is installed. On z14 servers, the zIIP-to-CP
ratio is 2:1, which means that up to 112 zIIPs on a model M05 can be characterized. The
allowable number of zIIPs for each model is listed in Table 3-1.
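The following sketch illustrates how the 2:1 rule bounds the zIIP count for a given CP count.
The function name and the 170-PU budget that it assumes for a model M05 are illustrative
assumptions only; the official per-model maximums are in Table 3-1.

```python
# Sketch of the 2:1 zIIP-to-CP ordering rule. The 170-PU budget for a
# model M05 is an assumption for illustration (see Table 3-1 for the
# official per-model values).
def max_ziips_for(cps, configurable_pus=170):
    """Upper bound on zIIPs for a given CP count: at most 2 zIIPs per CP,
    and CPs plus zIIPs cannot exceed the assumed PU budget."""
    return min(2 * cps, configurable_pus - cps)

print(max_ziips_for(56))   # 112, consistent with the figure quoted above
```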
zIIPs are orderable by using FC 1936. Up to two zIIPs can be ordered for each CP or marked
CP configured in the system. If the installed CPC drawer has no remaining unassigned PUs,
the assignment of the next zIIP might require the installation of another CPC drawer.
PUs that are characterized as zIIPs within a configuration are grouped into the zIIP pool. This
configuration allows zIIPs to have their own processing weights, independent of the weight of
parent CPs. The zIIP pool can be seen on the hardware console.
LPAR: In an LPAR, as many zIIPs as are available can be defined together with at least
one CP.
Standard SAPs per model: M01 = 5, M02 = 10, M03 = 15, M04 = 20, M05 = 23
SAP configuration
A standard SAP configuration provides a well-balanced system for most environments.
However, some application environments have high I/O rates, typically Transaction
Processing Facility (TPF) environments. In this case, more SAPs can be ordered. Assigning
more SAPs can increase the capability of the channel subsystem to run I/O operations. In
z14 systems, the number of SAPs can be greater than the number of CPs. However, the total
of optional SAPs plus standard SAPs cannot exceed 128.
By using reserved processors, you can define more logical processors than the number of
available CPs, IFLs, ICFs, and zIIPs in the configuration to an LPAR. This process makes it
possible to configure online, nondisruptively, more logical processors after more CPs, IFLs,
ICFs, and zIIPs are made available concurrently. They can be made available with one of the
Capacity on-demand options.
The maximum number of reserved processors that can be defined to an LPAR depends on
the number of logical processors that are defined. The maximum number of logical
processors plus reserved processors is 170. If the operating system in the LPAR supports the
logical processor add function, reserved processors are no longer needed.
Do not define more active and reserved processors than the operating system for the LPAR
can support. For more information about logical processors and reserved processors and
their definitions, see 3.7, “Logical partitioning” on page 125.
The integrated firmware processor (IFP) also is initialized at POR. The IFP supports
Resource Group (RG) LIC to provide native
PCIe I/O feature management and virtualization functions. For more information, see
Appendix C, “Native Peripheral Component Interconnect Express” on page 469.
The PU assignment is based on CPC drawer plug ordering. The CPC drawers are populated
from the bottom upward. This process defines the low-order and the high-order CPC
drawers as follows:
CPC drawer 1: Plug order 1 (low-order CPC drawer)
CPC drawer 2: Plug order 2
CPC drawer 3: Plug order 3
CPC drawer 4: Plug order 4 (high-order CPC drawer)
These rules are intended to isolate, as much as possible, processors that are used by
different operating systems on different CPC drawers, and even on different PU chips. This
configuration ensures that different operating systems do not use the same shared caches.
For example, CPs and zIIPs are all used by z/OS, and can benefit by using the same shared
caches. However, IFLs are used by z/VM and Linux, and ICFs are used by CFCC. Therefore,
for performance reasons, the assignment rules prevent them from sharing L3 and L4 caches
with z/OS processors.
This initial PU assignment, which is done at POR, can be dynamically rearranged by an LPAR
by swapping an active core to a core in a different PU chip in a different CPC drawer or node
to improve system performance. For more information, see “LPAR dynamic PU reassignment”
on page 130.
When a CPC drawer is added concurrently after POR and new LPARs are activated, or
processor capacity for active partitions is dynamically expanded, the extra PU capacity can
be assigned from the new CPC drawer. The processor unit assignment rules consider the
newly installed CPC drawer only after the next POR.
Systems with a failed PU for which no spare is available call home for a replacement. A
system with a failed PU that is spared and requires an SCM to be replaced (referred to as a
pending repair) can still be upgraded when sufficient PUs are available.
With transparent sparing, the status of the application that was running on the failed
processor is preserved. The application continues processing on a newly assigned CP, IFL,
ICF, zIIP, SAP, or IFP (allocated to one of the spare PUs) without client intervention.
Application preservation
If no spare PU is available, application preservation (z/OS only) is started. The state of the
failing processor is passed to another active processor that is used by the operating system.
Through operating system recovery services, the task is resumed successfully (in most
cases, without client intervention).
3.6.1 Overview
The z14 memory design also provides flexibility, high availability, and the following upgrades:
Concurrent memory upgrades if the physically installed capacity is not yet reached
z14 servers can have more physically installed memory than the initial available capacity.
Memory upgrades within the physically installed capacity can be done concurrently by LIC,
and no hardware changes are required. However, memory upgrades cannot be done
through CBU or On/Off CoD.
Concurrent memory upgrades if the physically installed capacity is reached
Physical memory upgrades require a CPC drawer to be removed and reinstalled after
replacing its memory cards. Except for a model M01, the
combination of enhanced drawer availability and the flexible memory option allows you to
concurrently add memory to the system. For more information, see 2.4.5, “Drawer
replacement and memory” on page 62, and 2.4.7, “Flexible Memory Option” on page 63.
When the total capacity that is installed has more usable memory than required for a
configuration, the LIC Configuration Control (LICCC) determines how much memory is used
from each processor drawer. The sum of the LICCC provided memory from each CPC drawer
is the amount that is available for use in the system.
Memory allocation
When the system is activated by using a POR, PR/SM determines the total installed memory
and the customer-enabled memory. Later in the process, during LPAR activation, PR/SM
assigns and allocates memory to each partition according to its image profile.
PR/SM has control over all physical memory, and can make physical memory available to the
configuration when a CPC drawer is added.
In older IBM Z processors, memory allocation was striped across the available CPC drawers
because relatively fast connectivity existed between the drawers. Splitting the work between
all of the memory controllers smoothed performance variability.
With z14 servers, this process occurs whenever the configuration changes, such as in the
following circumstances:
Activating or deactivating an LPAR
Changing the LPARs processing weights
Upgrading the system through a temporary or permanent record
Downgrading the system through a temporary record
PR/SM schedules a global reoptimization of the resources in use. It does so by looking at all
the partitions that are active and prioritizing them based on their processing entitlement and
weights, which creates a high and low priority rank. Then, the resources, such as logical
processors and memory, can be moved from one CPC drawer to another to address the
priority ranks that were created.
When partitions are activated, PR/SM tries to find a home assignment CPC drawer, home
assignment node, and home assignment chip for the logical processors that are defined to
them. The PR/SM goal is to allocate all the partition logical processors and memory to a
single CPC drawer (the home drawer for that partition).
If all logical processors can be assigned to a home drawer but the partition-defined memory
is greater than what is available in that drawer, the excess memory is allocated on
another CPC drawer. If all the logical processors cannot fit in one CPC drawer, the remaining
logical processors spill to another CPC drawer. When that overlap occurs, PR/SM stripes the
memory (if possible) across the CPC drawers where the logical processors are assigned.
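The following Python sketch is a much-simplified illustration of this placement policy, not the
actual PR/SM algorithm: it chooses a home drawer, assigns logical processors there first, and
spreads memory across the drawers that received logical processors. All names, drawer
sizes, and units are hypothetical.

```python
# Simplified sketch (not the actual PR/SM algorithm) of the "home drawer"
# placement described above: put as much of a partition as possible in one
# CPC drawer, spill the remainder, and place memory preferentially on the
# drawers that hold the partition's logical processors.

def place_partition(drawers, logical_cps, memory_gb):
    """drawers: list of dicts with free 'cores' and 'mem_gb' (hypothetical units)."""
    # Pick the drawer with the most free resources as the home drawer.
    home = max(range(len(drawers)),
               key=lambda i: (drawers[i]["cores"], drawers[i]["mem_gb"]))
    placement = {"home_drawer": home, "cpu": {}, "memory": {}}

    # Assign logical processors, spilling to other drawers if the home is full.
    remaining_cps = logical_cps
    for i in [home] + [i for i in range(len(drawers)) if i != home]:
        take = min(remaining_cps, drawers[i]["cores"])
        if take:
            placement["cpu"][i] = take
            drawers[i]["cores"] -= take
            remaining_cps -= take
        if remaining_cps == 0:
            break

    # Place memory on the drawers that received logical processors first
    # (the real function stripes it; here it simply fills in order).
    cpu_drawers = list(placement["cpu"])
    remaining_mem = memory_gb
    for i in cpu_drawers + [i for i in range(len(drawers)) if i not in cpu_drawers]:
        take = min(remaining_mem, drawers[i]["mem_gb"])
        if take:
            placement["memory"][i] = take
            drawers[i]["mem_gb"] -= take
            remaining_mem -= take
        if remaining_mem == 0:
            break
    return placement

drawers = [{"cores": 30, "mem_gb": 2000}, {"cores": 30, "mem_gb": 2000}]
print(place_partition(drawers, logical_cps=12, memory_gb=2500))
# -> all 12 logical CPs on the home drawer; 2000 GB there, 500 GB spilled
```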
The process of reallocating memory is based on the memory copy/reassign function, which is
used to allow enhanced drawer availability (EDA) and concurrent drawer replacement
(CDR)5. This process was enhanced for z13 and z13s to provide more efficiency and speed
to the process without affecting system performance.
z14 implements a faster dynamic memory reallocation mechanism, which is especially useful
during service operations (EDA and CDR). PR/SM controls the reassignment of the content
of a specific physical memory array in one CPC drawer to a physical memory array in another
CPC drawer. To accomplish this task, PR/SM uses all the available physical memory in the
system. This memory includes the memory that is not in use by the system that is available
but not purchased by the client, and the planned memory options, if installed.
Because of the memory allocation algorithm, systems that undergo many miscellaneous
equipment specification (MES) upgrades for memory can have different memory mixes and
quantities in all processor drawers of the system. If the memory fails, it is technically feasible
to run a POR of the system with the remaining working memory resources. After the POR
completes, the memory distribution across the processor drawers is different, as is the total
amount of available memory.
5 In previous IBM Z generations (before z13), these service operations were known as enhanced book availability
(EBA) and concurrent book repair (CBR).
The TLB reduces the amount of time that is required to translate a virtual address to a real
address. This translation is done by dynamic address translation (DAT) when it must find the
correct page for the correct address space. Each TLB entry represents one page. As with
other buffers or caches, lines are discarded from the TLB on a least recently used (LRU)
basis.
The worst-case translation time occurs when a TLB miss occurs and the segment table
(which is needed to find the page table) and the page table (which is needed to find the entry
for the particular page in question) are not in cache. This case involves two complete real
memory access delays plus the address translation delay. The duration of a processor cycle
is much shorter than the duration of a memory cycle, so a TLB miss is relatively costly.
It is preferable to have addresses in the TLB. With 4 K pages, holding all of the addresses for
1 MB of storage takes 256 TLB lines. When 1 MB pages are used, it takes only one TLB line.
Therefore, large page size users have a much smaller TLB footprint.
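The following sketch works through the arithmetic behind these numbers: the count of TLB
entries that is needed to map 1 MB of storage at each supported page size, assuming one
TLB entry per page (as stated above).

```python
# Worked numbers behind the paragraph above: TLB entries needed to map
# 1 MB of storage at each page size, with one TLB entry per page.
MB = 1024 * 1024
GB = 1024 * MB

for label, page_size in [("4 KB page", 4 * 1024),
                         ("1 MB large page", 1 * MB),
                         ("2 GB giant page", 2 * GB)]:
    entries = max(1, MB // page_size)
    print(f"{label:16}: {entries:4d} TLB entries to map 1 MB")
# 4 KB pages -> 256 entries; a 1 MB large page (or a 2 GB giant page)
# covers the same 1 MB with a single entry.
```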
Large pages allow the TLB to better represent a large working set and suffer fewer TLB
misses by allowing a single TLB entry to cover more address translations.
Users of large pages are better represented in the TLB and are expected to see performance
improvements in elapsed time and processor usage. These improvements are because DAT
and memory operations are part of processor busy time even though the processor waits for
memory operations to complete without processing anything else in the meantime.
To overcome the processor usage that is associated with creating a 1 MB page, a process
must run for some time. It also must maintain frequent memory access to keep the pertinent
addresses in the TLB.
Short-running work does not overcome the processor usage. Short processes with small
working sets are expected to receive little or no improvement. Long-running work with high
memory-access frequency is the best candidate to benefit from large pages.
Long-running work with low memory-access frequency is less likely to maintain its entries in
the TLB. However, when it does run, few address translations are required to resolve all of the
memory it needs. Therefore, a long-running process can benefit even without frequent
memory access.
Weigh the benefits of whether something in this category must use large pages as a result of
the system-level costs of tying up real storage. A balance exists between the performance of
a process that uses large pages and the performance of the remaining work on the system.
On z14 servers, 1 MB large pages become pageable if Virtual Flash Memory6 is available
and enabled. They are available only for 64-bit virtual private storage, such as virtual memory
that is above 2 GB.
It is easy to assume that increasing the TLB size is a feasible option to deal with TLB-miss
situations. However, this process is not as straightforward as it seems. As the size of the TLB
increases, so does the processor usage that is involved in managing the TLB’s contents.
6 Virtual Flash Memory replaced IBM zFlash Express for z14. No carry forward of zFlash Express exists.
Main storage can be accessed by all processors, but cannot be shared between LPARs. Any
system image (LPAR) must include a defined main storage size. This defined main storage is
allocated exclusively to the LPAR during partition activation.
The fixed size of the HSA eliminates planning for future expansion of the HSA because the
hardware configuration definition (HCD)/input/output configuration program (IOCP) always
reserves space for the following items:
Six channel subsystems (CSSs)
A total of 15 LPARs in each of the first 5 CSSs and 10 LPARs in the sixth CSS, for a total of 85 LPARs
Subchannel set 0 with 63.75-K devices in each CSS
Subchannel set 1 with 64-K devices in each CSS
Subchannel set 2 with 64-K devices in each CSS
Subchannel set 3 with 64-K devices in each CSS
The HSA has sufficient reserved space to allow for dynamic I/O reconfiguration changes to
the maximum capability of the processor.
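A small worked example of these reservations follows. It assumes the usual convention that
the 63.75-K and 64-K device counts are multiples of 1,024 devices; the variable names are
illustrative only.

```python
# Arithmetic behind the HSA reservations listed above: LPAR count across
# the six channel subsystems and reserved subchannels (devices) per CSS.
lpars = 15 * 5 + 10                       # 15 LPARs in each of 5 CSSs + 10 in the sixth
set0 = int(63.75 * 1024)                  # subchannel set 0: 63.75-K devices
sets_1_to_3 = 3 * 64 * 1024               # subchannel sets 1-3: 64-K devices each
print(lpars)                              # 85 LPARs
print(set0 + sets_1_to_3)                 # 261,888 reserved subchannels per CSS
```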
3.7.1 Overview
Logical partitioning is a function that is implemented by the PR/SM on z14. z14 runs in LPAR,
or DPM mode. DPM provides a dynamic GUI for managing PR/SM. Therefore, all system
aspects are controlled by PR/SM functions.
PR/SM is aware of the processor drawer structure on z14 servers. However, LPARs do not
feature this awareness. LPARs have resources that are allocated to them from various
physical resources. From a systems standpoint, LPARs have no control over these physical
resources, but the PR/SM functions do have this control.
PR/SM manages and optimizes allocation and the dispatching of work on the physical
topology. Most physical topology that was handled by the operating systems is the
responsibility of PR/SM.
As described in 3.5.9, “Processor unit assignment” on page 119, the initial PU assignment is
done during POR by using rules to optimize cache usage. This step is the “physical” step,
where CPs, zIIPs, IFLs, ICFs, and SAPs are allocated on the processor drawers.
When an LPAR is activated, PR/SM builds logical processors and allocates memory for the
LPAR.
Memory allocation changed from the previous IBM Z servers. IBM System z9® memory was
spread across all books. This optimization was done by using a round-robin algorithm with
several increments per book to match the number of memory controllers (MCs) per book.
This memory allocation design is driven by performance results, which minimizes variability
for most workloads.
With z14 servers, memory allocation changed from the model that was used for the z9.
Partition memory is now allocated on a per-processor-drawer basis and striped across
processor clusters. For more information, see “Memory allocation” on page 121.
Processor drawers and node level assignments are more important because they optimize L4
cache usage. Therefore, logical processors from a specific LPAR are packed into a processor
drawer as much as possible.
PR/SM optimizes chip assignments within the assigned processor drawers (or drawers) to
maximize L3 cache efficiency. Logical processors from an LPAR are dispatched on physical
processors on the same PU chip as much as possible. The number of processors per chip
(10) matches the number of z/OS processor affinity queues that is used by HiperDispatch,
which achieves optimal cache usage within an affinity node.
PR/SM also tries to redispatch a logical processor on the same physical processor to
optimize private cache (L1 and L2) usage.
Performance can be optimized by redispatching units of work to the same processor group,
which keeps processes running near their cached instructions and data, and minimizes
transfers of data ownership among processors and processor drawers.
The nested topology is returned to z/OS by the Store System Information (STSI) instruction.
HiperDispatch uses the information to concentrate logical processors around shared caches
(L3 at PU chip level, and L4 at drawer level), and dynamically optimizes the assignment of
logical processors and units of work.
z/OS dispatcher manages multiple queues, called affinity queues, with a target number of
eight processors per queue, which fits well onto a single PU chip. These queues are used to
assign work to as few logical processors as are needed for an LPAR workload. Therefore,
even if the LPAR is defined with many logical processors, HiperDispatch optimizes this
number of processors to be near the required capacity. The optimal number of processors to
be used is kept within a processor drawer boundary, when possible.
Logical partitions
PR/SM enables z14 servers to be initialized for a logically partitioned operation, supporting up
to 85 LPARs. Each LPAR can run its own operating system image in any image mode,
independently from the other LPARs.
An LPAR can be added, removed, activated, or deactivated at any time. Changing the number
of LPARs is not disruptive and does not require a POR. Certain facilities might not be
available to all operating systems because the facilities might have software corequisites.
Each LPAR has the following resources that are the same as a real CPC:
Processors
Called logical processors, they can be defined as CPs, IFLs, ICFs, or zIIPs. They can be
dedicated to an LPAR or shared among LPARs. When shared, a processor weight can be
defined to provide the required level of processor resources to an LPAR. Also, the capping
option can be turned on, which prevents an LPAR from acquiring more than its defined
weight and limits its processor consumption.
LPARs for z/OS can have CP and zIIP logical processors. The two logical processor types
can be defined as all dedicated or all shared. The zIIP support is available in z/OS.
The weight and number of online logical processors of an LPAR can be dynamically
managed by the LPAR CPU Management function of the Intelligent Resource Director
(IRD). These functions can be used to achieve the defined goals of this specific partition
and of the overall system. The provisioning architecture of z14 servers, as described in
Chapter 8, “System upgrades” on page 315, adds a dimension to the dynamic
management of LPARs.
PR/SM is enhanced to support an option to limit the amount of physical processor
capacity that is used by an individual LPAR when a PU is defined as a general-purpose
processor (CP) or an IFL that is shared across a set of LPARs.
Memory
Memory (main storage) must be dedicated to an LPAR. The defined storage must be
available during the LPAR activation; otherwise, the LPAR activation fails.
Reserved storage can be defined to an LPAR, which enables nondisruptive memory
addition to and removal from an LPAR by using the LPAR dynamic storage reconfiguration
(z/OS and z/VM). For more information, see 3.7.5, “LPAR dynamic storage
reconfiguration” on page 135.
Channels
Channels can be shared between LPARs by including the partition name in the partition
list of a channel-path identifier (CHPID). I/O configurations are defined by the IOCP or the
HCD with the CHPID mapping tool (CMT). The CMT is an optional tool that is used to map
CHPIDs onto physical channel IDs (PCHIDs). PCHIDs represent the physical location of a
port on a card in an I/O cage, I/O drawer, or PCIe I/O drawer.
IOCP is available on the z/OS, z/VM, and z/VSE operating systems, and as a stand-alone
program on the hardware console. For more information, see IBM Z Input/Output
Configuration Program User’s Guide for ICP IOCP, SB10-7163. HCD is available on the
z/OS and z/VM operating systems. Consult the appropriate 3906DEVICE Preventive
Service Planning (PSP) buckets before implementation.
Modes of operation
The modes of operation are listed in Table 3-4. All available mode combinations, including
their operating modes and processor types, operating systems, and addressing modes, also
are summarized. Only the currently supported versions of operating systems are considered.
Table 3-4 (excerpt): CP processor type; z/VSE, Linux on IBM Z, z/TPF, and z/VM operating
systems; 64-bit addressing mode
The 64-bit z/Architecture mode has no special operating mode because the architecture
mode is not an attribute of the definable image's operating mode. The 64-bit operating
systems are in 31-bit mode at IPL and change to 64-bit mode during their initialization. The
operating system is responsible for taking advantage of the addressing capabilities that are
provided by the architectural mode.
For information about operating system support, see Chapter 7, “Operating system support”
on page 243.
General mode is also used to run the z/TPF operating system on dedicated or shared CPs
CF mode, by loading the CFCC code into the LPAR that is defined as one of the following
types:
– Dedicated or shared CPs
– Dedicated or shared ICFs
Linux only mode to run the following systems:
– A Linux on Z operating system, on either of the following types:
• Dedicated or shared IFLs
• Dedicated or shared CPs
– A z/VM operating system, on either of the following types:
• Dedicated or shared IFLs
• Dedicated or shared CPs
z/VM mode to run z/VM on dedicated or shared CPs or IFLs, plus zIIPs and ICFs
Secure Service Container (SSC) mode LPAR can run on:
– Dedicated or shared CPs
– Dedicated or shared IFLs
All LPAR modes, required characterized PUs, operating systems, and the PU
characterizations that can be configured to an LPAR image are listed in Table 3-5. The
available combinations of dedicated (DED) and shared (SHR) processors are also included.
For all combinations, an LPAR also can include reserved processors that are defined, which
allows for nondisruptive LPAR upgrades.
z/VM mode: CPs, IFLs, zIIPs, or ICFs; z/VM (V6R2 and later); all PUs must be SHR or DED
The extra channel subsystem and multiple image facility (MIF) image ID pairs (CSSID/MIFID)
can be later assigned to an LPAR for use (or later removed). This process can be done
through dynamic I/O commands by using the HCD. At the same time, required channels must
be defined for the new LPAR.
Partition profile: Cryptographic coprocessors are not tied to partition numbers or MIF IDs.
They are set up with Adjunct Processor (AP) numbers and domain indexes. These
numbers are assigned to a partition profile of a given name. The client assigns these AP
numbers and domains to the partitions and continues to have the responsibility to clear
them out when their profiles change.
LPAR dynamic PU reassignment can swap client processors of different types between
processor drawers. For example, reassignment can swap an IFL on processor drawer 1 with a
CP on processor drawer 2. Swaps can also occur between PU chips within a processor
drawer or a node and can include spare PUs. The goals are to pack the LPAR on fewer
processor drawers and also on fewer PU chips, based on the z14 processor drawers’
topology. The effect of this process is evident in dedicated and shared LPARs that use
HiperDispatch.
PR/SM and WLM work together to enforce the capacity that is defined for the group and the
capacity that is optionally defined for each individual LPAR.
Unlike traditional LPAR capping, absolute capping is designed to provide a physical capacity
limit that is enforced as an absolute (versus relative) value that is not affected by changes to
the virtual or physical configuration of the system.
Absolute capping provides an optional maximum capacity setting for logical partitions that is
specified in the absolute processors capacity (for example, 5.00 CPs or 2.75 IFLs). This
setting is specified independently by processor type (namely CPs, zIIPs, and IFLs) and
provides an enforceable upper limit on the amount of the specified processor type that can be
used in a partition.
Absolute capping is ideal for processor types and operating systems that the z/OS WLM
cannot control. Absolute capping is not intended as a replacement for defined capacity or
group capacity for z/OS, which are managed by WLM.
Absolute capping can be used with any z/OS, z/VM, or Linux on z LPAR that is running on an
IBM Z server. If specified for a z/OS LPAR, it can be used concurrently with defined capacity
or group capacity management for z/OS. When used concurrently, the absolute capacity limit
becomes effective before other capping controls.
DPM provides facilities to define and run virtualized computing systems by using a
firmware-managed environment that coordinates the physical system resources that are
shared by the partitions. The partitions’ resources include processors, memory, network,
storage, crypto, and accelerators.
DPM provides a new mode of operation for IBM Z servers that provides the following services:
Facilitates defining, configuring, and operating PR/SM LPARs in a similar way to how
someone performs these tasks on another platform.
Lays the foundation for a general IBM Z new user experience.
DPM is not another hypervisor for IBM Z servers. DPM uses the PR/SM hypervisor
infrastructure and provides an intelligent interface on top of it that allows customers to define,
use, and operate the platform virtualization without IBM Z experience or skills. For more
information about DPM, see Appendix E, “IBM Dynamic Partition Manager” on page 501.
Main storage can be dynamically assigned to expanded storage and back to main storage as
needed without a POR.
Operating systems that run as guests of z/VM can use the z/VM capability of implementing
virtual memory to guest virtual machines. The z/VM dedicated real storage can be shared
between guest operating systems.
The z14 storage allocation and usage possibilities, depending on the image mode, are listed
in Table 3-6.
7 Expanded storage is not supported on z14.
8 1 TB if an I/O drawer is installed in the z13 system (carry forward only). z14 does not support I/O drawers.
Currently, the z/VSE Network Appliance that is available on z14 and z13s servers runs in an
SSC LPAR.
An LPAR must define an amount of main storage and optionally (if not a CF image), an
amount of expanded storage. Both main storage and expanded storage can have the
following storage sizes defined:
The initial value is the storage size that is allocated to the partition when it is activated.
The reserved value is another storage capacity beyond its initial storage size that an LPAR
can acquire dynamically. The reserved storage sizes that are defined to an LPAR do not
have to be available when the partition is activated. They are predefined storage sizes to
allow a storage increase, from an LPAR point of view.
Without the reserved storage definition, an LPAR storage upgrade is a disruptive process that
requires the following steps:
1. Partition deactivation.
2. An initial storage size definition change.
3. Partition activation.
The extra storage capacity for an LPAR upgrade can come from the following sources:
Any unused available storage
Another partition that features released storage
A memory upgrade
A concurrent LPAR storage upgrade uses DSR. z/OS uses the reconfigurable storage unit
(RSU) definition to add or remove storage units in a nondisruptive way.
z/VM V6R4 and later releases support the dynamic addition of memory to a running LPAR by
using reserved storage. It also virtualizes this support to its guests. Removing storage from
the z/VM LPAR is disruptive. Removing memory from a z/VM guest is not disruptive to the
z/VM LPAR.
LPAR storage granularity information is required for LPAR image setup and for z/OS RSU
definition. LPARs are limited to a maximum size of 16 TB of main storage. However, the
maximum amount of memory that is supported by z/OS V2.3 at the time of this writing is 4 TB.
For z/VM V6R3, the limit is 1 TB; for z/VM V6R4 and V7R1, the limit is 2 TB.
With dynamic storage reconfiguration, the unused storage does not have to be continuous.
PR/SM dynamically takes offline a storage increment and makes it available to other
partitions when an operating system running on an LPAR releases a storage increment.
For more information about implementing LPAR processor management under IRD, see z/OS
Intelligent Resource Director, SG24-5952.
Figure 3-15 shows a z14 system that contains multiple z/OS sysplex partitions and an
internal CF (CF02), a z13 system that contains a stand-alone CF (CF01), and a zEC12 that
contains multiple z/OS sysplex partitions.
Parallel Sysplex technology is an enabling technology that allows highly reliable, redundant,
and robust IBM Z technology to achieve near-continuous availability. A Parallel Sysplex
makes up one or more (z/OS) operating system images that are coupled through one or more
Coupling Facilities. The images can be combined to form clusters.
A correctly configured Parallel Sysplex cluster maximizes availability in the following ways:
Continuous (application) availability: Changes can be introduced, such as software
upgrades, one image at a time, while the remaining images continue to process work. For
more information, see Parallel Sysplex Application Considerations, SG24-6523.
High capacity: 2 - 32 z/OS images in a sysplex.
Dynamic workload balancing: Because it is viewed as a single logical resource, work can
be directed to any similar operating system image in a Parallel Sysplex cluster that has
available capacity.
Systems management: The architecture provides the infrastructure to satisfy client
requirements for continuous availability. It also provides techniques for achieving simplified
systems management consistent with this requirement.
Resource sharing: Several base (z/OS) components use the CF shared storage. This
configuration enables sharing of physical resources with significant improvements in cost,
performance, and simplified systems management.
Single system image: The collection of system images in the Parallel Sysplex is displayed
as a single entity to the operator, user, and database administrator. A single system image
ensures reduced complexity from operational and definition perspectives.
N-2 support: Multiple hardware generations (normally three) are supported in the same
Parallel Sysplex. This configuration provides for a gradual evolution of the systems in the
Parallel Sysplex without having to change all of them simultaneously. Similarly, software
support for multiple releases or versions is supported.
Note: N-2 support is available for z14 M/T 3906. The IBM z14 Model ZR1 (M/T 3907)
supports only N-1 coupling connectivity.
Through state-of-the-art cluster technology, the power of multiple images can be harnessed
to work in concert on common workloads. The IBM Z Parallel Sysplex cluster takes the
commercial strengths of the platform to improved levels of system management, competitive
price for performance, scalable growth, and continuous availability.
Consideration: z14, z13, z13s, zEC12, and zBC12 servers cannot coexist in the same
sysplex with System z196 and previous systems. The introduction of z14 servers into
existing installations might require more planning.
z14 servers with CFCC Level 23 require z/OS V1R13 or later, and z/VM V6R4 or later for
virtual guest coupling.
CFCC Level 22
CFCC level 22 is delivered on the z14 servers with driver level D32. CFCC Level 22
introduces the following enhancements:
CF Enhancements:
– CF structure encryption
CF Structure encryption is transparent to CF-using middleware and applications, while
CF users are unaware of and not involved in the encryption. All data and adjunct data
that flows between z/OS and the CF is encrypted. The intent is to encrypt all data that
might be sensitive.
Internal control information and related request metadata is not encrypted, including
locks and lock structures.
z14 systems with CFCC Level 22 require z/OS V1R12 with PTFs or later, and z/VM V6R4 or
later for guest virtual coupling.
To support an upgrade from one CFCC level to the next, different levels of CFCC can be run
concurrently while the CF LPARs are running on different servers. CF LPARs that run on the
same server share the CFCC level.
z14 servers (CFCC level 22) can coexist in a sysplex with CFCC levels 19, 20, and 21.
The CFCC is implemented by using the active wait technique. This technique means that the
CFCC is always running (processing or searching for service) and never enters a wait state.
With CFCC Level 19 and Coupling Thin Interrupts, shared-processor CF can provide more
consistent CF service time and acceptable usage in a broader range of configurations. For
more information, see 3.9.3, “Dynamic CF dispatching” on page 141.
CF structure sizing changes are expected when moving from CFCC Level 17 (or earlier) to
CFCC Level 20 or later. Review the CF structure size by using the CFSizer tool.
For more information about the recommended CFCC levels, see the current exception letter
that is published on Resource Link.
The interrupt causes a shared logical processor CF partition to be dispatched by PR/SM (if it
is not already dispatched), which allows the request or signal to be processed in a more
timely manner. The CF relinquishes control when work is exhausted or when PR/SM takes
the physical processor away from the logical processor.
The use of Coupling Thin Interrupts is controlled by the new DYNDISP specification.
You can experience CF response time improvements or more consistent CF response time
when using CFs with shared engines. This improvement can allow more environments with
multiple CF images to coexist in a server, and share CF engines with reasonable
performance.
The response time for asynchronous CF requests can also be improved as a result of the use
of Coupling Thin Interrupts on the z/OS host system, regardless of whether the CF is using
shared or dedicated engines.
This capability allows ICF engines to be shared by several CF images. In this environment, it
provides faster and far more consistent CF service times. It can also provide performance
that is reasonably close to dedicated-engine CF performance when Coupling Thin Interrupts
are used and the shared CF engines are not saturated.
The introduction of thin interrupts allows a CF to run by using a shared processor while
maintaining good performance. The shared engine is allowed to be undispatched when there
is no more work, as in the past. The new thin interrupt now gets the shared processor that is
dispatched when a command or duplexing signal is presented to the shared engine.
This function saves processor cycles and is an excellent option to be used by a production
backup CF or a testing environment CF. This function is activated by using the CFCC
command DYNDISP ON.
The CPs can run z/OS operating system images and CF images. For software charging
reasons, generally use only ICF processors to run CF images.
For more information about CF configurations, see Coupling Facility Configuration Options,
GF22-5042.
The “storage class memory” that is provided by Flash Express adapters is replaced with
memory allocated from main memory (VFM).
VFM is designed to help improve availability and handling of paging workload spikes when
running z/OS V2.1, V2.2, or V2.3. With this support, z/OS is designed to help improve system
availability and responsiveness by using VFM across transitional workload events, such as
market openings and diagnostic data collection. z/OS is also designed to help improve
processor performance by supporting middleware use of pageable large (1 MB) pages.
VFM can also be used in CF images to provide extended capacity and availability for
workloads that use IBM WebSphere MQ Shared Queues structures. The use of VFM can
help availability by reducing latency from paging delays that can occur at the start of the
workday or during other transitional periods. It is also designed to eliminate delays that can
occur when collecting diagnostic data during failures.
The information is relocated during CDR in a manner that is identical to the process that was
used for expanded storage. VFM is much simpler to manage (HMC task) and no hardware
repair and verify (no cables and no adapters) are needed. Also, because this feature is part of
internal memory, VFM is protected by RAIM and ECC and can provide better performance
because no I/O to an attached adapter occurs.
Note: Use cases for Flash did not change (for example, z/OS paging and CF shared queue
overflow). Instead, they transparently benefit from the changes in the hardware
implementation.
No option is available for VFM plan ahead. The only option is to always include VFM plan ahead when the Flexible Memory option is selected.
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
The data transmission rate of a PCIe link is determined by the link width (numbers of lanes),
the signaling rate of each lane, and the signal encoding rule. The signaling rate of a PCIe
Generation 3 lane is 8 gigatransfers per second (GTps), which means that nearly 8 gigabits
are transmitted per second (Gbps).
A PCIe Gen3 x16 link has the following data transmission rates:
Data transmission rate per lane: 8 Gbps × 128/130 (encoding) ≈ 7.87 Gbps ≈ 984.6 MBps
Data transmission rate per link: 984.6 MBps × 16 (lanes) ≈ 15.75 GBps
Considering that the PCIe link operates in full-duplex mode, the data throughput of a PCIe Gen3 x16 link is 31.5 GBps (15.75 GBps in each direction).
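The preceding arithmetic can be reproduced with the following minimal Python sketch; it is provided for illustration only, and the function name and defaults are not part of the z14 documentation.

def pcie_gen3_bandwidth(lanes=16, raw_gtps=8.0, encoding=128/130):
    # Per-lane payload rate: 8 GT/s * 128/130 encoding ~= 7.87 Gbps ~= 984.6 MBps
    lane_gbps = raw_gtps * encoding
    lane_mbps = lane_gbps * 1000 / 8
    # Per-link rate in one direction, and the full-duplex total
    link_gbps = lane_mbps * lanes / 1000
    return lane_mbps, link_gbps, 2 * link_gbps

print(pcie_gen3_bandwidth())   # approximately (984.6, 15.75, 31.5)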
Link performance: The link speeds do not represent the actual performance of the link.
The actual performance depends on many factors that include latency through the
adapters, cable lengths, and the type of workload.
PCIe Gen3 x16 links are used in IBM z14™ servers for driving the PCIe I/O drawers, and
coupling links for CPC to CPC communications.
Note: Unless specified otherwise, when PCIe is mentioned in remaining sections of this
chapter, it refers to PCIe Generation 3.
4.2.1 Characteristics
The z14 I/O subsystem is designed to provide great flexibility, high availability, and the
following excellent performance characteristics:
High bandwidth
IBM z14™ servers use PCIe as an internal interconnect protocol to drive PCIe I/O drawers
and CPC to CPC connections. The I/O bus infrastructure data rate increases up to
160 GBps per drawer (10 PCIe Gen3 Fanout slots). For more information about coupling
link connectivity, see 4.7.4, “Parallel Sysplex connectivity” on page 184.
Notes: The maximum number of coupling CHPIDs on an IBM z14™ server is 256, which is a combination of the following ports (not all combinations are possible; they are subject to I/O configuration options; see the sketch after this note):
Up to 80 ICA SR ports
Up to 64 CE LR ports
Up to 32 HCA3-O 12x IFB ports
Up to 64 HCA3-O LR 1x IFB ports
IBM Virtual Flash Memory replaces IBM zFlash Express feature on IBM z14™ servers.
The maximum combined number of RoCE features that can be installed is 8; that is, any
combination of 25GbE RoCE Express2, 10GbE RoCE Express2, and 10GbE RoCE
Express (carry forward only) features.
Regarding SMC-R, 25GbE RoCE Express2 should not be configured in the same SMC-R link group with 10GbE RoCE features.
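The limits quoted in the preceding note can be checked with a minimal Python sketch such as the following; the names, structure, and sample values are illustrative only and are not part of any IBM tooling.

PORT_LIMITS = {
    "ICA SR": 80,
    "CE LR": 64,
    "HCA3-O 12x IFB": 32,
    "HCA3-O LR 1x IFB": 64,
}
MAX_COUPLING_CHPIDS = 256

def check_coupling_plan(ports_by_type, chpids_per_port):
    # Check planned ports per link type against the per-type maximums,
    # then check the planned coupling CHPID total against the 256 ceiling.
    for link_type, ports in ports_by_type.items():
        if ports > PORT_LIMITS[link_type]:
            return False, f"{link_type}: {ports} ports exceeds {PORT_LIMITS[link_type]}"
    total_chpids = sum(ports * chpids_per_port.get(t, 1)
                       for t, ports in ports_by_type.items())
    if total_chpids > MAX_COUPLING_CHPIDS:
        return False, f"{total_chpids} coupling CHPIDs exceeds {MAX_COUPLING_CHPIDS}"
    return True, f"{total_chpids} coupling CHPIDs within limits"

print(check_coupling_plan({"ICA SR": 16, "CE LR": 8}, {"ICA SR": 4, "CE LR": 4}))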
Figure 4-1 PCIe I/O drawer (front and rear views, 7U high, approximately 311 mm by 560 mm (maximum), showing I/O domains 0 - 3, the DCAs, and the AMDs)
PCIe switch application-specific integrated circuits (ASICs) are used to fan out the host bus from the processor drawers to the individual I/O features. A maximum of 32 PCIe I/O features (up to 64 channels) per drawer is supported.
The PCIe I/O drawer is a two-sided drawer (I/O cards on both sides, front and back) that is 7U
high. The drawer contains the 32 I/O slots for PCIe features, four switch cards (two in front,
two in the back), two DCAs to provide redundant power, and two air-moving devices (AMDs)
for redundant cooling, as shown in Figure 4-1.
The I/O structure in a z14 CPC is shown in Figure 4-3 on page 152. The PCIe switch card
provides the fanout from the high-speed x16 PCIe host bus to eight individual card slots. The
PCIe switch card is connected to the CPC drawer through a single x16 PCIe Gen3 bus from a PCIe fanout card.
In the PCIe I/O drawer, the eight I/O feature cards that directly attach to the switch card
constitute an I/O domain. The PCIe I/O drawer supports concurrent add and replace of I/O features, which enables you to increase I/O capability as needed without having to plan ahead.
The PCIe I/O slots are organized into four hardware I/O domains. Each I/O domain supports
up to eight features and is driven through a PCIe switch card. Two PCIe switch cards always
provide a backup path for each other through the passive connection in the PCIe I/O drawer
backplane. During a PCIe fanout card or cable failure, 16 I/O cards in two domains can be
driven through a single PCIe switch card.
A switch card in the front is connected to a switch card in the rear through the PCIe I/O drawer
board (through the Redundant I/O Interconnect, or RII). In addition, switch cards in the same PCIe I/O drawer are connected to PCIe fanouts across nodes and CPC drawers for higher availability.
The RII design provides a failover capability during a PCIe fanout card failure or CPC drawer
upgrade. All four domains in one of these PCIe I/O drawers can be activated with four fanouts.
The flexible service processors (FSPs) are used for system control.
Figure 4-4 PCIe I/O drawer with 32 PCIe slots and 4 I/O domains (front slots 01 - 19 and rear slots 20 - 38, with two PCIe switch cards per side, the RII connection, and FSP-1,1 and FSP-1,2)
Each I/O domain supports up to eight features (FICON, OSA, Crypto, and so on.) All I/O
cards connect to the PCIe switch card through the backplane board. The I/O domains and
slots are listed in Table 4-1.
For an upgrade to IBM z14™ servers, only the following PCIe I/O features can be carried forward:
FICON Express16S
FICON Express8S
OSA-Express5S (all 5S features)
OSA-Express4S 1000BaseT (only)
Consideration: On a new build IBM z14™ server, only PCIe I/O drawers are supported.
No carry-forward of I/O drawers or associated features is supported on upgrades to a IBM
z14™ server.
A new build IBM z14 server supports the following PCIe I/O features that are hosted in the PCIe I/O drawers:
FICON Express16S+
OSA-Express7S 25GbE SR
OSA-Express6S
25GbE RoCE Express2
10GbE RoCE Express2
Crypto Express6S
zEDC Express
Coupling Express Long Reach (CE LR)
zHyperLink Express
Note: Model upgrades to IBM z14™ are allowed from z13 or zEC12; downgrades from IBM z14™ are not allowed. Capacity upgrades or downgrades are allowed as part of an upgrade to IBM z14™ from z13 or zEC12.
For frame roll MES from zEC12 and z13 to IBM z14™, new frames are shipped. New PCIe
I/O drawers are supplied with the MES for zEC12 to replace the I/O drawers.
4.5 Fanouts
The z14 server uses fanout cards to connect the I/O subsystem to the CPC drawer. The
fanout cards also provide the ICA SR and InfiniBand coupling links for Parallel Sysplex. All
fanout cards support concurrent add, delete, and move.
Note: IBM z14 is the last z Systems and IBM Z server to support HCA3-O and HCA3-O LR
adapters.a
Also, z14 is the last z Systems and IBM Z server to support HCA3-O fanout for 12x IFB (FC
0171) and HCA3-O LR fanout for 1x IFB (FC 0170).
a. IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal
without notice at IBM’s sole discretion.
The following types of fanout cards are supported by IBM z14™ servers. Each slot can hold one of these fanouts:
PCIe Gen3 fanout card: This copper fanout provides connectivity to the PCIe switch card
in the PCIe I/O drawer.
Integrated Coupling Adapter (ICA SR): This adapter provides coupling connectivity
between z14, z13 and z13s servers, up to 150-meter (492 ft) distance, 8 GBps link rate.
Host Channel Adapter (HCA3-O (12xIFB)): This optical fanout provides 12x InfiniBand
coupling link connectivity up to 150-meter (492 ft) distance to a IBM z14™, z13, z13s,
zEC12, zBC12 servers.
Host Channel Adapter (HCA3-O LR (1xIFB)): This optical long range fanout provides
1x InfiniBand coupling link connectivity to IBM z14™, z13, z13s, zEC12, zBC12 servers.
HCA3-O LR supports up to 10 km (6.2 miles) unrepeated distance or 100 km (62 miles)
when IBM Z qualified dense wavelength division multiplexing (DWDM) equipment is used.
The PCIe Gen3 fanout card includes one port. The HCA3-O LR (1xIFB) fanout includes four
ports, and other fanouts include two ports.
Note: HCA2-O fanout card carry-forward is no longer supported on IBM z14™ servers.
The following PCIe and IFB connections are available from the CPC drawer (see Figure 4-7 on page 188):
PCIe I/O drawer (PCIe Gen3)
Z server that is connected through InfiniBand (12x or 1x HCA3-O)
Z server that is connected through a dedicated PCIe ICA SR
Figure 4-3 on page 152 shows an I/O connection scheme that is not tied to a particular CPC
drawer. In a real configuration, I/O connectivity is mixed across multiple CPC drawers (if
available) for I/O connection redundancy.
A 16x PCIe copper cable of 1.5 meters (4.92 ft) to 4.0 meters (13.1 ft) is used for connection
to the PCIe switch card in the PCIe I/O drawer. PCIe fanout cards are always plugged in pairs
and provide redundancy for I/O domains within the PCIe I/O drawer.
The pairs of PCIe fanout cards of a z14 are named as LG03 - LG12 from left to right. All z14
models (except for model M01) split the PCIe fanout pairs across different processor drawers
for redundancy purposes.
PCIe fanout: The PCIe fanout is used exclusively for I/O and cannot be shared for any
other purpose.
The ICA SR uses PCIe Gen3 technology, with x16 lanes that are bifurcated into x8 lanes for
coupling. No performance degradation is expected compared to the coupling over InfiniBand
12x IFB3 protocol.
The ICA SR is designed to drive distances up to 150 m (492 ft) with a link data rate of 8 GBps.
ICA SR supports up to four channel-path identifiers (CHPIDs) per port and eight subchannels
(devices) per CHPID.
The coupling links can be defined as shared between images within a CSS. They also can be
spanned across multiple CSSs in a CPC. Unlike the HCA3-O 12x InfiniBand links, the ICA SR
cannot define more than four CHPIDS per port. When STP is enabled, ICA SR coupling links
can be defined as timing-only links to other z14 and z13/z13s CPCs.
The ICA SR fanout is housed in the PCIe I/O fanout slot on the z14 CPC drawer, which
supports 10 PCIe I/O slots. Up to 10 ICA SR fanouts and up to 20 ICA SR ports are
supported on a z14 CPC drawer, enabling greater connectivity for short distance coupling on
a single processor node compared to previous generations. The maximum number of ICA SR
fanout features is 20 per system on IBM z14™ servers.
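The ICA SR figures quoted above (two ports per fanout, four CHPIDs per port, eight subchannels per CHPID, and up to 10 fanouts per CPC drawer) combine as shown in the following minimal Python sketch; the helper name and structure are illustrative only.

def ica_sr_drawer_capacity(fanouts=10, ports_per_fanout=2,
                           chpids_per_port=4, subchannels_per_chpid=8):
    # Combine the per-drawer ICA SR figures quoted in the text
    ports = fanouts * ports_per_fanout
    chpids = ports * chpids_per_port
    devices = chpids * subchannels_per_chpid
    return ports, chpids, devices

print(ica_sr_drawer_capacity())   # (20, 80, 640) for a fully populated drawer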
The ICA SR can be used for coupling connectivity between z14 and z13/z13s servers. It does
not support connectivity to zEC12, zBC12 servers, and it cannot be connected to HCA3-O or
HCA3-O LR coupling fanouts.
The ICA SR fanout requires cabling that is different from the 12x IFB cables. For distances up to
100 m (328 ft), OM3 fiber optic can be used. For distances up to 150 m (492 ft), OM4 fiber
optic cables can be used. For more information, see the following resources:
Planning for Fiber Optic Links, GA23-1407
IBM 3906 Installation Manual for Physical Planning, GC28-6965
The fiber optic cables are industry-standard OM3 (2000 MHz-km) 50-µm multimode optical
cables with multifiber push-on (MPO) connectors. The maximum cable length is 150 m
(492 ft). Each port (link) has 12 pairs of fibers: 12 fibers for transmitting, and 12 fibers for
receiving. The HCA3-O (12xIFB) fanout supports a link data rate of 6 GBps.
Important: The HCA3-O fanout features two ports (1 and 2). Each port includes one
connector for transmitting (TX) and one connector for receiving (RX). Ensure that you use
the correct cables. An example is shown in Figure 4-5 on page 157.
Figure 4-5 OM3 50/125 μm multimode fiber cable with MPO connectors
A fanout features two ports for optical link connections, and supports up to 16 CHPIDs across
both ports. These CHPIDs are defined as channel type CIB in the I/O configuration data set
(IOCDS). The coupling links can be defined as shared between images within a channel
subsystem (CSS). They also can be spanned across multiple CSSs in a CPC.
Each HCA3-O (12x IFB) fanout has an assigned Adapter ID (AID) number. This number must
be used for definitions in IOCDS to create a relationship between the physical fanout location
and the CHPID number. For more information about AID numbering, see “Adapter ID number
assignment” on page 158.
For more information about how the AID is used and referenced in the HCD, see Implementing and Managing InfiniBand Coupling Links on System z, SG24-7539.
When STP is enabled, IFB coupling links can be defined as timing-only links to other z14,
z13, z13s, zEC12, and zBC12 CPCs.
The HCA3-O feature that supports 12x InfiniBand coupling links is designed to deliver
improved service times. When no more than four CHPIDs are defined per HCA3-O (12xIFB)
port, the 12x IFB3 protocol is used. When you use the 12x IFB3 protocol, synchronous
service times are up to 40% faster than when you use the 12x IFB protocol.
Each connection supports a link rate of up to 5 Gbps if connected to a z14, z13, or z13s
server. HCA3-O LR supports also a link rate of 2.5 Gbps when connected to IBM Z qualified
DWDM equipment. The link rate is auto-negotiated to the highest common rate.
The fiber optic cables are 9-µm SM optical cables that end with an LC Duplex connector. With
direct connection, the supported unrepeated distance2 is up to 10 km (6.2 miles), and up to
100 km (62 miles) with IBM Z qualified DWDM equipment.
A fanout has four ports for optical link connections, and supports up to 16 CHPIDs across all
four ports. These CHPIDs are defined as channel type CIB in the IOCDS. The coupling links
can be defined as shared between images within a channel subsystem, and also can be
spanned across multiple channel subsystems in a server.
Each HCA3-O LR (1xIFB) fanout can be used for link definitions to another server, or a link
from one port to a port in another fanout on the same server.
The source and target operating system image, CF image, and the CHPIDs that are used on
both ports in both servers are defined in IOCDS.
Each HCA3-O LR (1xIFB) fanout has an assigned AID number. This number must be used for
definitions in IOCDS to create a relationship between the physical fanout location and the
CHPID number. For more information about AID numbering, see “Adapter ID number
assignment” on page 158.
When STP is enabled, HCA3-O LR coupling links can be defined as timing-only links to other
z14, z13, z13s, zEC12, and zBC12 CPCs.
2 On special request. For more information, see the Parallel Sysplex page of the IBM IT infrastructure website.
Fanout slots
The fanout slots are numbered LG03 - LG16 left to right, as shown in Figure 4-4 on page 153.
All fanout locations and their AIDs for all four drawers are shown for reference only. Slots LG01 and LG02 never have a fanout installed because they are dedicated for FSPs.
Important: The AID numbers that are listed in Table 4-2 on page 158 are valid only for a
new build system or if new processor drawers are added. If a fanout is moved, the AID
follows the fanout to its new physical location.
The AID assignment is listed in the PCHID REPORT that is provided for each new server or for an MES upgrade on existing servers. Part of a PCHID REPORT for a model M03 is shown in Example 4-1. In this example, one fanout card is installed in the first drawer (location A15A, slot LG14) and is assigned AID 0D. Another fanout card is installed in another drawer (location A19A, slot LG14) and is assigned AID 09.
HCA3-O (12xIFB), FC 0171: coupling link; 50-µm MM OM3 (2000 MHz-km) fiber, MPO connector; maximum 150 m (492 ft); 6 GBps link rate
HCA3-O LR (1xIFB), FC 0170: coupling link; 9-µm SM fiber, LC Duplex connector; up to 10 km (6.2 miles) unrepeated or 100 km (62 miles) with IBM Z qualified DWDM; up to 5 Gbps link rate
ICA SR, FC 0172: coupling link; OM4 fiber, MTP connector; maximum 150 m (492 ft); 8 GBps link rate
3 Certain I/O features do not have external ports, such as Crypto Express and zEDC
Important: IBM z14™ servers do not support the ISC-3, HCA2-O (12x), or HCA2-O LR
(1x) features and cannot participate in a Mixed Coordinated Timing Network (CTN).
A CHPID does not directly correspond to a hardware channel port. Instead, it is assigned to a
PCHID in the hardware configuration definition (HCD) or IOCP.
The PCHID REPORT that is shown in Example 4-2 includes the following components:
Feature code 0170 (HCA3-O LR (1xIFB)) is installed in CPC drawer 1 (location A15A, slot
LG14), and includes AID 0D assigned.
Feature code 0172 (Integrated Coupling Adapter (ICA SR)) is installed in CPC drawer 4 (location A27A, slot LG05), and has AID 12 assigned.
Feature code 0424 (OSA-Express6S 10 GbE LR) is installed in PCIe I/O drawer 1
(location Z22B, slot 11), and has PCHID 120 assigned.
Feature code 0427 (FICON Express16S+ long wavelength (LX) 10 km (6.2 miles)) is
installed in PCIe I/O drawer 2 (location Z22B, slot 26), and has PCHIDs 154 and 155
assigned.
Feature code 0431 (zHyperLink Express) is installed in PCIe I/O drawer 2 (location Z22B, slot 04), and has PCHID 10C assigned. PCHID 240 is shared by ports D1 and D2.
A resource group (RG) parameter is shown in the PCHID REPORT for native PCIe features.
A balanced plugging of native PCIe features exists between four resource groups (RG1, RG2,
RG3, and RG4).
For more information about resource groups, see Appendix C, “Native Peripheral Component
Interconnect Express” on page 469.
The preassigned PCHID number of each I/O port relates directly to its physical location (jack
location in a specific slot).
4.7 Connectivity
I/O channels are part of the CSS. They provide connectivity for data exchange between
servers, between servers and external control units (CUs) and devices, or between networks.
For more information about connectivity to external I/O subsystems (for example, disks), see
“Storage connectivity” on page 167.
For more information about communication to LANs, see “Network connectivity” on page 173.
At least one I/O feature (FICON) or one coupling link feature (ICA SR or HCA3-O) must be
present in the minimum configuration.
The following features are exclusively plugged into a PCIe I/O drawer and do not require the
definition of a CHPID and CHPID type:
Each Crypto Express (5S/6S) feature occupies one I/O slot, but does not have a CHPID
type. However, LPARs in all CSSs have access to the features. Each Crypto Express
adapter can be defined to up to 85 LPARs.
Each RoCE Express/Express2 feature occupies one I/O slot but does not have a CHPID
type. However, LPARs in all CSSs have access to the feature. The 10GbE RoCE Express
can be defined to up to 31 LPARs per PCHID. The 25 GbE RoCE Express2 and the
10GbE RoCE Express2 features support up to 126 LPARs per PCHID.
Each zEDC Express feature occupies one I/O slot but does not have a CHPID type.
However, LPARs in all CSSs have access to the feature. The zEDC feature can be defined
to up to 15 LPARs.
Each zHyperLink Express feature occupies one I/O slot but does not have a CHPID type.
However, LPARs in all CSSs have access to the feature. The zHyperLink Express adapter
works as native PCIe adapter and can be shared by multiple LPARs. Each port can
support up to 127 Virtual Functions (VFs), with one or more VFs/PFIDs being assigned to
each LPAR. This support gives a maximum of 254 VFs per adapter.
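The per-feature sharing limits listed above can be captured in a small lookup, shown here as an illustrative Python sketch; the dictionary and helper are not part of any IBM tooling.

LPAR_LIMITS = {
    "Crypto Express5S/6S": 85,        # LPARs per adapter
    "10GbE RoCE Express": 31,         # LPARs per PCHID
    "10GbE/25GbE RoCE Express2": 126, # LPARs per PCHID
    "zEDC Express": 15,               # LPARs per feature
    "zHyperLink Express": 254,        # virtual functions per adapter (127 per port)
}

def fits(feature, planned_lpars):
    # True if the planned number of sharing LPARs/VFs is within the documented limit
    return planned_lpars <= LPAR_LIMITS[feature]

print(fits("zEDC Express", 12))          # True
print(fits("10GbE RoCE Express", 40))    # False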
Cables: All fiber optic cables, cable planning, labeling, and installation are client responsibilities for new z14 installations and upgrades. Fiber optic conversion kits and mode conditioning patch cables are not orderable as features on IBM z14™ servers. All other cables must be sourced separately.
Whether you choose a packaged service or a custom service, high-quality components are
used to facilitate moves, additions, and changes in the enterprise to prevent having to extend
the maintenance window.
The required connector and cable type for each I/O feature on IBM z14™ servers are listed in
Table 4-6.
CE LR (FC 0433): LC Duplex connector, 9 µm SM fiber
FICON channels
z14 supports the following FICON features:
FICON Express16S+
FICON Express16S
FICON Express8S (carry-forward only)
The FICON Express16S+, FICON Express16S, and FICON Express8S features conform to
the following architectures:
Fibre Connection (FICON)
High Performance FICON on Z (zHPF)
Fibre Channel Protocol (FCP)
The FICON features provide connectivity between any combination of servers, directors,
switches, and devices (control units, disks, tapes, and printers) in a SAN.
Each FICON Express16S+, FICON Express16S, and FICON Express 8S feature occupies
one I/O slot in the PCIe I/O drawer. Each feature has two ports, each supporting an LC
Duplex connector, with one PCHID and one CHPID associated with each port.
All FICON Express16S+, FICON Express16S, and FICON Express8S features use SFP
optics that allow for concurrent repairing or replacement for each SFP. The data flow on the
unaffected channels on the same feature can continue. A problem with one FICON port no
longer requires replacement of a complete feature.
All FICON Express16S+, FICON Express16S, and FICON Express8S features also support
cascading, which is the connection of two FICON Directors in succession. This configuration
minimizes the number of cross-site connections and helps reduce implementation costs for
disaster recovery applications, IBM Geographically Dispersed Parallel Sysplex™ (GDPS),
and remote copy.
IBM z14™ servers support 32K devices per FICON channel for all FICON features.
Each FICON Express16S+, FICON Express16S, and FICON Express8S channel can be
defined independently for connectivity to servers, switches, directors, disks, tapes, and
printers, by using the following CHPID types:
CHPID type FC: The FICON, zHPF, and FCTC protocols are supported simultaneously.
4 zHyperLink feature operates together with a FICON channel
FICON channels (CHPID type FC or FCP) can be shared among LPARs and can be defined
as spanned. All ports on a FICON feature must be of the same type (LX or SX). The features
are connected to a FICON capable control unit (point-to-point or switched point-to-point)
through a Fibre Channel switch.
FICON Express16S+
The FICON Express16S+ feature is installed in the PCIe I/O drawer. Each of the two
independent ports is capable of 4 Gbps, 8 Gbps, or 16 Gbps. The link speed depends on the
capability of the attached switch or device. The link speed is auto-negotiated, point-to-point,
and is transparent to users and applications.
The following types of FICON Express16S+ optical transceivers are supported (no mix on
same card):
FICON Express16S+ 10 km LX feature, FC #0427, with two ports per feature, supporting
LC Duplex connectors
FICON Express16S+ SX feature, FC #0428, with two ports per feature, supporting LC
Duplex connectors
Each port of the FICON Express16S+ 10 km LX feature uses an optical transceiver that
supports an unrepeated distance of 10 km (6.2 miles) by using 9 µm single-mode fiber.
Each port of the FICON Express16S+ SX feature uses an optical transceiver that supports up to 125 m (410 ft) of distance, which varies with the link data rate and fiber type.
FICON Express16S
The FICON Express16S feature is installed in the PCIe I/O drawer. Each of the two
independent ports is capable of 4 Gbps, 8 Gbps, or 16 Gbps. The link speed depends on the
capability of the attached switch or device. The link speed is auto-negotiated, point-to-point,
and is transparent to users and applications.
Each port of the FICON Express16S 10 km LX feature uses an optical transceiver that
supports an unrepeated distance of 10 km (6.2 miles) by using 9 µm single-mode fiber.
Each port of the FICON Express16S SX feature uses an optical transceiver that supports up to 125 m (410 ft) of distance, depending on the fiber that is used.
Each port of the FICON Express8S 10 km LX feature uses an optical transceiver that
supports an unrepeated distance of 10 km (6.2 miles) by using 9 µm single-mode fiber.
Each port of the FICON Express8S SX feature uses an optical transceiver that supports up to
150 m (492 feet) of distance depending on the fiber used.
FICON enhancements
Together with the FICON Express16S+, IBM z14™ servers provide enhancements for FICON
in both functional and performance aspects.
The FICON Express16S+ and FICON Express16S are designed to support FEC coding on
top of its 64b/66b data encoding for 16Gbps connections. This design can correct up to 11 bit
errors per 2112 bits transmitted. Therefore, while connected to devices that support FEC at
16 Gbps connections, the FEC design allows FICON Express16S+ and FICON Express16S
channels to operate at higher speeds, over longer distances, with reduced power and higher
throughput while retaining the same reliability and robustness for which FICON channels are
traditionally known.
With the IBM DS8870 or later, IBM z14 (and z13/z13s) servers can extend the use of FEC to the fabric N_Ports for complete end-to-end coverage of 16 Gbps FC links. For more information, see IBM DS8884 and z13s: A New Cost Optimized Solution, REDP-5327.
Port-based routing (PBR) assigns ISL routes statically on a “first come, first served” basis when a port starts a fabric login (FLOGI) to a destination domain; the ISL is selected in round-robin fashion at that time. Therefore, I/O flow from the same incoming port to the same destination domain is always assigned the same ISL route, regardless of the destination port of each I/O. This setup can result in some ISLs being overloaded while others are under-used. The ISL routing table changes whenever a Z server undergoes a power-on reset (POR), so the ISL assignment is unpredictable.
Device-based routing (DBR) assigns ISL routes statically based on a hash of the source and destination ports, so I/O flow from the same incoming port to the same destination port is assigned the same ISL route. Compared to PBR, DBR is better at spreading the load across ISLs for I/O flows from the same incoming port to different destination ports within a destination domain.
When a static SAN routing policy is used, the FICON director has limited capability to assign ISL routes based on workload. This limitation can result in unbalanced use of ISLs (some might be overloaded, while others are under-used).
With dynamic routing, ISL routes are changed dynamically based on the Fibre Channel exchange ID, which is unique for each I/O operation. The ISL is assigned at I/O request time, so different I/Os from the same incoming port to the same destination port can be assigned different ISLs.
With FICON Dynamic Routing (FIDR), IBM z14™ servers provide the following advantages for performance and management in configurations with ISLs and cascaded FICON directors:
Support for sharing of ISLs between FICON and FCP (PPRC or distributed)
I/O traffic that is better balanced between all available ISLs
Improved utilization of the FICON director and ISLs
Easier management, with predictable and repeatable I/O performance
FICON dynamic routing can be enabled by defining dynamic routing capable switches and
control units in HCD. Also, z/OS implemented a health check function for FICON dynamic
routing.
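The difference between the three routing policies can be made concrete with the following Python sketch; it is a simplified model with hypothetical hash and round-robin choices, not the actual FICON director algorithms.

import hashlib
from itertools import cycle

ISLS = ["ISL0", "ISL1", "ISL2", "ISL3"]      # hypothetical inter-switch links

_pbr_table = {}                               # static assignments made at FLOGI time
_pbr_rr = cycle(ISLS)

def pbr_route(src_port, dst_domain):
    # Port-based routing: one fixed ISL per (source port, destination domain),
    # chosen round-robin when the port logs in to the fabric
    key = (src_port, dst_domain)
    if key not in _pbr_table:
        _pbr_table[key] = next(_pbr_rr)
    return _pbr_table[key]

def dbr_route(src_port, dst_port):
    # Device-based routing: static choice derived from a hash of source and destination ports
    digest = hashlib.sha256(f"{src_port}:{dst_port}".encode()).hexdigest()
    return ISLS[int(digest, 16) % len(ISLS)]

def fidr_route(exchange_id):
    # Dynamic routing: the ISL is chosen per Fibre Channel exchange, so successive
    # I/Os between the same port pair can take different ISLs
    return ISLS[exchange_id % len(ISLS)]

for xid in range(4):
    print(pbr_route("P1", "DOM5"), dbr_route("P1", "P9"), fidr_route(xid))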
By using the FICON Express16S (or above) as an FCP channel with NPIV enabled, the
maximum numbers of the following aspects for one FCP physical channel are doubled:
Maximum number of NPIV hosts defined: Increased from 32 to 64
Maximum number of remote N_Ports communicated: Increased from 512 to 1024
Maximum number of addressable LUNs: Increased from 4096 to 8192
Concurrent I/O operations: Increased from 764 to 1528
For more information about operating systems that support NPIV, see “N_Port ID
Virtualization” on page 289.
Note: For more information about the FICON enhancement of IBM z14™ servers, see Get
More Out of Your IT Infrastructure with IBM z13 I/O Enhancements, REDP-5134.
The zHyperLink Express feature (FC 0431) provides a low latency direct connection between
z14 CPC and DS8880 I/O Port.
The zHyperLink Express is the result of new business requirements that demand fast and
consistent application response times. It dramatically reduces latency by interconnecting the
z14 CPC directly to I/O Bay of the DS8880 by using PCIe Gen3 x 8 physical link (up to 150 m
(492 ft) distance). A new transport protocol is defined for reading and writing IBM ECKD™
data records, as documented in the zHyperLink interface specification.
On z14, the zHyperLink Express card is a new PCIe adapter that is installed in the PCIe I/O drawer. HCD definition support was added for the new PCIe function type with PORT attributes.
Requirements of zHyperLink
The zHyperLink Express feature is available on z14 servers, and requires:
z/OS 2.1 or later
DS888x with I/O Bay Planar board and firmware level 8.3
z14 with zHyperLink Express adapter (FC #0431) installed
FICON channel as a driver
Only ECKD supported
z/VM is not supported
The zHyperLink Express is managed as a native PCIe adapter and can be shared by multiple
LPARs. Each port can support up to 127 Virtual Functions (VFs), with one or more VFs/PFIDs
being assigned to each LPAR. This configuration gives a maximum of 254 VFs per adapter.
The zHyperLink Express requires the following components:
zHyperLink connector on the DS8880 I/O Bay
For DS8880 firmware R8.3 and above, the I/O Bay planar is updated to support the zHyperLink interface. This update includes replacing the PEX 8732 switch with the PEX 8733, which includes a DMA engine for zHyperLink transfers, and upgrading from a copper to an optical interface by way of a CXP connector (provided).
Cable
The zHyperLink Express uses optical cable with MTP connector. Maximum supported
cable length is 150 m (492 ft).
OSA-Express6S
The OSA-Express6S feature is installed in the PCIe I/O drawer. The following
OSA-Express6S features can be installed on z14 servers:
OSA-Express6S 10 Gigabit Ethernet LR, FC 0424
OSA-Express6S 10 Gigabit Ethernet SR, FC 0425
OSA-Express6S Gigabit Ethernet LX, FC 0422
OSA-Express6S Gigabit Ethernet SX, FC 0423
OSA-Express6S 1000BASE-T Ethernet, FC 0426
The OSA-Express7S 25GbE SR feature supports the use of an industry standard small form
factor LC Duplex connector. Ensure that the attaching or downstream device has an SR
transceiver. The sending and receiving transceivers must be the same (SR to SR).
The OSA-Express7S 25GbE SR feature does not support auto-negotiation to any other
speed and runs in full duplex mode only.
A 50 µm multimode fiber optic cable that ends with an LC Duplex connector is required for
connecting each port on this feature to the selected device.
Note: zBX Model 004 can be carried forward during an upgrade from z13 to IBM z14™ (because the zBX is an independent Ensemble node that is not tied to any IBM Z CPC); however, ordering of zBX features was withdrawn from marketing as of March 31, 2017.
The OSA-Express6S 10 GbE LR feature supports the use of an industry standard small form
factor LC Duplex connector. Ensure that the attaching or downstream device includes an LR
transceiver. The transceivers at both ends must be the same (LR to LR).
The OSA-Express6S 10 GbE LR feature does not support auto-negotiation to any other
speed and runs in full duplex mode only.
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for
connecting this feature to the selected device.
The OSA-Express6S 10 GbE SR feature supports the use of an industry standard small form
factor LC Duplex connector. Ensure that the attaching or downstream device has an SR
transceiver. The sending and receiving transceivers must be the same (SR to SR).
The OSA-Express6S 10 GbE SR feature does not support auto-negotiation to any other
speed and runs in full duplex mode only.
A 50 or a 62.5 µm multimode fiber optic cable that ends with an LC Duplex connector is
required for connecting each port on this feature to the selected device.
The OSA-Express6S GbE LX feature supports the use of an LC Duplex connector. Ensure
that the attaching or downstream device has an LX transceiver. The sending and receiving
transceivers must be the same (LX to LX).
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for
connecting each port on this feature to the selected device. If multimode fiber optic cables are
being reused, a pair of Mode Conditioning Patch cables is required, with one cable for each
end of the link.
The OSA-Express6S GbE SX feature supports the use of an LC Duplex connector. Ensure
that the attaching or downstream device has an SX transceiver. The sending and receiving
transceivers must be the same (SX to SX).
A multi-mode fiber optic cable that ends with an LC Duplex connector is required for
connecting each port on this feature to the selected device.
The OSA-Express6S 1000BASE-T Ethernet feature can be configured as CHPID type OSC,
OSD, OSE, or OSM. Non-QDIO operation mode requires CHPID type OSE.
The following settings are supported on the OSA-Express6S 1000BASE-T Ethernet feature
port:
Auto-negotiate
100 Mbps half-duplex or full-duplex
1000 Mbps full-duplex
If auto-negotiate is not used, the OSA-Express port attempts to join the LAN at the specified
speed and duplex mode. If this specified speed and duplex mode do not match the speed and
duplex mode of the signal on the cable, the OSA-Express port does not connect.
The OSA-Express5S 10 GbE LR feature supports the use of an industry standard small form
factor LC Duplex connector. Ensure that the attaching or downstream device includes an LR
transceiver. The transceivers at both ends must be the same (LR to LR).
The OSA-Express5S 10 GbE LR feature does not support auto-negotiation to any other
speed and runs in full duplex mode only.
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for
connecting this feature to the selected device.
The OSA-Express5S 10 GbE SR feature supports the use of an industry standard small form
factor LC Duplex connector. Ensure that the attaching or downstream device includes an SR
transceiver. The sending and receiving transceivers must be the same (SR to SR).
The OSA-Express5S 10 GbE SR feature does not support auto-negotiation to any other
speed and runs in full duplex mode only.
A 50 or a 62.5 µm multimode fiber optic cable that ends with an LC Duplex connector is
required for connecting each port on this feature to the selected device.
The OSA-Express5S GbE LX feature supports the use of an LC Duplex connector. Ensure
that the attaching or downstream device has an LX transceiver. The sending and receiving
transceivers must be the same (LX to LX).
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for
connecting each port on this feature to the selected device. If multimode fiber optic cables are
being reused, a pair of Mode Conditioning Patch cables is required, with one cable for each
end of the link.
The OSA-Express5S GbE SX feature supports the use of an LC Duplex connector. Ensure
that the attaching or downstream device has an SX transceiver. The sending and receiving
transceivers must be the same (SX to SX).
A multi-mode fiber optic cable that ends with an LC Duplex connector is required for
connecting each port on this feature to the selected device.
The OSA-Express5S 1000BASE-T Ethernet feature can be configured as CHPID type OSC,
OSD, OSE, or OSM. Non-QDIO operation mode requires CHPID type OSE.
The following settings are supported on the OSA-Express5S 1000BASE-T Ethernet feature
port:
Auto-negotiate
100 Mbps half-duplex or full-duplex
1000 Mbps full-duplex
If auto-negotiate is not used, the OSA-Express port attempts to join the LAN at the specified
speed and duplex mode. If this specified speed and duplex mode do not match the speed and
duplex mode of the signal on the cable, the OSA-Express port does not connect.
OSA-Express4S features
This section describes the characteristics of all OSA-Express4S features that are supported
on z14 servers.
The OSA-Express4S feature is installed in the PCIe I/O drawer. Only OSA-Express4S
1000BASE-T Ethernet, FC #0408 is supported on IBM z14™ servers as a carry forward
during an MES.
The characteristics of the OSA-Express4S features that are supported on IBM z14™ are
listed in Table 4-10.
If the attached Ethernet router or switch does not support auto-negotiation, the OSA-Express
port examines the signal that it is receiving. It then connects at the speed and duplex mode of
the device at the other end of the cable.
The following settings are supported on the OSA-Express4S 1000BASE-T Ethernet feature
port:
Auto-negotiate
10 Mbps half-duplex or full-duplex
100 Mbps half-duplex or full-duplex
1000 Mbps full-duplex
If auto-negotiate is not used, the OSA-Express port attempts to join the LAN at the specified
speed and duplex mode. If these settings do not match the speed and duplex mode of the
signal on the cable, the OSA-Express port does not connect.
On IBM z14™, both ports are supported by z/OS and can be shared by up to 126 partitions
(LPARs) per PCHID. The 25GbE RoCE Express2 feature uses SR optics and supports the
use of a multimode fiber optic cable that ends with an LC Duplex connector. Both
point-to-point connections and switched connections with an enterprise-class 25GbE switch
are supported.
Switch configuration for RoCE Express2: If the IBM 25GbE RoCE Express2 features
are connected to 25GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The 25GbE RoCE Express2 feature does not support auto-negotiation to any other speed and runs in full duplex mode only.
10GbE and 25GbE RoCE features should not be mixed in a z/OS SMC-R Link Group.
The maximum supported unrepeated distance, point-to-point, is 100 meters (328 ft). A
client-supplied cable is required. Two types of cables can be used for connecting the port to
the selected 25GbE switch or to the 25GbE RoCE Express2 feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an
LC Duplex connector; supports 70 meters (229 ft)
OM4 50-micron multimode fiber optic cable that is rated at 4700 MHz-km that ends with an
LC Duplex connector; supports 100 meters (328 ft)
For more information about the management and definition of the RoCE features, see
Appendix D, “Shared Memory Communications” on page 475, and Appendix C, “Native
Peripheral Component Interconnect Express” on page 469.
Switch configuration for RoCE Express2: If the IBM 10GbE RoCE Express2 features
are connected to 10 GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
The maximum supported unrepeated distance, point-to-point, is 300 meters (984 ft). A
client-supplied cable is required. Three types of cables can be used for connecting the port to
the selected 10 GbE switch or to the 10GbE RoCE Express2 feature on the attached server:
OM3 50-micron multimode fiber optic cable that is rated at 2000 MHz-km that ends with an
LC Duplex connector (supports 300 meters (984 ft))
OM2 50-micron multimode fiber optic cable that is rated at 500 MHz-km that ends with an
LC Duplex connector (supports 82 meters (269 ft))
OM1 62.5-micron multimode fiber optic cable that is rated at 200 MHz-km that ends with
an LC Duplex connector (supports 33 meters (108 ft))
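As a planning aid only, the cable-type distance limits quoted above for the RoCE features can be captured as a small Python lookup; the table and helper names are illustrative and are not from IBM documentation.

MAX_DISTANCE_M = {
    "25GbE RoCE Express2": {"OM3": 70, "OM4": 100},
    "10GbE RoCE Express2": {"OM1": 33, "OM2": 82, "OM3": 300},
}

def cable_ok(feature, fiber, distance_m):
    # True if an unrepeated point-to-point run of distance_m is within the stated limit
    limit = MAX_DISTANCE_M.get(feature, {}).get(fiber)
    return limit is not None and distance_m <= limit

print(cable_ok("25GbE RoCE Express2", "OM4", 90))   # True
print(cable_ok("10GbE RoCE Express2", "OM1", 50))   # False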
For more information about the management and definition of the 10GbE RoCE2, see
Appendix D, “Shared Memory Communications” on page 475, and Appendix C, “Native
Peripheral Component Interconnect Express” on page 469.
The 10GbE RoCE Express is a native PCIe feature. It does not use a CHPID and is defined
by using the IOCP FUNCTION statement or in the hardware configuration definition (HCD).
For zEC12 and zBC12, each feature can be dedicated to an LPAR only, and z/OS can use
only one of the two ports. Both ports are supported by z/OS and can be shared by up to 31
partitions (LPARs) per PCHID on z14 and z13.
The 10GbE RoCE Express feature uses SR optics and supports the use of a multimode fiber
optic cable that ends with an LC Duplex connector. Point-to-point connections and switched
connections with an enterprise-class 10 GbE switch are supported.
Switch configuration for RoCE: If the IBM 10GbE RoCE Express features are connected
to 10 GbE switches, the switches must meet the following requirements:
Global Pause function enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no IEDN
For more information about the management and definition of the 10GbE RoCE, see
Appendix D, “Shared Memory Communications” on page 475, and Appendix C, “Native
Peripheral Component Interconnect Express” on page 469.
SMC-R provides application transparent use of the RoCE-Express feature. This feature
reduces the network overhead and latency of data transfers, which effectively offers the
benefits of optimized network performance across processors.
SMC-D was introduced with the Internal Shared Memory (ISM) virtual PCI function. ISM is a virtual PCI network adapter that enables direct access to shared virtual memory, providing a highly optimized network interconnect for IBM Z intra-CPC communications.
SMC-D maintains the socket-API transparency aspect of SMC-R so that applications that use
TCP/IP communications can benefit immediately without requiring any application software or
IP topology changes. SMC-D completes the overall SMC solution, which provides synergy
with SMC-R.
SMC-R and SMC-D use shared memory architectural concepts, which eliminates the TCP/IP
processing in the data path, yet preserves TCP/IP Qualities of Service for connection
management purposes.
ISM is defined by the FUNCTION statement with a virtual CHPID (VCHID) in hardware
configuration definition (HCD)/IOCDS. Identified by the PNETID parameter, each ISM VCHID
defines an isolated, internal virtual network for SMC-D communication, without any hardware
component required. Virtual adapters are defined by virtual function (VF) statements. Multiple
LPARs can access the same virtual network for SMC-D data exchange by associating their
VF with same VCHID.
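The ISM association described above can be illustrated with a small Python sketch; the data model and names are hypothetical. LPARs whose virtual functions share the same VCHID and PNETID are on the same isolated virtual network and can therefore exchange SMC-D data with each other.

from collections import defaultdict

ism_vfs = [
    # (LPAR, VCHID, PNETID) - hypothetical VF definitions
    ("LPAR1", 0x7C0, "NETA"),
    ("LPAR2", 0x7C0, "NETA"),
    ("LPAR3", 0x7C1, "NETB"),
]

networks = defaultdict(set)
for lpar, vchid, pnetid in ism_vfs:
    networks[(vchid, pnetid)].add(lpar)

def can_use_smcd(lpar_a, lpar_b):
    # Two LPARs can exchange SMC-D data only if they share an ISM virtual network
    return any({lpar_a, lpar_b} <= members for members in networks.values())

print(can_use_smcd("LPAR1", "LPAR2"))  # True  (same VCHID and PNETID)
print(can_use_smcd("LPAR1", "LPAR3"))  # False (different virtual networks)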
Applications that use HiperSockets can benefit from reduced network latency and CPU consumption, and from improved performance, by using SMC-D over ISM.
For more information about the SMC-D and ISM, see Appendix D, “Shared Memory
Communications” on page 475.
HiperSockets
The HiperSockets function of IBM z14™ servers provides up to 32 high-speed virtual LAN
attachments.
HiperSockets IOCP definitions on IBM z14™: A parameter was added for HiperSockets
IOCP definitions on IBM z14™ and z13 servers. Therefore, the IBM z14™ IOCP definitions
must be migrated to support the HiperSockets definitions (CHPID type IQD).
On IBM z14™ and z13 servers, the CHPID statement of HiperSockets devices requires the
keyword VCHID. VCHID specifies the virtual channel identification number that is
associated with the channel path. The valid range is 7E0 - 7FF.
For more information, see IBM Z Input/Output Configuration Program User's Guide for ICP
IOCP, SB10-7163.
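As a trivial illustration of the VCHID range rule in the preceding note, a definition-checking tool could validate the value as follows; the helper name is hypothetical and not part of IOCP or HCD.

def valid_hipersockets_vchid(vchid: int) -> bool:
    # True if the VCHID lies in the documented 0x7E0 - 0x7FF range
    return 0x7E0 <= vchid <= 0x7FF

print(valid_hipersockets_vchid(0x7E3))  # True
print(valid_hipersockets_vchid(0x7D0))  # False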
HiperSockets eliminates the need to use I/O subsystem operations and traverse an external
network connection to communicate between LPARs in the same z14 server. HiperSockets
offers significant value in server consolidation when connecting many virtual servers. It can
be used instead of certain coupling link configurations in a Parallel Sysplex.
Traffic can be IPv4 or IPv6, or non-IP, such as AppleTalk, DECnet, IPX, NetBIOS, or SNA.
Layer 2 support helps facilitate server consolidation, and can reduce complexity and simplify
network configuration. It also allows LAN administrators to maintain the mainframe network
environment similarly to non-mainframe environments.
Packet forwarding decisions are based on Layer 2 information instead of Layer 3. The
HiperSockets device can run automatic MAC address generation to create uniqueness within
and across LPARs and servers. The use of Group MAC addresses for multicast is supported,
and broadcasts to all other Layer 2 devices on the same HiperSockets networks.
HiperSockets Layer 2 on IBM z14™ and z13 servers is supported by Linux on Z, and by z/VM
for Linux guest use.
IBM z14™ supports the HiperSockets Completion Queue function that is designed to allow
HiperSockets to transfer data synchronously (if possible) and asynchronously, if necessary.
This feature combines ultra-low latency with more tolerance for traffic peaks.
With the asynchronous support, data can be temporarily held until the receiver has buffers
that are available in its inbound queue during high volume situations. The HiperSockets
Completion Queue function requires the following applications at a minimum6:
z/OS V1.13
Linux on Z distributions:
– Red Hat Enterprise Linux (RHEL) 6.2
– SUSE Linux Enterprise Server (SLES) 11 SP2
– Ubuntu 16.04 LTS
z/VSE V5.1.1 (see footnote 7)
z/VM V6.2 with maintenance (see footnote 8)
In z/VM (supported versions), the virtual switch function is enhanced to transparently bridge a
guest virtual machine network connection on a HiperSockets LAN segment. This bridge
allows a single HiperSockets guest virtual machine network connection to communicate
directly with the following systems:
Other guest virtual machines on the virtual switch
External network hosts through the virtual switch OSA UPLINK port
6 Minimum OS support for z14 can differ. For more information, see Chapter 7, “Operating system support” on
page 243.
7
z/VSE 5.1.1 is end of support.
8 z/VM V6.2 and V6.3 are no longer supported. z/VM V6.4 or newer is needed.
The following links are available to connect an operating system LPAR to a CF:
Integrated Coupling Adapter (ICA SR) for short distance connectivity, which is defined as
CHPID type CS5. The ICA SR can be used only for coupling connectivity between z14,
z13, and z13s servers. It does not support connectivity to zEC12 or zBC12 servers, and it
cannot be connected to HCA3-O or HCA3-O LR coupling fanouts.
The ICA SR supports distances up to 150 m (492 ft) and a link data rate of 8 GBps. OM3
fiber optic cable is used for distances up to 100 m (328 ft), and OM4 for distances up to
150 m (492 ft). ICA SR supports four CHPIDs per port and seven subchannels (devices)
per CHPID. ICA SR supports transmission of Server Time Protocol (STP) messages.
Parallel Sysplex that uses IFB 12x connects z14, z13, z13s, zEC12, and zBC12 servers.
12x IFB coupling links are fiber optic connections that support a maximum distance of up
to 150 m (492 ft). IFB coupling links are defined as CHPID type CIB. IFB supports
transmission of STP messages.
Parallel Sysplex that uses InfiniBand 1x Long Reach (IFB LR) connects z14, z13, z13s,
zEC12, and zBC12. 1x InfiniBand coupling links are fiber optic connections that support a
maximum unrepeated distance of up to 10 km (6.2 miles), and up to 100 km (62 miles)
with an IBM Z qualified DWDM. IFB LR coupling links are defined as CHPID type CIB. IFB
LR supports transmission of STP messages.
IBM z14 ZR1 (M/T 3907) coupling connectivity: InfiniBand features are not supported
(nor available) on IBM z14 ZR1. z14 ZR1 supports only ICA SR and CE LR for sysplex
coupling connectivity.
Coupling Express Long Reach: Coupling Express LR (FC #0433) is recommended for long-distance coupling from IBM z14™/z13/z13s to z13 and later servers. It supports a maximum unrepeated distance of 10 km (6.2 miles), and up to 100 km (62 miles) with a qualified DWDM. CE LR coupling links are defined as CHPID type CL5. CE LR uses the same 9 µm single-mode fiber cable as 1x IFB.
The maximum number of combined external coupling links (active CE LR, ICA SR links, and
IFB LR) is 144 per IBM z14™ server. IBM z14™ servers support up to 256 coupling CHPIDs
per CPC. A coupling link support summary for z14 is shown in Figure 4-6.
When defining IFB coupling links (CHPID type CIB), HCD defaults to seven subchannels. A
total of 32 subchannels are supported on only HCA2-O LR (1xIFB) and HCA3-O LR (1xIFB)
on zEC12 and later when both sides of the connection use IFB protocol.
Sysplex Coupling and Timing Connectivity: IBM z14 M0x (M/T 3906) supports N-2 sysplex connectivity (z14 M0x, z14 ZR1, z13, z13s, zEC12, and zBC12), while IBM z14 ZR1 supports only N-1 sysplex connectivity (z14 M0x, z14 ZR1, z13, and z13s).
In a Parallel Sysplex configuration, z/OS and CF images can run on the same or on separate
servers. There must be at least one CF that is connected to all z/OS images, even though
other CFs can be connected only to selected z/OS images.
To eliminate any single points of failure in a Parallel Sysplex configuration, have at least the
following components:
Two coupling links between the z/OS and CF images.
Two CF images not running on the same server.
One stand-alone CF. If using system-managed CF structure duplexing or running with
resource sharing only, a stand-alone CF is not mandatory.
An IC link is a fast coupling link that uses memory-to-memory data transfers. Although IC
links do not have PCHID numbers, they do require CHPIDs.
IC links require an ICP channel path definition at the z/OS and the CF end of a channel
connection to operate in peer mode. The links are always defined and connected in pairs. The
IC link operates in peer mode, and its existence is defined in HCD/IOCP.
IC links are enabled by defining channel type ICP. A maximum of 32 IC channels can be
defined on a Z server.
IBM z14™ does not support ISC-3 links, HCA2-O, or HCA2-O (LR).
HCA3-O link compatibility: HCA3-O (LR) links can connect to HCA2-O (LR) on zEC12
and zBC12.
z196 and z114 are not supported in same Parallel Sysplex or STP CTN with IBM z14™.
The z14 server fanout slots in the CPC drawer provide coupling link connectivity through the ICA SR and IFB fanout cards. In addition to coupling links for Parallel Sysplex, the fanout slots provide connectivity for the PCIe I/O drawer (PCIe fanout).
Up to 10 PCIe and 4 IFB fanout cards can be installed in each CPC drawer, as shown in
Figure 4-7.
Figure 4-7 CPC drawer front view showing the coupling links
Previous generations of IBM Z platforms, in particular z196 and zEC12, use processor books, which provide connectivity for up to eight InfiniBand fanouts per book.
In this case, a second CPC drawer is needed to fulfill all IFB connectivity, as shown in
Figure 4-8.
It is beyond the scope of this book to describe all possible migration scenarios. Always
consult with subject matter experts to help you to develop your migration strategy.
The following considerations can help you assess possible migration scenarios. The objective
of this list is to enable migration to IBM z14™ servers, support legacy coupling where
essential, and adopt ICA SR where possible to avoid the need for more CPC drawers and
other possible migration issues:
The IBM zEnterprise EC12 and BC12 are the last generation of Z servers to support
ISC-3, 12x HCA2-O, and 1x HCA2-O LR. They also are the last Z servers that can be part
of a Mixed Coordinated Timing Network (CTN).
Consider Long Distance Coupling requirements first:
– HCA3-O 1x or CE LR are the long-distance coupling links that are available on IBM
z14™ servers.
– ICA SR or HCA3-O 12x should be used for short distance coupling requirements.
ISC-3 Migration (IBM z14™/z13 servers do not support ISC-3):
– Evaluate current ISC-3 usage (long- and short-distance, coupling data, or timing only)
to determine how to fulfill ISC-3 requirements with the links that are available on IBM
z14™/z13 servers.
– You can migrate from ISC-3 to CE LR, ICA SR, 12x InfiniBand, or 1x InfiniBand on IBM
z14™/z13 servers.
– 1:1 Mapping of ISC-3 to Coupling over InfiniBand. On previous servers, the HCA2-C
fanouts enable ISC-3 coupling in the I/O Drawer. Two HCA2-C fanouts can be replaced
by two 1x fanouts (eight 1x links) or two 12x fanouts (four 12x links).
– ISC-3 supports one CHPID/link. Consolidate ISC-3 CHPIDs into CE LR, ICA SR or
IFB, and use multiple CHPIDs per link.
Sysplex Coupling and Timing Connectivity: IBM z14 M0x (M/T 3906) supports N-2 sysplex connectivity (z14 M0x, z14 ZR1, z13, z13s, zEC12, and zBC12), while IBM z14 ZR1 supports only N-1 sysplex connectivity (z14 M0x, z14 ZR1, z13, and z13s).
Between any two servers that are intended to exchange STP messages, configure each
server so that at least two coupling links exist for communication between the servers. This
configuration prevents the loss of one link from causing the loss of STP communication
between the servers. If a server does not have a CF LPAR, timing-only links can be used to
provide STP connectivity.
The z14 server does not support attachment to the IBM Sysplex Timer. An IBM z14™ server cannot be added into a Mixed CTN; it can participate only in an STP-only CTN.
Important: For more information about configuring an STP CTN with three or more
servers, see the Important Considerations for STP server role assignments white paper
that is available at the IBM Techdocs Library website.
If the guidelines are not followed, it might result in all the servers in the CTN becoming
unsynchronized. This condition results in a sysplex-wide outage.
Warning: This extra stratum level should be used only as a temporary state during reconfiguration. Customers should not run machines at stratum level 4 for extended periods because of the lower quality of the time synchronization.
Connections to all the CEC drawers provide redundancy for continued operation and
concurrent maintenance when a single oscillator card fails. Each oscillator card includes a
Bayonet Neill-Concelman (BNC) connector for PPS connection support, which attaches to
two different ETSs. Two PPS connections from two different ETSs are preferable for
redundancy.
The time accuracy of an STP-only CTN is improved by adding an ETS device with the PPS
output signal. STP tracks the highly stable accurate PPS signal from ETSs. It maintains
accuracy of 10 µs as measured at the PPS input of the z14 server. If STP uses an NTP server
without PPS, a time accuracy of 100 milliseconds to the ETS is maintained. ETSs with PPS
output are available from various vendors that offer network timing solutions.
The tamper-resistant hardware security module, which is contained on the Crypto Express6S
feature, is designed to conform to the Federal Information Processing Standard (FIPS) 140-2
Level 4 Certification. It supports User Defined Extension (UDX) services to implement
cryptographic functions and algorithms (when defined as an IBM CCA coprocessor).
The following EP11 compliance levels are available (Crypto Express6S and Crypto
Express5S):
FIPS 2009 (default)
FIPS 2011
BSI 2009
BSI 2011
Each Crypto Express6S feature occupies one I/O slot in the PCIe I/O drawer, and features no
CHPID assigned. However, it has one PCHID.
Each Crypto Express5S feature occupies one I/O slot in the PCIe I/O drawer, and features no
CHPID assigned. However, it has one PCHID.
All native PCIe features should be ordered in pairs for redundancy. The features are assigned
to one of the four resource groups (RGs) that run on the IFP, according to their physical
location in the PCIe I/O drawer. The IFP provides management and virtualization functions.
If two features of the same type are installed, one is always managed by resource group 1
(RG 1) or resource group 3 (RG 3), while the other feature is managed by resource group 2
(RG 2) or resource group 4 (RG 4). This configuration provides redundancy if one of the
features or resource groups needs maintenance or has a failure.
The IFP and RGs support the following infrastructure management functions:
Firmware update of adapters and resource groups
Error recovery and failure data collection
Diagnostic and maintenance tasks
The IBM zEnterprise Data Compression (zEDC) acceleration capability in z/OS and the
zEDC Express feature is designed to help improve cross-platform data exchange, reduce
CPU consumption, and save disk space.
The feature installs exclusively on the PCIe I/O drawer. Up to 16 features can be installed on
the system. One PCIe adapter or compression coprocessor is available per feature, which
implements compression as defined by RFC1951 (DEFLATE).
For more information about the management and definition of the zEDC feature, see
Appendix F, “IBM zEnterprise Data Compression Express” on page 511, and Appendix C,
“Native Peripheral Component Interconnect Express” on page 469.
The channel subsystem directs the flow of information between I/O devices and main storage.
It allows data processing to proceed concurrently with I/O processing, which relieves data
processors (central processor (CP), Integrated Facility for Linux (IFL)) of the task of
communicating directly with I/O devices.
The channel subsystem includes subchannels, I/O devices that are attached through control
units, and channel paths between the subsystem and control units. For more information
about the channel subsystem, see 5.1.1, “Multiple logical channel subsystems”.
The design of IBM Z servers offers considerable processing power, memory size, and I/O
connectivity. In support of the larger I/O capability, the CSS structure was scaled up by
introducing multiple logical channel subsystems (LCSSs) with the z990, and multiple
subchannel sets (MSS) with the z9.
An overview of the channel subsystem for z14 servers is shown in Figure 5-1. z14 servers are
designed to support up to six logical channel subsystems, each with four subchannel sets
and up to 256 channels.
All channel subsystems are defined within a single configuration, which is called I/O
configuration data set (IOCDS). The IOCDS is loaded into the hardware system area (HSA)
during a central processor complex (CPC) power-on reset (POR) to start all of the channel
subsystems.
On z14 servers, the HSA is pre-allocated in memory with a fixed size of 192 GB, which is in
addition to the customer-purchased memory. This fixed-size memory for the HSA eliminates
planning requirements for the initial I/O configuration and pre-planning for future I/O
expansions.
CPC drawer repair: The HSA can be moved from one CPC drawer to a different drawer in
an enhanced availability configuration as part of a concurrent CPC drawer repair (CDR)
action.
The introduction of multiple LCSSs enabled an IBM Z server to have more than one channel
subsystem logically, while each logical channel subsystem maintains the same manner of
I/O processing. Also, a logical partition (LPAR) is now attached to a specific logical channel
subsystem, which makes the extension to multiple logical channel subsystems transparent
to the operating systems and applications. The multiple image facility (MIF) in the structure
enables resource sharing across LPARs within a single LCSS or across the LCSSs.
The multiple LCSS structure extends the total I/O connectivity of Z servers to support a
balanced configuration for the growth of processor and I/O capabilities.
Note: The phrase channel subsystem has the same meaning as logical channel subsystem in
this section, unless otherwise stated.
Subchannels
A subchannel provides the logical appearance of a device to the program and contains the
information that is required for sustaining a single I/O operation. Each device is accessible by
using one subchannel in a channel subsystem to which it is assigned according to the active
IOCDS of the Z server.
In z/Architecture, the first subchannel set of an LCSS can have 63.75 K subchannels (with
0.25 K reserved), with a subchannel set ID (SSID) of 0. By enabling the multiple subchannel
sets, which are described in 5.1.2, “Multiple subchannel sets” on page 198, extra subchannel
sets are available to increase the device addressability of a channel subsystem.
Each channel path in a channel subsystem features a unique 2-digit hexadecimal identifier
that is known as a channel-path identifier (CHPID), which ranges 00 - FF. Therefore, a total of
256 CHPIDs are supported by a CSS, and a maximum of 1536 CHPIDs are available on a
z14 server with six logical channel subsystems.
A port on an I/O feature card features a unique physical channel identifier (PCHID) according
to the physical location of this I/O feature adapter, and the sequence of this port on the
adapter.
In addition, a port on a fanout adapter has a unique adapter identifier (AID), according to the
physical location of this fanout adapter, and the sequence of this port on the adapter.
A CHPID is assigned to a physical port by defining the corresponding PCHID or AID in the I/O
configuration definitions.
Control units
A control unit provides the logical capabilities that are necessary to operate and control an
I/O device. It adapts the characteristics of each device so that it can respond to the standard
form of control that is provided by the CSS.
A control unit can be housed separately or can be physically and logically integrated with the
I/O device, channel subsystem, or within the Z server.
I/O devices
An I/O device provides external storage, a means of communication between
data-processing systems, or a means of communication between a system and its
environment. In the simplest case, an I/O device is attached to one control unit and is
accessible through one or more channel paths that are connected to the control unit.
Each subchannel has a unique four-digit hexadecimal number 0x0000 - 0xFFFF. Therefore, a
single subchannel set can address and access up to 64 K I/O devices.
MSS was introduced in z9 to extend the maximum number of addressable I/O devices for a
channel subsystem.
As with the z13 server, the z14 supports four subchannel sets for each logical channel
subsystem, which allows a maximum of 255.74 K devices to be accessed by a logical channel
subsystem, and therefore by a logical partition and the programs that are running on it.
Subchannel number
The subchannel number is a four-digit hexadecimal number 0x0000 - 0xFFFF, which is
assigned to a subchannel within a subchannel set of a channel subsystem. Subchannels in
each subchannel set are always assigned subchannel numbers within a single range of
contiguous numbers.
With the subchannel numbers, a program that is running on an LPAR (for example, an
operating system) can specify all I/O functions relative to a specific I/O device by designating
a subchannel that is assigned to the I/O devices.
Normally, subchannel numbers are used only in communication between the programs and
the channel subsystem.
Device number
A device number is an arbitrary number 0x0000 - 0xFFFF, which is defined by a system
programmer in an I/O configuration for naming an I/O device. The device number must be
unique within a subchannel set of a channel subsystem. It is assigned to the corresponding
subchannel by the channel subsystem when an I/O configuration is activated. Therefore, a
subchannel in a subchannel set of a channel subsystem includes a device number together
with a subchannel number for designating an I/O operation.
The device number provides a means to identify a device, independent of any limitations that
are imposed by the system model, configuration, or channel-path protocols.
A device number also can be used to designate an I/O function to a specific I/O device.
Because it is an arbitrary number, it can easily be fit into any configuration management and
operating management scenarios. For example, a system administrator can set all of the
z/OS systems in an environment to device number 1000 for their system RES volumes.
With multiple subchannel sets, a subchannel is assigned to a specific I/O device by the
channel subsystem with an automatically assigned subchannel number and a device number
that is defined by user. An I/O device can always be identified by an SSID with a subchannel
number or a device number. For example, a device with device number AB00 of subchannel
set 1 can be designated as 1AB00.
Normally, the subchannel number is used by the programs to communicate with the channel
subsystem and I/O device, whereas the device number is used by a system programmer,
operator, and administrator.
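As an illustration only (the device numbers, control unit number, and unit type are hypothetical),
a parallel access volume alias device in subchannel set 1 might be defined in IOCP by using the
SCHSET keyword of the IODEVICE statement:
   IODEVICE ADDRESS=(AB00,8),CUNUMBR=(1000),UNIT=3390A,SCHSET=1
With such a definition, device AB00 resides in subchannel set 1 and can be designated as
1AB00, as described previously.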
For the extra subchannel sets enabled by the MSS facility, each has 65535 subchannels
(64 K minus one) for specific types of devices. These extra subchannel sets are referred to as
alternative subchannel sets in z/OS. Also, a device that is defined in an alternative subchannel
set is considered a special device, which normally features a special device type in the I/O
configuration.
Currently, a z14 server that is running z/OS defines the following types of devices in another
subchannel set, with the proper APAR or PTF installed:
Alias devices of the parallel access volumes (PAV).
Secondary devices of GDPS Metro Mirror Copy Service (formerly Peer-to-Peer Remote
Copy (PPRC)).
FlashCopy SOURCE and TARGET devices with program temporary fix (PTF) OA46900.
Db2 data backup volumes with PTF OA24142.
The use of another subchannel set for these special devices helps reduce the number of
devices in the subchannel set 0, which increases the growth capability for accessing more
devices.
IPL from an alternative subchannel set is supported by z/OS V1.13 or later, and Version 1.12
with PTFs.
D IOS,CONFIG(ALL)
IOS506I 11.32.19 I/O CONFIG DATA 340
ACTIVE IODF DATA SET = SYS6.IODF39
CONFIGURATION ID = L06RMVS1 EDT ID = 01
TOKEN: PROCESSOR DATE TIME DESCRIPTION
SOURCE: SCZP501 14-10-31 08:51:47 SYS6 IODF39
ACTIVE CSS: 0 SUBCHANNEL SETS CONFIGURED: 0, 1, 2, 3
CHANNEL MEASUREMENT BLOCK FACILITY IS ACTIVE
LOCAL SYSTEM NAME (LSYSTEM): SCZP501
HARDWARE SYSTEM AREA AVAILABLE FOR CONFIGURATION CHANGES
PHYSICAL CONTROL UNITS 8099
CSS 0 - LOGICAL CONTROL UNITS 3996
SS 0 SUBCHANNELS 54689
SS 1 SUBCHANNELS 58862
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 1 - LOGICAL CONTROL UNITS 4088
SS 0 SUBCHANNELS 65280
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 2 - LOGICAL CONTROL UNITS 4088
SS 0 SUBCHANNELS 65280
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 3 - LOGICAL CONTROL UNITS 4088
SS 0 SUBCHANNELS 65280
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 4 - LOGICAL CONTROL UNITS 4088
SS 0 SUBCHANNELS 65280
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 5 - LOGICAL CONTROL UNITS 4088
SS 0 SUBCHANNELS 65280
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
Figure 5-2 Output for display ios,config(all) command with MSS
By assigning the same CHPID from different LCSSs to the same channel path (for example, a
PCHID), the channel path can be accessed by any LPARs from these LCSSs at the same
time. The CHPID is spanned across those LCSSs. The use of spanned channel paths
decreases the number of channels that are needed in an installation of Z servers.
A sample of channel paths that are defined as dedicated, shared, and spanned is shown in
Figure 5-3.
Figure 5-3 Channel paths that are defined as dedicated, shared, and spanned across CSS0 and
CSS1 (in the example, spanned CHPID 04 is assigned to PCHID 120)
Channel spanning is supported for internal links (HiperSockets and IC links) and for certain
types of external links. External links that are supported on z14 servers include FICON
Express16S+, FICON Express16S, FICON Express8S, OSA-Express6S, OSA-Express5S,
and Coupling Links.
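The following IOCP sketch is for illustration only (the CHPID, PCHID, and partition names are
examples, and column-72 continuation formatting is omitted). It shows how a FICON channel
path might be defined as spanned across CSS0 and CSS1 by using the same CHPID number in
both channel subsystems with a single PCHID:
   CHPID PATH=(CSS(0,1),04),SHARED,TYPE=FC,PCHID=120,
         PARTITION=((CSS(0),(PROD1,PROD2)),(CSS(1),(PROD5)))
Because the CHPID is defined in both channel subsystems, LPARs in either CSS can be placed
in its access list.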
The LPAR name, MIF image ID, and LPAR ID are used by the channel subsystem to identify
I/O functions from different LPARs across multiple LCSSs, which supports the implementation
of these dedicated, shared, and spanned channel paths.
The following example shows LPAR names and MIF image IDs (which are specified in HCD or
IOCP) and LPAR IDs (which are specified in the image profile) for LPARs that are spread
across CSS0 through CSS5; for example, LPAR TST1 in CSS0 has LPAR ID 02 and MIF
image ID 2.
LPAR name
The LPAR name is defined as the partition name parameter in the RESOURCE statement of an
I/O configuration. The LPAR name must be unique across the server.
MIF image ID
The MIF image ID is defined as a parameter for each LPAR in the RESOURCE statement of an
I/O configuration. It ranges 1 - F, and must be unique within an LCSS. However, duplicates
are allowed in different LCSSs.
If a MIF image ID is not defined, an arbitrary ID is assigned when the I/O configuration is
activated. The z14 server supports a maximum of six LCSSs, with a total of 85 LPARs that
can be defined. Each LCSS of a z14 server can support the following numbers of LPARs:
LCSS0 to LCSS4 support 15 LPARs each, and the MIF image ID is 1 - F.
LCSS5 supports 10 LPARs, and the MIF image IDs are 1 - A.
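As a simplified sketch (the partition names and MIF image IDs follow the preceding example,
and column-72 continuation formatting is omitted), the LPAR names and their MIF image IDs
might be coded in the IOCP RESOURCE statement as follows:
   RESOURCE PARTITION=((CSS(0),(TST1,2),(PROD1,4),(PROD2,A)),
                       (CSS(1),(TST2,4),(PROD3,6),(PROD4,D)))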
LPAR ID
The LPAR ID is defined by a user in an image activation profile for each LPAR. It is a 2-digit
hexadecimal number 00 - 7F. The LPAR ID must be unique across the server. Although it is
arbitrarily defined by the user, an LPAR ID often is the CSS ID concatenated to its MIF image
ID, which makes the value more meaningful for the system administrator. For example, an
LPAR with LPAR ID 1A defined in that manner means that the LPAR is defined in LCSS1, with
the MIF image ID A.
Note: Certain functions might require specific levels of an operating system, PTFs,
or both.
The z14 is designed for delivering a transparent and consumable approach that enables
extensive (pervasive) encryption of data in flight and at rest, with the goal of substantially
simplifying data security and reducing the costs that are associated with protecting data while
achieving compliance mandates.
This chapter introduces the principles of cryptography and describes the implementation of
cryptography in the hardware and software architecture of IBM Z servers. It also describes
the features that IBM z14 servers offer. Finally, the chapter summarizes the cryptographic
features and required software.
The new functions support new standards and are designed to meet the following compliance
requirements:
Payment Card Industry (PCI) Hardware Security Module (HSM) certification to strengthen
the cryptographic standards for attack resistance in the payment card systems area.
PCI HSM certification is exclusive for Crypto Express6S.
National Institute of Standards and Technology (NIST) through the Federal Information
Processing Standard (FIPS) standard to implement guidance requirements.
Common Criteria EP11 EAL4.
German Banking Industry Commission (GBIC).
VISA Format Preserving Encryption (VFPE) for credit card numbers.
Enhanced public key Elliptic Curve Cryptography (ECC) for users such as Chrome,
Firefox, and Apple’s iMessage.
IBM z14 servers include standard cryptographic hardware and optional cryptographic
features for flexibility and growth capability. IBM has a long history of providing hardware
cryptographic solutions. This history stretches from the development of the Data Encryption
Standard (DES) in the 1970s to the Crypto Express tamper-sensing and tamper-responding
programmable features.
Crypto Express is designed to meet the US Government’s highest security rating, which is
FIPS 140-2 Level 4, and several other security ratings, such as the Common Criteria for
Information Technology Security Evaluation, the PCI HSM criteria, and the criteria for the
German Banking Industry Commission (formerly known as Deutsche Kreditwirtschaft)
evaluation.
The cryptographic functions include the full range of cryptographic operations that are
necessary for local and global business and financial institution applications. User Defined
Extensions (UDX) allow you to add custom cryptographic functions to the functions that z14
servers offer.
Also, it is necessary to ensure that a message cannot be corrupted, while ensuring that the
sender and the receiver really are the persons who they claim to be. Over time, several
methods were used to achieve these objectives, with more or less success. Many procedures
and algorithms for encrypting and decrypting data were developed that are increasingly
complicated and time-consuming.
These goals should all be possible without unacceptable overhead to the communication. The
goal is to keep the system secure, manageable, and productive.
The basic method of data protection is to encrypt and decrypt it, while for authentication,
integrity, and non-repudiation, hash algorithms, message authentication codes (MACs), digital
signatures, and certificates are used.
In other words, the security of a cryptographic system should depend on the security of the
key, so the key must be kept secret. Therefore, the secure management of keys is the primary
task of modern cryptographic systems.
6.2.3 Keys
The keys that are used for the cryptographic algorithms usually are sequences of numbers
and characters, but can also be any other sequence of bits. The length of a key influences the
security (strength) of the cryptographic method. The longer the used key, the more difficult it
is to compromise a cryptographic algorithm.
For example, the DES (symmetric key) algorithm uses keys with a length of 56 bits,
Triple-DES (TDES) uses keys with a length of 112 bits, and Advanced Encryption Standard
(AES) uses keys of 128, 192, or 256 bits. The asymmetric key RSA algorithm (named
after its inventors Rivest, Shamir, and Adleman) uses keys with a length of 1024 - 4096 bits.
The cryptographic hardware that is supported on IBM z14 servers is shown in Figure 6-2.
These features are described in this chapter.
As shown in the figure, every processor unit (PU) single chip module (SCM) in the CPC drawer
is capable of running the CPACF function, and the Crypto Express6S features are installed in
the PCIe I/O drawers.
Implemented in every processor unit (PU) or core in a central processor complex (CPC) is a
cryptographic coprocessor that can be used for cryptographic algorithms that use clear keys
or protected keys. For more information, see 6.4, “CP Assist for Cryptographic Functions” on
page 216.
The Crypto Express6S card is an HSM that is placed in the PCIe I/O drawer of z14 servers. It
also supports cryptographic algorithms using secret keys. For more information, see 6.5,
“Crypto Express6S” on page 220.
Finally, for entering keys in a secure way into the Crypto Express6S HSM, a TKE workstation
is required, which often also is equipped with smart card readers. For more information, see
6.6, “TKE workstation” on page 232.
A TKE includes support for the AES encryption algorithm with 256-bit master keys and key
management functions to load or generate master keys to the cryptographic coprocessor.
Important: Products that include any of the cryptographic feature codes contain
cryptographic functions that are subject to special export licensing requirements by the
United States Department of Commerce. It is your responsibility to understand and adhere
to these regulations when you are moving, selling, or transferring these products.
To access and use the cryptographic hardware devices that are provided by z14 servers, the
application must use an application programming interface (API) that is provided by the
operating system. In z/OS, the Integrated Cryptographic Service Facility (ICSF) provides the
APIs and manages the access to the cryptographic devices, as shown in Figure 6-3.
ICSF is a software component of z/OS. ICSF works with the hardware cryptographic features
and the Security Server (IBM Resource Access Control Facility (IBM RACF®) element) to
provide secure, high-speed cryptographic services in the z/OS environment. ICSF provides
the APIs by which applications request the cryptographic services from the CPACF and
the Crypto Express6S feature.
ICSF transparently routes application requests for cryptographic services to one of the
integrated cryptographic engines (CPACF or a Crypto Express6S card), depending on
performance or requested cryptographic function. ICSF is also the means by which the
secure Crypto Express6S features are loaded with master key values, which allows the
hardware features to be used by applications. The cryptographic hardware that is installed in
z14 servers determines the cryptographic features and services that are available to the
applications.
This cryptographic coprocessor, called the CPACF, is not qualified as an HSM; therefore, it is
not suitable for handling algorithms that use secret keys. However, the coprocessor can be
used for cryptographic algorithms that use clear keys or protected keys. The CPACF works
synchronously with the PU, which means that the owning processor is busy when its
coprocessor is busy. This setup provides a fast device for cryptographic services.
CPACF now supports pervasive encryption. Simple policy controls allow businesses to enable
encryption to protect data in mission-critical databases without the need to stop the database
or re-create database objects. Database administrators can use z/OS Dataset Encryption,
z/OS Coupling Facility Encryption, z/VM encrypted hypervisor paging, and z/TPF transparent
database encryption, which use performance enhancements in the hardware.
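As a hedged sketch of z/OS data set encryption (the data set name and key label are
hypothetical, and the key label must identify a suitable AES secure key in the ICSF CKDS), a
new extended-format data set can be allocated with a key label in JCL:
//NEWDS    DD DSN=PROD.PAYROLL.ENCRYPTD,DISP=(NEW,CATLG),
//            DSKEYLBL='DATASET.PROD.ENCRKEY.001',
//            DSNTYPE=EXTREQ,RECFM=FB,LRECL=80,
//            SPACE=(CYL,(10,10))
Data that is written to such a data set is encrypted by using CPACF protected-key operations.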
The CPACF offers a set of symmetric cryptographic functions that enhance the encryption
and decryption performance of clear key operations. These functions are for SSL, virtual
private network (VPN), and data-storing applications that do not require FIPS 140-2 Level 4
security.
CPACF is designed to facilitate the privacy of cryptographic key material when used for data
encryption through key wrapping implementation. It ensures that key material is not visible to
applications or operating systems during encryption operations. For more information, see
6.4.2, “CPACF protected key” on page 218.
These functions are provided as problem-state z/Architecture instructions that are directly
available to application programs. These instructions are known as Message-Security Assist
(MSA). When enabled, the CPACF runs at processor speed for every CP, IFL, and zIIP. For
more information about MSA instructions, see z/Architecture Principles of Operation,
SA22-7832.
The CPACF must be enabled by using an enablement feature (feature code 3863), which is
available for no extra charge. The exception is support for the hashing algorithms SHA-1,
SHA-256, SHA-384, and SHA-512, which is always enabled.
The CPACF coprocessor in z14 servers is redesigned for improved performance compared to
the z13, depending on the function that is being used. The following tools might benefit from
the throughput improvements:
Db2/IMS encryption tool
Db2 built-in encryption
z/OS Communication Server: IPsec/IKE/AT-TLS
z/OS System SSL
z/OS Network Authentication Service (Kerberos)
DFDSS Volume encryption
z/OS Java SDK
For the SHA hashing algorithms and the random number generation algorithms, only clear
keys are used. For the symmetric encryption and decryption DES and AES algorithms and
clear keys, protected keys can also be used. Protected keys require a Crypto Express6S or a
Crypto Express5S card that is running in CCA mode. For more information, see 6.5.2, “Crypto
Express6S as a CCA coprocessor” on page 223.
The hashing algorithms SHA-1, SHA-2, and SHA-3 (with support for SHA-224, SHA-256,
SHA-384, and SHA-512) are enabled on all servers and do not require the CPACF
enablement feature. For all other algorithms, the no-charge CPACF enablement feature (FC
3863) is required.
The CPACF functions are supported by z/OS, z/VM, z/VSE, z/TPF, and Linux on Z.
Clear keys process faster than secure keys because the process is done synchronously on
the CPACF. Protected keys blend the security of Crypto Express6S or Crypto Express5S
coprocessors and the performance characteristics of the CPACF. This process allows it to run
closer to the speed of clear keys.
Because the wrapping key is unique to each LPAR, a protected key cannot be shared with
another LPAR. By using key wrapping, CPACF ensures that key material is not visible to
applications or operating systems during encryption operations.
CPACF code generates the wrapping key and stores it in the protected area of the hardware
system area (HSA). The wrapping key is accessible only by firmware. It cannot be accessed
by operating systems or applications. DES/T-DES and AES algorithms are implemented in
CPACF code with the support of hardware assist functions. Two variations of wrapping keys
are generated: One for DES/T-DES keys and another for AES keys.
Wrapping keys are generated during the clear reset each time an LPAR is activated or reset.
No customizable option is available at the Support Element (SE) or Hardware Management
Console (HMC) that permits or prevents the wrapping key generation. This function flow is
shown in Figure 6-5.
In this flow, ICSF (running in software) retrieves the secure key, which is a data key (DK) that
is wrapped under the CCA master key (CCAMK), from the CKDS. The CPACF firmware then
rewraps the data key under the CPACF wrapping key (CPACFWK) so that it can be used as a
protected key.
A new segment in the profiles of the CSFKEYS class in IBM RACF restricts which secure
keys can be used as protected keys. By default, all secure keys are considered not eligible to
be used as protected keys. The process that is shown in Figure 6-5 considers a secure key as
being the source of a protected key.
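As a hedged sketch of that control (the profile name is hypothetical), the ICSF segment of a
CSFKEYS profile can mark a secure key label as eligible to be rewrapped as a protected key:
RALTER  CSFKEYS DATASET.PROD.ENCRKEY.* ICSF(SYMCPACFWRAP(YES))
SETROPTS RACLIST(CSFKEYS) REFRESH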
The protected key is designed to provide substantial throughput improvements for a large
volume of data encryption and low latency for encryption of small blocks of data. A
high-performance secure key solution, also known as a protected key solution, requires the
ICSF HCR7770 as a minimum release.
Each Crypto Express6S PCI Express adapter is available in one of the following
configurations:
Secure IBM CCA coprocessor (CEX6C) for FIPS 140-2 Level 4 certification. This
configuration includes secure key functions. It is optionally programmable to deploy more
functions and algorithms by using UDX. For more information, see 6.5.2, “Crypto
Express6S as a CCA coprocessor” on page 223.
Secure IBM Enterprise PKCS #11 (EP11) coprocessor (CEX6P) implements an
industry-standardized set of services that adhere to the PKCS #11 specification V2.20 and
more recent amendments. It was designed for extended FIPS and Common Criteria
evaluations to meet public sector requirements. This new cryptographic coprocessor
mode introduced the PKCS #11 secure key function. For more information, see 6.5.3,
“Crypto Express6S as an EP11 coprocessor” on page 228.
A TKE workstation is required to support the administration of the Crypto Express6S when
it is configured in EP11 mode.
Accelerator (CEX6A) for acceleration of public key and private key cryptographic
operations that are used with SSL/TLS processing. For more information, see 6.5.4,
“Crypto Express6S as an accelerator” on page 229.
These modes can be configured by using the SE. The PCIe adapter must be configured
offline to change the mode.
Attention: Switching between configuration modes erases all card secrets. The exception
is when you are switching from Secure CCA to accelerator, and vice versa.
The Crypto Express6S feature does not include external ports and does not use optical fiber
or other cables. It does not use channel path identifiers (CHPIDs), but requires one slot in the
PCIe I/O drawer and one physical channel ID (PCHID) for each PCIe cryptographic adapter.
Removal of the feature or card zeroizes its content. Access to the PCIe cryptographic adapter
is controlled through the setup in the image profiles on the SE.
Adapter: Although PCIe cryptographic adapters include no CHPID type and are not
identified as external channels, all logical partitions (LPARs) in all channel subsystems can
access the adapter. In z14 servers, up to 85 LPARs are available per adapter. Accessing
the adapter requires a setup in the image profile for each partition. The adapter must be in
the candidate list.
Each z14 server supports up to 16 Crypto Express6S and Crypto Express5S features.
Crypto Express5S features are not orderable for a new build system, but can be carried
forward from a z13 by using an MES. Configuration information for Crypto Express6S is listed
in Table 6-2.
The concept of dedicated processor does not apply to the PCIe cryptographic adapter.
Whether configured as a coprocessor or an accelerator, the PCIe cryptographic adapter is
made available to an LPAR. It is made available as directed by the domain assignment and
the candidate list in the LPAR image profile. This availability is not changed by the shared or
dedicated status that is given to the CPs in the partition.
The same PCIe cryptographic adapter number and usage domain index combination can be
defined for more than one LPAR (up to 85). For example, you might define a configuration for
backup situations. However, only one of the LPARs can be active at a time.
For more information, see 6.5.5, “Managing Crypto Express6S” on page 229.
3. SHA-3 was standardized by NIST in 2015. SHA-2 is still acceptable, and there is no indication that SHA-2 is
vulnerable or that SHA-3 is more or less vulnerable than SHA-2.
Several of these algorithms require a secure key and must run on an HSM. Some of these
algorithms can also run with a clear key on the CPACF. Many standards are only supported
when the Crypto Express6S card is running in CCA mode. Others are supported only when
the card is running in EP11 mode.
The three modes for the Crypto Express6S card are further described in the following
sections. For more information, see 6.7, “Cryptographic functions comparison” on page 238.
UDX is supported under a special contract through an IBM or approved third-party service
offering. The Crypto Cards website directs your request to an IBM Global Services location
for your geographic location. A special contract is negotiated between IBM Global Services
and you for the development of the UDX code by IBM Global Services according to your
specifications and an agreed-upon level of the UDX.
A UDX toolkit for IBM Z servers is tied to specific versions of the CCA card code and the
related host code. UDX is available for the Crypto Express6S (Secure IBM CCA coprocessor
mode only) features. A UDX migration is no more disruptive than a normal Microcode
Change Level (MCL) or ICSF release migration.
In z14 servers, up to four UDX files can be imported. These files can be imported only from a
DVD. The UDX configuration window is updated to include a Reset to IBM Default button.
Consideration: CCA features a new code level starting with z13 servers, and the UDX
clients require a new UDX.
On z14 servers, the Crypto Express6S card is delivered with CCA Level 6.0 firmware. A new
set of cryptographic functions and callable services are provided by the IBM CCA LIC to
enhance the functions that secure financial transactions and keys. The Crypto Express6S
includes the following features:
Greater than 16 domains support up to 85 LPARs on z14 servers
Payment Card Industry (PCI) PIN Transaction Security (PTS) HSM Certification exclusive
to CEX6S and z14
Visa Format Preserving Encryption (VFPE) support, introduced to z13 or z13s servers
AES PIN support for the German banking industry
PKA Translate UDX function into CCA
Verb Algorithm Currency
Now in z14 servers, the IBM Z crypto architecture can support up to 256 domains in an
adjunct processor (AP) with the AP extended addressing (APXA) facility that is installed. As
such, the Crypto Express cards are enhanced to handle 256 domains, and the IBM Z
firmware provides up to 85 domains to customers (to match the current LPAR maximum).
Compliance with the PCI-HSM standard is valuable for customers, particularly those
customers who are in the banking and finance industry. This certification is important to
clients for the following fundamental reasons:
Compliance is increasingly becoming mandatory.
The requirements in PCI-HSM make the system more secure.
If you are a bank, acquirer, processor, or other participant in the payment card systems, the
card brands can impose requirements on you if you want to process their cards. One set of
requirements they are increasingly enforcing are the PCI standards.
The card brands work with PCI in developing these standards, and they focused first on the
standards they considered most important, particularly the PCI Data Security Standard
(PCI-DSS). Some of the other standards were written or required later, and PCI-HSM is one
of the last standards to be developed. In addition, the standards themselves were increasing
the strength of their requirements over time. Some requirements that were optional in earlier
versions of the standards are now mandatory.
In general, the trend is for the card brands to enforce more of the PCI standards and to
enforce them more rigorously. The trend in the standards is to impose more and stricter
requirements in each successive version. The net result is that companies subject to these
requirements can expect that they eventually must comply with all of the requirements.
VFPE allows customers to add encryption to their applications in such a way that the
encrypted data can flow through their systems without requiring a massive redesign of their
application. For example, if the credit card number is VFPE-encrypted at the point of entry,
the cipher text still behaves as a credit card number. It can flow through business logic until it
meets a back-end transaction server that can VFPE-decrypt it to get the original credit card
number to process the transaction.
This support includes PIN method APIs, PIN administration APIs, new key management
verbs, and new access control points support that is needed for DK-defined functions.
UDX is integrated into the base CCA code to support translating an external RSA CRT key
into new formats. These new formats use tags to identify key components. Depending on
which new rule array keyword is used with the PKA Key Translate callable service, the service
TDES encrypts those components in CBC or ECB mode. In addition, AES CMAC support is
delivered.
4. CCA 5.4 and 6.1 enhancements are also supported for z/OS V2R1 with ICSF HCR77C1 (WD17) with Small
Program Enhancements (SPEs) (z/OS continuous delivery model).
Note: Although older IBM Z servers and operating systems are also supported, they
are out of the scope of this IBM Redbooks publication.
The secure IBM Enterprise PKCS #11 (EP11) coprocessor runs the following tasks:
Encrypt and decrypt (AES, DES, TDES, and RSA)
Sign and verify (DSA, RSA, and ECDSA)
Generate keys and key pairs (DES, AES, DSA, ECC, and RSA)
HMAC (SHA1, SHA2, or SHA3 [SHA224, SHA256, SHA384, and SHA512])
Digest (SHA1, SHA2, or SHA3 [SHA224, SHA256, SHA384, and SHA512])
Wrap and unwrap keys
Random number generation
Get mechanism list and information
Attribute values
Key Agreement (Diffie-Hellman)
The function extension capability through UDX is not available to the EP11.
z/OS V2.2 and V2.3 require ICSF Web Deliverable WD18 (HCR77D0) to support the
following new features for EP11 Stage 4:
New elliptic curve algorithms for PKCS#11 signature, key derivation operations
– Ed448 elliptic curve
– EC25519 elliptic curve
EP11 Concurrent Patch Apply: Allows service to be applied to the EP11 coprocessor
dynamically without taking the crypto adapter offline (already available for CCA
coprocessors)
eIDAS compliance: eIDAS is a cross-border EU regulation for portable recognition of
electronic identification
FIPS 140-2 certification is not relevant to the accelerator because it operates with clear keys
only. The function extension capability through UDX is not available to the accelerator.
The functions that remain available when the Crypto Express6S feature is configured as an
accelerator are used for the acceleration of modular arithmetic operations. That is, the RSA
cryptographic operations are used with the SSL/TLS protocol. The following operations are
accelerated:
PKA Decrypt (CSNDPKD) with PKCS-1.2 formatting
PKA Encrypt (CSNDPKE) with zero-pad formatting
Digital Signature Verify
The RSA encryption and decryption functions support key lengths of 512 - 4,096 bits in the
Modulus-Exponent (ME) and Chinese Remainder Theorem (CRT) formats.
Each Crypto Express6S feature includes one PCIe adapter. The adapter is available in the
following configurations:
IBM Enterprise Common Cryptographic Architecture (CCA) Coprocessor (CEX6C)
IBM Enterprise Public Key Cryptography Standards #11 (PKCS) Coprocessor (CEX6P)
IBM Crypto Express6S Accelerator (CEX6A)
During the feature installation, the PCIe adapter is configured by default as the CCA
coprocessor.
The Crypto Express6S feature does not use CHPIDs from the channel subsystem pool.
However, it requires one slot in a PCIe I/O drawer, and one PCHID for each PCIe
cryptographic adapter.
For enabling an LPAR to use a Crypto Express6S card, the following cryptographic resources
in the image profile must be defined for each partition:
Usage domain index
Control domain index
PCI Cryptographic Coprocessor Candidate List
PCI Cryptographic Coprocessor Online List
This task is accomplished by using the Customize/Delete Activation Profile task, which is in
the Operational Customization Group, from the HMC or from the SE. Modify the
cryptographic initial definition from the Crypto option in the image profile, as shown in
Figure 6-6 on page 231.
Important: After this definition is modified, any change to the image profile requires a
DEACTIVATE and ACTIVATE of the logical partition for the change to take effect.
Therefore, this cryptographic definition is disruptive to a running system.
Operational changes can be made by using the Change LPAR Cryptographic Controls task
from the SE, which reflects the cryptographic definitions in the image profile for the partition.
With this function, the cryptographic feature can be added and removed dynamically, without
stopping a running operating system.
For more information about the management of Crypto Express6S cards, see IBM z14 (3906)
Configuration Setup, SG24-8460.
The TKE contains a combination of hardware and software. A mouse, keyboard, flat panel
display, PCIe adapter, and writable USB media to install the TKE LIC are included with the
system unit. The TKE workstation requires an IBM 4768 crypto adapter.
A TKE workstation is part of a customized solution for using the Integrated Cryptographic
Service Facility for z/OS (ICSF for z/OS) or Linux for z Systems. This program provides a
basic key management system for the cryptographic keys of a z14 server that has Crypto
Express features installed.
TKE FCs #0085 and #0086 can be used to control the Crypto Express6S or Crypto
Express5S cards on z14 servers. They also can be used to control the Crypto Express5S on
z13 and z13s servers, and the Crypto cards on older still supported servers.
The new TKE 9.1 LIC (FC 0880) features the following enhancements:
TKE 9.1 Licensed Internal Code enhancements to support EC521-strength TKE and
Migration zones. An EC521 Migration zone is required if you want to use the migration
wizard to collect and apply PCI-compliant domain information.
TKE 9.1 also has a new family of wizards that makes it easy to create EC521 zones on all
of its smart cards. This feature simplifies the process of deploying a TKE for the first time
or simplifies the process of moving data from a weaker TKE zone to a new EC521 zone.
A new smart card for the Trusted Key Entry (TKE) allows stronger Elliptic Curve Cryptography
(ECC) levels. Additional TKE Smart Cards (FC 0900, packs of 10, FIPS certified blanks)
require TKE 9.1 LIC. The TKE 9.0 LIC (FC 0879) features the following enhancements:
Key material copy to alternative zone
By using TKE 9.0, key material can be copied from smart cards in one TKE zone to smart
cards in another zone. You might have old 1024-bit strength TKE zones, and might want to
move or copy the key material in those zones into a new, stronger TKE zone. To use this
new feature, you create TKE or EP11 smart cards on your TKE 9.0 system. You then
enroll the new TKE or EP11 smart cards in an alternative zone. This process allows you to
copy smart card content from a smart card that is enrolled in the alternative zone.
Save TKE data directory structure with files to USB
TKE data can be saved to, or restored from, removable media in the same directory
structure in which it was found on the TKE.
Create key parts without opening a host
Administrators can now use the TKE application to create key parts without opening a
host. This ability allows the key administrator to create key parts while being offline or
before any hosts are defined. This feature can be found in the TKE application under the
Utilities → Create CCA key parts pull-down menu.
New TKE Audit Log application
A new TKE Audit Log application is available for the Privileged Mode Access ID of
AUDITOR. This application provides an easy-to-use interface to view the TKE workstation
security audit records from the TKE workstation.
Heartbeat audit record
TKE workstations cut an audit record when the TKE starts or when no audit events
occurred during a client-configured duration. The record shows the serial number of the
TKE local crypto adapter and indicates whether the local crypto adapter was changed
since the last check.
The following features are related to support for the Crypto Express6S with CCA 6.0. The
Crypto Express6S with CCA 6.0 is designed to meet the PCI-HSM PIN Transaction Security
v3.0, 2016 standard:
Domain mode management
With CCA 6.0, individual domains are in one of the following modes:
– Normal Mode
– Imprint Mode
– Compliant Mode
Imprint and compliant mode were added to indirectly and directly meet the PCI-HSM PIN
Transaction Security v3.0, 2016 requirement. TKE is required to manage Host Crypto
Module domains in imprint and compliant mode.
Set clock
With TKE 9.0, the host crypto module’s clock can be set. The clock must be set before a
domain can be placed in imprint mode.
Domain-specific Host Crypto Module Audit Log management
Domains in imprint mode or compliant mode on a Crypto Express6S maintain a
domain-specific module audit log. The TKE provides a feature for downloading the audit
records so they can be viewed.
Domain-specific roles and authorities
Domains in imprint mode or compliant mode on a Crypto Express6S must be managed by
using domain-specific roles and authorities. The TKE provides new management features
for the domain-specific roles and authorities. The roles are subject to forced dual control
policies that prevent roles from issuing and co-signing a command. For information about
how to manage imprint and compliant mode domains, see the TKE User’s Guide.
Setup PCI Environment Wizard
To simplify the management of a compliant domain, the TKE provides a setup wizard that
creates a minimum set of forced dual control roles and authorities that are needed to
manage a compliant domain. For information about how to manage imprint and compliant
mode domains, see the TKE User’s Guide.
Tip: For more information about handling a TKE, see the TKE Introduction Video 1
Introduction to TKE video that is available on YouTube.
Each LPAR in the same system that uses a domain that is managed through a TKE
workstation connection is a TKE host or TKE target. An LPAR with a TCP/IP connection to the
TKE is referred to as the TKE host; all other partitions are TKE targets.
The cryptographic controls that are set for an LPAR through the SE determine whether the
workstation is a TKE host or a TKE target.
Smart card readers from feature code 0885 or 0891 can be carried forward. Smart cards can
be used on TKE 9.0 with these readers. Access to and use of confidential data on the smart
card are protected by a user-defined PIN. Up to 990 other smart cards can be ordered for
backup. (The extra smart card feature code is FC #0892.) When one feature code is ordered,
10 smart cards are shipped. The order increment is 1 - 99 (10 - 990 blank smart cards).
If smart cards with applets that are not supported by the new smart card reader are reused,
new smart cards on TKE 8.1 or later must be created and the content from the old smart
cards to the new smart cards must be copied. The new smart cards can be created and
copied on a TKE 8.1 system. If the copies are done on TKE 9.0, the source smart card must
be placed in an older smart card reader from feature code 0885 or 0891.
Note: Several options for ordering the TKE with or without ordering Keyboard, Mouse, and
Display are available. Ask your IBM Representative for the best option for you.
The TKE 9.x LIC requires the 4768 crypto adapter. The TKE 8.0 and TKE 8.1 workstations
can be upgraded to the TKE 9.x tower workstation by purchasing a 4768 crypto adapter.
When performing a MES upgrade from TKE 7.3, TKE 8.0, or TKE 8.1 to a TKE 9.x
installation, the following steps must be completed:
1. Save Upgrade Data on old TKE to USB memory to save client data.
2. Replace the 4767 crypto adapter with the 4768 crypto adapter.
3. Upgrade the firmware to TKE 9.0
4. Install the Frame Roll to apply Save Upgrade Data (client data) to the TKE 9.1 system.
5. Run the TKE Workstation Setup wizard.
Note: A workstation that was upgraded to TKE V8.x includes the 4767 cryptographic
adapter that is required to manage Crypto Express5S; however, it cannot be used to
manage the Crypto Express6s.
If your z14 includes Crypto Express6S, you must upgrade to TKE V9.0, which requires the
4768 cryptographic adapter.
Upgrading to TKE V9.0 requires that your TKE hardware is compatible with the 4768
cryptographic adapter. The following older TKE hardware features are compatible with the
4768 cryptographic adapter:
FC 0842
FC 0847
FC 0097
Important: TKE workstations that are at feature code 0841 or less do not support the 4767
or 4768 cryptographic adapters.
For more information about TKE hardware support, see Table 6-3. For some functions,
requirements must be considered; for example, the characterization of a Crypto Express card
in EP11 mode always requires the use of a TKE.
In particular, the CEX3C and CEX4C host crypto modules (in CCA mode) can be managed by
all of the TKE LIC levels that are listed in Table 6-3.
Attention: The TKE is unaware of the CPC type where the host crypto module is installed.
That is, the TKE does not care whether a Crypto Express is running on a zEC12, zBC12,
z13, 13s, or z14 server. Therefore, the LIC can support any CPC where the coprocessor is
supported, but the TKE LIC must support the specific crypto module.
As the comparison shows, UDX is offered only by the CCA coprocessor configuration, and
RSA functions are available in the CCA coprocessor, EP11 coprocessor, and accelerator
configurations, but not through the CPACF.
For more information about the software support levels for cryptographic functions, see
Chapter 7, “Operating system support” on page 243.
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
Because this information is subject to change, see the hardware fix categories
(IBM.Device.Server.z14-3906.*) for the most current information.
Support of z14 functions depends on the operating system, its version, and release.
End of service operating systems: Operating system levels that are no longer in service
are not covered in this publication. These older levels might support some features.
z/OS V1R13c
z/VM V6R4
z/VSE V6d
z/TPF V1R1
KVM Hypervisore Offered with the following Linux distributions SLES-12 SP2 or
higher, and Ubuntu 16.04 LTS or higher.
a. Only z/Architecture mode is supported. For more information, see the shaded box titled
“z/Architecture mode” that follows this table.
b. Service is required. For more information, see the shaded box that is titled “Features” on
page 229.
c. z/OS V1R13 and V2R1 - Compatibility only. The IBM Software Support Services for z/OS
V1.13, offered as of October 1, 2016, and for z/OS V2.1, offered as of October 1, 2018, provide
the ability for customers to purchase extended defect support service for z/OS V1.13 and V2.1,
respectively.
d. As announced on February 7, 2017, the end of service date for z/VSE V5R2 was October
31, 2018.
e. For more information about minimal and recommended distribution levels, see the Linux
on Z website.
IBM operating systems that run in ESA/390 mode are no longer in service or currently
available only with extended service contracts, and they are not usable on systems
beginning with IBM z14™. However, IBM z14™ does provide ESA/390-compatibility mode,
which is an environment that supports a subset of DAT-off ESA/390 applications in a hybrid
architectural mode.
Problem state application programs (24-bit and 31-bit) are unaffected by this change.
The use of certain features depends on the operating system. In all cases, program
temporary fixes (PTFs) might be required with the operating system level that is indicated.
Check the z/OS fix categories, or the subsets of the 3906DEVICE PSP buckets for z/VM and
z/VSE. The fix categories and the PSP buckets are continuously updated, and contain the
latest information about maintenance.
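For example (the SMP/E target zone name is hypothetical), the REPORT MISSINGFIX
command can identify service that is missing for the z14 fix categories:
SET BOUNDARY(GLOBAL).
REPORT MISSINGFIX ZONES(ZOSTGT1)
       FIXCAT(IBM.Device.Server.z14-3906.*).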
For more information about Linux on Z distributions and the KVM hypervisor, see the
distributor’s support information.
For more information about supported functions that are based on operating systems, see
7.3, “z14 features and function support overview” on page 248. Tables are built by function
and feature classification to help you determine, by a quick scan, what is supported and the
minimum operating system level that is required.
7.2.1 z/OS
z/OS Version 2 Release 2 is the earliest in-service release that supports z14 servers.
Consider the following points:
Service support for z/OS Version 1 Release 13 ended in September of 2016; however, a
fee-based extension for defect support (for up to three years) can be obtained by ordering
IBM Software Support Services - Service Extension1 for z/OS 1.13.
Service support for z/OS Version 2 Release 1 ended in September of 2018; however, a
fee-based extension for defect support (for up to three years) can be obtained by ordering
IBM Software Support Services - Service Extension for z/OS 2.1.
z14 capabilities differ depending on the z/OS release. Toleration support is provided on z/OS
V1R13 and V2R1. Exploitation support is provided on z/OS V2R2 and later only.
For more information about supported functions and their minimum required support levels,
see 7.3, “z14 features and function support overview” on page 248.
7.2.2 z/VM
z/VM V6R4 and z/VM V7R1 provide support that enables guests to use the following features
that are supported by z/VM on IBM z14™:
z/Architecture support
New hardware facilities
ESA/390-compatibility mode for guests
Crypto Clear Key ECC operations
RoCE Express2 support
1. Beginning with z/OS V1.12, IBM Software Support Services replaced the IBM Lifecycle Extension for z/OS offering
with a service extension for extended defect support.
For more information about supported functions and their minimum required support levels,
see 7.3, “z14 features and function support overview” on page 248.
7.2.3 z/VSE
z14 support is provided by z/VSE V5R2 and later, with the following considerations:
z/VSE runs in z/Architecture mode only.
z/VSE supports 64-bit real and virtual addressing.
For more information about supported functions and their minimum required support levels,
see 7.3, “z14 features and function support overview” on page 248.
7.2.4 z/TPF
z14 support is provided by z/TPF V1R1 with PTFs. For more information about supported
functions and their minimum required support levels, see 7.3, “z14 features and function
support overview” on page 248.
For more information about supported Linux distributions on IBM Z servers, see the Tested
platforms for Linux page of the IBM IT infrastructure website.
IBM is working with Linux distribution Business Partners to provide further use of selected
z14 functions in future Linux on Z distribution releases.
The KVM hypervisor is supported with the following minimum Linux distributions:
SLES 12 SP2 with service.
RHEL 7.5 with kernel-alt package (kernel 4.14).
Ubuntu 16.04 LTS with service and Ubuntu 18.04 LTS with service.
For more information about minimal and recommended distribution levels, see the IBM Z
website.
Information about Linux on Z refers exclusively to the appropriate distributions of SUSE, Red
Hat, and Ubuntu.
Note: The following tables list but do not explicitly mark all the features that require fixes
that are required by the corresponding operating system for toleration or exploitation. For
more information, see the PSP bucket for 3906DEVICE.
The supported base CPC functions for z/OS and z/VM are listed in Table 7-3.
Table 7-3 Supported base CPC functions for z/OS and z/VM
Functiona z/OS z/OS z/OS z/OS z/VM z/VM
V2R3 V2R2 V2R1 V1R13 V7R1 V6R4
z14 servers Y Y Y Y Y Y
Maximum processor unit (PUs) per system image 170b 170b 170b 100 64c 64c
85 LPARs Y Y Y Y Y Y
Dynamic PU add Y Y Y Y Y Y
Program-directed re-IPL - - - - Y Y
HiperDispatch Y Y Y Y Y Y
Out-of-order execution Y Y Y Y Y Y
The supported base CPC functions for z/VSE, z/TPF and Linux on Z are listed in Table 7-4.
Table 7-4 Supported base CPC functions for z/VSE, z/TPF and Linux on Z
Functiona z/VSE z/VSE z/VSE z/TPF Linux on
V6R2 V6R1 V5R2 V1R1 Zb
z14 servers Y Y Y Y Y
85 LPARs Y Y Y Y Y
Dynamic PU add Y Y Y N Y
HiperDispatch N N N Ng Y
Transactional Execution N N N N Y
Out-of-order execution Y Y Y Y Y
Table 7-5 Supported coupling and clustering functions for z/OS and z/VM
Functiona z/OS z/OS z/OS z/OS z/VM z/VM
V2R3 V2R2 V2R1 V1R13 V7R1 V6R4
Storage connectivity
The supported storage connectivity functions for z/OS and z/VM are listed in Table 7-6.
Table 7-6 Supported storage connectivity functions for z/OS and z/VM
Functiona z/OS z/OS z/OS z/OS z/VM z/VM
V2R3 V2R2 V2R1 V1R13 V7R1 V6R4
zHyperLink Express Y Y Y N N N
The supported storage connectivity functions for z/VSE, z/TPF, and Linux on Z are listed in
Table 7-7.
Table 7-7 Supported storage connectivity functions for z/VSE, z/TPF, and Linux on Z
Functiona z/VSE z/VSE z/VSE z/TPF Linux on
V6R2 V6R1 V5R2 V1R1 Zb
zHyperLink Express - - - - -
Table 7-8 Supported network connectivity functions for z/OS and z/VM
Functiona z/OS z/OS z/OS z/OS z/VM z/VM
V2R3 V2R2 V2R1 V1R13 V7R1 V6R4
HiperSockets
HiperSocketsd Y Y Y Y Y Y
32 HiperSockets Y Y Y Y Y Y
The supported network connectivity functions for z/VSE, z/TPF, and Linux on Z are listed in
Table 7-9.
Table 7-9 Supported network connectivity functions for z/VSE, z/TPF and Linux on Z
Functiona z/VSE z/VSE z/VSE z/TPF Linux on
V6R2 V6R1 V5R2 V1R1 Zb
HiperSockets
HiperSocketsd Y Y Y N Y
32 HiperSockets Y Y Y N Y
The supported cryptography functions for z/OS and z/VM are listed in Table 7-10.
Table 7-10 Supported cryptography functions for z/OS and z/VM
Functiona z/OS z/OS z/OS z/OS z/VM z/VM
V2R3 V2R2 V2R1 V1R13 V7R1 V6R4
Crypto Express6S Y Yc Yc Yc Yb Yb
Crypto Express5S Y Y Y Y Yb Yb
The supported cryptography functions for z/VSE, z/TPF, and Linux on Z are listed in
Table 7-11.
Table 7-11 Supported cryptography functions for z/VSE, z/TPF and Linux on Z
Functiona z/VSE z/VSE z/VSE z/TPF Linux on
V6R2 V6R1 V5R2 V1R1 Zb
Crypto Express6S Y Y Y Y Y
Crypto Express5S Y Y Y Y Y
zEDCb Express Y Y Y N Yc Yc Yd
a. PTFs might be required for toleration support or exploitation of z14 features and function.
b. zEnterprise Data Compression.
c. For guest exploitation.
d. See the IBM support site for Linux on Z.
z/VSE V5R2 and later: The z/VSE Turbo Dispatcher can use up to 4 CPs, and tolerates up to
10-way LPARs.
KVM Hypervisor: The KVM hypervisor is offered with the following Linux distributions
(up to 256 CPs or IFLs):
SLES 12 SP2.
RHEL 7.5 with kernel-alt package (kernel 4.14).
Ubuntu 16.04 LTS and Ubuntu 18.04 LTS.
Up to 85 LPARs
This feature was first made available on z13 servers and allows the system to be configured
with up to 85 LPARs. Because channel subsystems can be shared by up to 15 LPARs, it is
necessary to configure six channel subsystems to reach the 85 LPARs limit.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249.
Dynamic PU add
Planning an LPAR configuration includes defining reserved PUs that can be brought online
when extra capacity is needed. Operating system support is required to use this capability
without an IPL; that is, nondisruptively. This support has been available in z/OS for some time.
The dynamic PU add function enhances this support by allowing you to dynamically define
and change the number and type of reserved PUs in an LPAR profile, which removes any
planning requirements. The new resources are immediately made available to the operating
system and, in the case of z/VM, to its guests.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249.
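As an illustration only (the processor number is an example, not taken from a specific configuration), after a reserved PU that was defined in the LPAR profile becomes available, the z/OS operator can bring it online nondisruptively by using the CONFIG (CF) command and then verify the configuration:

   CF CPU(3),ONLINE
   D M=CPU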
z/OS can take advantage of this support and nondisruptively acquire and release memory
from the reserved area. z/VM V6R2 and later can acquire memory nondisruptively and
immediately make it available to guests. z/VM virtualizes this support to its guests, which now
also can increase their memory nondisruptively if supported by the guest operating system.
Linux on Z also supports acquiring and releasing memory nondisruptively. This feature is
enabled for SUSE Linux Enterprise Server 11 and RHEL 6 and later releases.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249.
The Capacity Provisioning Manager, which is a feature that is first available with z/OS V1R9,
interfaces with z/OS Workload Manager (WLM) and implements capacity provisioning
policies. Several implementation options are available, from an analysis mode that issues
only guidelines, to an autonomic mode that provides fully automated operations.
Program-directed re-IPL
First available on System z9, program-directed re-IPL allows an operating system on a z14 to
IPL again without operator intervention. This function is supported for SCSI and IBM
extended count key data (IBM ECKD) devices.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249.
IOCP
All IBM Z servers require a description of their I/O configuration. This description is stored in
I/O configuration data set (IOCDS) files. The I/O configuration program (IOCP) allows for the
creation of the IOCDS file from a source file that is known as the I/O configuration source
(IOCS).
The IOCS file contains definitions of LPARs and channel subsystems. It also includes detailed
information for each channel and path assignment, each control unit, and each device in the
configuration.
IOCP required level for z14 servers: The required level of IOCP for the z14 is IOCP 5.4.0
with PTFs. For more information, see the following publications:
IBM Z Stand-Alone Input/Output Configuration Program User's Guide, SB10-7166
IBM Z Input/Output Configuration Program User’s Guide for ICP IOCP, SB10-7163
Dynamic Partition Manager V3.2: At the time of this writing, the Dynamic Partition
Manager V3.2 is available for managing IBM Z servers that are running Linux. DPM 3.2 is
available with HMC Driver Level 36. IOCP is not needed to configure a server that is running
in DPM mode. For more information, see IBM Dynamic Partition Manager (DPM) Guide,
SB10-7170-02.
HiperDispatch
The HIPERDISPATCH=YES/NO parameter in the IEAOPTxx member of SYS1.PARMLIB and on
the SET OPT=xx command controls whether HiperDispatch is enabled or disabled for a z/OS
image. It can be changed dynamically, without an IPL or any outage.
The default is that HiperDispatch is disabled on all releases, from z/OS V1R10 (which
requires PTFs for zIIP support) through z/OS V1R12.
Beginning with z/OS V1R13, the IEAOPTxx keyword HIPERDISPATCH defaults to YES when it is
running on a z14, z13, z13s, zEC12, or zBC12 server. If HIPERDISPATCH=NO is specified, the
specification is accepted as it was on previous z/OS releases.
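For illustration only (the parmlib member suffix is an example), HiperDispatch is controlled by a statement in an IEAOPTxx member and can be switched dynamically by activating that member:

   HIPERDISPATCH=YES        (statement in the IEAOPT01 parmlib member)
   SET OPT=01               (system command that activates the member without an IPL)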
The use of SMT on z14 servers requires that HiperDispatch is enabled on the operating
system. For more information, see “Simultaneous multithreading” on page 268.
Additionally, with z/OS V1R12 or later, any LPAR that is running with more than 64 logical
processors is required to operate in HiperDispatch Management Mode.
The PR/SM in the System z9 EC to zEC12 servers stripes the memory across all books in the
system to take advantage of the fast book interconnection and spread memory controller
work. The PR/SM on z14 servers seeks to assign all memory in one CPC drawer that is
striped across the clusters of that drawer to take advantage of the lower latency memory
access in a drawer.
The PR/SM in the System z9 EC to zEC12 servers attempts to assign all logical processors
to one book, packed into PU chips of that book, in cooperation with operating system
HiperDispatch to optimize shared cache usage.
The PR/SM on z14 servers seeks to assign all logical processors of a partition to one CPC
drawer, packed into PU chips of that CPC drawer, in cooperation with operating system
HiperDispatch to optimize shared cache usage.
The PR/SM automatically keeps a partition’s memory and logical processors on the same
CPC drawer. This arrangement looks simple for a partition, but it is a complex optimization for
multiple logical partitions because some must be split among processor drawers.
To use HiperDispatch effectively, WLM goal adjustment might be required. Review the WLM
policies and goals and update them as necessary. WLM policies can be changed without
turning off HiperDispatch. A health check is provided to verify whether HiperDispatch is
enabled on a system image that is running on z14 servers.
z/TPF
z/TPF on z14 can utilize more processors immediately without reactivating the LPAR or
IPLing the z/TPF system.
On servers before z14, the z/TPF workload is evenly distributed across all available
processors, even in low-utilization situations. This configuration causes cache and core
contention with other LPARs. When z/TPF is running in a shared processor configuration, the
achieved MIPS is higher when z/TPF uses a minimum set of processors.
In low-utilization periods, z/TPF now minimizes the processor footprint by compressing TPF
workload onto a minimal set of I-streams (engines), which reduces the effect on other LPARs
and allows the entire CPC to operate more efficiently.
As a consequence, z/OS and z/VM experience less contention from the z/TPF system when
the z/TPF system is operating at periods of low demand.
zIIP support
zIIPs do not change the model capacity identifier of z14 servers. IBM software product license
charges that are based on the model capacity identifier are not affected by the addition of
zIIPs. On a z14 server, z/OS Version 1 Release 13 is the minimum level for supporting zIIPs.
No changes to applications are required to use zIIPs. They can be used by the following
applications:
Db2 V8 and later for z/OS data serving for applications that use Distributed Relational
Database Architecture (DRDA) over TCP/IP, such as data serving, data warehousing, and
selected utilities.
z/OS XML services.
z/OS CIM Server.
z/OS Communications Server for network encryption (Internet Protocol Security (IPSec))
and for large messages that are sent by HiperSockets.
IBM GBS Scalable Architecture for Financial Reporting.
IBM z/OS Global Mirror (formerly XRC) and System Data Mover.
IBM OMEGAMON® XE on z/OS, OMEGAMON XE on Db2 Performance Expert, and Db2
Performance Monitor.
Any Java application that uses the current IBM SDK.
WebSphere Application Server V5R1 and later, and products that are based on it, such as
WebSphere Portal, WebSphere Enterprise Service Bus (WebSphere ESB), and
WebSphere Business Integration (WBI) for z/OS.
CICS/TS V2R3 and later.
Db2 UDB for z/OS Version 8 and later.
IMS Version 8 and later.
zIIP Assisted HiperSockets for large messages.
z/OSMF (z/OS Management Facility).
IBM z/OS Platform for Apache Spark.
IBM Machine Learning for z/OS.
On z14 servers, the zIIP processor is designed to run in SMT mode, with up to two threads
per processor. This new function is designed to help improve throughput for zIIP workloads
and provide appropriate performance measurement, capacity planning, and SMF accounting
data. This support is available for z/OS V2.1 (with PTFs) and later.
Use the PROJECTCPU option of the IEAOPTxx parmlib member to help determine whether zIIPs
can be beneficial to the installation. Setting PROJECTCPU=YES directs z/OS to record the
amount of eligible work for zIIPs in SMF record type 72 subtype 3. The field APPL% IIPCP of
the Workload Activity Report listing by WLM service class indicates the percentage of a
processor that is zIIP eligible. Because of the zIIP’s lower price as compared to a CP, even a
utilization as low as 10% can provide cost benefits.
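For example (the member suffix is illustrative), the projection can be enabled dynamically as follows:

   PROJECTCPU=YES           (statement in the IEAOPT02 parmlib member)
   SET OPT=02               (activates the member; zIIP-eligible time then appears in the APPL% IIPCP field of the RMF Workload Activity report)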
Transactional execution
This feature enables software to indicate to the hardware the beginning and end of a group of
instructions that must be treated in an atomic way. All of their results occur or none occur, in
true transactional style. The execution is optimistic.
The hardware provides a memory area to record the original contents of affected registers
and memory as the instruction’s execution occurs. If the transactional execution group is
canceled or must be rolled back, the hardware transactional memory is used to reset the
values. Software can implement a fallback capability.
This capability increases the software’s efficiency by providing a way to avoid locks (lock
elision). This advantage is of special importance for speculative code generation and highly
parallelized applications.
TX is used by IBM Java virtual machine (JVM) and might be used by other software. The
supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on page 249.
Simultaneous multithreading
SMT is the hardware capability to process up to two simultaneous threads in a single core,
sharing the resources of the superscalar core. This capability improves the system capacity
and efficiency in the usage of the processor, which increases the overall throughput of the
system.
The z14 can run up to two threads simultaneously in the same processor, which dynamically
shares resources of the core, such as cache, translation lookaside buffer (TLB), and
execution resources. It provides better utilization of the cores and more processing capacity.
Note: For zIIPs and IFLs, SMT must be enabled on z/OS, z/VM, or Linux on Z instances.
An operating system with SMT support can be configured to dispatch work to a thread on a
zIIP (for eligible workloads in z/OS) or an IFL (for z/VM) core in single-thread or SMT
mode.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249.
An operating system that uses SMT controls each core and is responsible for maximizing
their throughput and meeting workload goals with the smallest number of cores. In z/OS,
HiperDispatch cache optimization is considered when choosing the two threads to be
dispatched on the same processor.
HiperDispatch attempts to dispatch guest virtual CPUs on the same logical processor on
which they ran before. PR/SM attempts to dispatch a vertical low logical processor on the same
physical processor. If that is not possible, PR/SM attempts to dispatch it in the same node, or
in the same CPC drawer where it was dispatched before, to maximize cache reuse.
From the point of view of an application, SMT is transparent and no changes are required in
the application for it to run in an SMT environment, as shown in Figure 7-1 on page 269.
Note: On z14, SMT is also enabled by default (not user configurable) for SAPs.
(Figure 7-1 shows z/OS and z/VM LPARs running on the MT-aware PR/SM hypervisor.)
z/OS
The following APARs must be applied to z/OS V2R1 to use SMT:
OA43366 (BCP)
OA43622 (WLM)
OA44439 (XCF)
The use of SMT on z/OS V2R1 requires enabling HiperDispatch, and defining the processor
view (PROCVIEW) control statement in the LOADxx parmlib member and the MT_ZIIP_MODE
parameter in the IEAOPTxx parmlib member.
The PROCVIEW statement is defined for the life of IPL, and can have the following values:
CORE: This value specifies that z/OS should configure a processor view of core, in which a
core can include one or more threads. The number of threads is limited by z14 to two
threads. If the underlying hardware does not support SMT, a core is limited to one thread.
CPU: This value is the default. It specifies that z/OS should configure a traditional processor
view of CPU and not use SMT.
CORE,CPU_OK: This value specifies that z/OS should configure a processor view of core (as
with the CORE value) but the CPU parameter is accepted as an alias for applicable
commands.
When PROCVIEW CORE or CORE,CPU_OK is specified for z/OS running on z14,
HiperDispatch is forced to run as enabled, and you cannot disable HiperDispatch. The
PROCVIEW statement cannot be changed dynamically; therefore, you must run an IPL after
changing it to make the new setting effective.
The MT_ZIIP_MODE parameter in the IEAOPTxx parmlib member controls the zIIP SMT mode. It can be
1 (the default), where only one thread can be running in a core, or 2, where up to two threads
can be running in a core. If PROCVIEW CPU is specified, the MT_ZIIP_MODE is always 1. Otherwise,
the use of SMT to dispatch two threads in a single zIIP logical processor (MT_ZIIP_MODE=2) can be
changed dynamically by using the SET OPT=xx command to activate an updated IEAOPTxx parmlib
member. Changing the MT mode for all cores can take some time to complete.
With PROCVIEW CORE, use the DISPLAY M=CORE and CONFIG CORE commands to display the core
states and to configure an entire core.
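The following sketch summarizes these z/OS SMT controls (the member suffixes are examples; the PROCVIEW change takes effect only at the next IPL, whereas the MT mode can be switched dynamically):

   PROCVIEW CORE,CPU_OK     (statement in the LOADxx parmlib member; effective at the next IPL)
   MT_ZIIP_MODE=2           (statement in the IEAOPTxx parmlib member)
   SET OPT=xx               (switches the zIIP MT mode dynamically)
   D M=CORE                 (displays core states when PROCVIEW CORE is in effect)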
With the introduction of multithreading support for SAPs in z14, a maximum of 88 logical
SAPs can be used. RMF is updated to support this change by implementing page break support
in the I/O Queuing Activity report that is generated by the RMF Postprocessor.
The default in z/VM is multithreading disabled. With the addition of dynamic SMT capability to
z/VM V6R4 through an SPE, the number of active threads per core can be changed without a
system outage, and potential capacity gains going from SMT-1 to SMT-2 (one to two threads
per core) can now be achieved dynamically. Dynamic SMT requires applying PTFs; a system
that is running in SMT-enabled mode can then dynamically vary the number of active threads per core.
z/VM supports up to 32 multithreaded cores (64 threads) for IFLs, and each thread is treated
as an independent processor. z/VM dispatches virtual IFLs on the IFL logical processor so
that the same or different guests can share a core. Each core has a single dispatch vector,
and z/VM attempts to place virtual sibling IFLs on the same dispatch vector to maximize
cache reuse.
The guests have no awareness of SMT, and cannot use it. z/VM SMT exploitation does not
include guest support for multithreading. The value of this support for guests is that the
first-level z/VM hosts under the guests can achieve higher throughput from the multi-threaded
IFL cores.
Single-instruction multiple-data
The SIMD feature introduces a new set of instructions to enable parallel computing that can
accelerate code with string, character, integer, and floating point data types. The SIMD
instructions allow a larger number of operands to be processed with a single complex
instruction.
z14 is equipped with a new set of instructions to improve the performance of complex
mathematical models and analytic workloads through vector processing and new complex
instructions, which can process large amounts of data with a single instruction. This new set of
instructions, which is known as SIMD, enables more consolidation of analytic workloads and
business transactions on Z servers.
SIMD on z14 has support for 32-bit floats and enhanced math libraries that provide
performance improvements for analytical workloads by processing more information with a
single CPU instruction.
MASS and ATLAS can reduce the time and effort for middleware and application developers.
IBM provides compiler built-in functions for SIMD that software applications can use as
needed, such as for using string instructions.
The use of new hardware instructions through XL C/C++ ARCH(12) and TUNE(12) or SIMD
usage by MASS and ATLAS libraries requires the z14 support for z/OS V2R1 XL C/C++ web
deliverable.
Code must be developed to take advantage of the SIMD functions, and applications with
SIMD instructions abend if they run on a lower hardware level system. Some mathematical
function replacement can be done without code changes by including the scalar MASS library
before the standard math library.
The MASS and standard math library include different accuracies, so assess the accuracy of
the functions in the context of the user application before deciding whether to use the MASS
and ATLAS libraries.
The SIMD functions can be disabled in z/OS partitions at IPL time by using the MACHMIG
statement in the LOADxx member. To disable the use of SIMD, specify MACHMIG VEF, where
VEF identifies the hardware-based vector facility. If you do not specify a MACHMIG statement,
which is the default, the system is unlimited in its use of the Vector Facility for z/Architecture (SIMD).
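For example, the following LOADxx statement (a minimal sketch) prevents the use of the vector extension facility, which disables SIMD for the partition:

   MACHMIG VEF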
Decimal floating point support was introduced with z9 EC. z14 servers inherited the decimal
floating point accelerator feature that was introduced with z10 EC.
Note: The features that are listed here might not be available on all operating systems that are listed in the tables.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249. For more information, see 7.5.4, “z/OS XL C/C++ considerations” on page 308.
Out-of-order execution
Out-of-order (OOO) execution yields significant performance benefits for compute-intensive
applications by reordering instruction execution, which allows later (newer) instructions to be
run ahead of a stalled instruction, and reordering storage accesses and parallel storage
accesses. OOO maintains good performance growth for traditional applications.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249. For more information, see 3.4.3, “Out-of-Order execution” on page 99.
For more information about this function, see The Load-Program-Parameter and the
CPU-Measurement Facilities.
For more information about the CPU Measurement Facility, see the CPU MF - Update and
WSC Experiences page of the IBM Techdocs Library website.
For more information, see 12.2, “LSPR workload suite” on page 449.
IBM Virtual Flash Memory (FC 0604) offers up to 6.0 TB of memory in 1.5 TB increments for
improved application availability and to handle paging workload spikes.
IBM Virtual Flash Memory is designed to help improve availability and handling of paging
workload spikes when running z/OS V2.1, V2.2, or V2.3, or on z/OS V1.13. With this support,
z/OS is designed to help improve system availability and responsiveness by using VFM
across transitional workload events, such as market openings, and diagnostic data collection.
z/OS is also designed to help improve processor performance by supporting middleware
exploitation of pageable large (1 MB) pages.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249.
Guarded storage facility
z/OS
GSF support allows an area of storage to be identified such that an Exit routine assumes
control if a reference is made to that storage. GSF is managed by new instructions that define
Guarded Storage Controls and system code to maintain that control information across
un-dispatch and re-dispatch.
z/VM
With the PTF for APAR VM65987, z/VM V6.4 provides support for guest exploitation of the
z14 guarded storage facility. This facility is designed to improve the performance of
garbage-collection processing by various languages, in particular Java.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249.
Instruction execution protection (IEP)
Through enhanced hardware features (based on a DAT table entry bit) and explicit software
requests to obtain memory areas as non-executable, areas of memory can be protected from
unauthorized execution. A Protection Exception occurs if an attempt is made to fetch an
instruction from an address in such an element or if an address in such an element is the
target of an execute-type instruction.
z/OS
To use IEP, Real Storage Manager (RSM) is enhanced to request non-executable memory
allocation. Use new keyword EXECUTABLE=YES|NO on STORAGE OBTAIN or IARV64 to indicate
whether memory to be used contains executable code. Recovery Termination Manager
(RTM) writes LOGREC record of any program-check that results from IEP.
IEP support is for z/OS 2.2 and later running on z14 with APARs OA51030 and OA51643
installed.
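A minimal assembler sketch follows (the length and register choice are illustrative only); it requests storage that cannot contain executable code by using the EXECUTABLE keyword that is described above:

   STORAGE OBTAIN,LENGTH=4096,ADDR=(2),EXECUTABLE=NO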
z/VM
Guest exploitation support for the Instruction Execution Protection Facility is provided with
APAR VM65986.
The supported operating systems are listed in Table 7-3 on page 248 and Table 7-4 on
page 249.
Consideration: Because coupling link connectivity to z196 and previous systems is not
supported, introducing z14 into an installation requires extra planning. Consider the level of
CFCC. For more information, see “Migration considerations” on page 188.
Before you begin the migration process, install the compatibility and coexistence PTFs. A
planned outage is required when you upgrade the CF or CF LPAR to CFCC Level 23.
CFCC Level 22
CFCC Level 22 is delivered on z14 servers with driver level 32. CFCC Level 22 introduces the
following enhancements:
Coupling Express Long Range (CE LR): A new link type that was introduced with z14 for
long distance coupling connectivity.
Coupling Facility (CF) Processor Scalability: CF work management and dispatching
changes for IBM z14™ allow improved efficiency and scalability for coupling facility
images.
First, ordered work queues were eliminated from the CF in favor of first-in/first-out queues,
which avoids the overhead of maintaining ordered queues.
Second, protocols for system-managed duplexing were simplified to avoid the potential for
latching deadlocks between duplexed structures.
Third, the CF image can now use its processors to perform specific work management
functions when the number of processors in the CF image exceeds a threshold. Together,
these changes improve the processor scalability and throughput for a CF image.
CF List Notification Enhancements: Significant enhancements were made to CF
notifications that inform users about the status of shared objects within a coupling
facility.
First, structure notifications can use a round-robin scheme for delivering immediate and
deferred notifications that avoids excessive “shotgun” notifications, which reduces
notification overhead.
The supported operating systems are listed in Table 7-5 on page 251.
For more information about CFCC code levels, see the Parallel Sysplex page of the IBM IT
infrastructure website.
For more information about the latest CFCC code levels, see the current exception letter that
is published on Resource Link website (login is required).
CF structure sizing changes are expected when upgrading from a previous CFCC Level to
CFCC Level 21. Review the CF LPAR size by using the available CFSizer tool, which is
available for download at the IBM Systems support website.
Sizer Utility, an authorized z/OS program download, is useful when you are upgrading a CF.
The tool is available for download at the IBM Systems support website.
Before you begin the migration process, install the compatibility and coexistence PTFs. A
planned outage is required when you upgrade the CF or CF LPAR to CFCC Level 22.
Note: IBM z14 is the last z Systems and IBM Z server to support HCA3-O fanout for 12x IFB
(#0171) and HCA3-O LR fanout for 1x IFB (#0170). As announced previously, z13s is the
last mid-range z Systems server to support these adapters.
Enterprises should begin migrating from HCA3-O and HCA3-O LR adapters to ICA SR or
Coupling Express Long Reach (CE LR) adapters on z14, z13, and z13s. For high-speed
short-range coupling connectivity, enterprises should migrate to the Integrated Coupling
Adapter (ICA-SR).
For long-range coupling connectivity, enterprises should migrate to the new Coupling
Express LR coupling adapter. For long-range coupling connectivity requiring a DWDM,
enterprises must determine their needed DWDM vendor’s plan to qualify the planned
replacement long-range coupling links.
IBM Z enterprises should plan to migrate off of InfiniBand coupling links.
Asynchronous CF Duplexing for lock structures requires the following software support:
z/OS V2R3, z/OS V2.2 SPE with PTFs for APAR OA47796 and OA49148
z/VM V7R1, z/VM V6.4 with PTFs for z/OS exploitation of guest coupling environment
Db2 V12 with PTFs for APAR PI66689
IRLM V2.3 with PTFs for APAR PI68378
The supported operating systems are listed in Table 7-5 on page 251.
Instead of performing XI signals synchronously on every cache update request that causes
them, data managers can “opt in” for the CF to perform these XIs asynchronously (and then
sync them up with the CF at or before transaction completion). Data integrity is maintained if
all XI signals complete by the time transaction locks are released.
The feature enables faster completion of cache update CF requests, especially when cross-site
distances are involved, and provides improved cache structure service times and coupling
efficiency. It requires explicit data manager exploitation and participation; it is not transparent
to the data manager. No SMF data changes were made for CF monitoring and reporting.
This function refers exclusively to the z/VM dynamic I/O support of InfiniBand and ICA
coupling links. Support is available for the CIB and CS5 CHPID type in the z/VM dynamic
commands, including the change channel path dynamic I/O command.
Specifying and changing the system name when entering and leaving configuration mode are
also supported. z/VM does not use InfiniBand or ICA, and does not support the use of
InfiniBand or ICA coupling links by guests. The supported operating systems are listed in
Table 7-5 on page 251.
zHyperLink Express
z14 introduces IBM zHyperLink Express, the first new IBM Z input/output (I/O) channel link
technology since FICON. zHyperLink Express is designed to help bring data close to
processing power, increase the scalability of Z transaction processing, and lower I/O latency.
zHyperLink Express is designed for up to 5x lower latency than High Performance FICON for
Z (zHPF) by directly connecting the Z Central Processor Complex (CPC) to the I/O Bay of the
DS8880. This short distance (up to 150 m), direct connection is intended to speed Db2 for
z/OS transaction processing and improve active log throughput.
The improved performance of zHyperLink Express allows the Processing Unit (PU) to make a
synchronous request for the data that is in the DS8880 cache. This feature eliminates the
un-dispatch of the running request, the queuing delays to resume the request, and the PU
cache disruption.
Support for zHyperLink Writes can accelerate Db2 log writes to help deliver superior service
levels by processing high-volume Db2 transactions at speed. IBM zHyperLink Express (FC
0431) requires compatible levels of DS8880/F hardware, firmware R8.5.1, and Db2 12 with
PTFs.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on
page 253.
FICON Express16S+
FICON Express16S+ supports a link data rate of 16 gigabits per second (Gbps) and
autonegotiation to 4 or 8 Gbps for synergy with switches, directors, and storage devices. With
support for native FICON, High Performance FICON for Z (zHPF), and Fibre Channel
Protocol (FCP), the IBM z14™ server enables you to position your SAN for even higher
performance, which helps you to prepare for an end-to-end 16 Gbps infrastructure to meet
the lower latency and increased bandwidth demands of your applications.
The new FICON Express16S+ channel works with your existing fiber optic cabling
environment (single mode and multimode optical cables). The FICON Express16S+ feature
running at end-to-end 16 Gbps link speeds provides reduced latency for large read/write
operations and increased bandwidth compared to the FICON Express8S feature.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on
page 253.
FICON Express16S
FICON Express16S supports a link data rate of 16 Gbps and autonegotiation to 4 or 8 Gbps
for synergy with existing switches, directors, and storage devices. With support for native
FICON, zHPF, and FCP, the z14 server enables SAN for even higher performance, which
helps to prepare for an end-to-end 16 Gbps infrastructure to meet the increased bandwidth
demands of your applications.
The new features for the multimode and single mode fiber optic cabling environments reduce
latency for large read/write operations and increase bandwidth compared to the FICON
Express8S features.
FICON Express8S
The FICON Express8S provides a link rate of 8 Gbps, with autonegotiation to 4 or 2 Gbps for
compatibility with previous devices and investment protection. Both 10 km (6.2 miles) LX and
SX connections are offered (within a feature, all connections must be of the same type).
Statement of Direction (a): IBM z14 is the last z Systems and IBM Z high-end server to
support FICON Express8S (#0409 and #0410) channels. Enterprises should begin
migrating from FICON Express8S channels to FICON Express16S+ channels (FC 0427
and FC 0428). FICON Express8S is not supported on future high-end IBM Z servers as
carry forward on an upgrade.
a. All statements regarding IBM plans, directions, and intent are subject to change or withdrawal
without notice. Any reliance on these statements of general direction is at the relying party’s
sole risk and will not create liability or obligation for IBM.
FICON Express8S introduced a hardware data router for more efficient zHPF data transfers.
It is the first channel with hardware that is designed to support zHPF, as compared to FICON
Express8, FICON Express4, and FICON Express2, which include a firmware-only zHPF
implementation.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on
page 253.
Extended distance FICON
To use this enhancement, the control unit must support the new IU pacing protocol. IBM
System Storage® DS8000 series supports extended distance FICON for IBM Z
environments. The channel defaults to current pacing values when it operates with control
units that cannot use extended distance FICON.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on
page 253.
High-performance FICON
High-performance FICON (zHPF) was first provided on System z10, and is a FICON
architecture for protocol simplification and efficiency. It reduces the number of information
units (IUs) that are processed. Enhancements were made to the z/Architecture and the
FICON interface architecture to provide optimizations for online transaction processing
(OLTP) workloads.
zHPF is available on z14, z13, z13s, zEC12, and zBC12 servers. The FICON Express16S+,
FICON Express16S, and FICON Express8S (CHPID type FC) concurrently support the
existing FICON protocol and the zHPF protocol in the server LIC.
When used by the FICON channel, the z/OS operating system, and the DS8000 control unit
or other subsystems, the FICON channel processor usage can be reduced and performance
improved. Appropriate levels of Licensed Internal Code (LIC) are required.
For example, the zHPF channel programs can be used by the z/OS OLTP I/O workloads,
Db2, VSAM, the partitioned data set extended (PDSE), and the z/OS file system (zFS).
At the zHPF announcement, zHPF supported the transfer of small blocks of fixed size data
(4 K) from a single track. This capability was extended, first to 64 KB, and then to multitrack
operations. The 64 KB data transfer limit on multitrack operations was removed by z196. This
improvement allows the channel to fully use the bandwidth of FICON channels, which results
in higher throughputs and lower response times.
The multitrack operations extension applies exclusively to the FICON Express16S+, FICON
Express16S, and FICON Express8S, on the z14, z13, z13s, zEC12, and zBC12, when
configured as CHPID type FC and connecting to z/OS. zHPF requires matching support by
the DS8000 series. Otherwise, the extended multitrack support is transparent to the control
unit.
zHPF is enhanced to allow all large write operations (greater than 64 KB) at distances up to
100 km (62.13 miles) to be run in a single round trip to the control unit. This process does not
elongate the I/O service time for these write operations at extended distances. This
enhancement to zHPF removes a key inhibitor for clients adopting zHPF over extended
distances, especially when the IBM HyperSwap capability of z/OS is used.
From the z/OS perspective, the FICON architecture is called command mode and the zHPF
architecture is called transport mode. During link initialization, the channel node and the
control unit node indicate whether they support zHPF.
Requirement: All FICON channel path identifiers (CHPIDs) that are defined to the same
LCU must support zHPF. The inclusion of any non-compliant zHPF features in the path
group causes the entire path group to support command mode only.
The mode that is used for an I/O operation depends on the control unit that supports zHPF
and its settings in the z/OS operating system. For z/OS use, a parameter is available in the
IECIOSxx member of SYS1.PARMLIB (ZHPF=YES or NO) and in the SETIOS system command to
control whether zHPF is enabled or disabled. The default is ZHPF=NO.
Support is also added for the D IOS,ZHPF system command to indicate whether zHPF is
enabled, disabled, or not supported on the server.
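For illustration, zHPF can be controlled and verified as follows (a minimal sketch that uses the parmlib and command interfaces described above):

   ZHPF=YES                 (statement in the IECIOSxx parmlib member)
   SETIOS ZHPF=YES          (enables zHPF dynamically)
   D IOS,ZHPF               (displays whether zHPF is enabled, disabled, or not supported)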
Similar to the existing FICON channel architecture, the application or access method provides
the channel program (CCWs). The way in which zHPF (transport mode) manages channel
program operations is different from the CCW operation for the existing FICON architecture
(command mode). While in command mode, each CCW is sent to the control unit for
execution. In transport mode, multiple channel commands are packaged together and sent
over the link to the control unit in a single control block. Fewer processor cycles are used
compared to the existing FICON architecture. Certain complex CCW chains are not supported by zHPF.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on
page 253.
The MIDAW facility is a system architecture and software feature that is designed to improve
FICON performance. This facility was first made available on System z9 servers, and is used
by the Media Manager in z/OS.
The MIDAW facility provides a more efficient CCW/IDAW structure for certain categories of
data-chaining I/O operations.
MIDAW can improve FICON performance for extended format data sets. Non-extended data
sets can also benefit from MIDAW.
MIDAW can improve channel utilization, and can improve I/O response time. It also reduces
FICON channel connect time, director ports, and control unit processor usage.
IBM laboratory tests indicate that applications that use EF data sets, such as Db2, or long
chains of small blocks can gain significant performance benefits by using the MIDAW facility.
MIDAW is supported on FICON channels that are configured as CHPID type FC. The
supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
Figure 7-2 on page 283 shows a single CCW that controls the transfer of data that spans
non-contiguous 4 K frames in main storage. When the IDAW flag is set, the data address in
the CCW points to a list of words (IDAWs). Each IDAW contains an address that designates a
data area within real storage.
Note: Exceptions are made to this statement, and many details are omitted in this description. In this
section, we assume that you can merge this brief description with an existing understanding of I/O
operations in a virtual memory environment.
The number of required IDAWs for a CCW is determined by the following factors:
IDAW format as specified in the operation request block (ORB)
Count field of the CCW
Data address in the initial IDAW
For example, three IDAWs are required when the following events occur:
The ORB specifies format-2 IDAWs with 4 KB blocks.
The CCW count field specifies 8 KB.
The first IDAW designates a location in the middle of a 4 KB block.
CCWs with data chaining can be used to process I/O data blocks that have a more complex
internal structure, in which portions of the data block are directed into separate buffer areas.
This process is sometimes known as scatter-read or scatter-write. However, as technology
evolves and link speed increases, data chaining techniques become less efficient because of
switch fabrics, control unit processing and exchanges, and other issues.
The MIDAW facility is a method of gathering and scattering data from and into discontinuous
storage locations during an I/O operation. The MIDAW format is shown in Figure 7-3. It is
16 bytes long and is aligned on a quadword.
The use of MIDAWs is indicated by the MIDAW bit in the CCW. If this bit is set, the skip flag
cannot be set in the CCW. The skip flag in the MIDAW can be used instead. The data count in
the CCW must equal the sum of the data counts in the MIDAWs. The CCW operation ends
when the CCW count goes to zero or the last MIDAW (with the last flag) ends.
The combination of the address and count in a MIDAW cannot cross a page boundary.
Therefore, the largest possible count is 4 K. The maximum data count of all the MIDAWs in a
list cannot exceed 64 K, which is the maximum count of the associated CCW.
The scatter-read or scatter-write effect of the MIDAWs makes it possible to efficiently send
small control blocks that are embedded in a disk record to separate buffers from those that
are used for larger data areas within the record. MIDAW operations are on a single I/O block,
in the manner of data chaining. Do not confuse this operation with CCW command chaining.
VSAM and non-VSAM (DSORG=PS) data sets can be defined as EF data sets. For non-VSAM
data sets, a 32-byte suffix is appended to the end of every physical record (that is, block) on
disk. VSAM appends the suffix to the end of every control interval (CI), which normally
corresponds to a physical record.
Extended addressability (EA) is useful for creating large Db2 partitions (larger than 4 GB). Striping can be used to
increase sequential throughput, or to spread random I/Os across multiple logical volumes.
DFSMS striping is useful for using multiple channels in parallel for one data set. The Db2 logs
are often striped to optimize the performance of Db2 sequential inserts.
Processing an I/O operation to an EF data set normally requires at least two CCWs with data
chaining. One CCW is used for the 32-byte suffix of the EF data set. With MIDAW, the
additional CCW for the EF data set suffix is eliminated.
MIDAWs benefit EF and non-EF data sets. For example, to read 12 4 K records from a
non-EF data set on a 3390 track, Media Manager chains 12 CCWs together by using data
chaining. To read 12 4 K records from an EF data set, 24 CCWs are chained (two CCWs per
4 K record). By using Media Manager track-level command operations and MIDAWs, an
entire track can be transferred by using a single CCW.
Performance benefits
z/OS Media Manager has I/O channel program support for implementing EF data sets, and
automatically uses MIDAWs when appropriate. Most disk I/Os in the system are generated by
using Media Manager.
Users of the Executing Fixed Channel Programs in Real Storage (EXCPVR) instruction can
construct channel programs that contain MIDAWs. However, doing so requires that they
construct an IOBE with the IOBEMIDA bit set. Users of the EXCP instruction cannot construct
channel programs that contain MIDAWs.
The MIDAW facility removes the 4 K boundary restrictions of IDAWs and, for EF data sets,
reduces the number of CCWs. Decreasing the number of CCWs helps to reduce the FICON
channel processor utilization. Media Manager and MIDAWs do not cause the bits to move any
faster across the FICON link. However, they reduce the number of frames and sequences that
flow across the link, and therefore use the channel resources more efficiently.
The performance of a specific workload can vary based on the conditions and hardware
configuration of the environment. IBM laboratory tests found that Db2 gains significant
performance benefits by using the MIDAW facility in the following areas:
Table scans
Logging
Utilities
Use of DFSMS striping for Db2 data sets
Media Manager with the MIDAW facility can provide significant performance benefits when
used in combination with applications that use EF data sets (such as Db2) or long chains of
small blocks.
ICKDSF
Device Support Facilities, ICKDSF, Release 17 is required on all systems that share disk
subsystems with a z14 processor.
ICKDSF supports a modified format of the CPU information field that contains a two-digit
LPAR identifier. ICKDSF uses the CPU information field instead of CCW reserve/release for
concurrent media maintenance. It prevents multiple systems from running ICKDSF on the
same volume, and at the same time allows user applications to run while ICKDSF is
processing. To prevent data corruption, ICKDSF must determine all sharing systems that
might run ICKDSF. Therefore, this support is required for z14.
Remember: The need for ICKDSF Release 17 also applies to systems that are not part of
the same sysplex, or are running an operating system other than z/OS, such as z/VM.
z/OS discovery and auto-configuration (zDAC)
The zDAC function is integrated into the hardware configuration definition (HCD). Clients can
define a policy that includes preferences for availability and bandwidth, including
parallel access volume (PAV) definitions, control unit numbers, and device number ranges.
When new controllers are added to an I/O configuration or changes are made to existing
controllers, the system discovers them and proposes configuration changes that are based
on that policy.
zDAC provides real-time discovery for the FICON fabric, subsystem, and I/O device resource
changes from z/OS. By exploring the discovered control units for defined logical control units
(LCUs) and devices, zDAC compares the discovered controller information with the current
system configuration. It then determines delta changes to the configuration for a proposed
configuration.
All added or changed logical control units and devices are added into the proposed
configuration. They are assigned proposed control unit and device numbers, and channel
paths that are based on the defined policy. zDAC uses channel path selection algorithms to
minimize single points of failure. The zDAC proposed configurations are created as work I/O
definition files (IODFs) that can be converted to production IODFs and activated.
zDAC is designed to run discovery for all systems in a sysplex that support the function.
Therefore, zDAC helps to simplify I/O configuration on z14 systems that run z/OS, and
reduces complexity and setup time.
zDAC applies to all FICON features that are supported on z14 when configured as CHPID
type FC. The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7
on page 253.
Information about the channels that are connected to a fabric (if registered) allows other
nodes or storage area network (SAN) managers to query the name server to determine what
is connected to the fabric.
The platform and name server registration service are defined in the Fibre Channel Generic
Services 4 (FC-GS-4) standard.
The informal name, 63.75-K subchannels, represents 65,280 subchannels, as shown in the
following equation:
63 × 1024 + 0.75 × 1024 = 65,280
This equation applies to subchannel set 0. For subchannel sets 1, 2, and 3, the available
subchannels are derived by using the following equation:
(64 × 1024) − 1 = 65,535
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on
page 253.
z/VM V6R3 MSS support for mirrored direct access storage device (DASD) provides a subset
of host support for the MSS facility to allow using an alternative subchannel set for
Peer-to-Peer Remote Copy (PPRC) secondary volumes.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on
page 253. For more information about channel subsystem, see Chapter 5, “Central processor
complex channel subsystem” on page 195.
See Table 7-6 on page 252 and Table 7-7 on page 253 for a list of supported operating
systems.
See Table 7-6 on page 252 and Table 7-7 on page 253 for a list of supported operating
systems. For more information, refer to “Initial program load from an alternative subchannel
set” on page 200.
This support is exclusive to the z14, z13, and z13s servers and applies to the FICON
Express16S+ and FICON Express16S features (defined as CHPID type FC). FICON
Express8S remains at 24 K subchannel support when defined as CHPID type FC.
The supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on
page 253.
No action is required on z/OS to enable the health check; it is automatically enabled at IPL
and reacts to changes that might cause problems. The health check can be disabled by using
the PARMLIB or SDSF modify commands.
The supported operating systems are listed in Table 7-6 on page 252.
For more information about FCP channel performance, see the performance technical papers
that are available at the IBM Z I/O connectivity page of the IBM IT infrastructure website.
The FCP protocol is supported by z/VM, z/VSE, and Linux on Z. The supported operating
systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
T10-DIF support
The American National Standards Institute (ANSI) T10 Data Integrity Field (DIF) standard is
supported on IBM Z for SCSI end-to-end data protection on fixed block (FB) LUN volumes.
IBM Z provides added end-to-end data protection between the operating system and the
DS8870 unit. This support adds protection information that consists of Cyclic Redundancy
Checking (CRC), Logical Block Address (LBA), and host application tags to each sector of FB
data on a logical volume.
IBM Z support applies to FCP channels only. The supported operating systems are listed in
Table 7-6 on page 252 and Table 7-7 on page 253.
N_Port ID Virtualization
N_Port ID Virtualization (NPIV) allows multiple system images (in LPARs or z/VM guests) to
use a single FCP channel as though each were the sole user of the channel. First introduced
with z9 EC, this feature can be used with supported FICON features on z14 servers. The
supported operating systems are listed in Table 7-6 on page 252 and Table 7-7 on page 253.
The capabilities of the WWPN tool are extended to calculate and show WWPNs for virtual and
physical ports ahead of system installation.
The tool assigns WWPNs to each virtual FCP channel or port by using the same WWPN
assignment algorithms that a system uses when assigning WWPNs for channels that use
NPIV. Therefore, the SAN can be set up in advance, which allows operations to proceed
much faster after the server is installed. In addition, the SAN configuration can be retained
instead of altered by assigning the WWPN to physical FCP ports when a FICON feature is
replaced.
The WWPN tool is applicable to all FICON channels that are defined as CHPID type FCP (for
communication with SCSI devices) on z14. It is available for download from the IBM Resource
Link website (log in is required).
Note: An optional feature can be ordered for WWPN persistency before shipment to keep
the same I/O serial number on the new CPC. Current information must be provided during
the ordering process.
The 25GbE RoCE Express2 has one PCHID and the same virtualization characteristics as
the 10GbE RoCE Express2 (FC 0412): 126 Virtual Functions per PCHID.
z/OS requires fixes for APAR OA55686. RMF 2.2 and later is also enhanced to recognize the
CX4 card type and properly display CX4 cards in the PCIe Activity reports.
The 25GbE RoCE Express2 feature is also exploited by Linux on Z for applications that are
coded to the native RoCE verb interface or that use Ethernet (such as TCP/IP). This native
exploitation does not require a peer OSA.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
z/OS Communications Server (CS) provides a new software device driver, ConnectX4 (CX4),
for RoCE Express2. The device driver is transparent to both the upper layers of CS (the
SMC-R and TCP/IP stack) and application software (which exploits TCP sockets). RoCE
Express2 introduces a minor change in how the physical port is configured.
RMF 2.2 and later is also enhanced to recognize the new CX4 card type and properly display
CX4 cards in the PCIE Activity reports.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
The 10-Gigabit Ethernet (10GbE) RoCE Express feature is designed to help reduce
consumption of CPU resources for applications that use the TCP/IP stack (such as
WebSphere accessing a Db2 database). Use of the 10GbE RoCE Express feature also can
help reduce network latency with memory-to-memory transfers by using Shared Memory
Communications over Remote Direct Memory Access (SMC-R) in z/OS V2R1 or later.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257. For more information, see Appendix D, “Shared Memory Communications” on
page 475.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257. For more information, see Appendix D, “Shared Memory Communications” on
page 475.
Support for this function is required by the sending operating system. For more information,
see “HiperSockets” on page 183. The supported operating systems are listed in Table 7-8 on
page 255.
Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network
configuration is simplified and intuitive, and LAN administrators can configure and maintain
the mainframe environment the same way as they do a non-mainframe environment.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
Linux on Z tools can be used to format, edit, and process the trace records for analysis by
system programmers and network administrators.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
Note: Operating system support is required to recognize and use the second port on the
OSA-Express6S Gigabit Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
Note: Operating system support is required to recognize and use the second port on the
OSA-Express5S Gigabit Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
Note: Operating system support is required to recognize and use the second port on the
OSA-Express6S 1000BASE-T Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
Note: Operating system support is required to recognize and use the second port on the
OSA-Express5S 1000BASE-T Ethernet feature.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
Note: Operating system support is required to recognize and use the second port on the
OSA-Express4S 1000BASE-T Ethernet feature.
Removal of support for configuring OSA-Express for NCP (OSN CHPID types):
The IBM z13 and z13s are the last z Systems and IBM Z generation to support configuring
OSN CHPID types. IBM z14 servers do not support CHPID type OSN.
OSN CHPIDs were used to communicate between an operating system instance that is
running in one logical partition and the IBM Communication Controller for Linux on Z (CCL)
product in another logical partition on the same CPC. For more information about
withdrawal from marketing for the CCL product, see announcement letter #914-227 dated
12/02/2014.
With the OSA-ICC function, 3270 emulation for console session connections is integrated in
the z14 through a port on the OSA-Express6S 1000BASE-T, OSA-Express5S 1000BASE-T,
or OSA-Express4S 1000BASE-T features.
Note: OSA-ICC supports up to 48 secure sessions per CHPID (the overall maximum of
120 connections is unchanged).
Checksum offload is provided for several types of traffic and is supported by the
OSA-Express6S GbE, OSA-Express6S 1000BASE-T Ethernet, OSA-Express5S GbE,
OSA-Express5S 1000BASE-T Ethernet, and OSA-Express4S 1000BASE-T Ethernet
features when configured as CHPID type OSD (QDIO mode only).
When checksum is offloaded, the OSA-Express feature runs the checksum calculations for
Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6) packets. The
checksum offload function applies to packets that go to or come from the LAN.
When multiple IP stacks share an OSA-Express, and an IP stack sends a packet to a next
hop address that is owned by another IP stack that is sharing the OSA-Express,
OSA-Express sends the IP packet directly to the other IP stack. The packet does not have to
be placed out on the LAN, which is termed LPAR-to-LPAR traffic. Checksum offload is
enhanced to support the LPAR-to-LPAR traffic, which was not originally available.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
The use of the display OSAINFO command (z/OS) or the NETSTAT OSAINFO command (z/VM)
allows the operator to monitor and verify the current OSA configuration, and helps improve the
overall management, serviceability, and usability of OSA-Express cards.
These commands apply to CHPID type OSD. The supported operating systems are listed in
Table 7-8 on page 255.
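For example (the TCP/IP procedure name and interface name are hypothetical), the commands can be issued as follows:

   D TCPIP,TCPIP1,OSAINFO,INTFNAME=OSAQDIO4     (z/OS)
   NETSTAT OSAINFO                              (z/VM)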
QDIO data connection isolation allows disabling internal routing for each QDIO connection. It
also provides a means for creating security zones and preventing network traffic between the
zones.
QDIO data connection isolation is supported by all OSA-Express features on z14. The
supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
QDIO interface isolation is supported on all OSA-Express features on z14. The supported
operating systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
The supported operating systems are listed in Table 7-8 on page 255.
In extending the use of adapter interruptions to OSD (QDIO) channels, the processor
utilization to handle a traditional I/O interruption is reduced. This configuration benefits
OSA-Express TCP/IP support in z/VM, z/VSE, and Linux on Z. The supported operating
systems are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
IWQ reduces the conventional z/OS processing that is required to identify and separate
unique workloads. This advantage results in improved overall system performance and
scalability.
The following types of z/OS workloads are identified and assigned to unique input queues:
z/OS Sysplex Distributor traffic:
Network traffic that is associated with a distributed virtual Internet Protocol address (VIPA)
is assigned to a unique input queue. This configuration allows the Sysplex Distributor
traffic to be immediately distributed to the target host.
z/OS bulk data traffic:
Network traffic that is dynamically associated with a streaming (bulk data) TCP connection
is assigned to a unique input queue. This configuration allows the bulk data processing to
be assigned the appropriate resources and isolated from critical interactive workloads.
EE (Enterprise Extender / SNA traffic):
IWQ for the OSA-Express features is enhanced to differentiate and separate inbound
Enterprise Extender traffic to a dedicated input queue.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
Link aggregation is applicable to CHPID type OSD (QDIO). The supported operating systems
are listed in Table 7-8 on page 255 and Table 7-9 on page 257.
Large send support for IPv6 packets applies to the OSA-Express6S, OSA-Express5S, and
OSA-Express4S (or newer) features (CHPID type OSD) on z14, z13, z13s, zEC12, and zBC12.
z13 added support of large send for IPv6 packets (segmentation offloading) for
LPAR-to-LPAR traffic. OSA-Express6S on z14 added TCP checksum on large send, which
reduces the cost (CPU time) of error detection for large send.
The supported operating systems are listed in Table 7-8 on page 255 and Table 7-9 on
page 257.
In all cases, the TCP/IP stack determines the best setting based on the current system and
environmental conditions, such as inbound workload volume, processor utilization, and traffic
patterns. It can then dynamically update the settings.
Supported OSA-Express features adapt to the changes, which avoids thrashing and frequent
updates to the OAT. Based on the TCP/IP settings, OSA holds the packets before presenting
them to the host. A dynamic setting is designed to avoid or minimize host interrupts.
CPACF also is used by several IBM software product offerings for z/OS, such as IBM
WebSphere Application Server for z/OS. For more information, see 6.4, “CP Assist for
Cryptographic Functions” on page 216.
The supported operating systems are listed in Table 7-10 on page 259 and Table 7-11 on
page 260.
Crypto Express6S
Introduced with z14, Crypto Express6S complies with the following Physical Security
Standards:
FIPS 140-2 level 4
Common Criteria EP11 EAL4
Payment Card Industry (PCI) HSM
German Banking Industry Commission (GBIC, formerly DK)
Support of Crypto Express6S functions varies by operating system and release and by the
way the card is configured as a coprocessor or an accelerator. For more information, see 6.5,
“Crypto Express6S” on page 220. The supported operating systems are listed in Table 7-10
on page 259 and Table 7-11 on page 260.
11 CPACF hardware is implemented on each z14 core. CPACF functionality is enabled with FC 3863.
The supported operating systems are listed in Table 7-10 on page 259 and Table 7-11 on
page 260.
Web deliverables
For more information about web-deliverable code on z/OS, see the z/OS downloads website.
For Linux on Z, support is delivered through IBM and the distribution partners. For more
information, see Linux on Z on the IBM developerWorks website.
Although ICSF is a base component of z/OS, ICSF functions are generally made available through
web-deliverable support a few months after a new z/OS release. Therefore, new functions are
identified by an ICSF function modification identifier (FMID) instead of a z/OS version.
ICSF HCR77D0 - Cryptographic Support for z/OS V2R2 and z/OS V2R3
z/OS V2.2 and V2.3 require ICSF Web Deliverable WD18 (HCR77D0) to support the
following features:
Support for the updated German Banking standard (DK):
– CCA 5.4 and 6.1:
• ISO-4 PIN Blocks (ISO-9564-1)
• Directed keys: A key can either encrypt or decrypt data, but not both.
• Allow AES transport keys to be used to export/import DES keys in a standard ISO
20038 key block. This feature helps with interoperability between CCA and
non-CCA systems.
12 CCA 5.4 and 6.1 enhancements are also supported for z/OS V2R1 with ICSF HCR77C1 (WD17) with SPEs
(Small Programming Enhancements) under the z/OS continuous delivery model.
The following software enhancements are available in ICSF Web Deliverable HCR77C1 when
running on a z14 server:
Crypto Usage Statistics: When enabled, ICSF aggregates statistics that are related to
crypto workloads and logs them to an SMF record.
Panel-based CKDS Administration: ICSF added an ISPF, panel-driven interface that
allows interactive administration (View, Create, Modify, and Delete) of CKDS keys.
CICS End User Auditing: When enabled, ICSF retrieves the CICS user identity and
includes it as a log string in the SAF resource check. The user identity is not checked for
access to the resource. Instead, it is included in the resource check (SMF Type 80)
records that are logged for any of the ICSF SAF classes protecting crypto keys and
services (CSFKEYS, XCSFKEY, CRYPTOZ, and CSFSERV).
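As a minimal sketch of the SAF controls that are involved, the following RACF commands protect one ICSF callable service in the CSFSERV class and request audit records; the resource, group, and class activation details are illustrative and must match your installation's naming and auditing policy:
RDEFINE  CSFSERV CSFENC UACC(NONE) AUDIT(ALL(READ))
PERMIT   CSFENC CLASS(CSFSERV) ID(PAYROLL) ACCESS(READ)
SETROPTS CLASSACT(CSFSERV) RACLIST(CSFSERV)
SETROPTS RACLIST(CSFSERV) REFRESH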
For more information about ICSF versions and FMID cross-references, see the z/OS: ICSF
Version and FMID Cross Reference, TD103782, abstract that is available at the IBM
Techdocs website.
For PTFs that allow previous levels of ICSF to coexist with the Cryptographic Support for
z/OS V2R1 - z/OS V2R3 (HCR77C1) web deliverable, check the following FIXCAT:
IBM.Coexistence.ICSF.z/OS_V2R1-V2R3-HCR77C1
Reporting can be done at an LPAR/domain level to provide more granular reports for capacity
planning and diagnosing problems. This feature requires the fix for APAR OA54952.
The supported operating systems are listed in Table 7-10 on page 259.
Policy driven z/OS Data Set Encryption enables users to perform the following tasks:
Decouple encryption from data classification; encrypt data automatically, independent of
labor-intensive data classification work.
Encrypt data immediately and efficiently at the time it is written.
Reduce risks that are associated with mis-classified or undiscovered sensitive data.
Help protect digital assets automatically.
Achieve application transparent encryption.
IBM Db2 for z/OS and IBM Information Management System (IMS) intend to use z/OS Data
Set Encryption.
With z/OS Data Set Encryption, DFSMS enhances data security with support for data set level
encryption by using DFSMS access methods. This function is designed to give users the
ability to encrypt their data sets without changing their application programs. DFSMS users
can identify which data sets require encryption by using JCL, Data Class, or the RACF data
set profile. Data set level encryption can allow the data to remain encrypted during functions,
such as backup and restore, migration and recall, and replication.
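For example, a key label can be assigned when a data set is allocated through JCL. The following DD statement is a sketch only; the data set name, key label, and data class are illustrative, and the data set must be allocated as an extended-format data set that is eligible for data set encryption:
//* Allocate a new extended-format data set with an encryption key label
//NEWDATA  DD DSN=PROD.CUSTOMER.MASTER,DISP=(NEW,CATLG),
//            DATACLAS=DCEXTF,
//            DSKEYLBL='DATASET.PROD.ENCRKEY.00001',
//            RECFM=FB,LRECL=200,SPACE=(CYL,(50,10))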
z/OS Data Set Encryption requires CP Assist for Cryptographic Functions (CPACF). For
protected keys, it requires z196 or later Z servers with CEX3 or later. The degree of
encryption performance improvement is based on the encryption mode that is used.
Considering the significant enhancements that were introduced with z14, the encryption
mode of XTS is used by access method encryption to obtain the best performance possible. It
is not recommended to enable z/OS data set encryption until all sharing systems, fallback,
backup, and DR systems support encryption.
In addition to applying PTFs enabling the support, ICSF configuration is required. The
supported operating systems are listed in Table 7-10 on page 259.
z/VM Encrypted Paging protects guest paging data from administrators or users with access to
the paging volumes. Included in this support is the ability to dynamically control whether a
running z/VM system is encrypting this data. Because the AES encryption is performed by
CPACF, the overhead of z/VM Encrypted Paging is low.
The supported operating systems are listed in Table 7-10 on page 259.
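As a minimal sketch, and assuming the ENCRYPT PAGING system configuration statement and CP SET ENCRYPT command that were introduced with this support, encrypted paging can be controlled as follows (verify the exact syntax in the z/VM CP Planning and Administration documentation for your release):
/* SYSTEM CONFIG statement (assumed syntax) */
ENCRYPT PAGING ON
/* CP commands to control and display encryption on a running system */
SET ENCRYPT PAGING ON
QUERY ENCRYPT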
zEnterprise Data Compression (zEDC) Express is an optional feature that is available on z14,
z13, z13s, zEC12, and zBC12 servers. It addresses data compression requirements by providing
hardware-based acceleration for data compression and decompression. zEDC provides data
compression with lower CPU consumption than the compression technology that previously
was available on Z servers.
Support for data recovery (decompression) when the zEDC is not installed, or installed but
not available on the system, is provided through software on z/OS V2R2, z/OS V2R1, and
V1R13 with required PTFs applied. Software decompression is slow and uses considerable
processor resources, so it is not recommended for production environments.
zEDC supports QSAM/BSAM (non-VSAM) data set compression, which can be requested in either of the
following ways:
Data class level: Two new values, zEDC Required (ZR) and zEDC Preferred (ZP), can be
set with the COMPACTION option in the data class.
System level: Two new values, zEDC Required (ZEDC_R) and zEDC Preferred (ZEDC_P),
can be specified with the COMPRESS parameter in the IGDSMSxx member of the
SYS1.PARMLIB data set.
Data class takes precedence over system level. The supported operating systems are listed
in Table 7-12 on page 261.
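A minimal sketch of the system-level setting follows; the SMS control data set names are illustrative, and COMPRESS is only one of the keywords on the SMS statement in the IGDSMSxx parmlib member:
/* SYS1.PARMLIB member IGDSMSxx (illustrative fragment) */
SMS ACDS(SYS1.SMS.ACDS)
    COMMDS(SYS1.SMS.COMMDS)
    COMPRESS(ZEDC_P)   /* ZEDC_P = zEDC Preferred, ZEDC_R = zEDC Required */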
For more information about zEDC Express, see Appendix F, “IBM zEnterprise Data
Compression Express” on page 511.
Although z14 servers do not require any “functional” software, it is recommended to install all
z14 service before upgrading to the new server. The support matrix for z/OS releases and the
Z servers that support them is listed in Table 7-16.
The exploitation of many functions requires fixes that enable the capabilities of the
IBM z14™ server. These fixes are identified by the following fix category:
IBM.Device.Server.z14-3906.Exploitation
Support for z14 is provided by using a combination of web deliverables and PTFs, which are
documented in PSP Bucket Upgrade = 3906DEVICE, Subset = 3906/ZOS.
13 For example, the use of Crypto Express6S requires the Cryptographic Support for z/OS V2R1 - z/OS V2R3 web
deliverable.
14 For more information, see the Tool to Compare IBM z14 Instruction Mnemonics with Macro Libraries IBM
technote.
Use the SMP/E REPORT MISSINGFIX command to determine whether any FIXCAT APARs exist
that are applicable and are not yet installed, and whether any SYSMODs are available to
satisfy the missing FIXCAT APARs.
For more information about IBM Fix Category Values and Descriptions, see the IBM Fix
Category Values and Descriptions page of the IBM IT infrastructure website.
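The following job step is a minimal sketch of such a report; the CSI data set and target zone names are illustrative, additional DD statements might be needed in your environment, and the FIXCAT list can include the z14 exploitation category that is shown earlier:
//REPORT   EXEC PGM=GIMSMP
//SMPCSI   DD  DSN=SMPE.GLOBAL.CSI,DISP=SHR
//SMPCNTL  DD  *
  SET BOUNDARY(GLOBAL).
  REPORT MISSINGFIX ZONES(ZOSTGT1)
         FIXCAT(IBM.Device.Server.z14-3906.*).
/*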
Configurations with a Coupling Facility on one of these servers can add a z14 Server to their
Sysplex for a z/OS or a Coupling Facility image. z14 does not support participating in a
Parallel Sysplex with System z196 and earlier systems.
Each system can use, or not use, internal coupling links, InfiniBand coupling links, or ICA
coupling links independently of what other systems are using.
Coupling connectivity is available only when other systems also support the same type of
coupling. For more information about supported coupling link technologies on z14, see 4.7.4,
“Parallel Sysplex connectivity” on page 184, and the Coupling Facility Configuration Options
white paper.
15 z14 ZR1 (Machine Type 3907) does not support direct coupling connectivity to zEC12/zBC12.
To enable the use of new functions, specify the ARCH(12) and VECTOR options for compilation. The
binaries that are produced by the compiler with these options can be run only on z14 and later
servers because they use the z14 vector facility for the new functions. The use of older versions
of the compiler on z14 does not enable the new functions.
For more information about the ARCHITECTURE, TUNE, and VECTOR compiler options, see z/OS
V2R2.0 XL C/C++ User’s Guide, SC09-4767.
Important: Use the previous Z ARCHITECTURE or TUNE options for C/C++ programs if the
same applications run on the z14 and on previous IBM Z servers. However, if C/C++
applications run only on z14 servers, use the latest ARCHITECTURE and TUNE options to
ensure that the best performance possible is delivered through the latest instruction set
additions.
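For example, when compiling under z/OS UNIX System Services, the ARCHITECTURE, TUNE, and VECTOR options correspond to command-line flags similar to the following sketch; the source file name is illustrative, and older compiler levels might not accept architecture level 12:
xlc -qarch=12 -qtune=12 -qvector -O2 -c payroll.c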
For more information, see Migration from z/OS V2R1 to z/OS V2R2, GA32-0889.
Consider the following points before migrating z/OS 2.3 to IBM z14 Model ZR1:
IBM z/OS V2.3 with z14 ZR1 requires a minimum of 8 GB of memory. When running as a
z/VM guest or on an IBM System z Personal Development Tool, a minimum of 2 GB is
required for z/OS V2.3. If the minimum is not met, a warning WTOR is issued at IPL.
Continuing with less than the minimum memory might affect availability. A migration health
check is planned for z/OS V2.1 and z/OS V2.2 to warn if the system is configured
with less than 8 GB.
Dynamic splitting and merging of Coordinated Timing Network (CTN) is available with z14
ZR1.
The z/OS V2.3 real storage manager (RSM) is planned to support a new asynchronous
memory clear operation to clear the data from 1M page frames by using I/O processors
(SAPs) on next-generation processors. The new asynchronous memory clear operation
eliminates the CPU cost for this operation and helps improve performance of RSM first
reference page fault processing and system services, such as IARV64 and STORAGE
OBTAIN.
RMF support is provided to collect SMC-D related performance measurements in SMF 73
Channel Path Activity and SMF 74 subtype 9 PCIE Activity records. It also provides these
measurements in the RMF Postprocessor and Monitor III PCIE and Channel Activity
reports. This support is also available on z/OS V2.2 with PTF UA80445 for APAR
OA49113.
z/VM 7.1 includes SPEs shipped for z/VM 6.4, including Virtual Switch Enhanced Load
Balancing, DS8K z-Thin Provisioning, and Encrypted Paging.
A z/VM Release Status Summary for supported z/VM versions is listed in Table 7-17.
z/VM 6.4: generally available November 2016; end of marketing and end of service not announced; minimum processor level z196 and z114.
a. Older z/VM versions (6.3, 6.2, and 5.4) are end of support.
z/VM provides the support necessary for DAT-off guests to run in this new compatibility mode.
This support allows guests, such as CMS, GCS, and those that start in ESA/390 mode briefly
before switching to z/Architecture mode, to continue to run on IBM z14™.
The available PTF for APAR VM65976 provides infrastructure support for ESA/390
compatibility mode within z/VM V6.4. It must be installed on all members of an SSI cluster
before any z/VM V6.4 member of the cluster is run on an IBM z14™ server.
In addition to OS support, all stand-alone utilities that a client uses must be at their minimum
required level or have the required PTFs applied.
7.6.4 Capacity
For the capacity of any z/VM logical partition (LPAR), and any z/VM guest, in terms of the
number of Integrated Facility for Linux (IFL) processors and central processors (CPs), real or
virtual, you might want to adjust the number to accommodate the processor unit (PU)
capacity of z14 servers.
Consider the following general guidelines when you are migrating a z/VSE environment to z14
ZR1 servers:
Collect reference information before migration
This information includes baseline data, such as performance data, CPU utilization of the
reference workload, I/O activity, and elapsed times. This information is required to size the
z14 ZR1 and is the only way to compare workload characteristics after migration.
For more information, see the z/VSE Release and Hardware Upgrade document.
Apply required maintenance for z14 ZR1
Review the Preventive Service Planning (PSP) bucket 3907DEVICE for z14 ZR1 and
apply the required PTFs for IBM and independent software vendor (ISV) products.
16 z/VSE 5.1 has been out of support since June 2016. It can be IPLed on z14 after applying APAR DY47654 (PTF
UD54170).
For the z14, the following metric groups for software licensing are available from IBM,
depending on the software product:
Monthly license charge (MLC)
MLC pricing metrics feature a recurring charge that applies each month. In addition to the
permission to use the product, the charge includes access to IBM product support during
the support period. MLC pricing applies to z/OS, z/VSE, and z/TPF operating systems.
Charges are based on processor capacity, which is measured in millions of service units
(MSU) per hour.
IPLA
IPLA metrics have a single, up-front charge for an entitlement to use the product. An
optional and separate annual charge (called subscription and support) entitles clients to
access IBM product support during the support period. With this option, you can also
receive future releases and versions at no extra charge.
The subcapacity licensed products are charged monthly based on the highest observed
4-hour rolling average utilization of the logical partitions in which the product runs. The
exception is products that are licensed by using the Select Application License Charge
(SALC) pricing metric. This type of charge requires measuring the utilization and reporting it
to IBM.
The 4-hour rolling average utilization of the logical partition can be limited by a defined
capacity value on the image profile of the partition. This value activates the soft capping
function of the PR/SM, which limits the 4-hour rolling average partition utilization to the
defined capacity value. Soft capping controls the maximum 4-hour rolling average usage (the
last 4-hour average value at every 5-minute interval), but does not control the maximum
instantaneous partition use.
You can also use an LPAR group capacity limit, which sets soft capping by PR/SM for a group
of logical partitions that are running z/OS.
Some pricing metrics apply to stand-alone Z servers. Others apply to the aggregation of
multiple Z server workloads within the same Parallel Sysplex.
For more information about WLC and details about how to combine logical partition utilization,
see z/OS Planning for Sub-Capacity Pricing, SA23-2301.
One of the recent changes in software licensing for z/OS and z/VSE is Multi-Version
Measurement (MVM), which replaced Single Version Charging (SVC), Migration Pricing
Option (MPO), and the IPLA Migration Grace Period.
MVM for z/OS and z/VSE removes time limits for running multiple eligible versions of a
software program. Clients can run different versions of a program simultaneously for an
unlimited duration during a program version upgrade.
Clients can also choose to run multiple different versions of a program simultaneously for an
unlimited duration in a production environment. MVM allows clients to selectively deploy new
software versions, which provides more flexible control over their program upgrade cycles.
For more information, see Software Announcement 217-093, dated February 14, 2017.
Technology Update Pricing for the IBM z14™ extends the software price and performance
that is provided by AWLC and CMLC for z14 servers. The new and revised Transition
Charges for Sysplexes or Multiplexes offerings provide a transition to Technology Update
Pricing for the IBM z14™ for customers who have not yet fully migrated to z14 servers. This
transition ensures that aggregation benefits are maintained and also phases in the benefits of
Technology Update Pricing for the IBM z14™ pricing as customers migrate.
When a z14 server is in an actively coupled Parallel Sysplex or a Loosely Coupled Complex,
you might choose aggregated Advanced Workload License Charges (AWLC) pricing or
aggregated Parallel Sysplex License Charges (PSLC) pricing (subject to all applicable terms
and conditions).
When a z14 server is part of a Multiplex under Country Multiplex Pricing (CMP) terms,
Country Multiplex License Charges (CMLC), Multiplex zNALC (MzNALC), and Flat Workload
License Charges (FWLC) are the only pricing metrics available (subject to all applicable
terms and conditions).
For more information about software pricing for the z14 server, see Software Announcement
217-273, dated July 17, 2017: Technology Transition Offerings for the IBM z14™ offer
price-performance advantages.
When a z14 server is running z/VSE, you can choose Mid-Range Workload License Charges
(MWLC) (subject to all applicable terms and conditions).
For more information about AWLC, CMLC, MzNALC, PSLC, MWLC, or the Technology
Update Pricing and Transition Charges for Sysplexes or Multiplexes TTO offerings, see the
IBM z Systems Software Pricing page of the IBM IT infrastructure website.
7.9 References
For more information about planning, see the home pages for each of the following operating
systems:
z/OS
z/VM
z/VSE
z/TPF
Linux on Z
KVM for IBM Z
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
In response to client demands and changes in market requirements, many features were
added. The provisioning environment gives you unprecedented flexibility and more control
over cost and value.
For more information about all aspects of system upgrades, see the IBM Resource Link
website (registration is required). At the website, click Resource Link → Client Initiated
Upgrade Information, and then select Education. Select your particular product from the list
of available systems.
The growth capabilities that are provided by the z14 servers include the following benefits:
Enabling the use of new business opportunities
Supporting the growth of dynamic, smart, and cloud environments
Managing the risk of volatile, high-growth, and high-volume applications
Supporting 24 x 7 application availability
Enabling capacity growth during lockdown periods
Enabling planned-downtime changes without availability effects
For more information, see 8.1.2, “Terminology that is related to CoD for z14 systems” on
page 318.
Tip: An MES provides system upgrades that can result in more enabled processors, a
different central processor (CP) capacity level, and in more processor drawers, memory,
PCIe I/O drawers, and I/O features (physical upgrade). Extra planning tasks are required
for nondisruptive logical upgrades. An MES is ordered through your IBM representative
and installed by IBM service support representatives (IBM SSRs).
Activated capacity: Capacity that is purchased and activated. Purchased capacity can be greater than the activated capacity.
Billable capacity: Capacity that helps handle workload peaks (expected or unexpected). The one billable offering that is available is On/Off Capacity on Demand (On/Off CoD).
Capacity: Hardware resources (processor and memory) that can process the workload can be added to the system through various capacity offerings.
Capacity Backup (CBU): Capacity Backup allows you to place model capacity or specialty engines in a backup system. CBU is used in an unforeseen loss of system capacity because of an emergency.
Capacity for Planned Event (CPE): Used when temporary replacement capacity is needed for a short-term event. CPE activates processor capacity temporarily to facilitate moving systems between data centers, upgrades, and other routine management tasks. CPE is an offering of CoD.
Capacity levels: Can be full capacity or subcapacity. For the z14 system, capacity levels for the CP engine are 7, 6, 5, and 4:
1 - 99 in decimal and A0 - H0, where A0 represents 100 and H0 represents 170, for capacity level 7nn.
1 - 33 for capacity levels 6yy and 5yy.
0 - 33 for capacity levels 4xx. An all Integrated Facility for Linux (IFL) or an all integrated catalog facility (ICF) system has a capacity level of 400.
Capacity setting: Derived from the capacity level and the number of processors. For the z14 system, the capacity levels are 7nn, 6yy, 5yy, and 4xx, where xx, yy, or nn indicates the number of active CPs.
Customer Initiated Upgrade (CIU): A web-based facility in which you can request processor and memory upgrades by using the IBM Resource Link and the system's Remote Support Facility (RSF) connection.
Capacity on Demand (CoD): The ability of a computing system to increase or decrease its performance capacity as needed to meet fluctuations in demand.
Capacity Provisioning Manager (CPM): As a component of z/OS Capacity Provisioning, CPM monitors business-critical workloads that are running on z/OS on z14 systems.
Customer profile: This information is on Resource Link and contains client and system information. A customer profile can contain information about more than one system.
Full capacity CP feature: For z14 servers, feature CP7 provides full capacity. Capacity settings 7nn are full capacity settings.
Installed record: The LICCC record is downloaded, staged to the Support Element (SE), and installed on the central processor complex (CPC). A maximum of eight different records can be concurrently installed and active.
Model capacity identifier (MCI): Shows the current active capacity on the system, including all replacement and billable capacity. For z14 servers, the model capacity identifier is in the form of 7nn, 6yy, 5yy, or 4xx, where xx, yy, or nn indicates the number of active CPs:
1 - 99 in decimal and A0 - H0, where A0 represents 100 and H0 represents 170, for capacity level 7nn.
yy can have a range of 01 - 33.
xx can have a range of 00 - 33. An all IFL or an all ICF system has a capacity level of 400.
Model Permanent Capacity Identifier (MPCI): Keeps information about the capacity settings that are active before any temporary capacity is activated.
Model Temporary Capacity Identifier (MTCI): Reflects the permanent capacity with billable capacity only, without replacement capacity. If no billable temporary capacity is active, MTCI equals the MPCI.
On/Off Capacity on Demand (On/Off CoD): Represents a function that allows spare capacity in a CPC to be made available to increase the total capacity of a CPC. For example, On/Off CoD can be used to acquire more capacity for handling a workload peak.
Features on Demand (FoD): FoD is a new centralized way to flexibly entitle features and functions on the system. On z196 and z114, the HWMs are stored in the processor and memory LICCC record. On z14, z13, and zEC12 servers, the HWMs are stored in the FoD record.
Permanent capacity: The capacity that a client purchases and activates. This amount might be less capacity than the total capacity purchased.
Permanent upgrade: LIC that is licensed by IBM to enable the activation of applicable computing resources, such as processors or memory, for a specific CIU-eligible system on a permanent basis.
Purchased capacity: Capacity that is delivered to and owned by the client. It can be higher than the permanent capacity.
Permanent/Temporary entitlement record: The internal representation of a temporary (TER) or permanent (PER) capacity upgrade that is processed by the CIU facility. An entitlement record contains the encrypted representation of the upgrade configuration with the associated time limit conditions.
Replacement capacity: A temporary capacity that is used for situations in which processing capacity in other parts of the enterprise is lost. This loss can be a planned event or an unexpected disaster. The two replacement offerings available are Capacity for Planned Event and Capacity Backup.
Resource Link: The IBM Resource Link is a technical support website that provides a comprehensive set of tools and resources. It is available at the IBM Systems technical support website.
Secondary approval: An option that is selected by the client that requires second approver control for each CoD order. When a secondary approval is required, the request is sent for approval or cancellation to the Resource Link secondary user ID.
Staged record: The point when a record that represents a temporary or permanent capacity upgrade is retrieved and loaded on the SE disk.
Subcapacity: For z14 servers, CP features CP4, CP5, and CP6 provide reduced capacity relative to the full capacity CP feature (CP7).
Temporary capacity: An optional capacity that is added to the current system capacity for a limited amount of time. It can be capacity that is owned or not owned by the client.
Vital product data (VPD): Information that uniquely defines system, hardware, software, and microcode elements of a processing system.
Tip: The use of the CIU facility for a system requires that the online CoD buying feature
(FC 9900) is installed on the system. The CIU facility is enabled through the permanent
upgrade authorization feature code (FC 9898).
Considerations: Most of the MESs can be concurrently applied without disrupting the
workload. For more information, see 8.2, “Concurrent upgrades” on page 321. However,
certain MES changes are disruptive, such as model upgrades from any z14 model to the
z14 M05 model.
Memory upgrades that require dual in-line memory module (DIMM) changes can be made
nondisruptively if multiple CPC drawers are available and the flexible memory option is used.
CBU or CPE temporary upgrades can be ordered by using the CIU application through
Resource Link or by calling your IBM marketing representative.
Billable capacity
To handle a peak workload, you can activate up to double the purchased capacity of any
processor unit (PU) type temporarily. You are charged daily.
The one billable capacity offering is On/Off Capacity on Demand (On/Off CoD).
Replacement capacity
When a processing capacity is lost in another part of an enterprise, replacement capacity can
be activated. It allows you to activate any PU type up to your authorized limit.
The concurrent capacity growth capabilities that are provided by z14 servers include, but are
not limited to, the following benefits:
Enabling the meeting of new business opportunities
Supporting the growth of smart and cloud environments
Managing the risk of volatile, high-growth, and high-volume applications
This capability is based on the flexibility of the design and structure, which allows concurrent
hardware installation and Licensed Internal Code (LIC) control over the configuration.
The subcapacity models allow more configuration granularity within the family. The added
granularity is available for models that are configured with up to 33 CPs, and provides 99
extra capacity settings. Subcapacity models provide for CP capacity increase in two
dimensions that can be used together to deliver configuration granularity. The first dimension
is adding CPs to the configuration. The second is changing the capacity setting of the CPs
currently installed to a higher model capacity identifier.
z14 servers allow the concurrent and nondisruptive addition of processors to a running logical
partition (LPAR). As a result, you can have a flexible infrastructure in which you can add
capacity without pre-planning. This function is supported by z/OS, z/VM, and z/VSE. This
addition is made by using one of the following methods:
With planning ahead for the future need of extra processors. Reserved processors can be
specified in the LPAR’s profile. When the extra processors are installed, the number of
active processors for that LPAR can be increased without the need for a partition
reactivation and initial program load (IPL).
Another (easier) way is to enable the dynamic addition of processors through the z/OS
LOADxx member. Set the DYNCPADD parameter in member LOADxx to ENABLE. z14 servers
support dynamic processor addition in the same way that the z13, zEC12, z196, and z10
support it. The operating system must be z/OS V1R10 or later.
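A minimal sketch of the LOADxx setting follows; the member is in SYS1.PARMLIB or another parmlib data set, DYNCPADD accepts ENABLE or DISABLE, and the usual LOADxx column conventions apply:
*  LOADxx fragment: allow dynamic CP addition to this z/OS image
DYNCPADD ENABLE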
Another function concerns the system assist processor (SAP). When more SAPs are
concurrently added to the configuration, the SAP-to-channel affinity is dynamically remapped
on all SAPs on the system to rebalance the I/O configuration.
The system model and the model capacity identifier can be concurrently changed.
Concurrent upgrades can be performed for permanent and temporary upgrades.
Tip: A model upgrade can be performed concurrently by using concurrent drawer add
(CDA), except for upgrades to Model M05, which are disruptive.
1 The z13 zero CP MCI is 400. This setting applies to an all-IFL or all-ICF system.
Important: The LICCC-based PU conversions require that at least one PU (CP, ICF, or
IFL) remains unchanged. Otherwise, the conversion is disruptive. The PU conversion
generates an LICCC that can be installed concurrently in two steps:
1. Remove the assigned PU from the configuration.
2. Activate the newly available PU as the new PU type.
LPARs also might have to free the PUs to be converted. The operating systems must include
support to configure processors offline or online so that the PU conversion can be done
nondisruptively.
Considerations: Client planning and operator action are required to use concurrent PU
conversion. Consider the following points about PU conversion:
It is disruptive if all current PUs are converted to different types.
It might require individual LPAR outages if dedicated PUs are converted.
The use of the CIU facility for a system requires that the online CoD buying feature code
(FC 9900) is installed on the system. Although it can be installed on your z14 servers at any
time, often it is added when ordering a z14 server. The CIU facility is controlled through the
permanent upgrade authorization feature code, FC 9898.
After you place an order through the CIU facility, you receive a notice that the order is ready
for download. You can then download and apply the upgrade by using functions that are
available through the Hardware Management Console (HMC), along with the RSF. After all of
the prerequisites are met, the entire process, from ordering to activation of the upgrade, is
performed by the client.
After download, the actual upgrade process is fully automated and does not require any
onsite presence of IBM SSRs.
As part of the setup, provide one resource link ID for configuring and placing CIU orders and,
if required, a second ID as an approver. The IDs are then set up for access to the CIU
support. The CIU facility allows upgrades to be ordered and delivered much faster than
through the regular MES process.
To order and activate the upgrade, log on to the IBM Resource Link website and start the CIU
application to upgrade a system for processors or memory. You can request a client order
approval to conform to your operational policies. You also can allow the definition of more IDs
to be authorized to access the CIU. More IDs can be authorized to enter or approve CIU
orders, or only view orders.
Permanent upgrades
Permanent upgrades can be ordered by using the CIU facility. Through the CIU facility, you
can generate online permanent upgrade orders to concurrently add processors (CPs, ICFs,
zIIPs, IFLs, and SAPs) and memory, or change the model capacity identifier. You can do so
up to the limits of the installed processor drawers on a system.
Temporary upgrades
The base model z14 server describes permanent and dormant capacity by using the capacity
marker and the number of PU features that are installed on the system. Up to eight temporary
offerings can be present. Each offering includes its own policies and controls, and each can
be activated or deactivated independently in any sequence and combination. Although
multiple offerings can be active at any time, only one On/Off CoD offering can be active at any
time if enough resources are available to fulfill the offering specifications.
Temporary upgrades are represented in the system by a record. All temporary upgrade
records are on the SE hard disk drive (HDD). The records can be downloaded from the RSF
or installed from portable media. At the time of activation, you can control everything locally.
(Figure: temporary capacity provisioning architecture. Up to eight records, R1 - R8, can be installed and active on top of the dormant and purchased capacity of the base model; permanent capacity is changed through a CIU or MES order, and record activation is queried and driven through the HMC application and its API.)
The authorization layer enables administrative control over the temporary offerings. The
activation and deactivation can be driven manually or under the control of an application
through a documented application programming interface (API).
By using the API approach, you can customize at activation time the resources that are
necessary to respond to the current situation, up to the maximum that is specified in the order
record. If the situation changes, you can add or remove resources without having to go back
to the base configuration. This process eliminates the need for temporary upgrade
specifications for all possible scenarios. However, the ordered configuration is the only
possible activation for CPE.
(Figure 8-2: example of multiple activations of temporary records R1 - R4, combining On/Off CoD (OOCoD), CBU, and CPE offerings.)
As shown in Figure 8-2, if R2, R3, and R1 are active at the same time, only parts of R1 can be
activated because not enough resources are available to fulfill all of R1. When R2 is
deactivated, the remaining parts of R1 can be activated as shown.
Temporary capacity can be billable as On/Off CoD, or replacement capacity as CBU or CPE.
Consider the following points:
On/Off CoD is a function that enables concurrent and temporary capacity growth of the
system.
On/Off CoD can be used for client peak workload requirements, for any length of time, and
includes a daily hardware and maintenance charge. The software charges can vary
according to the license agreement for the individual products. For more information,
contact your IBM Software Group representative.
On/Off CoD can concurrently add processors (CPs, ICFs, zIIPs, IFLs, and SAPs),
increase the model capacity identifier, or both. It can do so up to the limit of the installed
processor drawers of a system. It is restricted to twice the installed capacity. On/Off CoD
requires a contractual agreement between you and IBM.
You decide whether to pre-pay or post-pay On/Off CoD. Capacity tokens that are inside the
records are used to control activation time and resources.
CBU is a concurrent and temporary activation of more CPs, ICFs, zIIPs, IFLs, and SAPs,
an increase of the model capacity identifier, or both.
CBU cannot be used for peak workload management in any form. As stated, On/Off CoD
is the correct method to use for workload management. A CBU activation can last up to 90
days when a disaster or recovery situation occurs.
Permanent upgrades:
MES: CPs, ICFs, zIIPs, IFLs, SAPs, processor drawer, memory, and I/O; installed by IBM SSRs.
Online permanent upgrade: CPs, ICFs, zIIPs, IFLs, SAPs, and memory; performed through the CIU facility.
Temporary upgrades:
On/Off CoD: CPs, ICFs, zIIPs, IFLs, and SAPs; performed through the On/Off CoD facility.
CBU: CPs, ICFs, zIIPs, IFLs, and SAPs; performed through the CBU facility.
CPE: CPs, ICFs, zIIPs, IFLs, and SAPs; performed through the CPE facility.
2 z14 servers provide more improvements in the CBU activation windows. These windows were improved to prevent
inadvertent CBU activation.
An MES upgrade requires IBM SSRs for the installation. In most cases, the time that is
required for installing the LICCC and completing the upgrade is short.
To better use the MES upgrade function, carefully plan the initial configuration to allow a
concurrent upgrade to a target configuration. The availability of PCIe I/O drawers improves
the flexibility to perform unplanned I/O configuration changes concurrently.
The Store System Information (STSI) instruction gives more useful and detailed information
about the base configuration and temporary upgrades. You can more easily resolve billing
situations where independent software vendor (ISV) products are used.
The model and model capacity identifiers that are returned by the STSI instruction are
updated to coincide with the upgrade. For more information, see “Store System Information
instruction” on page 358.
3 Other adapter types, such as zHyperLink, Coupling Express LR, zEDC, and Remote Direct Memory Access
(RDMA) over Converged Ethernet (RoCE), also can be added to the PCIe I/O drawers through an MES.
Limits: The sum of CPs, inactive CPs, ICFs, zIIPs, IFLs, unassigned IFLs, and SAPs
cannot exceed the maximum limit of PUs available for client use. The number of zIIPs
cannot exceed twice the number of purchased CPs.
An example of an MES upgrade for processors (with two upgrade steps) is shown in
Figure 8-3.
A model M01 (one processor drawer), model capacity identifier 708 (eight CPs), is
concurrently upgraded to a model M02 (two processor drawers), with MCI 738 (38 CPs). The
model upgrade requires adding a processor drawer and assigning and activating 38 PUs as
CPs. Then, model M02, MCI 738, is concurrently upgraded to a capacity identifier 739 (39
CPs) with two IFLs. This process is done by assigning and activating three more unassigned
PUs (one as CP and two as IFLs). If needed, more LPARs can be created concurrently to use
the newly added processors.
The example that is shown in Figure 8-3 was used to show how the addition of PUs as CPs
and IFLs and the addition of a processor drawer works. In reality, the addition of a processor
drawer to a z14 Model M01 upgrades the machine model to M02.
The number of processors that are supported by various z/OS and z/VM releases are listed in
Table 8-3.
Table 8-3 Number of processors that are supported by the operating system
Operating system Number of processors that are supported
z/OS V2R1 170 PUs per z/OS LPAR in non-SMT mode and 128 PUs per
z/OS LPAR in simultaneous multithreading (SMT) mode. For
both, the PU total is the sum of CPs and zIIPs.
z/OS V2R2 170 PUs per z/OS LPAR in non-SMT mode and 128 PUs per
z/OS LPAR in SMT mode. For both, the PU total is the sum of CPs
and zIIPs.
z/OS V2R3 170 PUs per z/OS LPAR in non-SMT mode and 128 PUs per
z/OS LPAR in SMT mode. For both, the PU total is the sum of CPs
and zIIPs.
z/TPF 86 CPs
Linux on IBM Z 170 CPs. With SLES 12, RHEL 7, and Ubuntu 16.10, Linux supports 256 cores
without SMT and 128 cores with SMT (256 threads).
a. 32 in SMT mode
Software charges, which are based on the total capacity of the system on which the software
is installed, are adjusted to the new capacity after the MES upgrade.
Software products that use Workload License Charges (WLC) might not be affected by the
system upgrade. Their charges are based on partition usage, not on the system total
capacity. For more information about WLC, see 7.8, “Software licensing” on page 312.
If the z14 server is a multiple processor drawer configuration, you can use the EDA feature to
remove a processor drawer and add DIMM memory cards. It can also be used to upgrade the
installed memory cards to a larger capacity size. You can then use LICCC to enable the extra
memory.
With proper planning, more memory can be added nondisruptively to z/OS partitions and
z/VM partitions. If necessary, new LPARs can be created nondisruptively to use the newly
added memory.
Concurrency: Upgrades that require DIMM changes can be concurrent by using the EDA
feature. Planning is required to determine whether this option is viable for your configuration.
The use of the flexible memory option and the Preplanned Memory Feature (FC 1996 for
the 16-GB increment, or FC 1990 for the 32-GB increment) ensures that EDA can work
with the least disruption.
The one-processor drawer model M01 features a minimum of 320 GB physically installed
memory. The client addressable storage in this case is 256 GB. If you require more memory,
an extra memory upgrade can install up to 8 TB of memory. It does so by changing the DIMM
sizes and adding DIMMs in all available slots in the processor drawer. You can also add
memory by concurrently adding a second processor drawer with sufficient memory into the
configuration and then using LICCC to enable that memory.
An LPAR can dynamically take advantage of a memory upgrade if reserved storage is defined
to that LPAR. The reserved storage is defined to the LPAR as part of the image profile.
Reserved memory can be configured online to the LPAR by using the LPAR dynamic storage
reconfiguration (DSR) function. DSR allows a z/OS operating system image and z/VM
partitions to add reserved storage to their configuration if any unused storage exists.
The nondisruptive addition of storage to a z/OS and z/VM partition requires that pertinent
operating system parameters were prepared. If reserved storage is not defined to the LPAR,
the LPAR must be deactivated, the image profile changed, and the LPAR reactivated. This
process allows the extra storage resources to be available to the operating system image.
For more information about I/O drawers and PCIe I/O drawers, see 4.2, “I/O system overview”
on page 147.
The number of I/O drawers and PCIe I/O drawers that can be present in a z14 server is listed
in Table 8-4 on page 333.
Depending on the number of I/O features that are carried forward on an upgrade, the
configurator determines the number of PCIe I/O drawers.
To better use the MES for I/O capability, carefully plan the initial configuration to allow
concurrent upgrades up to the target configuration. If original I/O features are removed from
the I/O drawer, the configurator does not physically remove the drawer unless the I/O frame
slots are required to install a new PCIe I/O drawer.
If a PCIe I/O drawer is added to a z14 server and original features must be physically moved
to another PCIe I/O drawer, original card moves are disruptive.
z/VSE, z/TPF, Linux on Z, and CFCC do not provide dynamic I/O configuration support.
Although installing the new hardware is done concurrently, defining the new hardware to
these operating systems requires an IPL.
Tip: z14 servers feature a hardware system area (HSA) of 192 GB. z13 servers have a 96
GB HSA. HSA is not part of the client-purchased memory.
A staged record can be removed without installing it. An FoD record can be installed only
completely; no selective feature or partial record installation is available. The features that are
installed are merged with the CPC LICCC after activation.
An FoD record can be installed only once. If it is removed, a new FoD record is needed to
reinstall. A remove action cannot be undone.
1893 64 N/A
1898 32 N/A
1939 64 >= 1 TB
1940 32 >= 1 TB
Tip: Accurate planning and the definition of the target configuration allows you to maximize
the value of these plan-ahead features.
Adding permanent upgrades to a system through the CIU facility requires that the permanent
upgrade enablement feature (FC 9898) is installed on the system. A permanent upgrade
might change the system model capacity identifier (4xx, 5yy, 6yy, or 7nn) if more CPs are
requested, or if the capacity identifier is changed as part of the permanent upgrade. However,
it cannot change the system model. If necessary, more LPARs can be created concurrently to
use the newly added processors.
Software charges that are based on the total capacity of the system on which the software is
installed are adjusted to the new capacity after the permanent upgrade is installed. Software
products that use WLC might not be affected by the system upgrade because their charges
are based on an LPAR usage rather than system total capacity. For more information about
WLC, see 7.8, “Software licensing” on page 312.
The CIU facility process on IBM Resource Link is shown in Figure 8-5.
Figure 8-5 Permanent upgrade order example (the customer places an online permanent upgrade order at ibm.com/servers/resourcelink over the internet, with optional secondary order approval, and the order is delivered to the system through the Remote Support Facility)
The following sample sequence shows how to start an order on the IBM Resource Link:
1. Sign on to Resource Link.
2. Select Customer Initiated Upgrade from the main Resource Link page. Client and
system information that is associated with the user ID are displayed.
The order activation process for a permanent upgrade is shown in Figure 8-6. When the
LICCC is passed to the Remote Support Facility, you are notified through an email that the
upgrade is ready to be downloaded.
8.4.1 Ordering
Resource Link provides the interface that enables you to order a concurrent upgrade for a
system. You can create, cancel, or view the order, and view the history of orders that were
placed through this interface.
Configuration rules enforce that only valid configurations are generated within the limits of the
individual system. Warning messages are issued if you select invalid upgrade options. The
process allows only one permanent CIU-eligible order for each system to be placed at a time.
The initial view of the Machine profile on Resource Link is shown in Figure 8-7.
The number of CPs, ICFs, zIIPs, IFLs, SAPs, memory size, and unassigned IFLs on the
current configuration are displayed on the left side of the page.
Resource Link retrieves and stores relevant data that is associated with the processor
configuration, such as the number of CPs and installed memory cards. It allows you to select
only those upgrade options that are deemed valid by the order process. It also allows
upgrades only within the bounds of the currently installed hardware.
When the order is available for download, you receive an email that contains an activation
number. You can then retrieve the order by using the Perform Model Conversion task from the
SE, or through the Single Object Operation to the SE from an HMC.
The window provides several possible options. If you select the Retrieve and apply data
option, you are prompted to enter the order activation number to start the permanent
upgrade, as shown in Figure 8-9.
8.5.1 Overview
The capacity for CPs is expressed in millions of service units (MSUs). Capacity for speciality
engines is expressed in number of speciality engines. Capacity tokens are used to limit the
resource consumption for all types of processor capacity.
Each speciality engine type features its own tokens, and each On/Off CoD record includes
separate token pools for each capacity type. During the ordering sessions on Resource Link,
select how many tokens of each type to create for an offering record. Each engine type must
include tokens for that engine type to be activated. Capacity that has no tokens cannot be
activated.
When resources from an On/Off CoD offering record that contains capacity tokens are
activated, a billing window is started. A billing window is always 24 hours. Billing occurs at
the end of each billing window.
The resources that are billed are the highest resource usage inside each billing window for
each capacity type. An activation period is one or more complete billing windows. The
activation period is the time from the first activation of resources in a record until the end of
the billing window in which the last resource in a record is deactivated.
At the end of each billing window, the tokens are decremented by the highest usage of each
resource during the billing window. If any resource in a record does not have enough tokens
to cover usage for the next billing window, the entire record is deactivated.
Note: On/Off CoD requires that the Online CoD Buying feature (FC 9900) is installed on
the system that you want to upgrade.
The On/Off CoD to Permanent Upgrade Option is a new offering. It is an offshoot of On/Off
CoD that takes advantage of aspects of the architecture. You are given a window of
opportunity to assess capacity additions to your permanent configurations by using On/Off
CoD. If a purchase is made, the hardware On/Off CoD charges during this window (three
days or less) are waived. If no purchase is made, you are charged for the temporary use.
The resources eligible for temporary use are CPs, ICFs, zIIPs, IFLs, and SAPs. The
temporary addition of memory and I/O ports or adapters is not supported.
Unassigned PUs that are on the installed processor drawers can be temporarily and
concurrently activated as CPs, ICFs, zIIPs, IFLs, and SAPs through LICCC. You can assign
PUs up to twice the currently installed CP capacity, and up to twice the number of ICFs, zIIPs,
or IFLs. Therefore, an On/Off CoD upgrade cannot change the system model. The addition of
new processor drawers is not supported. However, the activation of an On/Off CoD upgrade
can increase the model capacity identifier (4xx, 5yy, 6yy, or 7nn).
In addition, the Capacity Provisioning Control Center must be downloaded from the host and
installed on a PC server. This application is used only to define policies. It is not required for
regular operation.
8.5.3 Ordering
Concurrently installing temporary capacity by ordering On/Off CoD is possible in the following
manner:
CP features equal to the MSU capacity of installed CPs
IFL features up to the number of installed IFLs
ICF features up to the number of installed ICFs
zIIP features up to the number of installed zIIPs
Up to 5 SAPs for model M01, 10 for an M02, 15 for an M03, 20 for an M04, and 23 for
an M05
When resources on a prepaid offering are activated, they must have enough capacity tokens
to allow the activation for an entire billing window, which is 24 hours. The resources remain
active until you deactivate them or until one resource consumes all of its capacity tokens.
Then, all activated resources from the record are deactivated.
A postpaid On/Off CoD offering record contains resource descriptions, MSUs, speciality
engines, and can contain capacity tokens that denote MSU-days and speciality engine-days.
When resources in a postpaid offering record without capacity tokens are activated, those
resources remain active until they are deactivated, or until the offering record expires. The
record usually expires 180 days after its installation.
When resources in a postpaid offering record with capacity tokens are activated, those
resources must have enough capacity tokens to allow the activation for an entire billing
window (24 hours). The resources remain active until they are deactivated, until all of the
resource tokens are consumed, or until the record expires. The record usually expires 180
days after its installation. If one capacity token type is consumed, resources from the entire
record are deactivated.
For example, for a z14 server with capacity identifier 502 (two CPs), a capacity upgrade
through On/Off CoD can be delivered in the following ways:
Add CPs of the same capacity setting. With this option, the model capacity identifier can
be changed to a 503, which adds one more CP to make it a 3-way CP. It can also be
changed to a 504, which adds two CPs, making it a 4-way CP.
Change to a different capacity level of the current CPs and change the model capacity
identifier to a 602 or 702. The capacity level of the CPs is increased, but no other CPs are
added. The 502 also can be temporarily upgraded to a 603, which increases the capacity
level and adds another processor. The capacity setting 430 does not have an upgrade
path through On/Off CoD.
Use the Large System Performance Reference (LSPR) information to evaluate the capacity
requirements according to your workload type. For more information about LSPR data for
current IBM processors, see the Large Systems Performance Reference for IBM Z page of
the IBM Systems website.
The On/Off CoD hardware capacity is charged on a 24-hour basis. A grace period is granted
at the end of the On/Off CoD day. This grace period allows up to an hour after the 24-hour
billing period to change the On/Off CoD configuration for the next 24-hour billing period or
deactivate the current On/Off CoD configuration. The times when the capacity is activated
and deactivated are maintained in the z14 server and sent back to the support systems.
If On/Off capacity is active, On/Off capacity can be added without having to return the system
to its original capacity. If the capacity is increased multiple times within a 24-hour period, the
charges apply to the highest amount of capacity active in that period.
If more capacity is added from an active record that contains capacity tokens, the system
checks whether the resource has enough capacity tokens to be active for an entire billing window (24
hours). If that criterion is not met, no extra resources are activated from the record.
If necessary, more LPARs can be activated concurrently to use the newly added processor
resources.
To participate in this offering, you must accept contractual terms for purchasing capacity
through the Resource Link, establish a profile, and install an On/Off CoD enablement feature
on the system. Later, you can concurrently install temporary capacity up to the limits in On/Off
CoD and use it for up to 180 days.
Monitoring occurs through the system call-home facility. An invoice is generated if the
capacity is enabled during the calendar month. You are billed for the use of temporary
capacity until the system is returned to the original configuration. Remove the enablement
code if the On/Off CoD support is no longer needed.
On/Off CoD orders can be pre-staged in Resource Link to allow multiple optional
configurations. The pricing of the orders is done at the time that you order them, and the
pricing can vary from quarter to quarter. Staged orders can have different pricing.
When the order is downloaded and activated, the daily costs are based on the pricing at the
time of the order. The staged orders do not have to be installed in the order sequence. If a
staged order is installed out of sequence and later a higher-priced order is staged, the daily
cost is based on the lower price.
Another possibility is to store unlimited On/Off CoD LICCC records on the SE with the same
or different capacities, which gives you greater flexibility to quickly enable needed temporary
capacity. Each record is easily identified with descriptive names, and you can select from a
list of records that can be activated.
Resource Link provides the interface to order a dynamic upgrade for a specific system. You
can create, cancel, and view the order. Configuration rules are enforced, and only valid
configurations are generated based on the configuration of the individual system. After you
complete the prerequisites, orders for the On/Off CoD can be placed. The order process uses
the CIU facility on Resource Link.
You can order temporary capacity for CPs, ICFs, zIIPs, IFLs, or SAPs. Memory and channels
are not supported on On/Off CoD. The amount of capacity is based on the amount of owned
capacity for the different types of resources. An LICCC record is established and staged to
Resource Link for this order. After the record is activated, it has no expiration date.
However, an individual record can be activated only once. Subsequent sessions require a
new order to be generated, which produces a new LICCC record for that specific order.
This test can have a maximum duration of 24 hours, which commences upon the activation of
any capacity resource that is contained in the On/Off CoD record. Activation levels of capacity
can change during the 24-hour test period. The On/Off CoD test automatically stops at the
end of the 24-hour period.
You also can perform administrative testing. No capacity is added to the system, but you can
test all the procedures and automation for the management of the On/Off CoD facility.
The example order that is shown in Figure 8-11 is an On/Off CoD order for 0% more CP
capacity (system is already at capacity level 7), and for two more ICFs and two more zIIPs.
The maximum number of CPs, ICFs, zIIPs, and IFLs is limited by the current number of
available unused PUs of the installed processor drawers. The maximum number of SAPs is
determined by the model number and the number of available PUs on the already installed
processor drawers.
To finalize the order, you must accept Terms and Conditions for the order, as shown in
Figure 8-12.
If the On/Off CoD offering record does not contain resource tokens, you must deactivate the
temporary capacity manually. Deactivation is done from the SE and is nondisruptive.
Depending on how the capacity was added to the LPARs, you might be required to perform
tasks at the LPAR level to remove it. For example, you might have to configure offline any CPs
that were added to the partition, deactivate LPARs that were created to use the temporary
capacity, or both.
On/Off CoD orders can be staged in Resource Link so that multiple orders are available. An
order can be downloaded and activated only once. If a different On/Off CoD order is required
or a permanent upgrade is needed, it can be downloaded and activated without having to
restore the system to its original purchased capacity.
In support of automation, an API is provided that allows the activation of the On/Off CoD
records. The activation is performed from the HMC, and requires specifying the order number.
With this API, automation code can be used to send an activation command along with the
order number to the HMC to enable the order.
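With appropriate credentials, such an activation can be scripted. The following Python sketch shows the general shape of that automation under stated assumptions: the host name, the /api/cod/activate path, the payload field, and the session-token header are hypothetical placeholders for illustration only, not the documented HMC interface.

#!/usr/bin/env python3
# Illustrative sketch only: automating On/Off CoD record activation.
# The endpoint path and payload field names below are hypothetical
# placeholders, not the documented HMC API.
import json
import urllib.request

HMC_HOST = "hmc.example.com"          # hypothetical HMC address
ACTIVATE_PATH = "/api/cod/activate"   # hypothetical endpoint name

def activate_on_off_cod(order_number: str, session_token: str) -> dict:
    """Send an activation request for a staged On/Off CoD order."""
    payload = json.dumps({"order-number": order_number}).encode("utf-8")
    request = urllib.request.Request(
        url=f"https://{HMC_HOST}{ACTIVATE_PATH}",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-Session-Token": session_token,   # hypothetical auth header
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    print(activate_on_off_cod(order_number="A1234567", session_token="example-token"))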
8.5.6 Termination
A client is contractually obligated to terminate the On/Off CoD right-to-use feature when a
transfer in asset ownership occurs. A client also can choose to terminate the On/Off CoD
right-to-use feature without transferring ownership.
Applying FC 9898 terminates the right to use the On/Off CoD. This feature cannot be ordered
if a temporary session is already active. Similarly, the CIU enablement feature cannot be
removed if a temporary session is active. When the CIU enablement feature is removed, the
On/Off CoD right-to-use feature is simultaneously removed. Reactivating the right-to-use
feature subjects the client to the terms and fees that apply then.
Monitoring
When you activate an On/Off CoD upgrade, an indicator is set in vital product data. This
indicator is part of the call-home data transmission, which is sent on a scheduled basis. A
time stamp is placed into the call-home data when the facility is deactivated. At the end of
each calendar month, the data is used to generate an invoice for the On/Off CoD that was
used during that month.
Software
Parallel Sysplex license charge (PSLC) clients are billed at the MSU level that is
represented by the combined permanent and temporary capacity. All PSLC products are
billed at the peak MSUs that are enabled during the month, regardless of usage. Clients with
WLC licenses are billed by product at the highest four-hour rolling average for the month. In
this instance, temporary capacity does not increase the software bill until that capacity is
allocated to LPARs and used.
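The following Python sketch illustrates the WLC metric that is described above: the highest four-hour rolling average of MSU consumption in a month. The sample interval and the synthetic data are assumptions for the example; they are not RMF output.

from collections import deque

def peak_four_hour_rolling_average(msu_samples, minutes_per_sample=5):
    """Return the highest four-hour rolling average over a list of MSU samples."""
    window_len = (4 * 60) // minutes_per_sample     # samples in a 4-hour window
    window = deque(maxlen=window_len)
    peak = 0.0
    for sample in msu_samples:
        window.append(sample)
        if len(window) == window_len:               # only complete windows count
            peak = max(peak, sum(window) / window_len)
    return peak

if __name__ == "__main__":
    # 24 hours of synthetic 5-minute samples: a quiet day with one busy spell.
    samples = [300.0] * 200 + [650.0] * 48 + [300.0] * 40
    print(peak_four_hour_rolling_average(samples))   # 650.0 for this data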
Results from the STSI instruction reflect the current permanent and temporary CPs. For more
information, see “Store System Information instruction” on page 358.
Sysplex-wide data aggregation and propagation occur in the RMF Distributed Data Server
(DDS). The RMF Common Information Model (CIM) providers and associated CIM models
publish the RMF Monitor III data.
The CPM, a function inside z/OS, retrieves critical metrics from one or more z/OS systems by
using CIM structures and protocols. The CPM communicates with local or remote SEs and
HMCs by using the Simple Network Management Protocol (SNMP).
CPM can see the resources in the individual offering records and the capacity tokens. When
CPM activates resources, a check is run to determine whether enough capacity tokens
remain for the specified resource to be activated for at least 24 hours. If insufficient tokens
remain, no resource from the On/Off CoD record is activated.
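A minimal sketch of that 24-hour check follows, assuming a simple token accounting model; the token units and the constant consumption rate are illustrative assumptions, not the product's internal bookkeeping.

def can_activate(remaining_tokens: float, tokens_per_hour: float) -> bool:
    """True if enough capacity tokens remain to run the requested resource
    for at least 24 hours."""
    return remaining_tokens >= tokens_per_hour * 24

if __name__ == "__main__":
    print(can_activate(remaining_tokens=500, tokens_per_hour=18))   # True: 432 tokens needed
    print(can_activate(remaining_tokens=400, tokens_per_hour=18))   # False: 432 tokens needed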
If a capacity token is fully consumed during an activation that is driven by the CPM, the
system deactivates the corresponding On/Off CoD record prematurely, even if the CPM
activated that record, or only parts of it. However, you do receive warning messages when
capacity tokens are close to being fully consumed.
You receive the messages five days before a capacity token is fully consumed. The five days
are based on the assumption that consumption remains constant over those five days. You must
put operational procedures in place to handle these situations. You can deactivate the record
manually, allow the deactivation to occur automatically, or replenish the specified capacity
token by using the Resource Link application.
The CPD configuration defines the CPCs and z/OS systems that are controlled by an
instance of the CPM. One or more CPCs, sysplexes, and z/OS systems can be defined into a
domain. Although sysplexes and CPCs do not have to be contained in a domain, they must
not belong to more than one domain.
Each domain has one active capacity provisioning policy. The CPCC is the CPM user
interface component. Administrators work through this interface to define the domain
configuration and provisioning policies. The CPCC is installed on a Microsoft Windows
workstation.
CPM operates in the following modes, which allow four different levels of automation:
Manual mode
Use this command-driven mode when no CPM policy is active.
Analysis mode
In analysis mode, CPM processes capacity-provisioning policies and informs the operator
when a provisioning or deprovisioning action is required according to policy criteria.
The operator determines whether to ignore the information or to manually upgrade or
downgrade the system by using the HMC, SE, or available CPM commands.
Confirmation mode
In confirmation mode, CPM processes the capacity-provisioning policies and the On/Off CoD
offering records. Every provisioning or deprovisioning action must be confirmed by the operator.
Autonomic mode
Autonomic mode is similar to confirmation mode, but no operator confirmation is required.
Several reports are available in all modes that contain information about the workload,
provisioning status, and the rationale for provisioning guidelines. User interfaces are provided
through the z/OS console and the CPCC application.
The provisioning policy defines the circumstances under which more capacity can be
provisioned (when, which, and how). The criteria include the following elements:
A time condition is when provisioning is allowed:
– Start time indicates when provisioning can begin.
– Deadline indicates that provisioning of more capacity is no longer allowed.
– End time indicates that deactivation of more capacity must begin.
A workload condition is which work qualifies for provisioning. It can have the following
parameters:
– The z/OS systems that can run eligible work.
– The importance filter indicates eligible service class periods, which are identified by
WLM importance.
– Performance Index (PI) criteria:
• Activation threshold: PI of service class periods must exceed the activation
threshold for a specified duration before the work is considered to be suffering.
• Deactivation threshold: PI of service class periods must fall below the deactivation
threshold for a specified duration before the work is considered to no longer be
suffering.
– Included service classes are eligible service class periods.
– Excluded service classes are service class periods that must not be considered.
Tip: If no workload condition is specified, the full capacity that is described in the policy
is activated and deactivated at the start and end times that are specified in the policy.
Provisioning scope is how much more capacity can be activated and is expressed in
MSUs.
The number of zIIPs must be one specification per CPC that is part of the CPD and is
specified in MSUs.
The maximum provisioning scope is the maximum extra capacity that can be activated for
all the rules in the CPD.
The provisioning rule states that, in the specified time interval, if the specified workload is
behind its objective, up to the defined extra capacity can be activated.
The rules and conditions are named and stored in the Capacity Provisioning Policy.
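The following Python sketch restates the policy elements that are described above (time condition, workload condition with PI thresholds, and provisioning scope) as plain data classes. The field names are illustrative assumptions; they do not mirror the CPCC policy schema, and the duration handling of the PI thresholds is omitted for brevity.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TimeCondition:
    start_time: str     # when provisioning can begin
    deadline: str       # after this point, no more capacity is provisioned
    end_time: str       # deactivation of the extra capacity must begin

@dataclass
class WorkloadCondition:
    systems: List[str]                  # z/OS systems that can run eligible work
    importance_filter: int              # WLM importance that marks eligible periods
    pi_activation_threshold: float      # PI must exceed this before work is "suffering"
    pi_deactivation_threshold: float    # PI must fall below this to stop suffering
    included_service_classes: List[str] = field(default_factory=list)
    excluded_service_classes: List[str] = field(default_factory=list)

@dataclass
class ProvisioningRule:
    time_condition: TimeCondition
    workload_condition: WorkloadCondition
    max_extra_msu: int                  # provisioning scope for this rule

def work_is_suffering(pi: float, rule: ProvisioningRule) -> bool:
    """Simplified PI check: work qualifies for more capacity when its
    performance index exceeds the activation threshold."""
    return pi > rule.workload_condition.pi_activation_threshold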
For more information about z/OS Capacity Provisioning functions, see z/OS MVS Capacity
Provisioning User’s Guide, SA33-8299.
The provisioning management routines can interrogate the installed offerings, their content,
and the status of the content of the offering. To avoid the decrease in capacity, create only
one On/Off CoD offering on the system by specifying the maximum allowable capacity. The
CPM can then, when an activation is needed, activate a subset of the contents of the offering
sufficient to satisfy the demand. If more capacity is needed later, the Provisioning Manager
can activate more capacity up to the maximum allowed increase.
Having an unlimited number of offering records pre-staged on the SE hard disk is possible.
Changing the content of the offerings (if necessary) is also possible.
Remember: The CPM controls capacity tokens for the On/Off CoD records. In a situation
where a capacity token is used, the system deactivates the corresponding offering record.
Therefore, you must prepare routines for catching the warning messages about capacity
tokens being used, and have administrative procedures in place for such a situation.
The messages from the system begin five days before a capacity token is fully used. To
avoid capacity records being deactivated in this situation, replenish the necessary capacity
tokens before they are used.
Important: CPE is for planned replacement capacity only, and cannot be used for peak
workload management.
The feature codes are calculated automatically when the CPE offering is configured. Whether
the eConfig tool or Resource Link is used, a target configuration must be ordered. The
configuration consists of a model identifier, several specialty engines, or both. Based on the
target configuration, several feature codes from the list are calculated automatically, and a
CPE offering record is constructed.
CPE is intended to replace capacity that is lost within the enterprise because of a planned
event, such as a facility upgrade or system relocation.
Note: CPE is intended for short duration events that last a maximum of three days.
After each CPE record is activated, you can access dormant PUs on the system for which you
have a contract, as described by the feature codes. Processor units can be configured in any
combination of CP or specialty engine types (zIIP, SAP, IFL, and ICF). At the time of CPE
activation, the contracted configuration is activated. The general rule of two zIIPs for each
configured CP is enforced for the contracted configuration.
The processors that can be activated by CPE come from the available unassigned PUs on
any installed processor drawer. CPE features can be added to a z14 server nondisruptively. A
one-time fee is applied for each CPE event. This fee depends on the contracted configuration
and its resulting feature codes. Only one CPE contract can be ordered at a time.
The base system configuration must have sufficient memory and channels to accommodate
the potential requirements of the large CPE-configured system. Ensure that all required
functions and resources are available on the system where CPE is activated. These functions
and resources include CF LEVELs for coupling facility partitions, memory, cryptographic
functions, and connectivity capabilities.
The CPE configuration is activated temporarily and provides more PUs in addition to the
system’s original, permanent configuration. The number of extra PUs is predetermined by the
number and type of feature codes that are configured, as described by the feature codes. The
number of PUs that can be activated is limited by the unused capacity that is available on the
system. Consider the following points:
A model M03 with 26 CPs and no IFLs or ICFs has 79 unassigned PUs available.
A model M04 with 38 CPs, 1 IFL, and 1 ICF has 101 unassigned PUs available.
When the planned event ends, the system must be returned to its original configuration. You
can deactivate the CPE features at any time before the expiration date.
A CPE contract must be in place before the special code that enables this capability can be
installed on the system. CPE features can be added to a z14 server nondisruptively.
CBU is the quick, temporary activation of PUs and is available in the following options:
For up to 90 contiguous days, for a loss of processing capacity as a result of an
emergency or disaster recovery situation.
For 10 days, for testing your disaster recovery procedures or running the production
workload. This option requires that IBM Z workload capacity that is equivalent to the CBU
upgrade capacity is shut down or otherwise made unusable during the CBU test.4
Important: CBU is for disaster and recovery purposes only. It cannot be used for peak
workload management or for a planned event.
8.7.1 Ordering
The CBU process allows for CBU to activate CPs, ICFs, zIIPs, IFLs, and SAPs. To use the
CBU process, a CBU enablement feature (FC 9910) must be ordered and installed. You must
order the quantity and type of PU that you require by using the following feature codes:
FC 6805: More CBU test activations
FC 6817: Total CBU years ordered
FC 6818: CBU records that are ordered
FC 6820: Single CBU CP-year
FC 6821: 25 CBU CP-year
FC 6822: Single CBU IFL-year
FC 6823: 25 CBU IFL-year
FC 6824: Single CBU ICF-year
FC 6825: 25 CBU ICF-year
FC 6828: Single CBU zIIP-year
FC 6829: 25 CBU zIIP-year
FC 6830: Single CBU SAP-year
FC 6831: 25 CBU SAP-year
FC 6832: CBU replenishment
The CBU entitlement record (FC 6818) contains an expiration date that is established at the
time of the order. This date depends on the quantity of CBU years (FC 6817). You can extend
your CBU entitlements through the purchase of more CBU years.
The number of FC 6817 per instance of FC 6818 remains limited to five. Fractional years are
rounded up to the nearest whole integer when calculating this limit. If there are two years and
eight months before the expiration date at the time of the order, the expiration date can be
extended by no more than two years. One test activation is provided for each CBU year that is
added to the CBU entitlement record.
FC 6805 allows for ordering more tests in increments of one. The total number of tests that is
allowed is 15 for each FC 6818.
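The entitlement arithmetic above can be illustrated with a small Python sketch. The cap of five CBU years per record, the round-up of fractional years, and the cap of 15 tests per record come from the text; the function names are illustrative only, not an ordering tool.

import math

MAX_CBU_YEARS_PER_RECORD = 5
MAX_TESTS_PER_RECORD = 15

def max_extension_years(months_to_expiration: int) -> int:
    """Years by which the expiration date can still be extended."""
    remaining_years_rounded_up = math.ceil(months_to_expiration / 12)
    return max(0, MAX_CBU_YEARS_PER_RECORD - remaining_years_rounded_up)

def total_tests_allowed(cbu_years: int, extra_tests_fc6805: int) -> int:
    """One test per CBU year plus FC 6805 add-ons, capped at 15 per record."""
    return min(MAX_TESTS_PER_RECORD, cbu_years + extra_tests_fc6805)

if __name__ == "__main__":
    # Two years and eight months (32 months) left: extend by at most two years.
    print(max_extension_years(32))      # 2
    print(total_tests_allowed(5, 3))    # 8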
4
All new CBU contract documents contain new CBU test terms to allow execution of production workload during
a CBU test. CBU clients must sign the IBM Client Agreement Amendment for IBM Z Capacity Backup Upgrade Tests
(US form #Z125-8145).
However, the ordering system allows for over-configuration in the order. You can order up to
170 CBU features regardless of the current configuration. At activation, only the capacity that
is installed can be activated, and you can decide to activate only a subset of the CBU features
that are ordered for the system.
Subcapacity makes a difference in the way that the CBU features are configured. On the
full-capacity models, the CBU features indicate the amount of extra capacity that is needed. If
the amount of necessary CBU capacity is equal to four CPs, the CBU configuration is four
CBU CPs.
The subcapacity models feature multiple capacity settings of 4xx, 5yy, or 6yy. The standard
models use the capacity setting 7nn. The number of CBU CPs must be equal to or greater
than the number of CPs in the base configuration.
All the CPs in the CBU configuration must have the same capacity setting. For example, if the
base configuration is a 2-way 402, providing a CBU configuration of a 4-way of the same
capacity setting requires two CBU feature codes. If the required CBU capacity changes the
capacity setting of the CPs, going from model capacity identifier 402 to a CBU configuration
of a 4-way 504 requires four CBU feature codes with a capacity setting of 5yy.
If the capacity setting of the CPs is changed, more CBU features are required, not more
physical PUs. Therefore, your CBU contract requires more CBU features when the capacity
setting of the CPs is changed.
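A minimal sketch of the counting rule that the examples above imply: with an unchanged capacity setting, only the added CPs need CBU features; with a changed capacity setting, every CP in the target configuration needs a CBU feature at the new setting. The function is illustrative only, not an ordering tool.

def cbu_cp_features_required(base_cps: int, base_setting: str,
                             target_cps: int, target_setting: str) -> int:
    """Number of CBU CP feature codes needed for the target configuration."""
    if target_setting == base_setting:
        return target_cps - base_cps    # only the added CPs need CBU features
    return target_cps                   # setting change: every target CP needs one

if __name__ == "__main__":
    print(cbu_cp_features_required(2, "4xx", 4, "4xx"))   # 2: 2-way 402 to 4-way 402
    print(cbu_cp_features_required(2, "4xx", 4, "5yy"))   # 4: 2-way 402 to 4-way 504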
CBU can add CPs through LICCC only, and the z14 server must have the correct number of
processor drawers that are installed to allow the required upgrade. CBU can change the
model capacity identifier to a higher value than the base setting (4xx, 5yy, or 6yy), but does
not change the system model. The CBU feature cannot decrease the capacity setting.
A CBU contract must be in place before the special code that enables this capability can be
installed on the system. CBU features can be added to a z14 server nondisruptively. For each
system enabled for CBU, the authorization to use CBU is available for a 1 - 5-year period.
The alternative configuration is activated temporarily and provides more capacity than the
system’s original, permanent configuration. At activation time, determine the capacity that
you require for that situation. You can decide to activate only a subset of the capacity that is
specified in the CBU contract.
The base system configuration must have sufficient memory and channels to accommodate
the potential requirements of the large CBU target system. Ensure that all required functions
and resources are available on the backup systems. These functions include CF LEVELs for
coupling facility partitions, memory, and cryptographic functions, and connectivity capabilities.
Planning: CBU for processors provides a concurrent upgrade. This upgrade can result in
more enabled processors, changed capacity settings that are available to a system
configuration, or both. You can activate a subset of the CBU features that are ordered for
the system. Therefore, more planning and tasks are required for nondisruptive logical
upgrades. For more information, see “Guidelines to avoid disruptive upgrades” on
page 360.
For more information, see the IBM Z Capacity on Demand User’s Guide, SC28-6846.
CBU activation
CBU is activated from the SE, either directly or from the HMC by using Single Object
Operations to the SE, with the Perform Model Conversion task, or through automation by
using the API on the SE or the HMC. During a real disaster, use the Activate CBU option to
activate the 90-day period.
Image upgrades
After CBU activation, the z14 server can have more capacity, more active PUs, or both. The
extra resources go into the resource pools and are available to the LPARs. If the LPARs must
increase their share of the resources, the LPAR weight can be changed or the number of
logical processors can be concurrently increased by configuring reserved processors online.
The operating system must be able to configure more processors online concurrently. If
necessary, more LPARs can be created to use the newly added capacity.
CBU deactivation
To deactivate the CBU, the extra resources must be released from the LPARs by the
operating systems. In some cases, this process is a matter of varying the resources offline. In
other cases, it can mean shutting down operating systems or deactivating LPARs. After the
resources are released, the same facility on the HMC/SE is used to turn off CBU. To
deactivate CBU, select the Undo temporary upgrade option from the Perform Model
Conversion task on the SE.
CBU testing
Test CBUs are provided as part of the CBU contract. CBU is activated from the SE by using
the Perform Model Conversion task. Select the test option to start a 10-day test period. A
standard contract allows one test per CBU year. However, you can order more tests in
increments of one up to a maximum of 15 for each CBU order.
The test CBU must be deactivated in the same way as the regular CBU. Failure to deactivate
the CBU feature before the expiration date can cause the system to degrade gracefully back
to its original configuration. The system does not deactivate dedicated engines or the last of
in-use shared engines.
CBU example
An example of a capacity backup operation is 12 CBU features that are installed on a backup
model M02 with model capacity identifier 708. When a production model M01 with model
capacity identifier 708 experiences an unplanned outage, the backup system can be
temporarily upgraded from model capacity identifier 708 to 720. This upgrade allows the
backup system to take over the workload from the failed production system.
You also can configure systems to back up each other. For example, if you use two models of
M01 model capacity identifier 705 for the production environment, each can have five or more
features installed. If one system suffers an outage, the other one uses a temporary upgrade
to recover the approximate original total capacity.
The GDPS service is for z/OS only, or for z/OS in combination with Linux on Z.
z14 servers allow concurrent upgrades, which means that dynamically adding capacity to the
system is possible. If the operating system images that run on the upgraded system do not
require disruptive tasks to use the new capacity, the upgrade is also nondisruptive. This type
of upgrade means that a power-on reset (POR), LPAR deactivation, and IPL do not have to
occur.
If the concurrent upgrade is intended to satisfy the need for more operating system images,
more LPARs can be created concurrently on the z14 system. These LPARs include all
resources that are needed. These extra LPARs can be activated concurrently.
These enhanced configuration options are available through the separate HSA, which was
introduced on the zEnterprise 196.
Linux operating systems, in general, cannot add more resources concurrently. However,
Linux, and other types of virtual machines that run under z/VM, can benefit from the z/VM
capability to nondisruptively configure more resources online (processors and I/O).
With z/VM, Linux guests can manipulate their logical processors by using the Linux CPU
hotplug daemon. The daemon can start and stop logical processors that are based on the
Linux load average value. The daemon is available in Linux SLES 10 SP2 and later, and in
Red Hat Enterprise Linux (RHEL) V5R4 and up.
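The following Python sketch mimics the kind of decision that the daemon makes, as described above: adjust the number of logical processors according to the load average. The thresholds and the one-CPU-at-a-time rule are assumptions for illustration; the real daemon is driven by its own configuration rules.

import os

def desired_cpu_count(load_average: float, online_cpus: int,
                      cpu_min: int = 1, cpu_max: int = 8) -> int:
    """Suggest a logical CPU count from the current 1-minute load average."""
    if load_average > online_cpus + 0.8 and online_cpus < cpu_max:
        return online_cpus + 1      # overloaded: plug one more logical CPU
    if load_average < online_cpus - 1.2 and online_cpus > cpu_min:
        return online_cpus - 1      # underused: unplug one logical CPU
    return online_cpus              # keep the current configuration

if __name__ == "__main__":
    load1, _, _ = os.getloadavg()
    online = os.cpu_count() or 1
    print(desired_cpu_count(load1, online))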
8.8.1 Components
The following components can be added, depending on the considerations that are described
in this section:
Processors
Memory
I/O
Cryptographic adapters
Special features
Processors
CPs, ICFs, zIIPs, IFLs, and SAPs can be added concurrently to a z14 server if unassigned
PUs are available on any installed processor drawer. The number of zIIPs cannot exceed
twice the number of CPs plus unassigned CPs. More processor drawers can also be installed
concurrently, which allows further processor upgrades.
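One reading of the zIIP rule above can be expressed as a small check; the grouping of CPs and unassigned CPs in the formula is an assumption of this sketch.

def ziip_count_is_valid(ziips: int, cps: int, unassigned_cps: int) -> bool:
    """Check the stated zIIP rule: zIIPs may not exceed twice the number of
    CPs plus unassigned CPs (read here as twice their sum)."""
    return ziips <= 2 * (cps + unassigned_cps)

if __name__ == "__main__":
    print(ziip_count_is_valid(ziips=8, cps=4, unassigned_cps=0))   # True  (limit is 8)
    print(ziip_count_is_valid(ziips=9, cps=4, unassigned_cps=0))   # False (limit is 8)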
If necessary, more LPARs can be created concurrently to use the newly added processors.
The Coupling Facility Control Code (CFCC) can also configure more processors online to
coupling facility LPARs by using the CFCC image operations window.
Memory
Memory can be added concurrently up to the physical installed memory limit. More processor
drawers can also be installed concurrently, which allows further memory upgrades by LICCC,
and enables memory capacity on the new processor drawers.
By using previously defined reserved memory, z/OS operating system images and z/VM
partitions can dynamically configure more memory online. This process allows
nondisruptive memory upgrades. Linux on Z supports Dynamic Storage Reconfiguration.
I/O
Dynamic I/O configurations are supported by certain operating systems (z/OS and z/VM),
which allows nondisruptive I/O upgrades. However, having dynamic I/O reconfiguration on a
stand-alone coupling facility system is not possible because no operating system with that
capability is running on the system.
Cryptographic adapters
Crypto Express6S features can be added concurrently if all the required infrastructure is in
the configuration.
Special features
Special features, such as zHyperlink, Coupling Express LR, zEnterprise Data Compression
(zEDC) Express, and RoCE features, also can be added concurrently if all infrastructure is
available in the configuration.
Enabling and using the extra processor capacity is transparent to most applications.
However, certain programs, such as ISV products, depend on processor model-related
information. Consider the effect on the software that is running on a z14 server when you
perform any of these configuration upgrades.
Processor identification
The following instructions are used to obtain processor information:
Store System Information (STSI) instruction
STSI reports the processor model and model capacity identifier for the base configuration,
and for any other configuration changes through temporary upgrade actions. It fully
supports the concurrent upgrade functions, and is the preferred way to request processor
information.
Store CPU ID (STIDP) instruction
STIDP is provided for compatibility with earlier systems.
The model capacity identifier contains the base capacity, On/Off CoD, and CBU. The Model
Permanent Capacity Identifier and the Model Permanent Capacity Rating contain the base
capacity of the system. The Model Temporary Capacity Identifier and Model Temporary
Capacity Rating contain the base capacity and On/Off CoD.
When issued from an operating system that is running as a guest under z/VM, the result
depends on whether the SET CPUID command was used. Consider the following points:
Without the use of the SET CPUID command, bits 0 - 7 are set to FF by z/VM. However, the
remaining bits are unchanged, which means that they are exactly as they were without
running as a z/VM guest.
If the SET CPUID command is issued, bits 0 - 7 are set to FF by z/VM and bits 8 - 31 are set
to the value that is entered in the SET CPUID command. Bits 32 - 63 are the same as they
were without running as a z/VM guest.
The possible output that is returned to the issuing program for an operating system that runs
as a guest under z/VM is listed in Table 8-7.
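The bit handling that is described in the list above can be illustrated with a short Python sketch. Bit positions follow the IBM convention (bit 0 is the most significant bit of the 64-bit field); the 64-bit layout and the function name are illustrative assumptions.

from typing import Optional

def guest_cpuid(host_cpuid: int, set_cpuid_value: Optional[int] = None) -> int:
    """CPU ID that a z/VM guest observes, as a 64-bit integer."""
    result = host_cpuid & 0x00FFFFFFFFFFFFFF            # clear bits 0-7 (top byte)
    result |= 0xFF << 56                                 # bits 0-7 are forced to X'FF'
    if set_cpuid_value is not None:
        result &= ~(0xFFFFFF << 32)                      # clear bits 8-31
        result |= (set_cpuid_value & 0xFFFFFF) << 32     # insert the SET CPUID value
    return result & 0xFFFFFFFFFFFFFFFF                   # bits 32-63 stay unchanged

if __name__ == "__main__":
    host = 0x00123456789ABCDE
    print(hex(guest_cpuid(host)))              # 0xff123456789abcde
    print(hex(guest_cpuid(host, 0xABCDEF)))    # 0xffabcdef789abcde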
You can minimize the need for these outages by carefully planning and reviewing “Guidelines
to avoid disruptive upgrades” on page 360.
One major client requirement was to eliminate the need for a client authorization connection
to the IBM Resource Link system when activating an offering. This requirement is met by the
z196, zEC12, z13, and z14 servers.
After the offerings are installed on the z14 server, they can be activated at any time at the
client’s discretion. No intervention by IBM or IBM personnel is necessary. In addition, the
activation of CBU does not require a password.
The z14 server can have up to eight offerings that are installed at the same time, with the
limitation that only one of them can be an On/Off CoD offering. The others can be any
combination. The installed offerings can be activated fully or partially, and in any sequence
and any combination. The offerings can be controlled manually through command interfaces
on the HMC, or programmatically through a number of APIs. IBM applications, ISV programs,
and client-written applications can control the usage of the offerings.
Resource usage (and therefore, financial exposure) can be controlled by using capacity
tokens in the On/Off CoD offering records.
The CPM is an example of an application that uses the CoD APIs to provision On/Off CoD
capacity that is based on the requirements of the workload. The CPM cannot control other
offerings.
For more information about any of the topics in this chapter, see IBM Z Capacity on Demand
User’s Guide, SC28-6943.
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
The key objectives, in order of priority, are to ensure data integrity and computational
integrity, reduce or eliminate unscheduled outages, reduce scheduled outages, reduce
planned outages, and reduce the number of Repair Actions.
The following overriding RAS requirements are guiding principles, as shown in Figure 9-1:
Inclusion of existing (or equivalent) RAS characteristics from previous generations.
Learn from current field issues and address the deficiencies.
Understand the trend in technology reliability (hard and soft) and ensure that the RAS
design points are sufficiently robust.
Invest in RAS design enhancements (hardware and firmware) that provide IBM Z and
client-valued differentiation.
1 Key in storage error uncorrected: Indicates that the hardware cannot repair a storage key that was in error.
Independent channel recovery with replay buffers on all interfaces allows recovery of a single
DIMM channel, while other channels remain active. Further redundancies are incorporated in
I/O pins for clock lines to main memory, which eliminates the loss of memory clocks because
of connector (pin) failure. The following RAS enhancements reduce service complexity:
Continued use of RAIM ECC.
No cascading of memory DIMM to simplify the recovery design.
Replay buffer for hardware retry on soft errors on the main memory interface.
Redundant I/O pins for clock lines to main memory.
Note: When this feature is ordered, a corequisite feature, the Plan Ahead for Line
Cords feature (FC 2000), is automatically selected.
The new IBM Z Channel Subsystem Function performs periodic polling from the channel
to the end points for the logical paths that are established and reduces the number of
useless Repair Actions (RAs).
The RDP data history is used to validate Predictive Failure Algorithms and identify Fibre
Channel Links with degrading signal strength before errors start to occur. The new Fibre
Channel Extended Link Service (ELS) retrieves signal strength.
FICON Dynamic Routing
FICON Dynamic Routing (FIDR) enables the use of storage area network (SAN) dynamic
routing policies in the fabric. With the z14 server, FICON channels are no longer restricted
to the use of static routing policies for inter-switch links (ISLs) for cascaded FICON
directors.
FICON Dynamic Routing dynamically changes the routing between the channel and
control unit based on the Fibre Channel Exchange ID. Each I/O operation has a unique
exchange ID. FIDR is designed to support static SAN routing policies and dynamic routing
policies.
FICON Dynamic Routing can help clients reduce costs by providing the following features:
– Share SANs between their FICON and FCP traffic.
– Improve performance because SAN dynamic routing policies better use all the
available ISL bandwidth through higher use of the ISLs.
– Simplify management of their SAN fabrics. Static routing policies assign different ISL
routes with each power-on reset (POR), which makes SAN fabric performance difficult
to predict.
The difference between scheduled outages and planned outages might not be obvious. The
general consensus is that scheduled outages occur sometime soon, in a time frame of
approximately two weeks.
Planned outages are outages that are planned well in advance and go beyond this
approximate two-week time frame. This chapter does not distinguish between scheduled and
planned outages.
Preventing unscheduled, scheduled, and planned outages has been addressed by the IBM Z
system design for many years.
z14 servers introduce a fixed-size HSA of 192 GB. This size helps eliminate pre-planning
requirements for HSA and provides the flexibility to dynamically update the configuration. You
can perform the following tasks dynamically:2
Add a logical partition (LPAR).
Add a logical channel subsystem (LCSS).
Add a subchannel set.
Add a logical CP to an LPAR.
Add a cryptographic coprocessor.
Remove a cryptographic coprocessor.
Enable I/O connections.
Swap processor types.
Add memory.
Add a physical processor.
By addressing the elimination of planned outages, the following tasks also are possible:
Concurrent driver upgrades
Concurrent and flexible customer-initiated upgrades
2
Some pre-planning considerations might exist. For more information, see Chapter 8, “System upgrades” on
page 315.
The EDA procedure and careful planning help ensure that all the resources are still available
to run critical applications in an (n-1) drawer configuration. This process allows you to avoid
planned outages. Consider the flexible memory option to provide more memory resources
when you are replacing a drawer. For more information about flexible memory, see 2.4.7,
“Flexible Memory Option” on page 63.
To minimize the effect on current workloads, ensure that sufficient inactive physical resources
exist on the remaining drawers to complete a drawer removal. Also, consider deactivating
non-critical system images, such as test or development LPARs. After you stop these
non-critical LPARs and free their resources, you might find sufficient inactive resources to
contain critical workloads while completing a drawer replacement.
The following configurations especially enable the use of the EDA function. These z14 models
need enough spare capacity to cover the resources of a fenced or isolated drawer. This
requirement imposes the following limits on the number of client-owned PUs that can be
activated when one drawer within a model is fenced (see the sketch after this list):
A maximum of 69 client PUs are configured on the M02.
A maximum of 105 client PUs are configured on the M03.
A maximum of 141 client PUs are configured on the M04.
A maximum of 170 client PUs are configured on the M05.
No special feature codes are required for PU and model configuration.
For all z14 models, five SAPs are in every drawer (model M05 has 23 total).
The flexible memory option delivers physical memory so that 100% of the purchased
memory increment can be activated even when one drawer is fenced.
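The following small Python sketch applies the drawer-fencing limits that are listed above; the dictionary values come directly from the list, and everything else is illustrative.

# Maximum client-owned PUs that can stay active with one drawer fenced, per model.
EDA_CLIENT_PU_LIMIT = {"M02": 69, "M03": 105, "M04": 141, "M05": 170}

def eda_capacity_ok(model: str, active_client_pus: int) -> bool:
    """True if the active client PUs still fit when one drawer is fenced."""
    return active_client_pus <= EDA_CLIENT_PU_LIMIT[model]

if __name__ == "__main__":
    print(eda_capacity_ok("M03", 98))   # True: fits within the 105-PU limit
    print(eda_capacity_ok("M02", 80))   # False: exceeds the 69-PU limit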
The I/O connectivity must also support drawer removal. Most of the paths to the I/O feature
redundant I/O interconnect support in the I/O infrastructure (drawers), which enables
connections through multiple fanout cards.
If sufficient resources are not present on the remaining drawers, certain non-critical LPARs
might need to be deactivated. One or more CPs, specialty engines, or storage might need to
be configured offline to reach the required level of available resources. Plan to address these
possibilities to help reduce operational errors.
Include the planning as part of the initial installation and any follow-on upgrade that modifies
the operating environment. A client can use the Resource Link machine information report to
determine the number of drawers, active PUs, memory configuration, and channel layout.
If the z14 server is installed, click Prepare for Enhanced Drawer Availability in the Perform
Model Conversion window of the EDA process on the Hardware Management Console
(HMC). This task helps you determine the resources that are required to support the removal
of a drawer with acceptable degradation to the operating system images.
The EDA process determines which resources, including memory, PUs, and I/O paths, are
free to allow for the removal of a drawer. You can run this preparation on each drawer to
determine which resource changes are necessary. Use the results as input in the planning
stage to help identify critical resources.
With this planning information, you can examine the LPAR configuration and workload
priorities to determine how resources might be reduced and still allow the drawer to be
concurrently removed.
When you perform the review, document the resources that can be made available if the EDA
is used. The resources on the drawers are allocated during a POR of the system and can
change after that process. Perform a review when changes are made to z14 servers, such as
adding drawers, CPs, memory, or channels. Also, perform a review when workloads are
added or removed, or if the HiperDispatch feature was enabled and disabled since the last
time you performed a POR.
For the EDA process, this phase is the preparation phase. It is started from the SE directly, or
from the HMC by using the Single Object Operations option, on the Perform Model Conversion
window from the CPC configuration task list, as shown in Figure 9-4.
To maximize the PU availability option, ensure that sufficient inactive physical resources are
on the remaining drawers to complete a drawer removal.
Memory availability
Memory resource availability for reallocation or deactivation depends on the following factors:
Physically installed memory
Image profile memory allocations
Amount of memory that is enabled through LICCC
Flexible memory option
Virtual Flash Memory if enabled and configured
For more information, see 2.6.2, “Enhanced drawer availability” on page 69.
Preparation: The preparation step does not reallocate any resources. It is used only to
record client choices and produce a configuration file on the SE that is used to run the
concurrent drawer replacement operation.
The preparation step can be done in advance. However, if any changes to the configuration
occur between the preparation and the physical removal of the drawer, you must rerun the
preparation phase.
The process can be run multiple times because it does not move any resources. To view the
results of the last preparation operation, click Display Previous Prepare Enhanced Drawer
Availability Results from the Perform Model Conversion window in the SE.
The preparation step can be run without performing a drawer replacement. You can use it to
dynamically adjust the operational configuration for drawer repair or replacement before IBM
SSR activity. The Perform Model Conversion window in which you click Prepare for Enhanced
Drawer Availability is shown in Figure 9-4 on page 378.
The system verifies the resources that are required for the removal, determines the required
actions, and presents the results for review. Depending on the configuration, the task can take
from a few seconds to several minutes.
The preparation step determines the readiness of the system for the removal of the targeted
drawer. The configured processors and the memory in the selected drawer are evaluated
against unused resources that are available across the remaining drawers. The system also
analyzes I/O connections that are associated with the removal of the targeted drawer for any
single path I/O connectivity.
If insufficient resources are available, the system identifies the conflicts so that you can free
other resources.
Preparation tabs
The results of the preparation are presented for review in a tabbed format. Each tab indicates
conditions that prevent the EDA option from being run. Tabs are for processors, memory, and
various single path I/O conditions. The following tab selections are available:
Processors
Memory
Single I/O
Single Domain I/O
Single Alternate Path I/O
Only the tabs that feature conditions that prevent the drawer from being removed are
displayed. Each tab indicates the specific conditions and possible options to correct them.
For example, the preparation identifies single I/O paths that are associated with the removal
of the selected drawer. These paths must be varied offline to perform the drawer removal.
Important: Consider the results of these changes relative to the operational environment.
Understand the potential effect of making such operational changes. Changes to the PU
assignment, although technically correct, can result in constraints for critical system
images. In certain cases, the solution might be to defer the reassignments to another time
that has less effect on the production system images.
After you review the reassignment results and make any necessary adjustments, click OK.
The final results of the reassignment, which include the changes that are made as a result of
the review, are displayed (see Figure 9-7). These results are the assignments when the
drawer removal phase of the EDA is completed.
By understanding the system configuration and the LPAR allocation for memory, PUs, and
I/O, you can make the best decision about how to free the necessary resources to allow for
drawer removal.
The preparation process can be run multiple times to ensure that all conditions are met. It
does not reallocate any resources; instead, it produces only a report. The resources are not
reallocated until the Perform Drawer Removal process is started.
Review the results. The result of the preparation task is a list of resources that must be made
available before the drawer replacement can occur.
3 That is, if any native PCIe features are installed on the system.
Reserved storage: If you plan to use the EDA function with z/OS LPARs, set up
reserved storage and an RSU value. Use the RSU value to specify the number of
storage units that are to be kept free of long-term fixed storage allocations. This
configuration allows for storage elements to be varied offline.
When correctly configured, z14 servers support concurrently activating a selected new LIC
Driver level. Concurrent activation of the selected new LIC Driver level is supported only at
specific released sync points. Concurrently activating a selected new LIC Driver level
anywhere in the maintenance stream is not possible. Certain LIC updates do not allow a
concurrent update or upgrade.
The EDM function does not eliminate the need for planned outages for driver-level upgrades.
Upgrades might require a system level or a functional element scheduled outage to activate
the new LIC. The following circumstances require a scheduled outage:
Specific complex code changes might dictate a disruptive driver upgrade. You are alerted
in advance so that you can plan for the following changes:
– Design data or hardware initialization data fixes
– CFCC release level change
OSA CHPID code changes might require PCHID Vary OFF/ON to activate new code.
Crypto code changes might require PCHID Vary OFF/ON to activate new code.
Note: zUDX clients should contact their User Defined Extensions (UDX) provider
before installing Microcode Change Levels (MCLs). Any changes to Segments 2 and 3
from a previous MCL level might require a change to the client's UDX. Attempting to
install an incompatible UDX at this level results in a Crypto checkstop.
Consider the following points for managing native PCIe adapters microcode levels:
Updates to the Resource Group require all native PCIe adapters that are installed in that
RG to be offline. For more information about the requirement, see Appendix C, “Native
Peripheral Component Interconnect Express” on page 469.
Updates to the native PCIe adapter require the adapter to be offline. If the adapter is not
defined, the MCL session automatically installs the maintenance that is related to the
adapter.
Note: Other adapter types, such as FICON Express, OSA Express, and Crypto Express
that are installed in the PCIe I/O drawer are not affected because they are not managed by
the Resource Groups.
The front, rear, and top view of the PCIe I/O drawer and the Resource Group assignment by
card slot are shown in Figure 9-8. All PCIe I/O drawers that are installed in the system feature
the same Resource Group assignment.
The adapter locations and PCHIDs for the four Resource Groups are listed in Table 9-2.
Table 9-2 Adapter locations and PCHIDs for the four Resource Groups
Resource group   Slot position   Adapter locations     PCHIDs
RG1              Front left      Z22BLG01-04,06-09     100-11F
                                 Z15BLG01-04,06-09     180-19F
                                 Z08BLG01-04,06-09     200-21F
                                 Z01BLG01-04,06-09     280-29F
                                 A32BLG01-04,06-09     300-31F
RG2              Rear right      Z22BLG30-33,35-38     160-17F
                                 Z15BLG30-33,35-38     1E0-1FF
                                 Z08BLG30-33,35-38     260-27F
                                 Z01BLG30-33,35-38     2E0-2FF
                                 A32BLG30-33,35-38     360-37F
RG3              Front right     Z22BLG11-14,16-19     120-13F
                                 Z15BLG11-14,16-19     1A0-1AF
                                 Z08BLG11-14,16-19     220-22F
                                 Z01BLG11-14,16-19     2A0-2BF
                                 A32BLG11-14,16-19     320-33F
RG4              Rear left       Z22BLG20-23,25-28     140-15F
                                 Z15BLG20-23,25-28     1C0-1DF
                                 Z08BLG20-23,25-28     240-25F
                                 Z01BLG20-23,25-28     2C0-2DF
                                 A32BLG20-23,25-28     340-35F
z14 servers introduced support to concurrently4 activate an MCL on an OSA-ICC channel,
which improves availability and simplifies firmware maintenance. The OSD channels
already feature this capability.
Failover: The primary HMC and its alternative must be connected to the same LAN
segment. This configuration allows the alternative HMC to take over the IP address of
the primary HMC during failover processing.
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
The following options are available for physically installing the server:
Air or water cooling
Installation on a raised floor or non-raised floor
I/O and power cables can exit under the raised floor or off the top of the server frames
A high-voltage DC power supply or the usual AC power supply
For more information about physical planning, see IBM 3906 Installation Manual for Physical
Planning, GC28-6965.
The water-cooling feature can be installed only on a raised floor because water hoses are
attached to the server from underneath the raised floor. Standard exit for power and I/O
cables is also on the bottom of the server frames unless the following top exit features are
installed:
Top Exit I/O Cabling feature code (FC 7942)
Top Exit cord DC (FC 8948)
Top Exit cord Low Voltage, 3 phase (FC 8949)
Top Exit cord HiV, 3 phase (FC 8951)
These options allow I/O cables and power cables to exit through the top of the server into
overhead cabling rails.
The new rear doors are all the same part. In the installation planning meeting, you can decide
in which orientation the IBM Service Support Representative (IBM SSR) should install the
covers. For more information about the vectored orientation, see IBM 3906 Installation
Manual for Physical Planning, GC28-6965 or contact your IBM SSR.
Power requirements
The system operates with two fully redundant power supplies. One is in the front, and the
other is in the rear of the Z frame. Each power supply has one or two power cords. The
number of power cords that is required depends on the system configuration. The total loss of
one power supply has no effect on system operation.
Systems with two power cords (one in the front and one in the rear) can be started with one
power cord and continue to run.
The larger systems with a minimum of four bulk power regulator (BPR) pairs must have four
power cords installed. A system with four power cords can be started with two power cords on
the same power supply with sufficient power to keep the system running.
Power cords can be attached to 3-phase, 50/60 Hz, 200 - 480 V AC power, or 380 - 520V DC
power.
The High-Voltage Direct Current (HVDC) feature is an option for z14 servers. It enables the
direct use of the high voltage (HV) DC distribution. A direct HVDC data center power design
improves data center energy efficiency by removing the need for a DC to AC inversion step.
The z14 bulk power supplies were modified to support HVDC; therefore, the only difference in
the included hardware to implement this option is the DC power cords. HVDC is an emerging
technology with several standards. The z14 server supports two of these standards: ground-
referenced and dual-polarity HVDC supplies, such as +/-190 V, +/-260 V, and +380 V. HVDC
brings many advantages.
Beyond the data center uninterruptible power supply and power distribution energy savings, a
z14 server that runs on HVDC power draws 1 - 3% less input power. HVDC does not change
the number of power cords that a system requires.
For extra equipment, such as the Hardware Management Console (HMC), its display, and
Ethernet switch, extra single-phase outlets are required.
The power requirements depend on the installed cooling facility, the number of central
processor complex (CPC) drawers, and the number of I/O units.
If you initially need only one power cord pair but you plan to use a second pair in the future,
you can order the Line Cord Plan Ahead feature (FC 2000). This feature gives you four power
cords at the initial configuration.
Also, Balanced Power Plan Ahead feature (FC 3003) provides an initial configuration of four
power cords and 12 BPRs. If the z14 server is configured with the Internal Battery Feature
(IBF), Balanced Power Plan Ahead automatically supplies the maximum number of batteries
(six IBFs) with the system.
The number of BPRs that are installed on one power supply (depending on the number of
PCIe I/O drawers and the number of CPC drawers) is listed in the following table:
CPC drawers PCIe I/O drawers
            0     1     2     3     4     5
1           1a    2a    2a    2a    3a    3a
2           2a    3a    3a    3a    3a    4b
3           3b    3b    4b    4b    4b    5b
4           4b    4b    5b    5b    5b    6b
a. Single-line power cord pair.
b. Two-line power cord pair.
The number of power cords that are installed on one power supply (depending on the number
of I/O units and the number of CPC drawers) is listed in Table 10-2.
Table 10-2 Number of power cords that are installed per power supply
CPC drawers PCIe I/O drawers
            0     1     2     3     4     5
1           1     1     1     1     1     1
2           1     1     1     1     1     2
3           1     1     2     2     2     2
4           2     2     2     2     2     2
Power consumption
This section describes the maximum power consumption for the air-cooled and water-cooled
models.
Power estimation for any configuration, power source, and room condition can be obtained
by using the power estimation tool at IBM Resource Link website (authentication required).
On the Resource Link page, click Tools → Power and weight estimation.
The absolute maximum power consumption for the water-cooled models in a warm room
(power is lower for DC input voltage) is listed in Table 10-4.
z14 servers include a recommended (long-term) ambient temperature range of 18°C (64.4°F)
- 27°C (80.6°F). The minimum allowed ambient temperature is 15°C (59°F) and the maximum
allowed temperature is 32°C (89.6°F).
For more information about the environmental specifications, see IBM 3906 Installation
Manual for Physical Planning, GC28-6965.
Input power (kVA) equals heat output (kW).
As shown in Figure 10-2, rows of servers must be placed front-to-front. Chilled air is provided
through perforated floor panels that are placed in rows between the fronts of servers (the cold
aisles). Perforated tiles generally are not placed in the hot aisles. If your computer room
causes the temperature in the hot aisles to exceed a comfortable temperature, add as many
perforated tiles as necessary to create a satisfactory comfort level. Heated exhaust air exits
the computer room above the computing equipment.
With the standard z14 rear covers (FC 0160), the exiting airflow direction can be
customized, which provides more flexibility in placing z14 servers in your data center.
Optional thin doors (non-acoustic, non-vectored) are also available (FC 0161).
For more information about the requirements for air-cooling options, see IBM 3906 Installation
Manual for Physical Planning, GC28-6965.
Raised floor: The minimum raised floor height for a water-cooled system is 22.86 cm
(8.6 in.).
Before you install z14 servers with the water-cooled option, your facility must meet the
following requirements (a simple validation sketch follows this list):
Total water hardness must not exceed 200 mg/L of calcium carbonate.
The pH must be 7 - 9.
Turbidity must be less than 10 Nephelometric Turbidity Units (NTUs).
Bacteria must be less than 1000 colony-forming units (CFUs)/ml.
The water must be as free of particulate matter as feasible.
The allowable system inlet water temperature range is 6°C - 20°C (43°F - 68°F) by using
standard building chilled water. A special water system is typically not required.
The required flow rate to the frame is 3.7 - 79.4 lpm (1 - 21 gpm), depending on the inlet
water temperature and the number of processor drawers in the z14 server. Colder inlet
water temperatures require less flow than warmer water temperatures. Fewer processor
drawers require less flow than a fully populated z14 server.
The minimum water pressure that is required across the IBM hose ends is 0.34 - 2.32 BAR
(5 - 33.7 psi), depending on the minimum flow required.
The maximum water pressure that is supplied at the IBM hose connections to the client’s
water supply cannot exceed 6.89 BAR (100 psi).
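The following Python sketch is the simple validation mentioned before the list; the threshold values come from the requirements above, and the function and parameter names are assumptions for the example.

def water_supply_ok(hardness_mg_per_l: float, ph: float, turbidity_ntu: float,
                    bacteria_cfu_per_ml: float, inlet_temp_c: float,
                    supply_pressure_bar: float) -> bool:
    """True if the client water supply meets the listed facility requirements."""
    return (hardness_mg_per_l <= 200            # calcium carbonate limit
            and 7.0 <= ph <= 9.0
            and turbidity_ntu < 10
            and bacteria_cfu_per_ml < 1000
            and 6.0 <= inlet_temp_c <= 20.0     # standard building chilled water
            and supply_pressure_bar <= 6.89)    # maximum at the IBM hose connections

if __name__ == "__main__":
    print(water_supply_ok(hardness_mg_per_l=120, ph=7.8, turbidity_ntu=2,
                          bacteria_cfu_per_ml=300, inlet_temp_c=14,
                          supply_pressure_bar=4.0))   # True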
For more information about the requirements for water-cooling options, see IBM 3906
Installation Manual for Physical Planning, GC28-6965, and see Figure 10-3.
Supply hoses
The z14 water-cooled system includes 4.2 m (13.7 ft) water hoses. The WCU water supply
connections are shown in Figure 10-3.
The client’s ends of the hoses are left open, which allows you to cut the hose to a custom
length. An insulation clamp is provided to secure the insulation and protective sleeving after
you cut the hose to the correct length and install it onto your plumbing.
The IBF can provide emergency power for the estimated time that is listed in Table 10-5. The
number of IBFs depends on the number of BPRs. For the number of BPRs that are installed
in relation to I/O units and the number of CPC drawers, see Table 10-5. The batteries are
installed in pairs. You can have two, four, or six batteries (odd numbers are not allowed).
Table 10-5 IBF estimated emergency power (holdup) times
CPC drawers I/O unitsa
            0         1         2         3         4         5
1           19.9 min  13.7 min  10.3 min  8.9 min   13.9 min  12.4 min
2           8.8 min   12.5 min  10.5 min  9.0 min   7.9 min   7.1 min
3           9.6 min   8.3 min   7.4 min   6.6 min   6.1 min   5.0 min
4           6.7 min   6.1 min   5.0 min   4.5 min   4.0 min   3.7 min
a. I/O units = the number of I/O drawers or PCIe I/O drawers.
Consideration: The system holdup times that are listed in Table 10-5 assume that both
sides are functional and have fresh batteries under normal room ambient conditions.
Holdup times are greater for configurations that do not have every I/O slot plugged, do not
have the maximum installed memory, and are not using the maximum number of processors.
These holdup times are estimates. Your particular battery holdup time for any specific
circumstance might be different.
Holdup times vary depending on the number of BPRs that are installed. As the number of
BPRs increases, the holdup time also increases until the maximum number of BPRs is
reached. After six BPRs (three per side) are installed, no other batteries are added;
therefore, the time decreases from that point.
Holdup times for actual configurations are provided in the power estimation tool at IBM
Resource Link website.
On the Resource Link page, click Tools → Machine information, select your IBM Z
system, and click Power Estimation Tool.
If the server is connected to a room’s emergency power-off switch and the IBF is installed, the
batteries take over if the switch is engaged.
To avoid the takeover, connect the room emergency power-off switch to the server power-off
switch. Then, when the room emergency power-off switch is engaged, all power is
disconnected from the power cords and the IBFs. However, all volatile data in the server is
lost.
A z14 server can be installed on a raised floor or on a non-raised floor. For more information about
weight distribution and floor loading tables, see the IBM 3906 Installation Manual for Physical
Planning, GC28-6965. This data is used with the maximum frame weight, frame width, and
frame depth to calculate the floor loading.
The maximum system dimensions and weights for the M04/M05 model are listed in
Table 10-6. The weight ranges are based on configuration models with five PCIe I/O drawers,
IBFs, and with the top exit cable features.
The power and weight estimation tool for Z servers on Resource Link covers the estimated
weight for your designated configuration. It is available on IBM Resource Link website.
On the Resource Link page, click Tools → Power and weight estimation.
Raised floor
If the z14 server is installed in a raised floor environment, air-cooled and water-cooled models
are supported. You can select top exit features to route I/O cables and power cables from the
top frame of the z14 server.
The following top exit options are available for z14 servers:
Top Exit I/O Cabling feature code (FC 7942)
Top Exit Line Cord for DC (FC 8948)
3-phase, Low Voltage Top Exit Line Cord (FC 8949)
3-phase, High Voltage Top Exit Line Cord (FC 8951)
Note: Top exit feature support is not available for water hoses. Such hoses must go
through the system from underneath the raised floor.
Non-raised floor
If you install the z14 server in a non-raised floor environment, you can select only
radiator-cooled models. The Non-Raised Floor Support feature code (FC 7998) is required.
The Top Exit I/O Cabling feature code (FC 7942) and one of three types of Top Exit Line
Cords (FC 8948, FC 8949, FC 8951) also must be ordered. All cables must exit from the top
frame of the z14 server, as shown in Figure 10-5.
The difference between cut cords and plugged cords is shown in Figure 10-6.
The Top Exit I/O Cabling feature adds 15 cm (6 in.) to the width of each frame and
approximately 95 lbs (43 kg) to the weight.
For z14 servers, the Top Exit I/O Cabling feature (FC 7942) is available for radiator-cooled
models and water-cooled models.
In a multiple system installation, one floor panel can have two casters from two adjacent
systems on it, which can induce a highly concentrated load on a single floor panel. The weight
distribution plate distributes the weight over two floor panels. The weight distribution kit is
ordered and delivered by using FC 9970.
Always consult the floor tile manufacturer to determine the load rating of the tile and pedestal
structure. More panel support might be required to improve the structural integrity because
cable cutouts reduce the floor tile rating.
The kits help secure the frames and their contents from damage when exposed to shocks and
vibrations, such as in a seismic event. The frame tie-downs are intended for securing a frame
that weighs up to 1632 kg (3600 lbs).
The Sds parameter 2.5g represents the high magnitude covering most of densely populated
area in California. For example Sds values for Los Angeles (1.29 g), San Francisco (2.00 g),
Santa Barbara (2.00 g), and San Diego (1.60 g).
The z14 structure consists of a frame (rack) with drawers that contain central processor units,
I/O equipment, memory, and other electronic equipment. The primary function of the frame is to
protect this critical electronic equipment in two modes. The first mode is during shipping, when
shock and vibration provide excitation primarily in the vertical direction. The second mode is
protecting the equipment during seismic events, where horizontal vibration can be significant.
For more information, see IBM 3906 Installation Manual for Physical Planning, GC28-6965.
The hardware components in the z14 server are monitored and managed by the energy
management component in the Support Element (SE) and HMC. The graphical user
interfaces (GUIs) of the SE and HMC provide views, such as the Monitors Dashboard and
Environmental Efficiency Statistics Monitor Dashboard.
The following tools are available to plan and monitor the energy consumption of z14 servers:
Power estimation tool on Resource Link
Energy Management task for maximum potential power on HMC and SE
Monitors Dashboard and Environmental Efficiency Statistics tasks on HMC and SE
The data is presented in table format and graphical “histogram” format. The data can also be
exported to a .csv-formatted file so that the data can be imported into a spreadsheet. For this
task, you must use a web browser to connect to an HMC.
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
The HMC is used to set up, manage, monitor, and operate one or more CPCs. It manages
IBM Z hardware, its logical partitions (LPARs), and provides support applications. At least one
HMC is required to operate an IBM Z server. An HMC can manage multiple Z CPCs, and can
be at a local or a remote site.
The SEs are two integrated servers in the A frame that are supplied together with the z14
server. One SE is the primary SE (active) and the other is the alternative SE (backup). As with
the HMCs, the SEs are closed systems, and no other applications can be installed on them.
When tasks are performed at the HMC, the commands are routed to the active SE of the
CPC. The SE then issues those commands to its CPC. One HMC can control up to 100
SEs and one SE can be controlled by up to 32 HMCs.
Some functions are available only on the SE. With Single Object Operations (SOOs), these
functions can be used from the HMC. For more information, see “Single Object Operations”
on page 427.
With Driver 27 (Version 2.13.1), the IBM Dynamic Partition Manager (DPM) was introduced
for Linux-only CPCs with Fibre Channel Protocol (FCP)-attached storage. DPM is a mode of
operation that enables customers with little or no knowledge of IBM Z technology to set up the
system efficiently and with ease. For more information, see IBM Knowledge Center.
At IBM Knowledge Center, click the search engine window and enter dpm.
For more information, see the HMC and SE (Version 2.14.0) console help system or see IBM
Knowledge Center. At IBM Knowledge Center, click IBM Z. Then, click z14.
The HMC is a 1U IBM server and includes an IBM 1U standard tray that features a monitor
and a keyboard. The system unit and tray must be mounted in the rack in two adjacent 1U
locations in the “ergonomic zone” between 21U and 26U in a standard 19-inch rack.
The customer must provide the rack. Three C13 power receptacles are required: Two for the
system unit and one for the display and keyboard, as shown in Figure 11-2.
Figure 11-2 HMC system unit and tray: two displays with keyboard and mouse at the front and rear of the tray, and an internal USB-attached Smart Card Reader to support Flash Express and Feature on Demand
Note: If you do a backup to an FTP server for a z14 server, ensure that you set up a
connection to the FTP server by using the Configure Backup Setting task. If a connection
to the FTP server is not set up, a message appears that prompts you to configure the
connection.
The FTP server must be supplied by the customer. You can enable a secure FTP connection
to your server.
Note: The backup FTP site is a static setting for an HMC. If an alternative FTP site is needed to
perform a backup, this process is done from another HMC.
Backing up HMCs
A backup of the HMC can be performed to the following media:
USB flash memory drive (UFD)
FTP server
UFD and FTP server
The destination options of the Backup Critical Console Data task are shown in Figure 11-5.
Machine type      Backup to UFD    Backup to FTP server
z14               No               Yes
z13/z13s          No               Yes
zEC12/zBC12       Yes              No
z196/z114         Yes              No
z10 EC/z10 BC     Yes              No
z9 EC/z9 BC       Yes              No
Examples of the different destination options of the SE Backup Critical Data for different CPC
machine types are shown in Figure 11-5 on page 413.
For more information, see the HMC and SE console help system or IBM Knowledge Center.
HMC legacy systems support (Statement of Direction (a)): IBM z14 is planned to be the
last release that will allow HMC support across the prior four generations of server (N
through N-4).
Future HMC releases are intended to be tested for support of the previous two generations
(N through N-2). For example, the next HMC release would support the zNext generation,
plus z14 generation and z13/z13s generation.
This change will improve the number and extent of new features and functions that can be
pre-tested and maintained in a given release with IBM’s continued high-reliability
qualification procedures.
a. All statements regarding IBM plans, directions, and intent are subject to change or withdrawal
without notice. Any reliance on these statements of general direction is at the relying party’s
sole risk and will not create liability or obligation for IBM.
The driver of the HMC and SE is equivalent to a specific HMC and SE version, as shown in
the following examples:
Driver 79 is equivalent to Version 2.10.2
Driver 86 is equivalent to Version 2.11.0
Driver 93 is equivalent to Version 2.11.1
Driver 15 is equivalent to Version 2.12.1
Driver 22 is equivalent to Version 2.13.0
Driver 27 is equivalent to Version 2.13.1
Driver 32 is equivalent to Version 2.14.0
Driver 36 is equivalent to Version 2.14.1
An HMC with Version 2.14.1 or Version 2.14.0 can support different IBM Z types. Some
functions that are available on Version 2.14.1 and later are supported only when the HMC is
connected to an IBM Z server with Version 2.14.1.
Note: The z9 EC / z9 BC (Driver 67, SE version 2.9.2), the z900/z800 (Driver 3G, SE
Version 1.7.3) and z990/z890 (Driver 55, SE Version 1.8.2) systems are no longer
supported. If you are using these older systems, consider managing these systems by
using separate HMCs that are running older drivers.
The following previous HMCs can be carried forward (the carry forward HMCs do not provide
the Enhanced feature):
Tower FC 0092
Tower FC 0095
1U Rack FC 0094
1U Rack FC 0096
The task to set the authorization for IBM Product Engineering access to the console is shown
in Figure 11-8. When access is authorized, an IBM product engineer can use an exclusive
user ID and reserved password to log on to the console that provides tasks for problem
determination.
As shown in Figure 11-8, the task is available only to users with ACSADMIN authority.
Consider the following points:
Customers must ensure that they have redundant administrator users for each console.
Customers must document contact information and procedures.
The “Welcome Text” task can be used to identify contact information so that IBM Service
personnel know how to engage customer administrators if HMC/SE access is needed.
The options are disabled by default.
The SEs on z14 M0x servers are connected to the System Control Hubs (SCH) to control the
internal network. In previous IBM Z servers, the customer network was connected to the bulk
power hub (BPH). Now, the SEs are directly connected to the customer network.
Only the switch (and not the HMC directly) can be connected to the SEs.
The connectivity between HMCs and the SEs is shown in Figure 11-9.
The LAN ports for the SEs that are installed in the CPC are shown in Figure 11-10.
Various methods are available for setting up the network. It is your responsibility to plan and
design the HMC and SE connectivity. Select the method that is based on your connectivity
and security requirements.
For more information about the HMC settings that are related to access and security, see
the HMC and SE (v.2.14.1) console help system or see IBM Knowledge Center.
(Figure: HMCs and remote web browsers connect through an Ethernet switch; an HMC connects through the internet to the IBM Remote Support Facility (RSF).)
FTPS is based on Secure Sockets Layer cryptographic protocol (SSL) and requires
certificates to authenticate servers. SFTP is based on Secure Shell cryptographic protocol
(SSH) and requires SSH keys to authenticate servers. Required certificates and key pairs are
hosted on the z14 HMC Console.
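As an illustration of the FTPS option, the following Python sketch opens a TLS-protected FTP session with server certificate verification. The host name and credentials are hypothetical placeholders; the HMC manages its own certificates and does not run client code such as this.

  import ssl
  from ftplib import FTP_TLS

  context = ssl.create_default_context()          # verifies the server certificate
  ftps = FTP_TLS(context=context)
  ftps.connect("ftp.example.com", 21)             # hypothetical backup FTP server
  ftps.login("backupuser", "password")            # placeholder credentials
  ftps.prot_p()                                   # protect the data channel with TLS
  print(ftps.nlst())                              # list the remote directory
  ftps.quit()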
All three protocols are supported for tasks that previously used only FTP. In addition, several
tasks that previously used only removable media can now use FTP connections with the z14
HMC console. The
recommended network topology for HMC, SE, and FTP server is shown in Figure 11-12.
Figure 11-12 Recommended Network Topology for HMC, SE, and FTP server
With the z14 HMC, all FTP connections that originate from SEs are routed to HMC consoles.
Secure FTP server credentials must be imported to one or more managing HMC consoles.
The HMC console then performs the FTP operation on the SE’s behalf and returns the results.
The IBM Z platform must be managed by at least one HMC for FTP operations to work.
Therefore, z14 HMC consoles abandon the anonymous cipher suite and implement an industry
standard-based, password-driven cryptography system. The Domain Security Settings are
used to provide authentication and high-quality encryption. Because of these changes, we
now recommend that customers use unique Domain Security settings to provide maximum
security. The new system provides greater security than anonymous cipher suites, even if the
default settings are used.
To allow greater flexibility in password selection, the password limit was increased to 64
characters and special characters are allowed for z14-only installations. If communication with
older systems is needed, the previous password limits must be followed (6 - 8 characters, only
uppercase and number characters allowed).
For more information about HMC networks, see the following resources:
The HMC and SE (Version 2.14.0) console help system, or see IBM Knowledge Center.
At IBM Knowledge Center, click IBM Z. Then, click z14.
IBM z14 Installation Manual for Physical Planning, GC28-6965.
Ethernet switches
Ethernet switches for HMC and SE connectivity are provided by the customer. Existing
supported switches can still be used.
RSF is broadband-only
RSF through a modem is not supported on the z14 HMC. Broadband is needed for hardware
problem reporting and service. For more information, see 11.4, “Remote Support Facility” on
page 425.
IPv6 addresses are easily identified. A fully qualified IPv6 address features 16 bytes. It is
written as eight 16-bit hexadecimal blocks that are separated by colons, as shown in the
following example:
2001:0db8:0000:0000:0202:b3ff:fe1e:8329
Because many IPv6 addresses are not fully qualified, shorthand notation can be used. In
shorthand notation, the leading zeros can be omitted, and a series of consecutive zeros can
be replaced with a double colon. The address in the previous example also can be written in
the following manner:
2001:db8::202:b3ff:fe1e:8329
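The same shorthand rules can be verified with the Python ipaddress module, which drops leading zeros and collapses one run of zero blocks to a double colon:

  import ipaddress

  addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0202:b3ff:fe1e:8329")
  print(addr.compressed)   # 2001:db8::202:b3ff:fe1e:8329
  print(addr.exploded)     # 2001:0db8:0000:0000:0202:b3ff:fe1e:8329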
If an IPv6 address is assigned to the HMC for remote operations that use a web browser,
browse to it by specifying that address. The address must be surrounded with square
brackets in the browser’s address field, as shown in the following example:
https://[fdab:1b89:fc07:1:201:6cff:fe72:ba7c]
With multi-factor authentication, the first factor is the login and password; the second factor is a
time-based, one-time password that is generated on your smartphone. This password is defined
in the RFC 6238 standard and uses a cryptographic hash function that combines a secret key
with the current time to generate a one-time password.
The secret key is generated by the HMC/SE/TKE while the user performs the first-factor logon.
The secret key is known only to the HMC/SE/TKE and to the user’s smartphone. For that reason,
it must be protected as carefully as your first-factor password.
The multi-factor authentication code (MFA code) that is generated as the second factor is
time-sensitive. Therefore, it is important to use it soon after it is generated.
The algorithm within the HMC that is responsible for MFA code generation changes the code
every 30 seconds. However, to make things easier, the HMC and SE console accepts current,
previous, and next MFA codes. It is also important to have HMC, SE, TKE, and smartphone
clocks synced. If the clocks are not synced, the MFA logon attempt fails. Another important
fact is that time zone differences are irrelevant because the MFA code algorithm uses UTC.
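The following Python sketch shows the RFC 6238 mechanism that this description refers to: a shared secret and the current UTC time are combined by a cryptographic hash function into a short one-time code that changes every 30 seconds. The secret below is a made-up example; the sketch does not reflect the internal HMC/SE implementation.

  import base64, hashlib, hmac, struct, time

  def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
      key = base64.b32decode(secret_b32, casefold=True)
      counter = int(time.time()) // time_step      # based on UTC, so time zones do not matter
      digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
      offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
      code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
      return str(code).zfill(digits)

  print(totp("JBSWY3DPEHPK3PXP"))                  # hypothetical shared secret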
Consideration: RSF through a modem is not supported on the z14 HMC. Broadband
connectivity is needed for hardware problem reporting and service.
More information: For more information about the benefits of Broadband RSF and the
SSL/TLS-secured protocol, and a sample configuration for the Broadband RSF
connection, see Integrating the HMC Broadband Remote Support Facility into Your
Enterprise, SC28-6927.
11.4.2 RSF connections to IBM and Enhanced IBM Service Support System
If the HMC and SE are at Driver 22 or later, the driver uses a new remote infrastructure at IBM
when the HMC connects through RSF for certain tasks. Check your network infrastructure
settings to ensure that this new infrastructure works.
At the time of this writing, RSF still uses the “traditional” RETAIN connection. You must add
access to the new Enhanced IBM Service Support System to your current RSF infrastructure
(proxy, firewall, and so on).
To have the best availability and redundancy and to be prepared for the future, the HMC must
access IBM by using the internet through RSF in the following manner: Transmission to the
enhanced IBM Support System requires a domain name server (DNS). The DNS must be
configured on the HMC if you are not using a proxy for RSF. If you are using a proxy for RSF,
the proxy must provide the DNS.
The following host names and IP addresses are used and your network infrastructure must
allow the HMC to access the following host names:
www-945.ibm.com on port 443
esupport.ibm.com on port 443
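A simple way to verify from a workstation on the same network segment that the host names listed above are reachable on port 443 is a TCP connection test, as in the following Python sketch. It checks basic reachability only; it does not validate the DNS or proxy settings on the HMC itself.

  import socket

  for host in ("www-945.ibm.com", "esupport.ibm.com"):
      try:
          with socket.create_connection((host, 443), timeout=10):
              print(f"{host}:443 reachable")
      except OSError as exc:
          print(f"{host}:443 NOT reachable: {exc}")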
Microsoft Internet Explorer, Mozilla Firefox, and Google Chrome were tested as remote
browsers. For more information about web browser requirements, see the HMC and SE
console help system or IBM Knowledge Center.
A full set of granular security controls is provided, from the HMC console down to the individual
user, from monitor-only access to full control, including a mobile app password and multi-factor
authentication. This mobile interface is optional and is disabled by default.
The HMC is used to start the power-on reset (POR) of the server. During the POR, processor
units (PUs) are characterized and placed into their respective pools, memory is put into a
single storage pool, and the IOCDS is loaded and initialized into the hardware system area
(HSA).
The hardware messages task displays hardware-related messages at the CPC, LPAR, or SE
level. It also displays hardware messages that relate to the HMC.
You can use the Load task on the HMC to perform an IPL of an operating system. This task
causes a program to be read from a designated device, and starts that program. You can
perform the IPL of the operating system from storage, the HMC DVD-RAM drive, the USB
flash memory drive (UFD), or an FTP server.
When an LPAR is active and an operating system is running in it, you can use the HMC to
dynamically change certain LPAR parameters. The HMC provides an interface to change
partition weights, add logical processors to partitions, and add memory.
LPAR weights can also be changed through a scheduled operation. Use the Customize
Scheduled Operations task to define the weights that are set to LPARs at the scheduled time.
Channel paths can be dynamically configured on and off (as needed for each partition) from
an HMC.
The Change LPAR Controls task for z14 servers can export the Change LPAR Controls table
data to a comma-separated value (.csv)-formatted file. This support is available to a user
when they are connected to the HMC remotely by a web browser.
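The exported file can then be post-processed with any spreadsheet or scripting tool. The following Python sketch assumes a hypothetical file name and column headers; check the headers of the file that your HMC actually produces.

  import csv

  with open("change_lpar_controls.csv", newline="") as f:
      for row in csv.DictReader(f):
          # Column names are hypothetical examples; adjust to the exported headers.
          print(row.get("Partition"), row.get("Current Weight"))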
Partition capping values can be scheduled and are specified on the Change LPAR Controls
scheduled operation support. Viewing more information about a Change LPAR Controls
scheduled operation is available on the SE.
To apply absolute capping to the LPAR’s shared (not dedicated) processors, select
Absolute capping on the Image Profile Processor settings and specify an absolute number of
processors at which to cap the LPAR’s activity. The absolute capping value can be “None” or
a value for the number of processors (0.01 - 255.0).
A group name, processor capping value, and partition membership are specified at the
hardware console, along with the following properties:
Set an absolute capacity cap by CPU type on a group of LPARs.
Allows each of the partitions to use capacity up to its individual limit, provided that the group's
aggregate consumption does not exceed the group absolute capacity limit (see the sketch after this list).
Includes updated SysEvent QVS support (used by vendors who implement software
pricing).
Only shared partitions are managed in these groups.
Can specify caps for one or more processor types in the group.
Specified in absolute processor capacity (for example, 2.5 processors).
Use Change LPAR Group Controls (as with windows that are used for software
group-defined capacity), as shown in Figure 11-16 (snapshot on a z13 server).
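The following Python sketch illustrates the group capping rule from the list above: each partition is limited by its own cap, and the group as a whole is limited by the group absolute capacity. All names and values are hypothetical.

  def group_cap_ok(usage_by_lpar: dict, individual_caps: dict, group_cap: float) -> bool:
      # Each LPAR must stay within its own cap (if one is set) ...
      within_individual = all(
          usage <= individual_caps.get(lpar, float("inf"))
          for lpar, usage in usage_by_lpar.items()
      )
      # ... and the aggregate consumption must stay within the group absolute cap.
      within_group = sum(usage_by_lpar.values()) <= group_cap
      return within_individual and within_group

  print(group_cap_ok({"LP01": 1.2, "LP02": 0.9}, {"LP01": 1.5, "LP02": 1.0}, group_cap=2.5))  # True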
The value is not tied to the Licensed Internal Code (LIC) configuration code (LICCC). Any
value 0.01 - 255.00 can be specified. This configuration makes the profiles more portable and
means that you do not have issues in the future when profiles are migrated to new machines.
Although the absolute cap can be specified to hundredths of a processor, the exact amount
might not be that precise. The same factors that influence the “machine capacity” also
influence the precision with which the absolute capping works.
The HMC also provides integrated 3270 and ASCII consoles. These consoles allow an
operating system to be accessed without requiring another network or network devices, such as
TCP/IP or control units.
Use the Certificate Management task if the certificates that are returned by the 3270 server
are not signed by a well-known trusted certificate authority (CA) certificate, such as VeriSign
or Geotrust. An advanced action within the Certificate Management task, Manage Trusted
Signing Certificates, is used to add trusted signing certificates.
For example, if the certificate that is associated with the 3270 server on the IBM host is
signed and issued by a corporate certificate, it must be imported, as shown in Figure 11-17.
The import from the remote server option can be used if the connection between the console
and the IBM host can be trusted when the certificate is imported, as shown in Figure 11-18 on
page 431. Otherwise, import the certificate by using removable media.
A secure Telnet connection is established by adding the prefix L: to the IP address:port of the
IBM host, as shown in Figure 11-19.
When you perform a driver upgrade, always check the Driver (xx) Customer Exception Letter
option in the Fixes section at the IBM Resource Link.
For more information, see 9.9, “z14 Enhanced Driver Maintenance” on page 383.
Tip: The IBM Resource Link (a) provides access to the system information for your IBM Z
server according to the system availability data that is sent on a scheduled basis. It
provides more information about the MCL status of your z14 servers. For more information
about accessing the Resource Link, see the IBM Resource Link website.
At the Resource Link website, click Tools → Machine Information, choose your IBM Z
server, and then, click EC/MCL.
a. Registration is required to access the IBM Resource Link.
How the driver, bundle, EC stream, MCL, and MCFs interact with each other is shown in
Figure 11-20.
Multiple graphical views exist for displaying data, including history charts. The Open Activity
task (known as SAD) monitors processor and channel usage. It produces data that includes
power monitoring information, power consumption, and the air input temperature for the
server.
The data is presented in table format and graphical “histogram” format. The data also can be
exported to a .csv-formatted file so that the data can be imported into a spreadsheet. For this
task, you must use a web browser to connect to an HMC.
The HMC for IBM z14 servers features the following CoD capabilities:
SNMP API support:
– API interfaces for granular activation and deactivation
– API interfaces for enhanced CoD query information
– API event notification for any CoD change activity on the system
– CoD API interfaces, such as On/Off CoD and Capacity BackUp (CBU)
HMC and SE are a part of the z/OS Capacity Provisioning environment. The Capacity
Provisioning Manager (CPM) communicates with the HMC through IBM Z APIs, and enters
CoD requests. For this reason, SNMP must be configured and enabled by using the
Customize API Settings task on the HMC.
For more information about using and setting up CPM, see the following publications:
z/OS MVS™ Capacity Provisioning User’s Guide, SC33-8299
IBM Z System Capacity on Demand User’s Guide, SC28-6943
In a STP-only CTN, the HMC can be used to perform the following tasks:
Initialize or modify the CTN ID.
Initialize the time (manually or by contacting an NTP server).
Initialize the time zone offset, Daylight Saving Time offset, and leap second offset.
Assign the roles of preferred, backup, and current time servers, and arbiter.
Adjust time by up to plus or minus 60 seconds.
Schedule changes to the offsets listed. STP can automatically schedule Daylight Saving
Time, based on the selected time zone.
Monitor the status of the CTN.
Monitor the status of the coupling links that are initialized for STP message exchanges.
For diagnostic purposes, the PPS port state on a z14 server can be displayed and fenced
ports can be reset individually.
The in-line definition of technical terms eliminates the need to look up documentation to
determine definitions. Detailed instructions and guidelines are provided within the task workflow.
New tasks provide a visual representation of the STP topology: the current system time networks
are shown in a topological display, and a preview of any configuration action is shown in the
same display. These changes make administrators more confident and help them catch more errors.
Attention: A scheduled leap second offset change to 26 seconds, scheduled for
12/31/2014, is shown in Figure 11-25. This is not a real leap second that was released by
the International Earth Rotation and Reference Systems Service. It was set temporarily
only to show the panel appearance.
Figure 11-25 CTN topology visible on HMC Manage System Time window
Requirements
ECAR is available on z14 and z13/z13s servers on Driver 27 and later only. In a mixed
environment with previous generation machines, you should define a z14, z13, or z13s server
as the PTS and CTS.
For more information about planning and setup, see the following publications:
Server Time Protocol Planning Guide, SG24-7280
Server Time Protocol Implementation Guide, SG24-7281
Server Time Protocol Recovery Guide, SG24-7380
CTN Split
The HMC menus for Server Time Protocol (STP) were enhanced to provide support when
one or more systems must be split in to a separate CTN without interruption in the clock
source.
The task is available under the Advanced Actions option in the Manage System Time task.
Several checks are performed to avoid potentially disruptive actions. If the targeted CTN has
only members with assigned roles, the task fails to launch and an error message is issued. If the
targeted CTN has at least one system without any roles, the task launches, and an informational
warning prompts the user to acknowledge that sysplex workloads are divided appropriately.
Note: After joining the selected CTN, all systems within the current CTN are synchronized
with the Current Time Server of the selected CTN. A coupling link must be in place
connecting the CTS of the selected CTN and the CTS of the current CTN.
During the transition state, most of the STP actions for the two affected CTNs are disabled.
After the merge is completed, STP actions are enabled again.
For more information about planning and understanding STP server roles, see the following
publications:
Server Time Protocol Planning Guide, SG24-7280
Server Time Protocol Implementation Guide, SG24-7281
Server Time Protocol Recovery Guide, SG24-7380
The NTP server becomes the single time source (the ETS) for STP and other servers that are
not IBM Z servers (such as AIX® and Microsoft Windows) that include NTP clients.
The HMC can act as an NTP server. With this support, the z14 server can receive the time
from the HMC without accessing a LAN other than the HMC and SE network. When the HMC
is used as an NTP server, it can be configured to receive the NTP source from the internet.
For this type of configuration, a LAN that is separate from the HMC/SE LAN can be used.
The HMC offers the following symmetric key and autokey authentication and NTP commands:
Symmetric key (NTP V3-V4) authentication
Symmetric key authentication is described in RFC 1305, which was made available in
NTP Version 3. Symmetric key encryption uses the same key for encryption and
decryption. Users that are exchanging data keep this key to themselves. Messages
encrypted with a secret key can be decrypted only with the same secret key. Symmetric
key authentication supports network address translation (NAT).
Autokey (NTP V4) authentication
This autokey uses public key cryptography, as described in RFC 5906, which was made
available in NTP Version 4. You can generate keys for the HMC NTP by clicking Generate
Local Host Key in the Autokey Configuration window. This option issues the ntp-keygen
command to generate the specific key and certificate for this system. Autokey
authentication is not available with the NAT firewall.
Issue NTP commands
NTP command support is added to display the status of remote NTP servers and the
current NTP server (HMC).
With z14 servers, you can offload the following HMC and SE log files for customer audit:
Console event log
Console service history
Tasks performed log
Security logs
System log
Full log offload and delta log offload (since the last offload request) are provided. Offloading
to removable media and to remote locations by FTP is available. The offloading can be
manually started by the new Audit and Log Management task or scheduled by the Customize
Scheduled Operations task. The data can be offloaded in the HTML and XML formats.
Each HMC user ID template defines the specific authorization levels for the tasks and objects
for the user who is mapped to that template. The HMC user is mapped to a specific user ID
template by user ID pattern matching. The system then obtains the name of the user ID
template from content in the LDAP server schema data.
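Conceptually, the pattern matching works as sketched below in Python. The patterns and template names are hypothetical illustrations; the real mapping comes from the HMC user ID patterns and the LDAP server schema data.

  import fnmatch

  TEMPLATE_PATTERNS = [           # evaluated in order; the first match wins
      ("OPS*",  "OperatorTemplate"),
      ("SYSP*", "SysProgTemplate"),
      ("*",     "ReadOnlyTemplate"),
  ]

  def template_for(user_id: str) -> str:
      for pattern, template in TEMPLATE_PATTERNS:
          if fnmatch.fnmatch(user_id.upper(), pattern):
              return template
      raise LookupError(user_id)

  print(template_for("sysp01"))   # SysProgTemplate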
If you want to change the roles for a default user ID, create your own version by copying a
default user ID.
The Secure FTP infrastructure allows HMC and SE applications to query whether a public key
is associated with a host address and to use the Secure FTP interface with the appropriate
public key for a host. Tasks that use FTP now provide a selection for the secure host
connection.
When selected, the task verifies that a public key is associated with the specified host name.
If a public key is not provided, a message window opens that points to the Manage SSH Keys
task to enter a public key. The following tasks provide this support:
Import/Export IOCDS
Advanced Facilities FTP IBM Content Collector Load
Audit and Log Management (Scheduled Operations only)
FCP Configuration Import/Export
OSA view Port Parameter Export
OSA Integrated Console Configuration Import/Export
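The same host-key-first behavior can be illustrated with an SFTP client sketch in Python (using the third-party paramiko library, assumed to be installed). The transfer is refused unless a known host key is already associated with the target host, which mirrors the Manage SSH Keys requirement described above. Host name, user ID, and paths are hypothetical.

  import paramiko

  client = paramiko.SSHClient()
  client.load_system_host_keys()                                  # use only already-known host keys
  client.set_missing_host_key_policy(paramiko.RejectPolicy())     # no key for the host -> refuse

  client.connect("sftp.example.com", username="hmcexport", key_filename="/home/user/.ssh/id_rsa")
  sftp = client.open_sftp()
  sftp.put("iocds_export.txt", "/incoming/iocds_export.txt")      # hypothetical file names
  sftp.close()
  client.close()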
Notes: Memory is always cleared as part of activating an image before any load is
performed. Therefore, not clearing the memory is not an option when activating with an
image profile.
When managed by HMC version 2.14.1, a z14 Driver level 32 or older system cannot take
advantage of the SCSI load normal option.
Note: When the user is physically logged in (that is, by using the SE’s keyboard or display),
sessions are not disconnected. Only the “Chat” option is available.
The information that is needed to manage a system’s I/O configuration must be obtained from
many separate sources. The System Input/Output Configuration Analyzer task enables the
system hardware administrator to access, from one location, the information from those
sources. Managing I/O configurations then becomes easier, particularly across multiple
servers.
The System Input/Output Configuration Analyzer is a view-only tool. It does not offer any
options other than viewing. By using the tool, data is formatted and displayed in five different
views. The tool provides various sort options, and data can be exported to a UFD for later
viewing.
The HMC supports the CIM as an extra systems management API. The focus is on attribute
query and operational management functions for IBM Z servers, such as CPCs, images, and
activation profiles. z13 servers contain a number of enhancements to the CIM systems
management API. The function is similar to the function that is provided by the SNMP API.
For more information about APIs, see IBM Z Application Programming Interfaces,
SB10-7164.
Cryptographic hardware
z14 servers include standard cryptographic hardware and optional cryptographic features for
flexibility and growth capability.
When EP11 mode is selected, a unique Enterprise PKCS #11 firmware is loaded into the
cryptographic coprocessor. It is separate from the Common Cryptographic Architecture
(CCA) firmware that is loaded when a CCA coprocessor is selected. CCA firmware and
PKCS #11 firmware cannot coexist in a card.
The Trusted Key Entry (TKE) Workstation with smart card reader feature is required to
support the administration of the Crypto Express6S when configured as an Enterprise
PKCS #11 coprocessor.
To support the new Crypto Express6S card, the Cryptographic Configuration window was
changed to support the following card modes:
Accelerator mode (CEX6A)
CCA Coprocessor mode (CEX6C)
PKCS #11 Coprocessor mode (CEX6P)
The Usage Domain Zeroize task is provided to clear the appropriate partition crypto keys for a
usage domain when you remove a crypto card from a partition. Crypto Express6/5S in EP11
mode is configured to the standby state after the zeroize process.
The following CCA compliance levels for Crypto Express6S are available on SE:
CCA: Non-compliant (default)
CCA: PCI-HSM 2016
The following EP11 compliance levels (Crypto Express5S and Crypto Express6S) are
available:
FIPS 2009 (default)
FIPS 2011
BSI 2009
BSI 2011
Setting up is a disruptive action. The selection of the DPM mode of operation is done by using
a function that is called “Enable Dynamic Partition Manager”, which is under the SE CPC
Configuration menu.
After the CPC is restarted and you log on to the HMC in which this CPC is defined, the HMC
shows the welcome window that is shown in Figure 11-28.
New LPARs can be added by selecting Get Started. For more information, see IBM
Knowledge Center.
At IBM Knowledge Center, click the search engine window and enter dpm.
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
Uniprocessor performance also increased. On average, a z14 Model 701 offers performance
improvements of more than 8% over the z13 Model 701. The estimated capacity ratios for
z13, zEC12, z196, and z10 EC are shown in Figure 12-1.
The Large System Performance Reference (LSPR) provides capacity ratios among various
processor families that are based on various measured workloads. It is a common practice to
assign a capacity scaling value to processors as a high-level approximation of their
capacities.
For z/OS V2R2 studies, the capacity scaling factor that is commonly associated with the
reference processor is set to a 2094-701 with a Processor Capacity Index (PCI) value of 593.
This value is unchanged since z/OS V1R11 LSPR. The use of the same scaling factor across
LSPR releases minimizes the changes in capacity results for an older study and provides
more accurate capacity view for a new study.
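The arithmetic behind this scaling is simply a ratio against the reference processor, as in the following Python sketch. The sample PCI value is hypothetical; use published LSPR or zPCR figures for real studies.

  REFERENCE_PCI = 593.0        # 2094-701, the reference processor noted above

  def capacity_ratio(pci: float) -> float:
      return pci / REFERENCE_PCI

  print(round(capacity_ratio(1832.0), 2))   # hypothetical PCI value for some model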
Performance data for z14 servers was obtained with z/OS V2R2 (running Db2 for z/OS V11,
CICS TS V5R3, IMS V14, Enterprise COBOL V6R1, and WebSphere Application Server for
z/OS V8.5.5.9). All IBM Z server generations are measured in the same environment with the
same workloads at high usage.
Consult the LSPR when you consider performance on the z14. The range of performance
ratings across the individual LSPR workloads is likely to include a large spread. Performance
of the individual logical partitions (LPARs) varies depending on the fluctuating resource
requirements of other partitions and the availability of processor units (PUs). For more
information, see 12.7, “Workload performance variation” on page 454.
For detailed performance information, see the Large Systems Performance Reference for
IBM Z page of the Resource Link website.
For more information about millions of service units (MSU) ratings, see the IBM z Systems
Software Contracts page of the IBM IT infrastructure website.
The CPU Measurement Facility (CPU MF) data that was introduced on the z10 provides
insight into the interaction of workload and hardware design in production workloads. CPU
MF data helps LSPR to adjust workload capacity curves that are based on the underlying
hardware sensitivities, in particular, the processor access to caches and memory. This
processor access to caches and memory is called nest. By using this data, LSPR introduces
three workload capacity categories that replace all older primitives and mixes.
LSPR contains the internal throughput rate ratios (ITRRs) for the z14 and the previous
generation processor families. These ratios are based on measurements and projections that
use standard IBM benchmarks in a controlled environment.
The throughput that any user experiences can vary depending on the amount of
multiprogramming in the user’s job stream, the I/O configuration, and the workload
processed. Therefore, no assurance can be given that an individual user can achieve
throughput improvements that are equivalent to the performance ratios that are stated.
The path length varies for each transaction or job, and depends on the complexity of the tasks
that must be run. For a particular transaction or job, the application path length tends to stay
the same, assuming that the transaction or job is asked to run the same task each time.
However, the path length that is associated with the operating system or subsystem can vary
based on the following factors:
Competition with other tasks in the system for shared resources. As the total number of
tasks grows, more instructions are needed to manage the resources.
The number of logical processors (n-way) of the image or LPAR. As the number of logical
processors grows, more instructions are needed to manage resources that are serialized
by latches and locks.
As workloads are moved between microprocessors with various designs, performance varies.
However, when on a processor, this component tends to be similar across all models of that
processor.
A memory nest in a fully populated z14 CPC drawer is shown in Figure 12-2.
Figure 12-2 Memory hierarchy (nest) of a fully populated z14 CPC drawer: L1 and L2 caches for each PU, the L4 cache on the SC SCM, and main memory
Workload performance is sensitive to how deep into the memory hierarchy the processor
must go to retrieve the workload instructions and data for running. The best performance
occurs when the instructions and data are in the caches nearest the processor because little
time is spent waiting before running. If the instructions and data must be retrieved from farther
out in the hierarchy, the processor spends more time waiting for their arrival.
As workloads are moved between processors with various memory hierarchy designs,
performance varies because the average time to retrieve instructions and data from within the
memory hierarchy varies. Also, when on a processor, this component continues to vary
because the location of a workload’s instructions and data within the memory hierarchy is
affected by several factors that include, but are not limited to, the following factors:
Locality of reference
I/O rate
Competition from other applications and LPARs
The term Relative Nest Intensity (RNI) indicates the level of activity to this part of the memory
hierarchy. By using data from CPU MF, the RNI of the workload that is running in an LPAR can
be calculated. The higher the RNI, the deeper into the memory hierarchy the processor must
go to retrieve the instructions and data for that workload.
(Figure: the “nest” components that feed the RNI calculation - L1, local and remote L2 (L2LP, L2RP), L3 (L3P), local and remote L4 (L4LP, L4RP), and memory (MEMP).)
Many factors influence the performance of a workload. However, what these factors often are
influencing is the RNI of the workload. The interaction of all these factors results in a net RNI
for the workload, which in turn directly relates to the performance of the workload.
These factors are tendencies, not absolutes. For example, a workload might have a low I/O
rate, intensive processor use, and a high locality of reference, which all suggest a low RNI.
But, it might be competing with many other applications within the same LPAR and many
other LPARs on the processor, which tends to create a higher RNI. It is the net effect of the
interaction of all these factors that determines the RNI.
The traditional factors that were used to categorize workloads in the past are shown with their
RNI tendency in Figure 12-4.
Little can be done to affect most of these factors. An application type is whatever is necessary
to do the job. The data reference pattern and processor usage tend to be inherent to the
nature of the application. The LPAR configuration and application mix are mostly a function of
what must be supported on a system. The I/O rate can be influenced somewhat through
buffer pool tuning.
However, one factor, software configuration tuning, is often overlooked but can have a direct
effect on RNI. This term refers to the number of address spaces (such as CICS
application-owning regions (AORs) or batch initiators) that are needed to support a workload.
Tuning to reduce the number of simultaneously active address spaces to the optimum number
that is needed to support a workload can reduce RNI and improve performance. In the LSPR,
the number of address spaces for each processor type and n-way configuration is tuned to be
consistent with what is needed to support the workload. Therefore, the LSPR workload
capacity ratios reflect a presumed level of software configuration tuning. Retuning the
software configuration of a production workload as it moves to a larger or faster processor
might be needed to achieve the published LSPR ratios.
These categories are based on the RNI. The RNI is influenced by many variables, such as
application type, I/O rate, application mix, processor usage, data reference patterns, LPAR
configuration, and the software configuration that is running. CPU MF data can be collected
by the z/OS System Management Facilities (SMF) in SMF 113 records or z/VM Monitor starting with
z/VM V5R4.
The IBM Processor Capacity Reference for IBM Z (zPCR) tool supports the following
workload categories:
Low
Low-Average
Average
Average-High
High
For more information about the no-charge IBM zPCR tool (which reflects the latest IBM LSPR
measurements), see the Getting Started with zPCR (IBM's Processor Capacity Reference)
page of the IBM Techdocs Library website.
Beginning with the z10 processor, the hardware characteristics can be measured by using
CPU MF (SMF 113) counters data. A production workload can be matched to an LSPR
workload category through these hardware characteristics. For more information about RNI,
see 12.5, “LSPR workload categories based on relative nest intensity” on page 453.
The AVERAGE RNI LSPR workload is intended to match most client workloads. When no
other data is available, use the AVERAGE RNI LSPR workload for capacity analysis.
For z10 and newer processors, the CPU MF data can be used to provide an extra hint as to
workload selection. When available, this data allows the RNI for a production workload to be
calculated.
By using the RNI and another factor from CPU MF, the L1MP (percentage of data and
instruction references that miss the L1 cache), a workload can be classified as LOW,
AVERAGE, or HIGH RNI. This classification and the resulting hint are automated in the zPCR tool.
It is preferable to use zPCR for capacity sizing.
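The following Python sketch shows the shape of such a classification, with placeholder threshold values that are for illustration only; zPCR applies the published L1MP and RNI boundaries, so use zPCR for real capacity sizing.

  def classify(l1mp_percent: float, rni: float) -> str:
      L1MP_LOW, L1MP_HIGH = 3.0, 6.0     # placeholder boundaries, not the published values
      RNI_LOW, RNI_HIGH = 0.75, 1.0      # placeholder boundaries, not the published values
      if l1mp_percent < L1MP_LOW:
          return "LOW"
      if l1mp_percent > L1MP_HIGH:
          return "AVERAGE" if rni < RNI_LOW else "HIGH"
      # mid-range L1MP: decide on the RNI value alone
      if rni < RNI_LOW:
          return "LOW"
      return "AVERAGE" if rni <= RNI_HIGH else "HIGH"

  print(classify(4.2, 0.9))   # AVERAGE with these placeholder thresholds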
Starting with z13, Instructions Per Cycle (IPC) improvements in core and cache became the
driving factor for performance gains. As these microarchitectural features increase (which
contributes to instruction parallelism), overall workload performance variability also increases
because not all workloads react the same way to these enhancements.
Because of the nature of the z14 multi-CPC drawer system and resource management
across those drawers, performance variability from application to application is expected.
Also, the memory and cache designs affect various workloads in many ways. All workloads
are improved, with cache-intensive loads benefiting the most. For example, because a z14 CPC
drawer has more PUs, each with higher capacity than on z13, more workload can fit on a single
z14 CPC drawer. This configuration can result in better performance.
6 For more information, see the Moore’s Law website.
The workload variability for moving from z13 to z14 is expected to be stable. Workloads that
are migrating from zEC12 and prior generations to z14 can expect to see similar results with
slightly less variability than the typical z13 experience.
Experience demonstrates that IBM Z servers can be run at up to 100% utilization levels,
sustained. However, most clients prefer to leave some room and run at 90% or slightly under.
A capacity comparison exercise that uses a single metric, such as MIPS or MSU, is not
a valid method. When deciding the number of processors and the uniprocessor capacity,
consider the workload characteristics and LPAR configuration. For these reasons, the use of
zPCR and involving IBM technical support are recommended when you plan capacity.
The z14 design features the following enhancements as compared with the z13:
Increased total number of PUs that are available on the system from 168 to 196, and
number of characterizable cores, from 141 to 170
Hardware system area (HSA) increased from 96 GB to 192 GB
A total of 32 TB of addressable memory (configurable to LPARs) with up to 16 TB of
memory per LPAR
PR/SM enhancements:
– Improved memory affinity
– Optimized LPAR placement algorithms
Dynamic Partition Manager Version 3.2
– FC (with z14 hardware) and FCP storage support
– Storage Groups management enhancements
SMT enablement for system assist processors (SAPs)
New Coupling Facility Control Code (CFCC) with improved performance and following
enhancements:
– Asynchronous Cross-Invalidate (XI) of CF cache Structures
7 Check the IBM United States Hardware Announcement 118-075 and Driver Exception Letter for feature availability.
This appendix also briefly describes the reason why IBM created the SSC framework and
how the SSC environment is intended to be used.
1 Secure Service Container is the infrastructure that is required to deploy appliances (framework) in a secure
container on supported IBM Z hardware. With IBM United States Software Announcement 218-152, dated October
2, 2018, IBM introduces IBM Secure Service Container for IBM Cloud Private. IBM Cloud™ Private is a Platform as
a Service (PaaS) environment for developing and managing containerized applications.
An appliance must satisfy various requirements, such as certified functionality and security
(the function it provides must be tamper resistant, even against system administrators or other
privileged users), and simple deployment and maintenance.
In the current IT deployments, various components that serve the business processes
(databases, middleware, applications, and so on) require specialized management functions
(such as access management, enterprise directories, secure key management, backup and
restore). The development requirements of the management functions do not follow the
dynamic of the actual business functions.
Because of the diversity of the platforms on which the business applications run, the
management functions must be maintained (updated, tested, or even certified) whenever the
platform is maintained or upgraded if they are deployed alongside the mainstream business
applications. This requirement increases complexity and the associated IT spending.
As such, these management functions can be deployed by using an appliance model in which
the functions that are provided are available and accessible through standardized methods.
Many appliances are available from various suppliers. Each appliance includes the following
features:
Separate administration (and deployment process)
Different hardware configuration requirements
Different performance profile and management requirements
Different security characteristics (that require alignment with enterprise requirements)
IBM Z Appliance
An IBM Z Appliance is an integration of operating system, middleware, and software
components that work autonomously. They also provide core services and infrastructure that
focuses on consumability and security.
An appliance is deployed as a startable system image that contains all of the necessary layers
to provide a specific set of services or functions. IBM Z Appliances are implemented as
firmware appliances or software appliances.
Multiple virtual appliances integrated into IBM Secure Service Container can be deployed on
IBM z14 (z13 and z13s also). These virtual appliances include the following common
features:
Administration (deployment)
Hardware configuration
Managed performance profiles
Security characteristics (aligned with enterprise requirements)
At the time of this writing, the following appliances are available from IBM:
z/VSE Network Appliance.
IBM Z Advanced Workload Analysis Reporter (IBM zAware), which is now deployed as a
software appliance and integrated with IBM Operations Analytics for Z.
More appliances are expected in the future. Appliances can be implemented as firmware or
software, depending on the environment on which the appliance runs and the function it must
provide.
The SSC framework is available on IBM z14, z13, and z13s servers.
The SSC framework also provides a set of utilities to implement the common functions that all
appliances need (FFDC, network setup, appliance configuration, and so on.). An application
developer can use the SSC framework to turn a solution into a stand-alone appliance that is
easily installed onto the IBM Z platform.
The SSC framework enables a product to be released as software or firmware based on a
business decision, not on a technical decision.
Deploying an appliance takes minutes. Appliances do not require any operating system
knowledge or middleware knowledge. They allow users to focus on the core services they
deliver.
SSC provides a highly secure context (see Figure A-2) for deploying appliances that include
the following features:
Allows no system admin access:
– After the appliance image is built, OS access (ssh) is not possible
– Only Remote APIs are available
– Memory access of system admin is disabled
Data storage uses encrypted disk
Debug data (dumps) are encrypted
Strong isolation between container instances
High assurance isolation
The SSC framework provides the following appliance management controls for appliance
administrators:
View messages and events
Manage network, users and disks
View appliance status
Export and import data
Apply services and updates
Support for software license
At the time of this writing, the SSC software framework provides support for the following
components:
FCP and ECKD storage
Dynamic Partition Manager
User management within the appliance with LDAP
Enhanced network and storage management user interface (UI)
File system with embedded CRC checking
KVM, qemu, and virsh packages
Embedded OS upgrades
Smart card machine unique key handling
For all optical links, the connector type is LC Duplex, except for the zHyperLink, the 12xIFB,
and the ICA SR connections, which are established with multifiber push-on (MPO)
connectors. The MPO connector of the 12xIFB connection includes one row of 12 fibers.
The MPO connectors of the zHyperLink and the ICA SR connections feature two rows of 12 fibers
and are interchangeable. The electrical Ethernet cable for Open Systems Adapter (OSA)
connectivity is connected through an RJ45 jack.
The attributes of the channel options that are supported on z14 servers are listed in
Table B-1.
(Table B-1 is not reproduced here; its sections cover Open Systems Adapter (OSA) and Remote Direct Memory Access over Converged Ethernet (RoCE) features and Parallel Sysplex coupling features.)
The maximum unrepeated distances for FICON short wavelength (SX) features are listed in
Table B-2.
Cable type                     1 Gbps           2 Gbps           4 Gbps           8 Gbps          16 Gbps
OM3 (50 µm at 2000 MHz·km)     860 m (2822 ft)  500 m (1640 ft)  380 m (1247 ft)  150 m (492 ft)  100 m (328 ft)
OM4 (50 µm at 4700 MHz·km) a   N/A              500 m (1640 ft)  400 m (1312 ft)  190 m (623 ft)  125 m (410 ft)
a. Fibre Channel Standard (not certified for Ethernet)
1. IBM zEC12 and zBC12 also support the 10GbE RoCE Express feature (FC 0411), but one feature must be dedicated to one LPAR.
These adapters are installed into a PCIe I/O drawer with the I/O features and include a
physical channel ID (PCHID) that is assigned according to its physical location.
For all the feature adapters that are installed in an I/O drawer, management functions in the
form of device drivers and diagnostic tools are always implemented to support virtualization of
the adapter, service, and maintenance.
Traditionally, these management functions are integrated on the adapter with specific
hardware design. For the newly introduced native PCIe adapters, these functions are moved
out of the adapter and are now handled by an IFP.
For the RoCE Express, Coupling Express Long Reach, and zEDC, device drivers and
diagnostic tools are now running on the IFP and use four RGs. Management functions,
including virtualization, servicing and recovery, diagnostics, failover, firmware updates against
an adapter, and other functions, are still implemented.
If a native PCIe feature is installed in the system, the system allocates and initializes an IFP
during its power-on reset (POR) phase. Although the IFP is allocated to one of the physical
PUs, it is not visible to the users. In an error or failover scenario, PU sparing also happens for
an IFP, with the same rules as other PUs.
2. Unless otherwise specified, RoCE Express2 refers to both 25GbE and 10GbE RoCE Express2 features (FC 0430 and FC 0412, respectively) for the remainder of this chapter.
As shown in Figure C-1, each I/O domain in a PCIe I/O drawer of a z14 server is logically
attached to one resource group. The native PCIe I/O feature adapters are managed by their
respective RG for device drivers and diagnostic tools functions.
Figure C-1 I/O domains and resource groups that are managed by the IFP - z14
Up to five PCIe I/O drawers are supported on z14 servers. The same type of native PCIe
features is always assigned to different I/O domains in different resource groups (and different
PCIe I/O drawers if the configuration includes them) to eliminate the possibility of a single
point of failure.
As of this writing, an I/O domain of the PCIe I/O drawer can support a total of two native PCIe
feature adapters with the PCIe feature cards (FICON, OSA, and Crypto).
Considering availability, install adapters of the same type in slots of different I/O domains,
drawers, fanouts, and resource groups. The next sections provide more information about
achieving a highly available configuration.
A sample PCHID report of a z13 configuration with four zEDC Express features and four
10GbE RoCE Express features is shown in Figure C-2 on page 473. The following
information is listed for each adapter:
PCHID and ports
The Resource Group that the adapter is attached to (Comment column)
Physical location (drawer, slot)
The native PCIe features are not part of the traditional channel subsystem (CSS). Although
they do not include a channel-path identifier (CHPID) assigned, they include a PCHID that is
assigned according to their physical location in the PCIe I/O drawer.
To define the native PCIe adapters in the HCD or HMC, a new I/O configuration program
(IOCP) FUNCTION statement is introduced that includes several feature-specific parameters.
The IOCP example that is shown in Figure C-3 defines zEDC Express and 10GbE RoCE
Express2 features to LPARs LP14 and LP15.
10GbE RoCE Express Functions for LPAR LP14, Reconfigurable to LP03 or LP04:
FUNCTION FID=9,VF=01,PART=((LP14),(LP03,LP04)),PNETID=(NET1,NET2), *
TYPE=ROCE,PCHID=13C
FUNCTION FID=A,VF=1,PART=((LP14),(LP03,LP04)),PNETID=(NET1,NET2), *
TYPE=ROCE,PCHID=17C
10GbE RoCE Express Functions for LPAR LP15, Reconfigurable to LP03 or LP04:
FUNCTION FID=B,VF=01,PART=((LP15),(LP03,LP04)),PNETID=(NET1,NET2), *
TYPE=ROCE,PCHID=184
FUNCTION FID=C,VF=01,PART=((LP15),(LP03,LP04)),PNETID=(NET1,NET2), *
TYPE=ROCE,PCHID=1C0
Figure C-3 Example of IOCP statements for zEDC Express and 10GbE RoCE Express2
3 The zHyperLink Express feature is not managed by the Resource Groups firmware.
This appendix briefly describes the optional Shared Memory Communications (SMC) function
that is implemented on IBM Z servers as Shared Memory Communications over Remote
Direct Memory Access (SMC-R) and Shared Memory Communications - Direct Memory
Access (SMC-D) of IBM z14, z13, and z13s servers.
Note: Throughout this chapter, “z14” refers to IBM z14 Model M0x (Machine Type 3906)
unless otherwise specified.
Traditional Ethernet transports, such as TCP/IP, typically use software-based mechanisms for
error detection and recovery. They also are based on the underlying Ethernet fabric that uses
a “best-effort” policy. With the traditional policy, the switches typically discard packets that are
in congestion and rely on the upper-level transport for packet retransmission.
However, RoCE uses hardware-based error detection and recovery mechanisms that are
defined by the InfiniBand specification. A RoCE transport performs best when the underlying
Ethernet fabric provides a lossless capability, where packets are not routinely dropped.
The following key requirements for RDMA are shown in Figure D-1:
A reliable “lossless” Ethernet network fabric (LAN for layer 2 data center network distance)
An RDMA network interface card (RNIC)
(Figure D-1: Host A and Host B, each with its own memory and CPU, exchanging data directly between Memory A and Memory B through RDMA-capable adapters.)
RDMA technology is now available on Ethernet. RoCE uses an Ethernet fabric (switches with
Global Pause enabled) and requires advanced Ethernet hardware (RNICs on the host).
(Figure: SMC-R protocol stack on each host - Middleware/Application, Sockets, TCP, SMC-R, and IP over the interface to the IP network (Ethernet).)
Dynamic (in-line) negotiation for SMC-R is initiated by the presence of the TCP option SMCR.
The TCP connection then transitions to SMC-R, which allows application data to be exchanged
by using RDMA.
(Figure: RoCE Express virtualization - z/OS instances in a CPC use SR-IOV virtual functions of the RoCE PCHID (ports 1 and 2); firmware support partitions own the physical function and device driver; the DMA path over the PCIe bus is direct between the adapter and LPAR memory, and a communication channel is used for vHCR command processing with the SE and HMC.)
The Physical Function Driver communicates with the physical function in the PCIe adapter
and is responsible for the following functions:
Manage resource allocation
Perform hardware error handling
Perform code updates
Run diagnostics
The device-specific IBM Z Licensed Internal Code (LIC) connects the Physical Function Driver to
the Support Elements (SEs) and to the limited system-level firmware services that are required.
D.2.4 Hardware
The 10GbE RoCE Express feature (FC 0411), 25GbE RoCE Express2 (FC 0430), and 10GbE
RoCE Express2 (FC 0412) are RDMA-capable NICs. The integrated firmware processor
(IFP) includes four resource groups (RGs) that contain firmware for the RoCE Express
feature. For more information, see C.1.3, “Resource groups” on page 471.
The number of ports and shared support for different systems are listed in Table D-1.
System   Ports   Shared    Dedicated only
z14      2       Yes (a)   No
z13      2       Yes (b)   No
z13s     2       Yes       No
zEC12    1       No        Yes
zBC12    1       No        Yes
a. Up to 126 Virtual Functions (VFs) per PCHID for RoCE Express2 (FC 0430 and FC 0412)
b. Up to 31 VFs supported per PCHID for RoCE Express (FC 0411)
The 10GbE RoCE Express feature that is shown in Figure D-4 is installed in the PCIe I/O
drawer.
RoCE Physical Connectivity: Because the 25GbE RoCE Express2 feature does not
support negotiation (to a lower speed), it must be connected to a 25 Gbps port of an
Ethernet Switch or to another 25GbE RoCE Express2 feature.
The 10GbE RoCE Express and 10GbE RoCE Express2 features can be connected to
each other in a point-to-point connection or to a 10 Gbps port of an Ethernet switch.
SMC-R can be used with direct RoCE Express to RoCE Express connectivity (without
any switch). However, this type of direct physical connectivity forms a single physical
point-to-point connection, which precludes any other connectivity with other LPARs, such as
other SMC-R peers. Although this option is viable for test scenarios, it is not practical (nor
recommended) for production deployment.
If the IBM RoCE Express/Express2 features are connected to Ethernet switches, the switches
must support the following requirements:
10 Gbps or 25 Gbps ports (depending on the RoCE feature specifications)
Global Pause function frame (as described in the IEEE 802.3x standard) must be enabled
Priority flow control (PFC) disabled
No firewalls, no routing, and no intraensemble data network (IEDN)
The maximum supported unrepeated point-to-point distance is 300 meters (984.25 feet) for
the 10GbE features and 100 meters (328 feet) for the 25GbE features.
Note: For more information about supported fiber cable types and lengths, see 4.7,
“Connectivity” on page 163.
Mixing of RoCE Generations: Mixing generations of RoCE adapters on the same stack is
supported with the following considerations:
25GbE RoCE Express2 should not be mixed with 10GbE RoCE Express2 or 10GbE
RoCE Express in the same SMC-R Link Group
10GbE RoCE Express2 can be mixed with 10GbE RoCE Express (that is, provisioned
to the same TCP/IP stack or same SMC-R Link Group)
A sample configuration that allows redundant SMC-R connectivity among LPAR A and C, and
LPAR 1, 2 and 3 is shown in Figure D-5. Each feature can be shared or dedicated to an
LPAR. As shown in Figure D-5, two features per LPAR are advised for redundancy.
The configuration that is shown in Figure D-5 allows redundant SMC-R connectivity among
LPAR A, LPAR C, LPAR 1, LPAR 2, and LPAR 3. LPAR to LPAR OSD connections are
required to establish the SMC-R communications. The 1 GbE OSD connections can be used
instead of 10 GbE. OSD connections can flow through the same switches or different
switches.
Note: The OSA-Express adapter and the RoCE Express feature must be associated with
each other by having the same PNET IDs, which are defined in the hardware configuration
definition (HCD).
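For illustration only, the association might be expressed in IOCP as shown in the following sketch. The CHPID number, PCHIDs, FID, and LPAR names are hypothetical, and the FUNCTION statement follows the pattern of the RoCE examples earlier in this appendix; the OSD CHPID and the RoCE FUNCTION carry the same PNETID value:
CHPID PATH=(CSS(0),D0),SHARED,PARTITION=((LP03,LP04)),TYPE=OSD, *
PCHID=190,PNETID=NET1
FUNCTION FID=1D,VF=01,PART=((LP03),(LP04)),PNETID=NET1, *
TYPE=ROCE,PCHID=140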
In this converged-interface configuration, z/OS uses a single IP address over a single network interface: the OSD and RNIC interfaces are paired, RNIC activation is automatic, and traffic flows over the RoCE fabric and the customer network.
The OSA feature might be a single 10 GbE, 1 GbE, or 1000BASE-T OSA, or a pair of them. The OSA
must be connected to another OSA on the system with which the RoCE feature is
communicating. As shown in Figure D-5 on page 482, 1 GbE OSD connections can still be
used instead of 10 GbE, and OSD connections can flow through the same 10 GbE switches.
With SMC-R, the RNIC interface is dynamically and transparently added and configured.
Attention: Activation fails if you do not configure a PNet ID for the RNIC adapter.
Activation succeeds if you do not configure a PNet ID for the OSA adapter; however, the
interface is not eligible to use SMC-R.
The three physically separate networks that are defined are shown in Figure D-7.
Figure D-7 depicts a CPC with logical partitions and z/VM guest virtual machines; the IODF/HCD assigns a PNet ID to each PCHID, and redundant switch pairs (Switch A and Switch B) serve three physically separate networks: Network A, Network B, and Network C.
Figure D-8 Reduced latency and improved wall clock time with SMC-R
Hardware
SMC-R requires the following hardware:
PCIe-based RoCE Express2:
– z14 servers
– Dual port 25GbE or 10GbE adapter
– Maximum of 8 RoCE Express2 features per CPC
PCIe-based RoCE Express:
– z14, z13, z13s, zEC12, and zBC12 servers
– Dual port 10GbE adapter
– Maximum of 8 RoCE Express features per CPC
HCD and input/output configuration data set (IOCDS): PCIe FID, VF (sharing), and RoCE
configuration with PNet ID.
Optional: Standard Ethernet switch (CEE-enabled switch is not required).
Required queued direct input/output (QDIO) Mode OSA connectivity between z/OS
LPARs, as shown in Figure D-5 on page 482.
The adapter must be dedicated to an LPAR on a zEC12 or zBC12 server. On a z14, z13, or
z13s server, it can be shared (defined in shared mode) by one or more LPARs.
SMC-R cannot be used in IEDN.
Software
SMC-R requires the following software:
z/OS V2R1 (with PTFs) or later is the only supported operating system for the SMC-R
protocol. You cannot roll back to previous z/OS releases.
z/OS guests under z/VM 6.4 or later are supported to use RoCE features.
IOCP required level for z14 servers: The required level of IOCP for z14 servers is V5 R4
L1 (IOCP 5.4.1) or later with program temporary fixes (PTFs). For more information, see
the following publications:
IBM Z Stand-Alone Input/Output Configuration Program User's Guide, SB10-7166
IBM Z Input/Output Configuration Program User's Guide for ICP IOCP, SB10-7163
When the clients and servers are all in the same sysplex, SMC-R offers a significant
performance advantage. Traffic between client and server can flow directly between the two
servers without having to traverse the Sysplex Distributor node for every inbound packet,
which is the current model with TCP/IP. In the new model, only connection establishment
flows must go through the Sysplex Distributor node.
The two models are compared in the following figures. Without SMC-R, traffic from the z/OS TCP/IP client stack flows over Ethernet and OSA to the Sysplex Distributor stack (the SD VIPA) and then, over OSA and XCF, to the target z/OS TCP/IP stacks in the CPC. With SMC-R, only connection establishment follows that path; data then flows directly between the client stack and the target stacks over RoCE.
Note: The IPv4 INTERFACE statement (IPAQENET) must also specify an IP subnet mask.
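A minimal sketch of the corresponding z/OS TCP/IP profile statements follows. The PFID values, port numbers, interface name, port name, and IP address are hypothetical; GLOBALCONFIG SMCR identifies the RoCE PFIDs, and the IPAQENET INTERFACE statement carries the subnet mask as the prefix length in the IPADDR specification:
GLOBALCONFIG SMCR PFID 0018 PORTNUM 1 PFID 0019 PORTNUM 2
INTERFACE OSD1A DEFINE IPAQENET PORTNAME OSD1AP IPADDR 192.168.10.1/24
The /24 suffix supplies the IP subnet mask that the previous note requires.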
Start the TCP/IP traffic and monitor it with Netstat and IBM VTAM displays.
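For example (a sketch; the default stack is assumed, and the SMC modifier or stack name might need to be adjusted for your environment), SMC-R links and the associated transport resource list entries can be checked with the following operator commands:
D TCPIP,,NETSTAT,DEVLINKS,SMC
D NET,TRL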
Note: For RoCE Express2, the PCI Function IDs (PFIDs) are now associated with a
specific (single) physical port (that is, port 0 or port 1). The port number is now configured
with the FID number in HCD (or IOCDS) and the port number must be configured (no
default exists). z/OS CommServer does not learn the RoCE generation until activation.
During activation, CommServer learns the port number for RoCE Express2.
D.3.1 Concepts
The colocation of multiple tiers of a workload onto a single IBM Z physical server allows for
the use of HiperSockets, which is an internal LAN technology that provides low-latency
communication between virtual machines within a physical IBM Z CPC.
SMC-D maintains the socket-API transparency aspect of SMC-R so that applications that use
TCP/IP communications can benefit immediately without requiring any application software or
IP topology changes. SMC-D completes the overall Shared Memory Communications
solution, which provides synergy with SMC-R. Both protocols use shared memory
architectural concepts, which eliminates TCP/IP processing in the data path, yet preserves
TCP/IP Qualities of Service for connection management purposes.
ISM interfaces are not defined in software. Instead, ISM interfaces are dynamically defined
and created, and automatically started and stopped. You do not need to operate (Start or
Stop) ISM interfaces. Unlike RoCE, ISM FIDs (PFIDs) are not defined in software. Instead,
they are auto-discovered based on their PNet ID.
SMC-R uses RDMA (RoCE), which is based on Queue Pair (QP) technology. Consider the
following points:
RC-QPs represent SMC Links (logical point-to-point connection).
RC-QPs over unique RNICs are logically bound together to form Link Groups (used for HA
and load balancing).
Link Groups (LGs) and Links are provided in many Netstat displays (for operational and
various network management tasks).
SMC-D over ISM does not use QPs. Consider the following points:
Links and LGs based on QPs (or other hardware constructs) are not applicable to ISM.
Therefore, the SMC-D information in the Netstat command displays is related to ISM link
information rather than LGs.
The SMC-D protocol (like SMC-R) features the design concept of a “logical point-to-point
connection” and preserves the concept of an SMC-D Link (for various reasons, including
network administrative purposes).
Note: The SMC-D information in the Netstat command displays is related to ISM link
information (not LGs).
Figure D-11 Connecting two LPARs on the same CPC by using SMC-D
SMC-D and SMC-R technologies can be used at the same time on the same CPCs. A fully
configured three-tier solution that uses SMC-D and SMC-R is shown in Figure D-12.
Figure D-12 Clustered systems: Multitier application solution with RDMA and DMA (local and remote access to DB2 from WAS by using JDBC with DRDA, across an SMC-R and SMC-D enabled platform and an SMC-R enabled platform)
SMC-D is a protocol that allows TCP socket applications to transparently use ISM. It is a
“hybrid” solution, as shown in Figure D-13 on page 491.
On each host, the sockets and TCP layers are extended with SMC-D above IP. The TCP connection is established over the IP network (Ethernet); the TCP SYN flows carry TCP options (CLC) that indicate SMC-R and SMC-D capability, and the OSA and the ISM VCHID (within the IBM Z server) have the same PNet ID. The TCP connection then transitions to SMC-D, which allows application data to be exchanged LPAR to LPAR by using direct memory access (native PCI operations).
Figure D-13 Dynamic transition from TCP to SMC-D by using two OSA-Express adapters
This model preserves many critical operational and network management features of TCP/IP.
ISM introduces a new static virtual channel identifier (VCHID) Type. The VCHID is referenced
in IOCDS / HCD. The ISM VCHID concepts are similar to the IQD (HiperSockets) type of
virtual adapters. ISM is based on IBM Z PCIe architecture (that is, virtual PCI function or
adapter). It introduces a new PCI Function Group and type (ISM virtual PCI). A new virtual
adapter is scheduled for release.
The system administrator, configuration, and operations tasks follow the same process
(HCD/IOCDS) as PCI functions, such as RoCE Express and zEDC Express. ISM supports
dynamic I/O.
ISM provides adapter virtualization (Virtual Functions) with high scalability. Consider the
following points:
It supports up to 32 ISM VCHIDs per CPC (z14, z13, or z13s servers).
Each VCHID supports up to 255 VFs (the maximum is 8 K VFs per CPC), which provides
significant scalability.
Each ISM VCHID represents a unique and isolated internal network, each having a unique
Physical Network ID (PNet IDs are configured in HCD/IOCDS).
ISM VCHIDs support VLANs; therefore, subdividing a VCHID by using virtual LANs is
supported.
ISM provides a Global Identifier (GID) that is internally generated to correspond with each
ISM FID.
ISM is supported by z/VM in pass-through mode (PTF required).
Both hosts have access to the same IP subnet (IP subnet ‘A’). PNet X can be an external LAN or an internal HiperSockets network (an IQD VCHID with PNet X).
z/OS Communications Server requires one ISM FID per ISM PNet ID per TCP/IP stack. This
requirement is not affected by the IP version (that is, it is true even if both IPv4 and IPv6
are used).
z/OS might use more ISM FIDs for the following reasons:
IBM supports up to eight TCP/IP stacks per z/OS LPAR. SMC-D can use up to eight FIDs
or VFs (one per TCP/IP stack).
IBM supports up to 32 ISM PNet IDs per CEC. Each TCP/IP stack can have access to
every PNet ID, which uses up to 32 FIDs (one VF per PNet ID).
ISM Functions must be associated with another channel (CHID) of one of the following types:
IQD (a single IQD HiperSockets) channel
OSD channels
Note: A single ISM PCHID cannot be associated with both IQD and OSD.
The ISM VCHID that is shown in Figure D-17 includes two ISM functions, FID 1017 and FID 1018.
Note: On the IOCDS statement, the VCHID is defined as 7E1. As shown in Figure D-17,
the ISM network “PNET 1” is referenced by the IOCDS VCHID statement. ISM (as with
IQD) does not use physical cards or card slots (PCHID). Instead, only logical (firmware)
instances that are defined as VCHIDs in IOCDS are used.
A sample IOCP FUNCTION configuration (see Example D-2) defines ISM adapters that
are shared between LPARs and multiple VLANs on the same CPC, as shown in Figure D-18
on page 497.
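As an illustration of the kind of statements that such a configuration might contain (the LPAR names are hypothetical, the FIDs and VCHID follow the values shown in Figure D-17, and the exact keyword set should be verified against the IOCP User's Guide):
FUNCTION FID=1017,VCHID=7E1,VF=1,PART=((LP01),(LP02)),PNETID=PNET1, *
TYPE=ISM
FUNCTION FID=1018,VCHID=7E1,VF=2,PART=((LP02),(LP01)),PNETID=PNET1, *
TYPE=ISM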
Workloads can be logically isolated on separate ISM VCHIDs. Alternatively, workloads can be
isolated by using VLANs. The ISM VLAN definitions are inherited from the associated IP
network (OSA or HiperSockets).
Configuration considerations
The IOCDS (HCD) definitions for ISM PCI VFs are not directly related to the software
(SMC-D) use of ISM (that is, the z/OS TCP/IP and SMC-D implementation and usage are not
directly related to the I/O definition).
The user defines a list of ISM FIDs (VFs) in IOCDS (HCD), and z/OS dynamically selects an
eligible FID based on the required PNet ID. FIDs or VFs are not defined in
Communications Server for z/OS TCP/IP. Instead, z/OS selects an available FID for a specific
PNET. Access to more VLANs does not require configuring extra VFs.
Note: Consider over-provisioning the I/O definitions; for example, consider defining eight
FIDs instead of five.
For native PCI devices, FIDs must be defined. Each FID in turn also defines a corresponding
VF. In terms of operating system administration tasks, the administrator typically references
FIDs. VFs (and VF numbers) often are transparent.
Note: ISM FIDs must be associated with HiperSockets or with an OSA adapter by using a
PNet ID. They cannot be associated with both.
The required APARs per z/OS subsystem are listed in Table D-2.
The key difference from the SMCR parameter is that ISM PFIDs are not defined in TCP/IP.
Rather, ISM FIDs are discovered automatically based on a matching PNETID that is associated
with the OSD or HiperSockets interface. An extract from z/OS Communications Server: IP
Configuration Reference is shown in Figure D-19.
The extract documents the NOSMCD | SMCD parameter of the GLOBALCONFIG statement: NOSMCD is the default, and SMCD accepts the optional subparameters FIXEDMemory mem_size (default 256) and TCPKEEPmininterval interval (default 300).
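A minimal sketch of a matching TCP/IP profile statement, assuming a hypothetical fixed-memory limit of 512 MB and the default keepalive interval:
GLOBALCONFIG SMCD FIXEDMEMORY 512 TCPKEEPMININTERVAL 300
The statement can be placed in the initial profile or activated dynamically with the VARY TCPIP,,OBEYFILE command.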
IOCP required level: The required level of IOCP for the z14 server is V5 R4 L1 or later
with PTFs. Defining ISM devices on machines other than the z14, z13, or z13s servers is not
possible. For more information, see the following publications:
IBM Z Stand-Alone Input/Output Configuration Program User's Guide, SB10-7166
IBM Z Input/Output Configuration Program User's Guide for ICP IOCP, SB10-7163
More information
For more information about a configuration example for SMC-D, see IBM z/OS V2R2
Communications Server TCP/IP Implementation - Volume 1, SG24-8360.
The DPM implementation provides built-in integrated capabilities that allow advanced virtualization
management on IBM Z servers. With DPM, customers can use their Linux and virtualization
skills while getting the full value of IBM Z hardware’s robustness and security in a workload
optimized environment.
DPM provides facilities to define and run virtualized computing systems by using a
firmware-managed environment that coordinates the physical system resources that are
shared by the partitions1. The partitions’ resources include processors, memory, network,
storage, Crypto, and Accelerators.
DPM provides a new mode of operation for IBM Z servers that provides the following benefits:
Facilitates defining, configuring, and operating partitions, similar to the way these tasks
are performed on other platforms.
Lays the foundation for a general IBM Z new user experience.
DPM is not an extra hypervisor for IBM Z servers. DPM uses the PR/SM hypervisor
infrastructure and provides an intelligent interface that allows customers to define, use, and
operate the platform virtualization with little or no IBM Z experience.
Note: When IBM z14 servers are set to run in DPM mode, the following components are
supported:
Linux virtual servers
KVM hypervisora for Linux guests
z/VM with Linux guests
Virtual appliances running in Secure Service Container (SSC)
a. Available with Linux distributions.
DPM is of special value for customer segments with the following characteristics:
New IBM Z, or Linux adopters, or distributed-driven:
– Likely not z/VM users
– Looking for integration into their distributed business models
– Want to ease migration of distributed environments to IBM Z servers and improve
centralized management
1 DPM uses the term “partition”, which is the same as logical partition (LPAR).
Virtualization requires a hypervisor, which manages resources that are required for multiple
independent virtual machines. The IBM Z hardware hypervisor is known as IBM Processor
Resource/Systems Manager (PR/SM). PR/SM is implemented in firmware as part of the base
system. It fully virtualizes the system resources, and does not require extra software to run.
PR/SM allows the defining and managing of subsets of the IBM Z resources in LPARs. The
LPAR definitions include several logical processing units (LPUs), memory, and I/O resources.
LPARs can be added, modified, activated, or deactivated in IBM Z platforms by using the
traditional Hardware Management Console (HMC) interface.
DPM uses all its capabilities as the foundation for the new user experience. In addition to
these capabilities, DPM provides an HMC user interface that allows customers to define,
implement, and run Linux partitions without requiring deep knowledge of the underlying IBM Z
infrastructure management; for example, input/output configuration program (IOCP) or
hardware configuration definition (HCD).
The firmware partition (similar to the PCIe support partitions, also known as the master
control services [MCS] partition), along with the Support Element (SE), provides services to
create and manage the Linux native partitions, or partitions that are running kernel-based
virtual machine (KVM) code. The connectivity from the SE to the MCS is provided through the
internal management network by two OSA-Express 1000BASE-T adapters that act as OSA
Management adapters.
This implementation integrates platform I/O resource management and dynamic resource
management.
Note: DPM is a feature code (FC 0016) that can be selected during the machine order
process. After it is selected, a pair of OSA Express 1000BASE-T adapters must be
included in the configuration.
After the option is selected, a new window opens (see Figure E-3) in which you enter the two
OSA Express 1000BASE-T ports that are selected and cabled to the System Control Hubs
(SCHs) during the Z server installation.
Figure E-3 Entering the OSA ports that are used by the management network
Note: During the machine installation process, the IBM SSR connects the two
OSA-Express4/5/6S 1000BASE-T cables to the ports provided on the SCHs.
The DPM mode welcome window is shown in Figure E-4. The three options at the bottom
(Getting Started, Guides, and Learn More) include mouse-over functions that briefly describe
their meaning or provide more functions.
The HMC can monitor and control up to 32 IBM Z CPCs. The monitored and controlled CPCs
must be defined to the HMC by using the Object Definition task and adding the CPC object.
The welcome window that is shown in Figure E-4 opens only when at least one HMC defined
CPC is active in DPM mode. Otherwise, the traditional HMC window is presented when you
log on to the HMC.
Figure E-5 Traditional HMC Welcome window when no defined CPCs are running in DPM mode
The three options that are presented to the user in the HMC welcome page when at least one
CPC is running in DPM mode are shown in Figure E-6.
Figure E-6 User Options when the HMC presents the DPM welcome window
The first option on the left of the window that is shown on Figure E-6 on page 507 is Getting
Started. This option starts the DPM wizard application on the HMC, which allows users to
define their partitions and associate processor and memory resources, network and storage
I/O, crypto adapters, and accelerators to them.
From the Getting Started with DPM window, users can select the Partition option, which opens
the Create Partition wizard. The Create Partition wizard can also be accessed by clicking Next at
the bottom of the Getting Started with DPM window.
On the left banner (see Figure E-7 on page 509), the following HMC create partition wizard
steps are available to define and activate a partition:
Welcome: Initial window that contains basic information about the process.
Name: This window is used to provide name and description for the partition being
created.
Processors: The partition’s processing resources are defined in this window.
Memory: This window is used to define the partition’s initial and maximum memory.
Network: The window in which users define the partition’s network interface (NIC) resources.
Storage: The Storage Groups are used to manage FC2 (CKD) and FCP (SCSI) storage.
Accelerators: Partition resources, such as zEDC, can be added in this window.
Cryptos: Wizard window where users define their cryptographic resources.
Boot: In this window, users define the partition’s OS and its source. The following
options are available as the source for loading an OS:
– FTP Server
– Storage Device (SAN)
– Network Server (PXE)
– Hardware Management Console removable media
– ISO image
Summary: This window provides a view of all defined partition resources.
The final step after the partition creation process is to start it. After the partition is started
(Status: Active), the user can start the messages or the Integrated ASCII console interface to
operate it.
2 FC (FICON) storage management requires HMC 2.14.1 or later (which includes DPM 3.2) and z14 hardware
(Driver level 36 or later). For z13/z13s, DPM supports FCP (SCSI) storage only.
Another important facility that is provided by DPM is Monitor System. This option
allows users to monitor and manage their DPM environment. The following monitoring and
management capabilities are available:
Partition overall performance, shown in usage percentages, including:
– Processors
– Storage utilization
– Network adapters
– Storage adapters
– Cryptos
– Accelerators
– Power consumption (in kW)
– Environmentals (ambient temperature, in Fahrenheit)
Adapters that exceed a user predefined threshold value
Overall port utilization in the last 36 hours
Utilization details are available by selecting one of the performance indicators
Manage Adapters Task
The new mode, DPM, provides partition lifecycle and dynamic I/O management capabilities
by using the HMC for the following tasks:
Create and provision: Creating partitions, assigning processors and memory, configuring
I/O Adapters (Network, FCP Storage, Crypto, and Accelerators).
Manage the environment: Modification of system resources without disrupting running
workloads.
Monitor and troubleshoot the environment: Source identification of system failures,
conditions, states, or events that might lead to workload degradation.
A CPC can be in DPM mode or standard PR/SM mode. The mode is enabled before the CPC
power-on reset.
DPM mode requires two OSA-Express 1000BASE-T Ethernet features for primary and
backup connectivity (OSA-Express4/5/6S 1000BASE-T Ethernet), along with associated
cabling (hardware for DPM FC 0016).
zEDC Express, which is an optional feature that is available for z14, z13, z13s, zEC12, and zBC12
servers, addresses these requirements by providing hardware-based acceleration for data
compression and decompression. zEDC provides data compression with lower CPU
consumption than the compression technology that was previously available on IBM Z servers.
The use of the zEDC Express feature with the z/OS V2R1 zEnterprise Data Compression
acceleration capability (or later releases) is designed to deliver an integrated solution. It helps
reduce CPU consumption, optimize the performance of compression-related tasks, and
enable more efficient use of storage resources. This solution provides a lower cost of
computing and also helps to optimize the cross-platform exchange of data.
The feature installs exclusively on the Peripheral Component Interconnect Express (PCIe) I/O
drawer. A total of 1 - 8 features can be installed on the system. One PCIe adapter or
compression coprocessor is available per feature, which implements compression as defined
by RFC1951 (DEFLATE).
A zEDC Express feature can be shared by up to 15 logical partitions (LPARs) on the same
CPC.
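For illustration only, such sharing might be defined with FUNCTION statements that are similar to the RoCE examples shown earlier in this book. The FIDs, VFs, PCHID, and LPAR names are hypothetical, and the TYPE keyword value for zEDC is an assumption that should be verified against the IOCP User's Guide:
FUNCTION FID=30,VF=01,PART=((LP01),(LP02,LP03)),TYPE=ZEDC,PCHID=1B8
FUNCTION FID=31,VF=02,PART=((LP02),(LP01,LP03)),TYPE=ZEDC,PCHID=1B8
Each LPAR that uses the adapter gets its own FID and VF on the same PCHID.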
Adapter support for zEDC is provided by Resource Group (RG) code that runs on the
system-integrated firmware processor (IFP). The recommended high availability configuration
per server is four features. This configuration provides continuous availability during
concurrent update.
For resilience, the z14 system always includes four independent RGs that share the IFP.
Install a minimum of two zEDC features for resilience and throughput.
Figure F-1 Relationships among PCIe I/O drawer card slots, I/O domains, and resource groups
Software decompression is slow and can use considerable processor resources. Therefore, it
is not suggested for production environments.
A specific fix category that is named IBM.Function.zEDC identifies the fixes that enable or use
the zEDC function.
Reference: z/OS support for the zEDC can be found by using FIXCAT:
IBM.Function.zEDC.
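For example (a sketch; the target zone name ZOS21T is a placeholder), the following SMP/E commands report PTFs that are missing for this fix category:
SET BOUNDARY(GLOBAL).
REPORT MISSINGFIX ZONES(ZOS21T) FIXCAT(IBM.Function.zEDC).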
z/OS guests that run under z/VM V6.3 with PTFs and later can use the zEDC Express
feature. zEDC for z/OS V2.1 or later and the zEDC Express feature are designed to support a
data compression function to help provide high-performance, low-latency compression
without significant CPU processor usage. This feature can help to reduce disk usage, provide
optimized cross-platform exchange of data, and provide higher write rates for SMF data.
For more information, see the Additional Enhancements to z/VM 6.3 page of the IBM
Systems website.
For more information about how to implement and use the zEDC feature, see Reduce
Storage Occupancy and Increase Operations Efficiency with IBM zEnterprise Data
Compression, SG24-8259.
The IBM Z Batch Network Analyzer (zBNA) tool replaces the BWATOOL. It is a Microsoft Windows
based tool that provides graphical and text reports, including Gantt charts, and supports
alternative processors.
zBNA can be used to analyze client-provided System Management Facilities (SMF) records
to identify jobs and data sets that are candidates for zEDC compression across a specified
time window (often a batch window).
Therefore, zBNA can help you estimate the use of zEDC features and help determine the
number of features needed. The following resources are available:
IBM Employees can obtain zBNA and other CPS tools at the IBM Z Batch Network
Analyzer (zBNA) Tool page of the IBM Techdocs website.
IBM Business Partners can obtain zBNA and other CPS tools at the IBM PartnerWorld
website (log in required).
IBM clients can obtain zBNA and other CPS tools at the IBM Z Batch Network Analyzer
(zBNA) Tool page of the IBM Techdocs Library website.
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Note that some publications that are referenced in this list might be available in
softcopy only:
IBM z14 Technical Introduction, SG24-8450
IBM Z Connectivity Handbook, SG24-5444
IBM Z Functional Matrix, REDP-5157
IBM z14 Configuration Setup, SG24-8460
z Systems Simultaneous Multithreading Revolution, REDP-5144
z/OS Infrastructure Optimization using Large Memory, REDP-5146
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft, and other materials at the following website:
ibm.com/redbooks
Other publications
The following publications are also relevant as further information sources:
Capacity on Demand User's Guide, SC28-6985
Installation Manual for Physical Planning, GC28-6965
PR/SM Planning Guide, SB10-7169
IOCP Users Guide, SB10-7163
Online resources
The following websites are also relevant as further information sources:
IBM Resource Link:
https://www.ibm.com/servers/resourcelink/hom03010.nsf?OpenDatabase&login
IBM Offering Information:
http://www.ibm.com/common/ssi/index.wss?request_locale=en
SG24-8451-01
ISBN 0738457256
Printed in U.S.A.
ibm.com/redbooks