Deploying SAP Software in Red Hat OpenShift On IBM Power Systems
Dino Quintero
Anastasiia Biliak
Christoph Gremminger
Thorsten Hesemeyer
Sabine Jaeschke
Sahitya K Jain
Jochen Röhrig
Andreas Schauberer
Redpaper
IBM Redbooks
April 2021
REDP-5619-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
Contents

Notices  vii
Trademarks  viii
Preface  ix
Authors  ix
Now you can become a published author, too!  xi
Comments welcome  xi
Stay connected to IBM Redbooks  xii
Chapter 1. Introduction  1
1.1 Introduction  2
1.2 Use cases and value proposition  3
1.3 Solution design overview  4
1.4 Functional restrictions  5
1.5 Paper overview  5
Chapter 3. Automated installation of SAP S/4HANA and SAP HANA on IBM Power Systems with Red Hat Ansible
Related publications  73
IBM Redbooks  73
Online resources  73
Help from IBM  74
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX®, Db2®, IBM®, IBM Garage™, IBM Watson®, IBM Z®, POWER®, POWER8®, POWER9™, PowerVM®, Redbooks®, Redbooks (logo)®
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Ansible, OpenShift, Red Hat, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redpaper publication documents how to containerize and deploy SAP software
into Red Hat OpenShift 4 Kubernetes clusters on IBM Power Systems by using predefined
Red Hat Ansible scripts, different configurations, and theoretical knowledge, and it documents
the findings through sample scenarios.
The target audiences for this paper are Chief Information Officers (CIOs) that are interested in
containerized solutions of SAP Enterprise Resource Planning (ERP) systems, developers
that need containerized environments, and system administrators that provide and manage
the infrastructure with underpinning automation.
This paper complements the documentation that is available at IBM Knowledge Center, and it
aligns with the educational materials that are provided by IBM Garage™ for
Systems Education.
Authors
This paper was produced in close collaboration with the IBM SAP International Competence
Center (ISICC) in Walldorf, SAP Headquarters in Germany, and IBM Redbooks®.
Christoph Gremminger is a Project Manager for the SAP on Power Systems Development
Team in St. Leon-Rot, Germany. He has 23 years of experience with IBM, and uses
cross-functional knowledge from various job roles to run successful projects.
Thorsten Hesemeyer is an IT Specialist working for Technical Field Enablement for SAP on
Power Systems in St. Leon-Rot, Germany. He is an LPIC-3 certified Linux expert with 30
years of onsite customer experience. His main areas of expertise are data center migrations,
server virtualization, and container orchestration with Red Hat products for many IBM
customers. Thorsten holds a Diploma in Physics degree from Ruhr-University Bochum.
Sabine Jaeschke is a software developer for SAP on IBM Z® Development in Germany. She
has 15 years of experience in adjusting SAP Software Provisioning Manager for specific
IBM Db2® on Z customer needs. She has worked at IBM for more than 33 years. Her areas of
expertise include container image building, databases, and SAP systems. She has written
extensively on building and deploying container images.
Sahitya K. Jain is an Advisory Software Engineer who works for SAP platform support with
IBM System Labs. He has over 13 years of experience in working with Power Systems
servers. He has worked on functional verification testing for Virtual I/O Server (VIOS) and
IBM AIX®. In his current role, he supports Power Systems customers running SAP
applications, such as SAP NetWeaver or SAP HANA. Sahitya holds a Bachelor of
Engineering (Computer Science) degree from Visvesvaraya Technological University,
Belagavi, India.
Jochen Röhrig is a Senior Software Engineer with the joint IBM/SAP platform team for SAP
on Power Systems at SAP in Walldorf, Germany. Having worked on enabling SAP software on
traditional IBM systems in the past, he is focusing on emerging topics like running SAP
systems on Red Hat OpenShift, using IBM Watson® services in Advanced Business
Application Programming (ABAP), or connecting SAP systems to IBM Blockchain. Having
worked for IBM for 20+ years, Jochen has 20+ years of experience in Linux and 16+ years of
experience in SAP on IBM platforms. He holds a German and a French master's degree in
computer science, and a Ph.D. in computer science from Saarland University, Saarbrücken, Germany. He is a Red Hat Certified Engineer (RHCE, 2004) and holds
certificates LPIC-1 (2006) and LPIC-2 (2008) from the Linux Professional Institute. His areas
of expertise include emerging technologies like cloud computing, containerization, and AI and
blockchain, and traditional topics like software development, open source software, operating
systems, parallel computing, and SAP on IBM platforms.
Andreas Schauberer is a Senior Software Engineer working for the IBM Systems Lab in
Germany. He has 15 years of experience with the IBM POWER® platform, and with AIX and
Linux on Power Systems for SAP applications. In earlier years, he worked in different software
engineering roles on IBM high availability (HA) software for SAP applications. In his current
role, he leads the IBM development team that is responsible for SAP NetWeaver and
S/4HANA Foundation on the IBM PowerLinux platform. Andreas holds a German degree of
“Diplom Informatiker (FH)” from Fachhochschule Giessen.
Thanks to the following people for their contributions to this project:

Wade Wallace
IBM Redbooks, Austin Center
Wolfgang Reichert, IBM Distinguished Engineer, CTO for SAP on IBM Systems
IBM Germany
Chongshi Zhang, Software Engineer, Red Hat OpenShift on IBM Power Systems
IBM Austin
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. Introduction
This chapter provides an overview of the scope of this paper.
The first edition of this paper focused on functions and was targeted at test and
non-production use only. The solution uses dedicated software product versions, basic
configuration options such as an SAP Standard System (a Primary Application Server (PAS), Advanced Business Application Programming (ABAP) SAP Central Services (ASCS), and an SAP HANA database), related system resources, NFS storage attachment, and a
Red Hat OpenShift cluster minimum configuration. Non-functional characteristics such as
high availability (HA), vertical and horizontal scaling, and using alternative storage concepts
can be the scope of future extensions.
This paper explains concepts, all the components that are used (Figure 1-1), and the
structure of the solution. The paper provides usage guidance for the accompanying open
source automation scripts.
Figure 1-1 Solution components: an SAP reference system (S/4HANA, SAP HANA, or SAP NetWeaver on-premises edition) on non-production Power Systems LPARs, with container build and deployment through GitHub and Red Hat Ansible Tower / Ansible Engine
Note: In its current state, SAP on IBM Power Systems with Red Hat OpenShift is a feasibility study, and it targets test and other non-production landscapes. The created deliverables are not supported by SAP, and there is no agreed-to road map for official support in their current state (for more information, see SAP Note 1122387 - Linux: SAP Support in virtualized environments).
The solution provides the following use cases and value proposition:
- Explore and run an SAP standard configuration that consists of SAP HANA, S/4HANA, or SAP NetWeaver on-premises editions for container deployment.
- Shift and migrate an on-premises SAP standard configuration to Red Hat OpenShift Container Platform automatically within the IBM Power platform.
- Rapid provisioning of SAP HANA, S/4HANA, or SAP NetWeaver test and non-production container instances.
- GUI and command-line interface (CLI) automation options that allow for end-to-end automation and individual step executions.
- Co-existence with SAP production systems, for example, on IBM Power Systems logical partitions (LPARs).
1.3 Solution design overview
Understanding the solution design requires you to learn about various aspects to accomplish
optimal concept mappings from an on-premises instance to a container instance, such as
inter-communication and operations. The design has the following characteristics:
SAP system mapping into a container image (Service Distribution):
– Two types of containers are used: one for the SAP HANA database, and one that is composed of the ASCS and the PAS (the dialog instance (DI)). Depending on the start parameters, the ASCS or the PAS is instantiated at run time.
– Persistent data is stored in a centrally accessible NFS share, which is outside of your
Red Hat OpenShift cluster.
Red Hat OpenShift feature mappings (Service Operation - lifecycle management):
– GitHub, Build Server, and Red Hat Ansible Tower are infrastructure services that you
use to automatically create and deploy the container images to Red Hat OpenShift
Image Registry.
– Container instances are created from Red Hat OpenShift Image Registry. To keep this example simple, we use the all-in-one runtime approach, which means that all container instances belonging to one SAP system are started automatically in a single Kubernetes pod.
– Stopping and restarting container instances is managed by Red Hat OpenShift
standard features.
Component interaction model at run time (Service Interaction):
– Inter-container instance communication and a Container-NFS share data exchange
are based on TCP/IP.
– User access from the outside world is provided by SSH forwarding. The SAP GUI uses
the helper node to access the application server in the PAS container.
Figure 1-2 Runtime interaction: the ASCS container (message server and enqueue server), the DI container (application server), and the SAP HANA container run in one Kubernetes pod with an NFS share for data; SAP GUI users reach the PAS through SSH forwarding; Red Hat Ansible Tower / Ansible Engine provides the infrastructure services
The following chapters in this paper reflect the logical flow of deploying SAP software in Red Hat OpenShift on IBM Power Systems. The starting point is the infrastructure setup guidance for a Red Hat OpenShift cluster, which is followed by converting the on-premises SAP reference system into a containerized solution that is then deployed and operated on the established Red Hat OpenShift environment.
When major changes are required, a revised edition of this IBM Redpaper publication
might be published. However, you should check official resources (release notes, online
documentation, and so on) for any changes to what is presented in this paper.
2.2.1 Software
Red Hat OpenShift Container Platform V4 is used for the SAP workload that is described in
this paper. Quality assurance is performed with Red Hat OpenShift Container Platform
V4.5.18. The Kubernetes release in Red Hat OpenShift is V1.18.3.
Red Hat OpenShift V4 includes Red Hat Enterprise Linux CoreOS, which offers a fully immutable, lightweight, and container-optimized Linux operating system distribution. Only
Red Hat Enterprise Linux CoreOS can be used on IBM Power Systems for all master and
worker logical partitions (LPARs).
2.2.2 Hardware
Only IBM Power Systems with a PowerVM hypervisor and Little Endian support can be used
for the SAP workload that is described in this paper. All IBM POWER8® and IBM POWER9™
processor-based scale-out and Enterprise models can be used.
The NFS share sizing for the helper node is based on the planned SAP HANA deployments,
as shown in Figure 2-1.
- tns: The total number of SAP systems for which images will be created (that is, whose SAP HANA data will be stored on the NFS server).
- hs_i: The SAP HANA size of SAP system i at the time of image creation.
- enc_i: The expected maximum number of simultaneously running container instances of SAP system i.
- ews_i: The expected total write size for one container instance of SAP system i during the container lifetime.
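The sizing formula itself is in Figure 2-1, which is not reproduced here. A plausible reading of these variables, stated purely as an assumption, is that the share must hold each image-time SAP HANA copy plus the expected writes of every concurrently running instance:

\[ \text{NFS}_{\text{size}} \;\gtrsim\; \sum_{i=1}^{tns} \left( hs_i + enc_i \cdot ews_i \right) \]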
Note: The installer needs this pull-secret file for the installation.
Your cluster is automatically registered with a 60-day evaluation subscription that does not
include support. To receive support for your cluster, you must edit the subscription settings in
the Cluster Details page in the Red Hat OpenShift Cluster Manager.
The playbook that is described in this section sets up a helper node that has all the
infrastructure and services to install Red Hat OpenShift V4. This playbook also installs a Red
Hat OpenShift V4 cluster with three master nodes and two worker nodes. After you run the
playbook, you are ready to log in to the Red Hat OpenShift cluster.
You can delegate the DNS to the ocp4-helpernode if you do not want to use it as your main
DNS server. You must delegate $CLUSTERID.$DOMAIN to this helper node.
For example, if you want a $CLUSTERID of ocp4, and you have a $DOMAIN of example.com, then
you delegate ocp4.example.com to this ocp4-helpernode.
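For illustration, the delegation in the parent zone on your existing DNS server could look like the following BIND-style records (the names follow the example above; the IP address is a placeholder):

; in the example.com zone on the existing DNS server
ocp4              IN NS  ocp4-helpernode.example.com.
ocp4-helpernode   IN A   192.0.2.10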
Bootstrap
Complete the following steps:
1. Create one bootstrap LPAR with the following configuration parameters:
– Two vCPUs (desired_procs)
– 32 GB of RAM (desired_mem)
– 120 GB HD (operating system)
$ mksyscfg -r lpar -m <managed_system> -i name=ocp4-bootstrap,
profile_name=default_profile, lpar_env=aixlinux, shared_proc_pool_util_auth=1,
min_mem=8192, desired_mem=32768, max_mem=32768, proc_mode=shared,
min_proc_units=0.2, desired_proc_units=0.2, max_proc_units=4.0, min_procs=1,
desired_procs=2, max_procs=4, sharing_mode=uncap, uncap_weight=128,
max_virtual_slots=64, boot_mode=norm, conn_monitoring=1
2. Attach the LPAR to the appropriate network and add storage (use the HMC GUI or the
HMC chsyscfg command) after successfully creating the LPAR.
3. Go to Red Hat Enterprise Linux V8 and follow the instructions there to install Red Hat Enterprise Linux V8 into the PowerVM LPAR.
The operating system is replaced later by the Red Hat OpenShift installer with Red Hat Enterprise Linux CoreOS.
Master LPARs
Complete the following steps:
1. Create three master LPARs with the following configuration parameters:
– Two vCPUs (desired_procs)
– 32 GB of RAM (desired_mem)
– 120 GB HD (operating system)
The operating systems are replaced later by the Red Hat OpenShift installer with Red Hat Enterprise Linux CoreOS.
Worker LPARs
Complete the following steps:
1. Create two worker LPARs with the following configuration parameters:
– 4 vCPUs (desired_procs), more depending on the workload
– 256 GB of RAM (desired_mem), more depending on the workload
– 500 GB HD (OS), more depending on the workload
$ for i in worker{0..1}
do
mksyscfg -r lpar -m <managed_system> -i name="ocp4-${i}",
profile_name=default_profile, lpar_env=aixlinux, shared_proc_pool_util_auth=1,
min_mem=16384, desired_mem=262144, max_mem=262144, proc_mode=shared,
min_proc_units=0.2, desired_proc_units=0.8, max_proc_units=4.0, min_procs=1,
desired_procs=4, max_procs=16, sharing_mode=uncap, uncap_weight=128,
max_virtual_slots=64, boot_mode=norm, conn_monitoring=1
done
2. Attach the LPARs to the appropriate network and add storage (use the HMC GUI or the HMC chsyscfg command) after successfully creating the LPARs.
3. Go to Red Hat Enterprise Linux V8 and follow the instructions there to install Red Hat Enterprise Linux V8 into the PowerVM LPAR.
The operating systems are replaced later by the Red Hat OpenShift installer with Red Hat Enterprise Linux CoreOS.
2.5.3 Obtaining the MAC address of the LPAR from the HMC
To obtain the MAC address, run the following command:
$ for i in <managed_systems>
do
lshwres -m $i -r virtualio --rsubtype eth --level lpar -F lpar_name,mac_addr
done
2.5.8 Authorizing password-less SSH for the helper node user on the HMC
Complete the following steps:
1. Log in to the HMC as <hmc_user>.
2. Authorize password-less SSH by running the mkauthkeys command and by using the
public SSH key from the root user of the helper node:
hmc_user@hmc_hostname:~> mkauthkeys -a "ssh-rsa
<secret_content_of_/root/.ssh/id_rsa.pub> <user@sample.com>"
2.5.9 Checking password-less SSH for the helper node user on the HMC
From the helper node, as the root user, check password-less access by running the following command:
$ ssh hmc_user@hmc_hostname lshwres -m <managed_system> -r virtualio --rsubtype
eth --level lpar -F lpar_name,mac_addr
ocp4-helper,664A9A48690B
ocp4-bootstrap,664A9EC9CE0B
ocp4-master0,664A91C9280B
ocp4-master1,664A927A570B
ocp4-master2,664A9838420B
ocp4-worker0,664A97C5BB0B
ocp4-worker1,664A949F5F0B
2.5.10 Downloading all playbooks for the Red Hat OpenShift installation
You can download the playbooks by running the following commands:
$ git clone https://github.com/ocp-power-automation/ocp4-upi-powervm-hmc.git
$ cd ocp4-upi-powervm-hmc/
$ git submodule update --init --recursive --remote
Attention: Update all <values> that are marked with less than and greater than characters
in the vars-powervm.yaml file, as shown in Example 2-1.
############################
# OCP4 helper node variables
# Docu: https://github.com/RedHatOfficial/ocp4-helpernode/blob/master/docs/vars-doc.md
# pvmcec: The physical machine where the LPAR (node) is running on
# pvmlpar: The LPAR (node) name in HMC
### Note: pvmcec and pvmlpar are required for all cluster nodes that are defined in this yaml file
disk: sda
helper:
  name: "<ocp4-helper_hostname>"
  ipaddr: "<helper_ip>"
dns:
  domain: "<sample.com>"
  clusterid: "ocp4"
  forwarder1: "<existing_dns_1_ip>"
  forwarder2: "<existing_dns_2_ip>"
dhcp:
  router: "<router_ip_c_net>.1"
  bcast: "<router_ip_c_net>.255"
  netmask: "255.255.255.0"
  poolstart: "<helper_ip>"
  poolend: "<worker2_ip>"
  ipid: "<router_ip_c_net>.0"
  netmaskid: "255.255.255.0"
bootstrap:
  name: "<ocp4-bootstrap_hostname>"
  ipaddr: "<bootstrap_ip>"
  macaddr: "<66:4a:9e:c9:ce:0b>"
  pvmcec: <managed_system>
  pvmlpar: ocp4-bootstrap
masters:
  - name: "<ocp4-master0_hostname>"
    ipaddr: "<master0_ip>"
    macaddr: "<66:4a:91:c9:28:0b>"
###########################
# OCP 4 release to install
# Before changing check if new download location exists:
# https://mirror.openshift.com/pub/openshift-v4/ppc64le/dependencies/rhcos/{{ ocp_release }}/latest/
# https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable-{{ ocp_release }}/
ocp_release: 4.6
##########################################################
# The variables below should be changed only if needed.
##########################################################
ssh_gen_key: false
ppc64le: true
setup_registry:
  deploy: false
  autosync_registry: true
  registry_image: docker.io/ppc64le/registry:2
  local_repo: "ocp4/openshift4"
  product_repo: "openshift-release-dev"
  release_name: "ocp-release"
  release_tag: "4.3.27-ppc64le"
chronyconfig:
  enabled: false
###############################
# URL path to OCP download site
ocp_base_url: "https://mirror.openshift.com/pub/openshift-v4/ppc64le"
##########################################################
# Variables used by ocp4-playbook
# Docu: https://github.com/ocp-power-automation/ocp4-playbooks
# pull_secret: pull-secret file for access OpenShift repo
# public_ssh_key: the public key for ssh to access the cluster nodes from helper
##########################################################
install_config:
  cluster_domain: "{{ dns.domain }}"
  cluster_id: "{{ dns.clusterid }}"
  pull_secret: '{{ lookup("file", "~/.openshift/pull-secret") | from_json | to_json }}'
  public_ssh_key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
#####################################################
# Set up the proxy server on helper node if set to true
setup_squid_proxy: false
#################################
# using a predefined proxy server
#proxy_url: "http://192.168.79.2:3128"
#no_proxy: "127.0.0.1,localhost,192.168.0.0/16"
proxy_url: ""
no_proxy: ""
#ocp_haproxy_vip: 9.47.89.173
ocp_haproxy_vip: ""
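With all <values> updated, start the Red Hat OpenShift installation by running the downloaded playbooks from the cloned repository. The exact playbook file name depends on the repository version, so treat the following invocation only as a sketch:

$ cd ocp4-upi-powervm-hmc
$ ansible-playbook -e @vars-powervm.yaml <install-playbook>.yml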
You see that the bootstrap LPAR turns green, then the masters turn green, and then the bootstrap turns red. Next, all workers turn green.
You can also check all the cluster node LPAR statuses by going to the HMC partition list view.
The Identity Provider HTPasswd can be used for the first tests with Red Hat OpenShift by
completing the following steps:
1. Create the users.htpasswd file on the helper node and add multiple users by running the
following commands:
– htpasswd -c -B -b users.htpasswd <userid1> <init_passwd>
– htpasswd -B -b users.htpasswd <userid2> <init_passwd>
– htpasswd -B -b users.htpasswd <userid3> <init_passwd>
3. Click Add and select HTPasswd for your first tests with Red Hat OpenShift, as shown in
Figure 2-4.
Figure 2-4 Red Hat OpenShift Container Platform Console: Identity Providers window
4. In the Add Identity Provider: HTPasswd window, click Browse to select the
users.htpasswd file from the host where you started the browser. Then, click Add to
activate the users.htpasswd file in the Red Hat OpenShift cluster, as shown in Figure 2-5.
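Alternatively, the same file can be activated from the CLI. The following sketch uses the standard Red Hat OpenShift procedure rather than steps from this paper; the secret and provider names are illustrative:

$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config

Then, reference the secret from the cluster OAuth resource:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret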
The commands work only if all nodes in the cluster have the status UPDATED=True and
UPDATING=False, as shown in Example 2-2.
The command triggers a restart of all worker nodes in sequence (one node after the
other). The command finishes when all worker nodes in the cluster have the status Ready,
as shown in Example 2-4.
2. Check the SELinux setting by running the following command for all worker nodes:
[root@ocp4-<helper_hostname> ~]# ssh core@<worker_hostname> "getenforce"
The output is:
Disabled
The commands work only if all nodes in the cluster have the status UPDATED=True and
UPDATING=False, as shown in Example 2-5.
The command triggers a restart of all worker nodes. This is done sequentially (one node
after the other). The command finishes when all worker nodes in the cluster have the
status Ready, as shown in Example 2-7.
2. Check the pids_limit parameter by running the following command for all worker nodes:
[root@ocp4-<helper_hostname> ~]# ssh core@<worker_hostname> "crio config 2>/dev/null | grep 'pids_limit'"
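The machine configuration that raises this limit is shown in examples that are not reproduced in this excerpt. On Red Hat OpenShift V4, such a change is typically applied with a ContainerRuntimeConfig object similar to the following sketch (the object name and limit value are illustrative):

apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: set-pids-limit
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    pidsLimit: 8192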
2.6.4 Setting up an NFS server for database data and logs on the helper node
After the Red Hat OpenShift cluster is running, verify the NFS server that is configured on the helper node, as shown in Example 2-8.
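Example 2-8 is not reproduced here. A quick way to confirm the export with standard NFS tooling (not a command from this paper) is:

# exportfs -v
# showmount -e localhost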
The disk space that is available for the NFS server to export files can be checked by running
the following command:
# df -h /export
If needed, you can increase the logical volume for rootvg-root or assign another data disk to
the LPAR and mount it to /export. For more information, see Chapter 29, “Exporting NFS
shares”, of the Red Hat Enterprise Linux 8 System Design Guide.
If you run out of disk space on a worker node, see Running out of space under
/var/lib/containers/storage.
Red Hat Ansible Tower is a web-based console that makes Red Hat Ansible adaptable for IT
teams. Red Hat Ansible Tower helps IT teams to scale automation, roll out updates, build
configurations, organize inventory management, and schedule jobs. Red Hat Ansible Tower
comes with a web interface and a REST API that can be embedded into other IT processes
and tools. The Red Hat Ansible Tower web-based user interface (UI) provides an overview
dashboard of all job exit statuses, successful and failed playbook runs, statuses of host
inventories, role-based access control (RBAC), and a permission system for playbooks. For
more information, see Red Hat Ansible Tower.
You can use either one, depending on your skill set and purpose. To install SAP HANA, SAP S/4HANA, and SAP NetWeaver, you must install prerequisites that are specific to the Red Hat Enterprise Linux operating system on target systems such as IBM Power Systems, and then run the SAP installer with the SAP product packages. These tasks are automated with the Red Hat Ansible CLI or Red Hat Ansible Tower.
Figure 3-1 Red Hat Ansible CLI versus Red Hat Ansible Tower
For more information about how to use Red Hat Ansible automation to deploy SAP Solutions
on other hardware architectures, see Automating your SAP HANA and S/4HANA by SAP
deployments using Ansible.
Red Hat Ansible Tower is a web-based GUI solution to automate the installation of SAP
S/4HANA and SAP HANA on IBM Power Systems. Red Hat Ansible Tower offers a graphical
dashboard and a navigation menu to show your host status and configuration, go to your job
runs and templates, show your projects, and more. Therefore, by using the Red Hat Ansible
Tower visual UI, you can create a job template to automate the complete SAP
software installation.
Red Hat Ansible runs commands by using SSH, so you do not need to install extra agent software on the hosts that it manages.
Table 3-1 shows some of the features of the Red Hat Ansible CLI and Red Hat Ansible Tower.
Table 3-1 Red Hat Ansible and Red Hat Ansible Tower features

Free for commercial use:
- Red Hat Ansible CLI: Yes, under the policies of the General Public License.
- Red Hat Ansible Tower: Yes, with a trial license.

Usability:
- Red Hat Ansible CLI: Has a CLI for those users who are familiar with using CLI tools.
- Red Hat Ansible Tower: Has a web-based GUI that is easy to use and browse.
The chosen SAP instance numbers and SAPSIDs are examples only, and they may be customized by using variables.
The following values are used throughout this configuration and can be adapted to match your system characteristics:
- LPAR hostname: <yourhostname>
- Directory for installation files: /data/installer
- Password: XXpasswd1
Before you work with Red Hat Ansible, you must check that all machines and hosts are
configured correctly.
Hint: If your root file system space is limited and a single large file system is mounted for
the SAP application, you must link the various locations where the SAP application is
stored to the single, large volume.
For example, if the large volume is mounted at /data, then create symbolic links from these
directories to the new volume before starting the installation:
$ ls -ld /sapmnt /hana /usr/sap
lrwxrwxrwx 1 root root 10 May 6 14:31 /hana -> /data/hana
lrwxrwxrwx 1 root root 12 May 6 14:31 /sapmnt -> /data/sapmnt
lrwxrwxrwx 1 root root 13 May 6 14:30 /usr/sap -> /data/usr_sap
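The commands that produce this layout are not shown in the hint; a sketch that matches the listing above is:

$ mkdir -p /data/hana /data/sapmnt /data/usr_sap
$ ln -s /data/hana /hana
$ ln -s /data/sapmnt /sapmnt
$ ln -s /data/usr_sap /usr/sap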
3.4.1 Repeating a playbook and uninstalling SAP
If there are typographical errors or other errors, a playbook run can fail.
The current set of playbooks does not use the resume option of the SAP installer. To start
again after a failed attempt, complete the following steps:
1. Correct the errors or typographical errors in the variables and playbooks.
2. Uninstall all SAP instances that you intended to install.
3. Check that no SAP processes that contain your SAPSID are running on your target
system (especially sapstartsrv processes).
You do not need to remove SAP-related UNIX user or group accounts of an SAP instance
because they can be reused without errors. Clean up your system so that the playbooks can
start a full SAP installation from scratch.
If you have a different platform than Red Hat Enterprise Linux, see Installing Ansible.
To implement your configuration for installing SAP software, define a playbook. Red Hat
Ansible playbooks are configuration files that are written in YAML and contain all the
information about target system requirements, tasks, variables, and so on. If you have a large
system environment and you must automate many processes on multiple machines, divide
your configuration into different files. This bundle of files is defined as Ansible Roles. They are reusable components that can be included in any playbook. Ansible Roles can be shared through their own repository, which is called Ansible Galaxy.
For the example SAP software deployment, we use two sets of Ansible Roles:
Red Hat Enterprise Linux System Roles for SAP to configure the system settings and
install extra software according to the SAP Notes for Red Hat Enterprise Linux.
Community Roles for SAP to deploy the software that is needed to run an SAP S/4HANA
and SAP HANA database.
Ansible Galaxy CLI is used later to retrieve these two packages. Before you start writing
playbooks, create a working directory where the playbooks will be stored. For example,
Figure 3-2 on page 31 shows the project directory that stores the files and configuration files.
The community roles that are mentioned use two hosts: one host for SAP S/4HANA, and
another host for SAP HANA. In our setup, SAP S/4HANA and SAP HANA are installed on
one machine that is named <yourhostname>, but the structure is kept in case you want to divide the installation later.
The Ansible inventory file hosts in the INI format with joined groups hana and s4hana points to
the same hostname <yourhostname>, as shown in Example 3-1.
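Example 3-1 is not reproduced in this excerpt. Based on this description and on the sapservers group that is used in 3.5.3, a matching INI-format inventory might look like the following (the group nesting is an assumption):

[hana]
<yourhostname>

[s4hana]
<yourhostname>

[sapservers:children]
hana
s4hana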
To run a playbook, add the -i option and directory path to tell Red Hat Ansible where your
inventory file is. To test whether all defined hosts are accessible to Red Hat Ansible, try to
ping your machines by running the following command:
ansible all -i /path/to/your/inventory/file -m ping
This command displays a result for all host machines that are available for your SAP
installation. If a host is not accessible from a remote machine, check your host credentials,
SSH settings like SSH private key, and so on. If the SSH private key requires a passphrase,
you must specify it in your inventory file. To avoid this complexity, use an SSH private key
without the passphrase.
3.5.3 General installation definitions
SAP Host installation settings, SAP Domain, and other settings that are general to all hosts of
the group sapservers are stored in a group variable file that is named
group_vars/sapservers.yml.
SAP host agent software is installed on all hosts, so these SAP host agent settings are
defined in the group variable file:
SAP host agent installation type
SAP host agent paths and file names
The SAP installer sapinst needs a host entry in /etc/hosts to resolve your hostname. You can either create a manual entry in the order <ip> <fully qualified domain name> <short hostname> and set sap_preconfigure_modify_etc_hosts: false, or let Red Hat Ansible add the hostname entry by setting the variable sap_preconfigure_modify_etc_hosts: true and adding the host DNS domain in the variable sap_domain.
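For example, a manual entry in /etc/hosts in the required order would look like the following line (the IP address and domain are placeholders):

192.0.2.15   <yourhostname>.subdomain.example.com   <yourhostname>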
The group variable file sapservers.yml file in the group_vars directory is shown in
Example 3-2.
sap_hostagent_clean_tmp_directory: true
# Defined variables for sap_preconfigure role
sap_preconfigure_selinux_state: permissive
# If you need to modify your hostnames, set it to true
sap_preconfigure_modify_etc_hosts: false
# define the SAP domain name only if you set 'sap_preconfigure_modify_etc_hosts: true'
#sap_domain: "subdomain.enterprise-domain-name.com"
To keep this demonstration simple and start a working setup quickly, passwords are added to
the host variables without encryption.
Note: The file name for the SAPCAR tool cannot be SAPCAR because this file is also inside the
host agent SAR file, and an error occurs when the file already exists while extracting this
tool. Rename the SAPCAR tool to SAPCAR.EXE or SAPCAR_<version>.EXE and specify this
name as the value of the variable sap_hostagent_sapcar_file_name.
For more information, see the GitHub repository where the roles are implemented:
GitHub redhat-sap/sap-hostagent
GitHub linux-system-roles/sap-preconfigure
Red Hat Ansible password vault: In sensitive environments, passwords can be managed
by an encrypted Ansible-Vault. For more information, see Encrypting content with Ansible
Vault and the description of the command-line tool in ansible-vault.
sap_s4hana_deployment_sid: "AB1"
sap_s4hana_deployment_ascs_instance_nr: "21"
sap_s4hana_deployment_pas_instance_nr: "22"
sap_s4hana_deployment_db_host: "<yourhostname>"
# these two lines must be changed in sync with the sap_hana settings above:
sap_s4hana_deployment_db_sid: "ABD"
sap_s4hana_deployment_hana_instance_nr: "20"
sap_s4hana_deployment_db_schema_password: "XXpasswd"
sap_s4hana_deployment_db_schema_abap_password: "XXpasswd"
sap_s4hana_deployment_master_password: "XXpasswdM"
sap_s4hana_deployment_hana_systemdb_password: "xxPasswd"
sap_s4hana_deployment_hana_system_password: "xxSystemPsw"
sap_s4hana_deployment_parallel_jobs_nr: "30"
sap_s4hana_deployment_db_sidadm_password: "yourPasswd"
sap_s4hana_deployment_igs_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_igs_file_name: "igsexe_10-80003246.sar"
sap_s4hana_deployment_igs_helper_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_igs_helper_file_name: "igshelper_17-10010245.sar"
sap_s4hana_deployment_kernel_dependent_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_kernel_dependent_file_name: "SAPEXEDB_100-80004417.SAR"
sap_s4hana_deployment_kernel_independent_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_kernel_independent_file_name: "SAPEXE_100-80004418.SAR"
sap_s4hana_deployment_software_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_sapadm_password: "spAdmpass1"
sap_s4hana_deployment_sap_sidadm_password: "spAdmpass2"
3.5.5 Getting Community and System Roles from the Red Hat Ansible Galaxy
requirements.yml file
According to SAP Note 2772999, there are prerequisites for installing SAP, such as packages
and system settings. These prerequisites must be implemented before installing and running
SAP systems. These prerequisites are implemented as Ansible Roles and can be used to
configure all required changes on the Red Hat target server. For Red Hat Enterprise Linux
V8.1, the following three Red Hat Enterprise Linux System Roles for SAP prerequisites must
be applied:
sap-preconfigure
sap-netweaver-preconfigure
sap-hana-preconfigure
Additionally, the following community roles for SAP software deployment are required:
sap-hostagent
sap-s4hana-deployment
sap-hana-deployment
You install both package groups in one step. Therefore, your requirements.yml file combines both sets of roles, which are described in Automating your SAP HANA and S/4HANA by SAP deployments using Ansible - Part 2 and Automating your SAP HANA and S/4HANA by SAP deployments using Ansible - Part 3.
These roles are available in Red Hat Community repositories and in Ansible Galaxy. You can
choose which source is defined in your playbook. For our example, we add all required
Ansible Roles to the playbook requirements.yml file, as shown in Example 3-4 on page 35.
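Example 3-4 is not reproduced in this excerpt. Assembling the six roles that are listed above, and assuming that the system roles are fetched from their GitHub repositories while the community roles come from Ansible Galaxy, the file might look like this:

---
# Red Hat Enterprise Linux System Roles for SAP
- src: https://github.com/linux-system-roles/sap-preconfigure.git
- src: https://github.com/linux-system-roles/sap-netweaver-preconfigure.git
- src: https://github.com/linux-system-roles/sap-hana-preconfigure.git
# Community roles for SAP software deployment
- name: redhat_sap.sap_hostagent
- name: redhat_sap.sap_hana_deployment
- name: redhat_sap.sap_s4hana_deployment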
Before the ansible-galaxy command can be run to download files from GitHub, check that
the Git software is installed on your machine. If it is not installed, run the following command:
sudo yum install git
Now, install all the required roles in the directory roles by running the following Ansible Galaxy
command:
ansible-galaxy install -r requirements.yml -p roles
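The deployment playbook itself is not shown in this excerpt. A minimal sketch that chains the roles in the order described above against the sapservers inventory group could be:

---
- hosts: sapservers
  become: true
  roles:
    - sap-preconfigure
    - sap-netweaver-preconfigure
    - sap-hana-preconfigure
    - redhat_sap.sap_hostagent
    - redhat_sap.sap_hana_deployment
    - redhat_sap.sap_s4hana_deployment

Such a playbook would be started with ansible-playbook -i hosts <your-playbook>.yml.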
After the playbook finishes without errors, SAP hostagent, SAP HANA, and S/4HANA are
installed on your host.
3.6 Installing SAP software with Red Hat Ansible Tower
This section describes how to install SAP with Red Hat Ansible Tower.
The following guidelines are used for configuring Red Hat Ansible Tower:
- Define your inventory by adding groups and hosts to the configuration if needed.
- Create or choose the credential that Ansible Tower uses to connect to the target hosts and run Red Hat Ansible playbooks.
- Create or choose one project that will be used for your playbook to run the SAP software installation.
- Create a template that contains the playbook and installation parameters.
The community roles in the requirements.yml file are:
- name: redhat_sap.sap_hostagent
- name: redhat_sap.sap_hana_deployment
- name: redhat_sap.sap_s4hana_deployment
Note: The requirements.yml file is intentionally not copied to avoid accidentally overwriting customized roles by automatically downloading them again.
Using the Red Hat Ansible Tower web interface, create a new project, as shown in Figure 3-4. Select Projects in the left menu, and then click + at the upper right.
The current setup description does not use groups because in a standard SAP setup all
instances are installed on one host.
2. Add your host by clicking + in the Hosts tab, as shown in Figure 3-6.
With Red Hat Ansible Tower, you can define variables at different locations. This setup uses
extra variables that are defined in the template. Extra variables overwrite all values that are
defined for the same variable at other locations. For more information about this topic, see the
“Ansible Tower Variable Precedence Hierarchy (last listed wins)” table in Extra Variables.
For this reference system, all installation parameters are stored in template variables, as
described in 3.6.7, “Defining a job template” on page 41.
After saving the new inventory, proceed by configuring permissions for users and team
members. For more information about how to configure your inventory, see Inventories.
If you do not have an SSH key, you can use the ssh-keygen tool to generate it on the target
host and copy it to the Red Hat Ansible Tower credentials. Click Credentials in the left menu
to see the window that displays all the available credentials (Figure 3-7).
To set the SAP application installation host credentials, complete the following steps:
1. Enter a hostname, for example, yourhostname.
2. Enter a description, for example, “SAP S/4HANA reference system”.
3. Use Machine as the credential type.
4. Enter root as the username for installation.
5. Enter the password to be used for the SSH authentication.
6. Enter the SSH private key and, if used, the passphrase for your key.
These credentials are used to copy files to the target host and to run the playbook on it. For more information, see Credentials.
The job template variables for the SAP software installation are defined in Example 3-8.
sap_hostagent_hostname: <yourhostname>
sap_hana_hostname: <yourhostname>
sap_s4hana_hostname: <yourhostname>
sap_s4hana_deployment_product_id: "NW_ABAP_OneHost:S4HANA1909.CORE.HDB.ABAP"
sap_s4hana_deployment_sapcar_path: "/data/installer/SAPCAR"
sap_s4hana_deployment_sapcar_file_name: "SAPCAR.EXE"
sap_s4hana_deployment_sid: "AB1"
sap_s4hana_deployment_ascs_instance_nr: "21"
sap_s4hana_deployment_pas_instance_nr: "22"
sap_s4hana_deployment_db_host: "<yourhostname>"
# The following two lines must be changed in sync with two lines below:
# sap_hana_deployment_hana_sid = sap_s4hana_deployment_db_sid
# sap_hana_deployment_hana_instance_number = sap_s4hana_deployment_hana_instance_nr
sap_s4hana_deployment_db_sid: "ABD"
sap_s4hana_deployment_hana_instance_nr: "20"
sap_s4hana_deployment_db_schema_password: "XXpasswd"
sap_s4hana_deployment_db_schema_abap_password: "XXpasswd"
sap_s4hana_deployment_master_password: "XXpasswdM"
sap_s4hana_deployment_hana_systemdb_password: "xxPasswd"
sap_s4hana_deployment_hana_system_password: "xxSystemPsw"
sap_s4hana_deployment_parallel_jobs_nr: "30"
sap_s4hana_deployment_db_sidadm_password: "yourPasswd"
sap_s4hana_deployment_igs_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_igs_file_name: "igsexe_10-80003246.sar"
sap_s4hana_deployment_igs_helper_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_igs_helper_file_name: "igshelper_17-10010245.sar"
sap_s4hana_deployment_kernel_dependent_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_kernel_dependent_file_name: "SAPEXEDB_100-80004417.SAR"
sap_s4hana_deployment_kernel_independent_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_kernel_independent_file_name: "SAPEXE_100-80004418.SAR"
sap_s4hana_deployment_software_path: "/data/installer/S4HANA1909FPS01"
sap_s4hana_deployment_sapadm_password: "spAdmpass1"
sap_s4hana_deployment_sap_sidadm_password: "spAdmpass2"
Attention: When you check variable content, pay close attention to verifying and
modifying these settings:
Hostname: The <yourhostname> variable should match your target hostname.
File name and paths: Depending on the SAP installation software package, the
software version and local storage location on your target machine are likely
to change.
SAPCAR: Verify that a copy that is named SAPCAR.EXE of the SAPCAR file is stored in the
sapcar_path directory.
SAP SIDs and instance numbers: Match them to your needs.
Passwords: Change the example passwords.
For more information about variable precedence in Red Hat Ansible Tower (if you already
used them in Red Hat Ansible CLI), see Extra Variables.
5. After you finish configuring the job template, click Save. Click Launch to run the job, as
shown in Figure 3-9.
If the job run is successful, it is shown in green. The Completed Jobs view shows the list of all completed job templates. From this tab, you can also see the job status and detailed information. The Templates view provides the list of all defined job templates. If you click the rocket icon, you can restart the job run, as shown in Figure 3-10.
For more information about using Red Hat Ansible Tower to run playbooks, see Job Template.
3.7 Conclusion
Red Hat Ansible CLI and Red Hat Ansible Tower can speed up your productivity by helping
you manage complex processes with job automation and scheduling. Red Hat Ansible Tower
provides a dashboard to view every job run and status. You have a GUI and many features to
support the automation process. You can also customize Red Hat Ansible Tower for your
needs. If you are familiar with CLI tools, then Red Hat Ansible CLI is a perfect solution to use
in infrastructure workflows. You can easily integrate Red Hat Ansible Engine with other
building tools for continuous integration and deployment of systems.
During the build phase, three different images (Init, SAP AppServer, and SAP HANA) are
created, as shown in Figure 4-1.
Note: The build logical partition (LPAR) should be different from the cluster helper node. All
actions that are described in Figure 4-1 are performed on the build machine, not on the
helper node.
The <NWS4-SID> image is used for starting both the ASCS and the DI containers.
During the build phase of the images, the SAP HANA data and log directories must be copied to the replica file system on the NFS server. To make sure that every pod uses its own SAP HANA database content, an overlay file system is created during the deployment.
Two subtrees must be moved from the root / file system to the /data file system because they are heavily used during the image build process, which might lead to a 100% filled root / file system.
As the root user, move the /var/lib/containers subtree from the root / file system to the
/data file system by running the following commands:
$ mkdir -p /data/var/lib
$ mv /var/lib/containers /data/var/lib/containers
$ ln -s /data/var/lib/containers /var/lib/containers
As the root user, move /var/tmp subtree from the root / file system to the /data file system
by running the following commands:
$ mkdir -p /data/var/
$ mv /var/tmp /data/var/tmp
$ ln -s /data/var/tmp /var/tmp
5. Enter a meaningful name for your project and click Create, as shown in Figure 4-3.
Figure 4-3 Red Hat OpenShift Container Platform: Create Project window
4.4.4 Retrieving login tokens from the Red Hat OpenShift Console
To retrieve login tokens, complete the following steps:
1. Log in to your Red Hat OpenShift Console.
2. Click your username in the upper right.
3. Click Copy Login Command.
4. Log in again with your credentials.
5. Click Display Token. Figure 4-4 shows the token details.
Copy the oc login --token=… command. You can use this command to log in to the Red Hat
OpenShift cluster instead of providing a user and a password. Paste the full command and
run it on your system, as shown in Example 4-1.
You have access to the following projects and can switch between them with 'oc
project <projectname>':
jaeschke-soos
* jaeschke-th1-thd
jaeschke-thh-hdb
4.4.7 Enabling the default route to the internal Red Hat OpenShift registry
To push images to the internal Red Hat OpenShift registry, you must enable the default route
to the registry. For more information, see Enable the Image Registry default route with the
Custom Resource Definition.
For more information, see Containerization by IBM for SAP S/4HANA with Red Hat
OpenShift.
For more information about how you can create a deployment configuration file
<deployment-config-file> that suits your SAP system setup, see Containerization by IBM for
SAP S/4HANA with Red Hat OpenShift.
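The generated file is presumably applied with the standard oc client; the exact command is not shown in this excerpt, so treat this as an assumption. Its output looks like the lines that follow:

$ oc create -f <deployment-config-file>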
service/soos-th1-np created
deployment.apps/soos-th1 created
For more information about how to verify whether the SAP system was correctly started, see
Containerization by IBM for SAP S/4HANA with Red Hat OpenShift.
Before using the image locally, you must create an overlay file system by running the
following command on your build machine:
$ tools/containerize -o
The <overlay-uuid> is the unique ID that is obtained during the creation of the replica
file system.
In addition, the <HDB-SID>-HDB directory is created in the working directory to hold the
soos-env file, which is needed during the start of the container.
You are now logged in to your container. To check for messages, view the contents of the
/var/log/messages file.
To check the status of the SAP HANA database, run the HDB info command as the
<hdb-sid>adm user.
The container name is returned by the container-local script. You can also view it by
displaying the running containers by running the following command:
$ podman ps --filter 'ancestor=localhost/soos-<nws4-sid>:latest' --format
'{{.Names}}'
You are now logged in to your container. To check for messages, view the /var/log/messages file.
To check whether your ASCS instance is running, switch to the <nws4-sid>adm user and call
sapcontrol, as shown in Example 4-2.
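The call presumably looks like the following, where <ascs-instno> is your ASCS instance number:

$ sapcontrol -nr <ascs-instno> -function GetProcessList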
24.09.2020 09:16:18
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
msg_server, MessageServer, GREEN, Running, 2020 09 24 09:15:43, 0:00:35, 610
enq_server, Enqueue Server 2, GREEN, Running, 2020 09 24 09:15:43, 0:00:35, 611
4.9.1 Introduction
The first time that you deploy the images, they are pulled from the Red Hat OpenShift cluster
registry to one of your worker nodes. You can check the progress of the deployment by
running the oc describe command.
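For example (the pod name is an assumption that is based on the soos-<nws4-sid> naming that is used earlier):

$ oc get pods
$ oc describe pod <soos-pod-name>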
Important: Do not change the names of the working directories in your deployment
configuration file.
App containers
The App containers (ASCS, DI, and SAP HANA containers) are started in parallel when the
running of the Init container finishes.
ASCS container
During the startup of the ASCS container, first the ASCS instance exe directory /usr/sap/<NWS4-SID>/ASCS<ASCS-InstNo>/exe is created. Then, the SAP service is created. Finally, the ASCS instance starts.
For more information about how to operate the containers, see Chapter 6, “Operating the
containers” on page 67.
Two subtrees must be moved from the root / file system to the /data file system because they are used heavily during the image build process, which might fill the root / file system to 100%.
To move the /var/lib/containers subtree from the root / file system to the /data file
system, run the following commands as the root user:
$ mkdir -p /data/var/lib
$ mv /var/lib/containers /data/var/lib/containers
$ ln -s /data/var/lib/containers /var/lib/containers
To move the /var/tmp subtree from the root / file system to the /data file system, run the
following commands as the root user:
$ mkdir -p /data/var/
$ mv /var/tmp /data/var/tmp
$ ln -s /data/var/tmp /var/tmp
In your cloned GitHub repository, there is a directory that is named ansible that has the
following structure:
|__ansible
|__roles
|__tasks
|__vars
|__ocp-deployment.yml
The directory that is called roles contains reusable Ansible playbooks that are included in the ocp-deployment.yml playbook to deploy SAP HANA and SAP S/4HANA. Each role includes a set of related tasks to organize them more efficiently. There are roles for checking general and OpenShift prerequisites; copying SAP HANA to the NFS server; building images, pushing images, and creating an SAP HANA overlay share; and starting the deployment.
The tasks directory has files that are reused more than once in playbooks. There are tasks for installing the prerequisites for Red Hat Enterprise Linux 8.x, logging in to the OpenShift cluster as a user or as an admin, and installing the GNU GCC compiler, GNU Make utilities, and other packages that are needed for the Paramiko SSH client. The defined roles include these task files within their playbooks. You can extend the tasks by defining your own to customize your system requirements.
The vars directory is for extra variables and contains a file with default variables, which are used in all playbooks. You can name it <your-extra-vars>.yml and specify your variables as key-value pairs. The variables are included in roles and used multiple times. They are referenced by using the Jinja2 double curly braces syntax.
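As an illustration only (these keys are hypothetical and not taken from the repository), such a file and a reference to it could look like:

# vars/<your-extra-vars>.yml -- hypothetical keys for illustration
ocp_project_name: "soos-ab1"
nfs_server_host: "<nfs_host>"

A role task would then reference a value as "{{ ocp_project_name }}".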
The ocp-deployment.yml file is the main playbook, which contains one play with the included roles.
Roles have the following directory structure:
|__roles
|__os-prerequisites
|__ocp-prerequisites
|__copy-hdb-nfs
|__build-images
|__push-images
|__create-overlay-share
|__deploy-images
Each role contains a tasks/main.yml file, where the list of tasks that the role runs is defined. The roles have the following functions:
The os-prerequisites role installs packages such as the Pod Manager tool (podman), git, python3, python3-devel, and Paramiko, and includes tasks for Red Hat Enterprise Linux 8.x to install more requirements. The role checks the connection to the Red Hat OpenShift cluster and to the default route to the image registry, and it verifies whether the local OpenShift client tool exists. The role also verifies the NFS server connections, generates a config.yaml file from a file template, and then verifies that all input variables in the config.yaml file are valid.
The ocp-prerequisites role ensures that the prerequisites are met for image pushing and
deployment on the Red Hat OpenShift Container Platform. The role verifies and then sets
up a new project, and then checks whether the default route to the internal registry of the
Red Hat OpenShift cluster is enabled. It also sets up permissions to run containers in the
defined project and generates a file for a service account.
The copy-hdb-nfs role creates a snapshot copy of your SAP HANA data and log directories on the NFS server. Ensure that your SAP HANA instance is stopped before you run this role. Also, before running this role, you might need to copy the SSH key of the NFS server to your build host by running the following command:
ssh-copy-id -i ~/.ssh/<nfs_rsa_key>.pub <user_name>@<build_host_name>
The build-images role runs the image build process for your SAP HANA and SAP S/4HANA instances. The three images are built and stored in the local podman image storage on the build machine.
The vars directory has a file with variables that can contain sensitive content, such as IP addresses, passwords, and usernames. Therefore, use the Ansible Vault utility to protect your content by encrypting it. To keep sensitive information hidden when a playbook runs with verbose output, add the no_log attribute at the beginning of the playbook. We do not show how to use Ansible Vault because of its complexity. For more information about Ansible Vault, see Encrypting content with Ansible Vault.
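For example, assuming that the extra variables file is named as in the examples later in this chapter, you can encrypt the file and suppress logging of a sensitive task as follows (the variable names in the task are hypothetical examples):
$ ansible-vault encrypt vars/ocp-extra-vars.yml

# In a task, no_log: true hides the sensitive values from verbose output:
- name: Log in to the Red Hat OpenShift cluster
  ansible.builtin.command: oc login -u {{ ocp_user }} -p {{ ocp_password }}
  no_log: true
When the file is encrypted, pass --ask-vault-pass (or --vault-password-file) to the ansible-playbook command so that the variables can be decrypted at run time.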
Roles and tasks make playbooks reusable and avoid duplication of source code. The main playbook ocp-deployment.yml includes all roles for building images, and it has the following structure:
---
- hosts:
roles:
- os-prerequisites
- ocp-prerequisites
- copy-hdb-nfs
- build-images
In the <your_build_server>.yml file, you can define other configuration parameters that are
needed to connect to your remote host. After this task is done, the ansible directory is
organized as follows:
|__ansible
|__hosts
|__host_vars/<your_build_server>.yml
|__roles
|__tasks
|__vars
|__ocp-deployment.yml
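A minimal sketch of what host_vars/<your_build_server>.yml might contain, assuming SSH access to the build server as root (all values are placeholders):
ansible_host: <build-server-hostname-or-ip>
ansible_user: root
ansible_python_interpreter: /usr/bin/python3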
Run the ocp-deployment.yml playbook and pass your extra variables file at the CLI by using the -e option:
ansible-playbook -i hosts -e @vars/ocp-extra-vars.yml ocp-deployment.yml
After running the ocp-deployment.yml playbook, the prerequisites are installed and three
images are created: Init, SAP AppServer SID, and SAP HANA SID.
Comment out the roles that already ran. Then, run the playbook with the remaining roles (push-images, create-overlay-share, and deploy-images), again adding the -e option for the extra variables:
ansible-playbook -i hosts -e @vars/ocp-extra-vars.yml ocp-deployment.yml
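Assuming the playbook structure that is shown earlier in this chapter, the role list in ocp-deployment.yml then might look like the following sketch:
---
- hosts: <your_build_server>
  roles:
    # - os-prerequisites   # already ran during the build phase
    # - ocp-prerequisites
    # - copy-hdb-nfs
    # - build-images
    - push-images
    - create-overlay-share
    - deploy-images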
To build and deploy with Red Hat Ansible Tower, complete the following steps:
1. You need a project that will be used in a job template for building and deploying images, so either define a new one or choose an existing project directly in the job template.
To set up a new project, log in to the Red Hat Ansible Tower web GUI with Administrator user authority and click Projects in the left menu. You see a list of available projects. To create a new project, click the + at the upper right and complete the required fields:
a. Define a project name.
b. Add a description.
c. Select an organization. For this example, you can use Default.
d. For the SCM TYPE, select Git, and copy the URL of the GitHub repository where the Ansible playbooks are stored into the SCM URL field.
e. Enter the SCM BRANCH to check out the source code. For this example, you can use master.
f. Select the SCM UPDATE OPTIONS check boxes, such as CLEAN, DELETE ON UPDATE, and UPDATE REVISION ON LAUNCH.
You do not need credentials for an open-source GitHub repository because the provided URL where all scripts are stored is public; you can copy the URL into the SCM URL field of the Projects template, as shown in Figure 5-1.
2. Create a job template. In the left menu, click Templates, then click the + at the upper right and select Job Template.
3. A new job template opens where you can complete required and optional fields, as
described in 3.6.7, “Defining a job template” on page 41. Before completing the job
template, check whether you have a defined inventory, as described in 3.6.5, “Setting up
inventory” on page 39 and 3.6.6, “Setting up target host credentials” on page 40. In the
Extra Variables field, add the specified variables from the file in the vars directory. In the
Playbook field, select the playbook that is defined for Red Hat Ansible Tower deployment.
It is also inside the GitHub ansible/ directory.
4. Save the job template for building and deployment, and then start the job. If the job run is successful, its status is green, which means that the building and deployment of the SAP HANA and SAP S/4HANA images completed successfully.
Chapter 6. Operating the containers
You can check the status of your pod by running the following command:
tools/ocp-pod-status
If the status of the pod is Running, the pod is up and running. In all other cases, the containers might still be in the startup phase, or an error occurred.
For more information about how to check the status of your SAP system in your Red Hat
OpenShift cluster, see Containerization by IBM for SAP S/4HANA with Red Hat OpenShift.
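You can also inspect the pod directly with the OpenShift client; for example (the pod, container, and project names are placeholders):
$ oc get pods -n <project>
$ oc logs <pod-name> -c <container-name> -n <project>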
If you want to log in to your SAP HANA container, run the following command:
tools/ocp-container-login -f hdb
Use this command from the machine on which your SAP GUI runs to establish port
forwarding, as shown in Example 6-1.
Last failed login: Fri Sep 25 07:12:23 UTC 2020 from 56.76.112.114 on ssh:notty
There were 2 failed login attempts since the last successful login.
Last login: Thu Sep 24 10:32:41 2020 from 56.76.112.114
Note: If your SAP GUI is running on Windows, do not use PowerShell to establish the SSH port forwarding tunnel; instead, use tools like MobaXterm or Cygwin.
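The exact invocation depends on your environment; a generic SSH port-forwarding command has the following form (all values are placeholders, and SAP GUI typically connects to the dispatcher port 32<instance-number>):
$ ssh -L <local-port>:<target-host>:<target-port> <user>@<remote-host>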
Taking a closer look at the pod list, you see that a new pod is automatically started at the same time that the old one is terminating, as shown in Figure 6-1.
When you stop the pod, changes to the SAP DIs, for example, changes in the profiles, do not persist. By contrast, all changes that you made to the SAP HANA database content are stored in the overlay file system, so they persist as long as you do not tear down the overlay file system.
3. To scale down the number of running pods to zero, click the down arrow near the number
of pods. The pod stops, and no restart is initiated.
To restart this pod, scale the number of pods to 1. The pod automatically starts again.
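If you prefer the CLI over the web console, the same scaling can likely be done with the OpenShift client (the resource type and name are assumptions; verify them in your cluster first):
$ oc scale deployment <deployment-name> --replicas=0 -n <project>
$ oc scale deployment <deployment-name> --replicas=1 -n <project>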
Related publications
The publications that are listed in this section are considered suitable for a more detailed description of the topics that are covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some publications that are referenced in this list might be available in softcopy
only.
Red Hat OpenShift V4.3 on IBM Power Systems Reference Guide, REDP-5599
Red Hat OpenShift V4.X and IBM Cloud Pak on IBM Power Systems Volume 2,
SG24-8486
Software Defined Data Center with Red Hat Cloud and Open Source IT Operations
Management, SG24-8473
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, web docs, drafts, and additional materials, at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
Ansible Galaxy Repository
https://galaxy.ansible.com/redhat_sap
Automating the Installation of SAP S/4HANA and SAP HANA on IBM Power Systems
using Red Hat Ansible
https://blogs.sap.com/2020/11/03/automating-the-installation-of-sap-s-4hana-and-sap-hana-on-ibm-power-systems-using-red-hat-ansible/
Building and deploying with Red Hat Ansible
https://github.ibm.com/SAP-OpenShift/containerization-for-sap-s4hana/tree/master/ansible
Community Roles
https://github.com/redhat-sap
Containerization by IBM for SAP S/4HANA with Red Hat OpenShift
https://github.com/ibm/containerization-for-sap-s4hana
Installing Red Hat Ansible
https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html
Red Hat Ansible Tower docs
https://docs.ansible.com/ansible-tower/latest/html/quickstart/create_job.html