SAP SCM WP
Colgate
Customer Test-Box Whitepaper
IBM SAP
Version: 1.5
July 2004
Introduction 3
About the Team and the Authors 3
Executive Summary 5
Document Objectives 5
Purpose of the Hot-Standby in SCM 5
Summary of Results 6
Overview of Application Level Tests 6
Application Failover Test Results 7
Application Team Conclusions 7
Implementation into SAPGUI 7
Concept of liveCache Hot-Standby 8
Proof of Concept Environment 10
Scope of the Proof of Concept 10
Migration to Hot Standby Configuration 11
Preparation 12
Preparing the SVC 12
Building the hot environment 12
SAN Infrastructure 12
Comments 13
Heterogeneous Storage Layout under SVC 13
Preparing the Primary liveCache Instance 15
LiveCache Migration 18
Attaching the APO System 20
Installation of the Standby 21
Integrating SCM 4.1 with Hot-Standby liveCache 21
Issues and Recovery 25
Failover Tests 27
Application Level Hot Standby Verification Tests In Detail 27
Technical View of the Failover Test in Detail 30
Summary of the PoC 39
Failover Solution Comparison 39
Traditional HA Cluster 39
Shadow Database 40
Hot Standby Database 40
Highly Available SCM Landscape Designs 41
Using a “Traditional” Failover Cluster 41
Using SAP Multiple Components in One Database (MCOD) 42
Overview of the Consolidated Colgate Asia Test-Box Project 47
The Functional Migration 47
Introducing Storage Virtualization 48
Overview of Hardware Infrastructure 50
Software Versions 51
SAN Environment 51
Hardware Environment 51
APPENDIX 53
RTE_config File 53
LiveCache Parameters 54
Introduction
This document is one of a series of documents produced in a joint IBM SAP customer test-
box in Walldorf, Germany. This test case project was carried out with the support of Colgate
USA. Colgate provided the system clones which formed the landscape for this test series, and
helped to implement the job-load scenarios which allowed a life-like customer system to be
emulated as verification of the successful proof of concept stages. The project infrastructure
was based on pSeries p690 LPARs and IBM TotalStorage SAN components, including ESS
and FastT storage systems. This infrastructure allowed the initial clone, built from Colgate
tapes, to be further “multi-cloned“ and for several projects to run simultaneously sharing the
environment. One of these projects dealt with the functional upgrade of the Colgate SAP
landscape, including the first SCM 4.1 implementation and the first mySAP ECC 5.0 upgrade
from R/3. Another project covers the migration of the Colgate system to an adaptive
computing basis and explores the benefits of the new SAN functionality offered with the SAN
Volume Controller and the SAN File System.
This project focuses on the new hot-standby solution for SCM 4.1 liveCache and its integration
into the IBM TotalStorage products.
Related Documents
IBM/SAP Test-Box Whitepaper: “Moving Toward Adaptive Computing SAP on pSeries and
IBM TotalStorage” Walldorf June 2004
IBM/SAP Test-Box Whitepaper: “Moving forward: early upgrade insights of SAP ERP and
SAP SCM” Walldorf June 2004
IBM/SAP “MaxDB 7.5 on IBM TotalStorage® Enterprise Storage Server Integration of
the Hot-Standby System Solution” Mainz 2004
About the Team and the Authors
The document was supported by the Colgate Test-Box team and other technical colleagues
who all contributed to the test scenarios, the evaluation, and the success of the PoC.
Rajeev Kumar Das, SAP Bangalore, India (International Consulting Group): SCM Consultant, SNP and jobflow
Frank Eurich, SAP Walldorf: Quality Engineer, Demand Planning
Wolfgang Wolesak, SAP Walldorf: Quality Engineer, LiveCache
Vilas Patil, SAP Bangalore, India (International Consulting Group): Technology Consultant, SAP/Oracle/liveCache Basis
Peter Jäger, SAP Walldorf (SCM CoE): SAP Technical Consultant, SAP Basis
Patty Vollmar, Colgate USA: Team Leader, Global Supply Chain Development; Colgate Project Lead, VMI
Ambrish Mathur, Colgate USA: Application Team Leader, Global Supply Chain Development; Colgate Co-Lead, SNP GSN
Dave Agey, Colgate USA: Assoc Director, Global Supply Chain Development; APO Project Manager, DP
Mike Crowe, Colgate USA: Director, Global Supply Chain Development; APO Project Sponsor
Siew Lan Chai, Colgate Asia Pacific Shared Service Organization: APO Team Leader Asia; All APO functional areas
Geoff Graham, Colgate USA: Global Supply Chain Development; PP/DS
Executive Summary
Document Objectives
The purpose of this document is to show how a customer might implement the new hot-
standby functionality into an existing SCM system. It demonstrates the successful integration
of this solution with the new mySAP SCM, and highlights the additional functionality,
flexibility, and high availability features offered by pSeries and TotalStorage in the IBM hot-
standby integration. This document covers migration of an existing system to a hot-standby
environment, tests the hot standby solution in a real-life scenario, and proposes several HA
SCM landscapes based on hot-standby.
Purpose of the Hot-Standby in SCM
(Figure: a traditional failover cluster for liveCache; after the failover, the liveCache memory must be rebuilt and loaded.)
In a normal failover situation in which a database is taken over, the activities depicted above take place when the original running database instance is lost:
- the server cluster software is triggered by the failure,
- the disks are moved to the standby server,
- the service address for the application is moved to the standby server,
- the application on the new server is started.
In the picture above, the application being failed over is liveCache, and the user waiting for
the liveCache is the SCM system. This solution has been available for pSeries with HACMP,
and endorsed by SAP, since liveCache version 7.2. It provides automatic failover in case of a
liveCache server or liveCache database crash.
The real issue being addressed by the hot-standby solution is the speed of recovery. As SCM
systems continue to grow and take a more and more important position in the supply chain
management landscape, this recovery time is becoming an increasingly sensitive issue. The
size of the memory structure being used to support the planning algorithms is many gigabytes
in size, with a tendency to grow exponentially (30-80 GB currently, with planned systems in the three-figure range). A traditional failover scenario must first take over and activate the resources (disk and network), restart the application and rebuild this large memory structure, reload the memory structure with the most critical data (continuing to load the rest in the background),
perform any rollback requirements and then any redo actions. All this must be done before
liveCache is ready to resume production. This effort can require a time-span from several
minutes to several hours depending on the state of the application at crash time. An
application level data resynchronization with the SCM system may also be necessary to
achieve data consistency between the SCM database and the liveCache after a liveCache
recovery.
The hot-standby solution provides for a duplicate cache filled with the most relevant data,
removes the need to move physical resources other than the IP service address, and requires
no roll-back and only minimal roll-forward activity. It can be online in seconds.
Summary of Results
Details of the implementation, migration, and the workings of the hot-standby solution are
documented in the subsequent chapters. The real proof of concept for a hot-standby solution is
recovery at the application level. The IBM basis team therefore requested the SAP application
team to define a scenario which would prove data consistency following a failover. The
definition of this scenario and the results are presented here in summary and in detail later in
the document.
Test Criteria: Hot standby technology for liveCache is supposed to enable fail-over of
liveCache data from one liveCache server to another while the SCM system is running. No
loss of persistent data is expected; however, the session link to liveCache will be terminated
and any open OMS versions will be lost (transactional simulations).
Test Scenario: a smooth fail-over from one liveCache instance to another as part of the hot-standby functionality, performed while the SCM system is working with the liveCache (the detailed test execution is described later in this document).
(Figure: the hot-standby cluster: the application connects to liveCache via an ip-alias; HACMP controls the primary and standby instances; after a failover the ip-alias moves to the standby, which continues with its continuously maintained data image and is restarted as the new master.)
Concept of liveCache Hot-Standby
IBM TotalStorage is the first storage infrastructure to support the special SAP design for a hot-standby solution, announced for liveCache 7.5 with mySAP Supply Chain Management (SCM) version 4.1.
This integration of the SAP Hot Standby API, done for the TotalStorage products ESS and SVC, uses flash-copy functionality to fulfil the requirement for a “split mirror”, used by the primary to initiate a secondary hot-standby server. IBM’s SVC implementation additionally allows highly available configurations in which the data copies are placed on separate, even heterogeneous, storage servers (for example one copy on the ESS and one on the FAStT).
This project was done using a standalone liveCache, as the first application to support it, SCM 4.1, was not yet available. During these tests, an automated failover-recovery-reconnect mechanism was implemented, and liveCache was failed (by simulation) and recovered some 5000 times over the year-end holiday using a 10 GB liveCache. These tests verified the robustness of the technology and the hot-standby performance, and were done for both SVC and ESS native configurations.
(Figure: the application connected via IP-alias to liveCache1 (primary) and liveCache2 (secondary); failover from LC1 to LC2 and back from LC2 to LC1, 5000 times, under simulated application load.)
The next step for verification of this solution is the successful use of this technology with the application to verify integrity at database level. The SAP/IBM test-box project being carried out in the spring of 2004 on behalf of Colgate targeted an upgrade test to SCM version 4.1. This test environment provided the opportunity for the application-level test of the new hot-standby technology, and this environment provides the basis for the PoC described in the following chapters of this document.
As this project was embedded in a customer test-box, it was able to take advantage of the test
scenarios built for the main upgrade project to simulate a production system and verify the
integrity of this solution at application level. A load scenario which concentrates on moving
data from the SCM business warehouse info-cubes into the liveCache provides the storage
stress test as this involves massive write activity in liveCache. A further functional test which
uses the data in the liveCache for supply and network planning heuristics provides the
verification of the data consistency at application level. An additional built-in tool
(transaction /sapapo/om17) allows us to verify the data consistency between liveCache and
the SCM database directly, providing an additional data verification.
Migration to Hot Standby Configuration
(Figure: migration overview: the consolidated APO 4.1 / liveCache 7.5 installation on JFS2 file systems is exported (liveCache data export); a new liveCache on raw devices is loaded from the export; APO is connected to the new liveCache via the service address (IP alias); finally a new hot-standby secondary is initiated over the SAN infrastructure by copying the data.)
Migration Steps
Preparation:
Documented in detail in “SAP liveCache 7.5 and MaxDB 7.5 on IBM TotalStorage”
1. install and configure the hot-standby servers
2. prepare the raw-devices for hot-standby
3. install the liveCache executables
4. install HACMP for ip-alias takeover
5. test hot-standby and HACMP failover
Migration:
Documented in this POC
6. backup the content of the file system based liveCache
7. import liveCache data into hot-standby liveCache on raw devices
8. connect APO to new liveCache
APO should not be allowed to continue any activity to the source liveCache during the
migration as this may cause a loss in data synchronization. Ideally APO is offline until the
liveCache is switched to and activated on the hot-standby server. The steps which define the
downtime are the migration steps only.
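For example, a minimal sketch of taking SCM offline around the migration, assuming the standard startsap/stopsap scripts and execution as the <sid>adm user on the SCM central instance host (not the exact procedure used in this PoC):
stopsap r3     # stop the SAP SCM instance(s); the SCM database can remain up
# ... perform migration steps 6 to 8 ...
startsap r3    # restart SCM after the liveCache has been switched to the hot-standby environment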
Preparation
(Figure: hot-standby spanning storage server boundaries under the SAN Volume Controller: one data copy on the ESS and one on the FAStT600; a PPRC copy of the log allows emergency recovery if the ESS goes offline.)
As the hot-standby architecture provides a complete backup copy of the liveCache data, the
SVC implementation would optionally allow each copy to be placed on separate storage
servers to increase high availability. In this case the liveCache log would also need to be
duplicated for disaster recovery. There are various options for doing this; they will be explored and tested soon, albeit not within the scope of this PoC. The method used, however,
must ensure that both primary and secondary see the same view of the log. A solution such as
PPRC is recommended.
Until now, APO (the Advanced Planner and Optimizer of SCM) and the liveCache were running on a single host. This is a recommended configuration on AIX for resource sharing. In order to move from this configuration to a hot-standby environment, liveCache was moved to a separate server, and the liveCache data files (devspaces) were changed from file-system based to raw devices. It is possible for one instance of a hot-standby liveCache to inhabit the same server as the APO system, but this would complicate the HACMP configuration, which was not the focus of this PoC.
SAN Infrastructure
We started with a fresh copy of an APO release 4.1 (a flash-copy clone). The APO and liveCache file systems were mounted on the designated APO host (is02d4). For the liveCache cluster, three SAN disks were defined, all under the SAN Volume Controller.
The rotating resource used for archive logs was not taken under SVC control for the PoC, as this functionality is driven by HACMP in either case, whether the disks are under the SVC or not.
(Figure: disk layout of the hot-standby cluster, nodes is02d8/enhot1 and is02d6/enhot2. The HACMP rotating resource for the archive logs (sapbackupvg) is located on the ESS and is not controlled by the SVC. Under the SVC, each node has its own data volume group (sapdatavg, data disk copy 1 and copy 2, located on the ESS), and both nodes share the liveCache log volume group (saplogvg, concurrent access, located on the ESS).)
The only difficulty encountered in building the above configuration for hot-standby under the
SVC was when it came to making the shared log available to both liveCache hosts. The SVC
GUI did not support the ability to export a volume to multiple hosts. It was necessary to do
this via the command line interface as shown below.
svc> svctask mkvdiskhostmap -host is02d6 vd_LCcp4_703
svc> svctask mkvdiskhostmap -host is02d8 -force vd_LCcp4_703
Since the volume was already mapped to the first hot standby host, the GUI didn’t allow us to
define a further mapping (see message response below). Be aware that when you create disks
with concurrent access, the concurrency synchronization and all access issues are left to the
application. In this case liveCache handles the concurrent access to the online log.
IBM_2145:admin>svctask mkvdiskhostmap -host is02d8 vd_LCcp4_703 -force
CMMVC5701E No object ID was specified.
IBM_2145:admin>svctask mkvdiskhostmap -host is02d8 vd_LCcp4_703
CMMVC6071E This action will result in the creation of multiple mappings.
Use the -force flag if you are sure that this is what you wish to do.
IBM_2145:admin>svctask mkvdiskhostmap -host is02d8 -force vd_LCcp4_703
Virtual Disk to Host map, id [0], successfully created
Comments
Preparing the SAN environment for the hot standby test was very easy. The only thing that
was not possible with the SAN volume controller GUI was to map a vdisk to a second host.
This was accomplished via the command line.
Heterogeneous Storage Layout under SVC
The managed disks (mdisks) used for the hot-standby configuration come from two separate storage servers. Details are shown below in the edited output of „svcinfo lsmdisk“ on the SAN volume controller.
IBM_2145:admin>svcinfo lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
2 mdisk_210543508 online managed
1 LC_Cp4_hot_ess 32.6GB 0000000005508
ESS 49424d2020202020323130352020202020202020202020203530383231303534
...
5 mdisk_210543703 online managed
1 LC_Cp4_hot_ess 32.6GB 0000000005703
ESS 49424d2020202020323130352020202020202020202020203730333231303534
...
19 mdisk_LCcp4 online managed
8 LC_Cp4_hot_fstt 70.0GB 00000000000006
fastt600 600a0b80001253b2000001ef40984c2100000000000000000000000000000000
The mdisks are part of two mdisk groups (LC_Cp4_hot_ess and LC_Cp4_hot_fstt), as you
see in the output above. The output is unfortunately wrapped around, but with effort it is
possible to see that two have the controller_name of ESS and one is fastT. In the mdisk
groups we defined three vdisks, see the output of „svcinfo lsmdiskgrp“ below.
IBM_2145:admin>svcinfo lsmdiskgrp LC_Cp4_hot_ess
id 1
name LC_Cp4_hot_ess
status online
mdisk_count 2
vdisk_count 2
capacity 65.2GB
extent_size 16
free_capacity 0
IBM_2145:admin>svcinfo lsmdiskgrp LC_Cp4_hot_fstt
id 8
name LC_Cp4_hot_fstt
status online
mdisk_count 1
vdisk_count 1
capacity 70.0GB
extent_size 16
free_capacity 37.4GB
We’ll find the vdisks in the output of „svcinfo lsvdisk“ (again edited):
IBM_2145:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name
...
27 vd_LCcp4_508_1 0 io_grp0 online
8 LC_Cp4_hot_fstt 32.6GB striped 0
...
67 vd_LCcp4_508 0 io_grp0 online
1 LC_Cp4_hot_ess 32.6GB striped 0
...
68 vd_LCcp4_703 0 io_grp0 online
1 LC_Cp4_hot_ess 32.6GB striped
The vdisk vd_LCcp4_508_1 (id 27, using the mdisk defined on the FAStT) is exported to the
host is02d8:
The vdisk vd_LCcp4_508 (id 67, defined on the ESS) is exported to the host is02d6:
IBM_2145:admin>svcinfo lsvdiskhostmap 67
id name SCSI_id host_id host_name
wwpn vdisk_UID
67 vd_LCcp4_508 0 1 is02d6
10000000C92BDE98 600507680180801CA0000000000000C2
The vdisk vd_LCcp4_703 (id 68, defined in the ESS) is exported to both hosts:
IBM_2145:admin>svcinfo lsvdiskhostmap 68
id name SCSI_id host_id host_name
wwpn vdisk_UID
68 vd_LCcp4_703 0 0 is02d8
10000000C92D2A5B 600507680180801CA0000000000000C3
68 vd_LCcp4_703 1 1 is02d6
10000000C92BDE98 600507680180801CA0000000000000C3
The following two disks are the ESS volumes which will be used for is02d6 data and the
shared log. The log volume is seen by both hot-standby liveCache servers (is02d6,is02d8).
Through the SVC, there are 8 paths to each of the actual disks as they appear to the server. Each of these
disks will be placed in a separate volume group as there must be a local volume group for the
data, and a shared volume group for the log.
Data:
vpath2 (Avail ) 600507680180801CA0000000000000C2 = hdisk2 (Avail pv ) hdisk4 (Avail
pv ) hdisk6 (Avail pv ) hdisk8 (Avail pv ) hdisk18 (Avail pv ) hdisk20 (Avail pv ) hdisk22
(Avail pv ) hdisk24 (Avail pv )
Log:
vpath3 (Avail ) 600507680180801CA0000000000000C3 = hdisk3 (Avail ) hdisk5 (Avail )
hdisk7 (Avail ) hdisk9 (Avail ) hdisk19 (Avail ) hdisk21 (Avail ) hdisk23 (Avail ) hdisk25
(Avail )
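The vpath-to-hdisk relationships shown above can be listed with the SDD commands; a hedged example (the output format depends on the SDD level installed):
lsvpcfg                  # vpath to hdisk mapping, as listed above
datapath query device    # path state and number of paths per vpath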
Neither of the volume groups should be automatically activated. They must be varied on using the –u option to avoid the disks being given reservation locks.
The source database is over 32 GB in physical size, but contains only 715 MB of actual data; the liveCache is currently 98% empty. The objective of this test is application-level consistency; large liveCache tests were done in the previous functional stress tests, so the size here is immaterial.
(NOTE: enhot2 is an alias name for is02d6. The reason for this alias is described later in the
HACMP setup)
A single raw logical volume of 8 GB is used as the basis for the import, together with a single 2 GB log volume. The original calculation of the number of 64 MB partitions did not agree with what liveCache later expected; the volumes were apparently a little too small. Unfortunately, the error message which accompanied this problem was not very definitive. The problem could be seen in the knldiag:
2004-05-11 20:08:32 59 11000 vdevsize '
/dev/rLCDATA1' , 1048576 failed
2004-05-11 20:08:32 59 ERR 16 IOMan Unknown Data volume 1: Could not read from volume
2004-05-11 20:08:32 59 ERR 8 Admin ERROR ' disk_not_accessibl'CAUSED EMERGENCY SHUTDOWN
The devices were slightly enlarged and the restore retried successfully.
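For reference, a hedged sketch of how the raw devices could be created on AIX with SDD (mkvg4vp is the SDD variant of mkvg for vpath devices; the names match this PoC, and the partition counts are illustrative, chosen, per the experience above, slightly larger than the size liveCache expects):
mkvg4vp -y sapdatavg -n -s 64 vpath2    # local VG for the data, not activated automatically
mkvg4vp -y saplogvg  -n -s 64 vpath3    # shared VG for the log, not activated automatically
mklv -y LCDATA1 -t raw sapdatavg 129    # 129 x 64 MB partitions, slightly more than 8 GB
mklv -y LCLOG1  -t raw saplogvg 33      # 33 x 64 MB partitions, slightly more than 2 GB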
LiveCache Migration
1) Shut down SCM and, from the original liveCache, create a full database backup to the NFS-exported file system. This full database backup of the liveCache on the file system, made using the DBMGUI tool, is used for the restore into the new instance.
Label: DAT_00002
Date: 11.05.04 17:03:56
Medium: full
Volumes: 1
Size: 715328 KB | 89416 Pages
Log Page: 1140998
Last Savepoint: 11.05.04 17:03:55
Is consistent: Yes
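A hedged dbmcli equivalent of this backup (the PoC used the DBMGUI; the medium name and NFS path below are illustrative):
dbmcli -d <LCNAME> -u control,control <<EOF
util_connect control,control
medium_put FullNFS /nfs/lcexport/DAT_full FILE DATA
backup_start FullNFS DATA
EOF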
The following client software was installed with profile 0, runtime for SAP AS.
Installation of MaxDB Software
*******************************
starting installation Th, May 13, 2004 at 20:36:39
operating system: AIX PowerPC 5.2.0.0
callers working directory: /hotsapdb/maxdb-server-aix5-64bit-ppc-7_5_0_10
installer directory: /hotsapdb/maxdb-server-aix5-64bit-ppc-7_5_0_10
archive directory: /hotsapdb/maxdb-server-aix5-64bit-ppc-7_5_0_10
existing profiles:
0: Runtime For SAP AS
1: DB Analyzer
2: JDBC
3: Server
4: Loader
5: ODBC
6: all
7: none
please enter profile id:
When liveCache application monitoring is active, and this is an optional but recommended
feature, it will react to the termination of the liveCache database and trigger a failover. This
reaction will also be triggered if the database is shutdown via LC10 (lcinit) unless
precautions are taken. In the case of a clean shutdown of the liveCache from APO, the
application monitoring under HACMP is turned off. It is restarted when the liveCache is
restarted from APO. APO starts and stops liveCache by means of the lcinit script so this
action will take place when lcinit is invoked manually as well. In order to activate this behavior, a link pointing to the file /usr/es/sbin/cluster/local/lccluster must be placed in /sapdb/<LCNAME>/db/sap, or the lccluster script must be copied into that directory.
To create a symbolic link, use:
ln -s /usr/es/sbin/cluster/local/lccluster /sapdb/<LCNAME>/db/sap/lccluster
Example (the HACMP utilities clRMupdate and cl_echo are given the setuid bit so that they can be called by non-root users):
root@enhot1:/usr/es/sbin/cluster/events/utils>chmod 4755 clRMupdate cl_echo
On both hot standby servers, the users, sdb and <LC-SID>adm, must be members of the
system group or they will not have the authorization to run the HACMP utilities which
support the automatic starting, stopping, and initializing of the hot-standby servers.
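A hedged example of granting that group membership on AIX (note that chuser replaces the secondary group list, so any existing groups, e.g. sapsys, must be repeated):
chuser groups=system,sapsys sdb           # MaxDB software owner
chuser groups=system,sapsys <LC-SID>adm   # liveCache administrator user
lsuser -a groups sdb <LC-SID>adm          # verify the group membership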
Logical volumes and vpaths on the two cluster nodes:
data LV LCDATA1: vpath2 (SVC vdisk ID 67)
log LV LCLOG1: vpath3 (SVC vdisk ID 68)
data LV LCDATA1: vpath4 (SVC vdisk ID 67)
log LV LCLOG1: vpath2 (SVC vdisk ID 68)
Extract from the RTEHSS Configuration File (Valid for Both Nodes)
/usr/opt/ibm/ibmsap/HOTSVC/RTEHSS_config.txt
Variables:
MlCLogVdiskID (set to 68)
MlCDataVdiskID (set to 67)
(Delete the values for any variables which are not being used.)
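The relevant entries then look, for example, like this (an illustrative extract; the full RTEHSS_config.txt is listed in the appendix):
MlCLogVdiskID  68
MlCDataVdiskID 67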
• Follow the instructions to initialize the standby to activate the first data copy.
• On the SVC delete any outstanding flashcopy tasks referring to the hot-standby disks.
• On NODE2 do the following:
- delete all disks and run cfgmgr to get the correct PVIDs following the flash-
copy initialization
- importvg -n -R -y sapdatavg hdisk3
The following error was received when importing the raw devices on NODE2. This happens
occasionally when importing raw devices. The result is that the LV cannot be expanded, but
that is of no consequence here as a formatted data device cannot be extended under liveCache
anyway.
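Putting the NODE2 steps together, a hedged sketch (the device names are examples and depend on the SAN discovery on that host):
rmdev -dl vpath2                                              # remove the stale vpath definition
for d in hdisk3 hdisk5 hdisk7 hdisk9; do rmdev -dl $d; done   # and the underlying hdisks
cfgmgr                                                        # rediscover the disks with the PVIDs written by the flash copy
importvg -n -R -y sapdatavg hdisk3                            # import the data VG without varying it on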
Enable the instance on the primary server as a hot standby instance. This instance will be the
first master.
dbmcli -n <master> -u control,control -d HOTSVC
> db_offline
> param_directput ALLOW_MULTIPLE_SERVERTASKS_UKTS YES
> param_checkall
> hss_enable lib=libHSSibm2145 node=HOTLC
> db_online
Register the standby instance on the secondary server and activate it. Connect via dbmcli to
the primary server.
dbmcli -n <master> -u control,control -d HOTSVC
> hss_addstandby <standby> login=sapdb,passwd
> db_standby <standby>
The command db_standby copies all parameters from the master to the standby, then starts and initialises the standby instance. Initialisation means that it starts the flash-copy tasks which copy an I/O-consistent image of the master data area to the standby. The standby instance does not wait until all blocks are copied; it can immediately work with its data image. At the end, the command db_standby sets the instance into mode STANDBY.
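A hedged check, using the same control user as above: the instance state on the secondary can be queried with db_state; a correctly registered standby reports STANDBY.
dbmcli -n <standby> -u control,control -d HOTSVC db_state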
Under certain circumstances, it may be necessary to reinitialize the standby instance by hand. The situation which led to this problem during the tests has been corrected, but the actions have been captured here for reference.
A problem was experienced in the test-box configuration, apparently related to the fact that we had done the functional tests on this same server, which was now being installed with a new liveCache installation using the same name as the original. We did this purposely in order to fit into the environment which had already been configured for HA. Unfortunately, the new installation took the default DBM user dbm,dbm while the original had the DBM user control,control. Some of the errors encountered as a result are documented below.
In LC overview transaction lc10, the connection light is green indicating that LCA is working,
but an attempt to access the new liveCache via the sql connection fails:
CX_SY_NATIVE_SQL_ERROR. SAP kernel developer traces show the following error:
3004-330 Your encrypted password is invalid.
Actions:
The dbm user on the new liveCache primary was renamed to control. The following activities
were carried out on the new hot-standby primary (is02d6).
Example on is02d6:
dbmcli –d HOTSVC –u dbm,dbm param_directput CONTROLUSERID control
On is02d6:
rm /sapdb/data/config/HOTSVC.upc
rm /sapdb/data/wrk/HOTSVC/dbm.upc
5. Restore the user profile containers with a fallback authorization against the paramfile
Format: dbmcli -s -d <dbname> -u <newuser>,<newpassword> db_state
Example on is02d6:
dbmcli -s -d HOTSVC -u control,control db_state
7. Tell the DBM the user SYDBA and DOMAIN via a "upgrade system tables"
Format:
dbmcli -d <dbname> -u <newuser>,<newpassword> load_systab -u <sysdba>,<pwd> -ud
<domainpwd>
Example at is02d6:
dbmcli -d HOTSVC -u control,control load_systab -u superdba,colduser -ud domain
Note: this is not a normal administrative operation. Normally it is not possible to rename the DBM user.
Failover Tests
Application Level Hot Standby Verification Tests In Detail
Test Scenario in Detail: in order to test the smooth fail-over from one liveCache to another as part of the hot-standby functionality:
Test Execution :
1. Create a job consisting of a single step of SNP Heuristic run
Job GSN_CBSAP_DAILY_1_FAILOVER created in system is02d4 (SCM).
2. Run the job and download the Application log for the above job step run on Day1
( with liveCache 1 connected to the system )
Job run on 18.05.2004 ( Start time : 18.05.2004 19:47:09 )
4. Rerun the job on Day2 (while job is running, perform the fail-over wherein connection
to liveCache1 is broken and liveCache 2 is connected).
After technically setting up the system for the hot-standby fail-over test, the job GSN_CBSAP_DAILY_1_FAILOVER was started (by user BCKCTB) at 14:45:02 on 19.05.2004. It was allowed to run for over 100 seconds before the fail-over was initiated. As expected, the job failed with the error ABAP/4 processor: DBIF_DSQL2_CONNECTERR.
5. After fail-over, the liveCache assignment was confirmed to have been moved to the
hot-standby. See the basis test results for the details of this.
7. The application log was analyzed and a comparison of the planning output of both successful cycles (i.e. on 18.05.2004 and 19.05.2004) was carried out; it is enclosed in the embedded MS-Excel file. Extracts from this data are added below. As expected, certain orders/dates have undergone change due to running the jobs on two different dates.
Conclusion: Following the failover which occurred in the middle of an SNP planning job, the
liveCache data was reset to a consistent state such that the subsequent rerun of the planning
job completed successfully and the output data was correct.
Note: the failover was caused by a simulated liveCache database crash. This was done by
terminating the liveCache process. HACMP is active using application monitoring and
registers the loss of the liveCache application. As the memory cache of the failed process is
lost, there is no purpose in restarting the liveCache on the same server. Instead, the secondary
is switched to primary and the failed server is subsequently recovered and re-established as
the new secondary.
Technical View of the Failover Test in Detail
1) Starting Situation
Below, the kernel processes of liveCache are displayed on both cluster nodes.
is02d8/enhot1 is the standby (is02d8, 10.17.70.190); is02d6/enhot2 is the primary.
root@enhot2:/>ps -ef | grep kernel
root 34794 27186 0 14:22:47 pts/2 0:00 grep kernel
sdb 36658 1 0 18:40:25 - 0:02 /sapdb/HOTSVC/db/pgm/kernel HOTSVC
sdb 38052 36658 0 18:40:27 - 2:14 /sapdb/HOTSVC/db/pgm/kernel HOTSVC
Looking at the network interfaces, we see that the rotating ip-alias used as the service address for liveCache is visible on enhot2. This ip-alias, “hotlc”, is controlled by HACMP and placed on the node running the liveCache primary instance.
root@enhot2:/>netstat -i
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en1 1500 link#2 0.2.55.6a.31.7 15986625 0 1263577 0 0
The dbmgui is connected to the hot-standby master from the user network. It attaches to the
address of is02d6. To really be able to watch the hot-standby in action from the dbmgui, it
must have access to the service network where the ip-alias “hotlc” is known. In the test
environment, this was a private service network and not accessible via the normal SAP
network. Therefore it was only possible to connect directly to the individual LC nodes.
2) Simulating the Failure
The liveCache crash is simulated by terminating the kernel process with signal 9 on the primary (enhot2):
root@enhot2:/>kill -9 38052
root@enhot2:/>date
Wed May 19 14:30:55 DFT 2004
The liveCache instance is down, no kernel processes are active now on enhot2.
root@enhot2:/>ps -ef | grep kernel
root 26168 32594 1 14:31:00 pts/5 0:00 grep kernel
root@enhot1:/usr/es/sbin/cluster/local>netstat -i
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en0 1500 link#2 0.6.29.6c.c1.42 11285309 0 932067 0 0
en0 1500 10.17.64 is02d8 11285309 0 932067 0 0
en2 1500 link#3 0.2.55.9a.3b.bd 7446184 0 1627382 3 0
en2 1500 10.17 is02b8 7446184 0 1627382 3 0
en2 1500 192.168.0 enhot1 7446184 0 1627382 3 0
en2 1500 192.168.0 hotlc 7446184 0 1627382 3 0
lo0 16896 link#1 1748990 0 1749857 0 0
lo0 16896 127 loopback 1748990 0 1749857 0 0
lo0 16896 ::1 1748990 0 1749857 0 0
root@enhot1:/usr/es/sbin/cluster/local>
The following extracts show the HACMP resource group states and the liveCache instance state, captured on both cluster nodes at successive timestamps during the failover; the comments to the right of the output describe what is happening.
RG_LChot1 cascading ONLINE is02d8 status of the cluster servers, both are online.
OK
State
ONLINE status of liveCache.. online
Wed May 19 14:30:55 DFT 2004
-----------------------------------------------------------------------------
Group Name Type State Location
-----------------------------------------------------------------------------
RG_LCmaster rotating OFFLINE is02d8
ONLINE is02d6
OK
State
OFFLINE LiveCache has failed!!
Wed May 19 14:31:02 DFT 2004
-----------------------------------------------------------------------------
Group Name Type State Location
-----------------------------------------------------------------------------
RG_LCmaster rotating OFFLINE is02d8 HACMP is active moving the rotating archive
RELEASING is02d6 resource and IP-Alias to the standby.
OK
State
ADMIN local liveCache being reactivated
:
:
Wed May 19 14:32:13 DFT 2004
-----------------------------------------------------------------------------
Group Name Type State Location
-----------------------------------------------------------------------------
RG_LCmaster rotating ONLINE is02d8
OFFLINE is02d6
OK
State
STANDBY liveCache on enhot2 is now standby.
Wed May 19 14:32:21 DFT 2004
-----------------------------------------------------------------------------
Group Name Type State Location
-----------------------------------------------------------------------------
OK
State
STANDBY the local instance (enhot1) is the standby
Wed May 19 14:30:56 DFT 2004
-----------------------------------------------------------------------------
Group Name Type State Location
-----------------------------------------------------------------------------
RG_LCmaster rotating OFFLINE is02d8 HACMP has recognized the failure of the
TEMPORARY ERROR is02d6 primary at application level.. the cluster
RG_LChot1 cascading ONLINE is02d8 members (the servers) remain online
OK
State
STANDBY
Wed May 19 14:31:04 DFT 2004
-----------------------------------------------------------------------------
Group Name Type State Location
-----------------------------------------------------------------------------
RG_LCmaster rotating OFFLINE is02d8 HACMP is moving the resources to the standby
RELEASING is02d6
OK
State
STANDBY
Wed May 19 14:31:11 DFT 2004
Wed May 19 14:31:34 DFT 2004
-----------------------------------------------------------------------------
Group Name Type State Location
-----------------------------------------------------------------------------
RG_LCmaster rotating ACQUIRING is02d8
OFFLINE is02d6
OK
State
STANDBY
Wed May 19 14:31:59 DFT 2004
-----------------------------------------------------------------------------
Group Name Type State Location
-----------------------------------------------------------------------------
RG_LCmaster rotating ONLINE is02d8 Enhot1 now has the ip-alias and the
OFFLINE is02d6 archive resource
OK
State
ONLINE enhot1 is now online as the primary
NOTE: Unfortunately in the following logs, the time of the APO server was not synchronized
with the time on the liveCache servers, so there is a deviation of 16 minutes and 10 seconds.
The equivalent timestamp on the liveCache servers is denoted in brackets.
Error entry shortdump resulting from the liveCache failure. Seen in transaction ST22.
[14320:42]
The following developer trace indicates when the APO work process discovered the error. Once a connection error is found, several retries are attempted and then the work-process is left in reconnect status. The second extract shows the successful reconnection triggered by the restart of the batch job at 14:55:37 [14:36:27].
The job was manually restarted at 14:55:37 [14:39:12] and the connection to liveCache was successfully re-established as a result of the first access attempt at 14:55:37 [14:39:27]; see the above trace data.
================================================================
From the old sapdb knldiag: the signal 9 used to kill the liveCache kernel and simulate a
liveCache failure is seen arriving at 14:20:51.
From the new sapdb knldiag: HACMP is able to restart the liveCache application, and the new primary places this restarted instance into standby mode.
:
:
2004-05-19 14:32:01 133 12821 TASKING Thread 133 starting
2004-05-19 14:32:01 133 11597 IO Open '/dev/rLCLOG1' successfull,
fd: 13
2004-05-19 14:32:01 133 11565 startup DEVi started
2004-05-19 14:32:01 14 11000 vattach '/dev/rLCLOG1' devno 1 T2
succeeded
2004-05-19 14:32:01 14 11000 vdetach '/dev/rLCLOG1' devno 1 T2
2004-05-19 14:32:01 11 12822 TASKING Thread 132 joining
2004-05-19 14:32:01 132 11566 stop DEVi stopped
2004-05-19 14:32:01 11 12822 TASKING Thread 133 joining
2004-05-19 14:32:01 133 11566 stop DEVi stopped
2004-05-19 14:32:01 14 13950 RTEHSS RTEHSS_API
[RTEHSS_API(COPY):RTEHSS_SetLogReadOnlyStatus]
2004-05-19 14:32:01 14 13950 RTEHSS RTEHSS_API [Got valid handle]
2004-05-19 14:32:01 14 13950 RTEHSS RTEHSS_API [Would set log access
to read only]
2004-05-19 14:32:01 14 13953 RTEHSS Standby role set: STANDBY
(master node ENHOT1)
2004-05-19 14:32:02 8 201 RTE Kernel state changed from
STARTING to ADMIN
======================================= end of startup part 2004-05-19
14:32:02 8 11570 startup complete
2004-05-19 14:32:03 10 11561 COMMUNIC Connecting T68 local 37194
2004-05-19 14:32:03 80 11561 COMMUNIC Connected T68 local 37194
2004-05-19 14:32:03 80 11560 COMMUNIC Releasing T68
2004-05-19 14:32:03 80 12929 TASKING Task T68 started
:
:
:
2004-05-19 14:32:14 81 42 Admin Hotstandby: register succeded;
Failover Solution Comparison
Traditional HA Cluster
Application: general to most database management systems.
Concept:
A cluster solution with two servers in active/standby mode. Both servers have physical access
to the same disks, with mutually exclusive use. Cluster software (HACMP) communicates
between the two servers and reacts to a hardware failure. In the event of a hardware failure,
the disks are taken over by the standby and activated, the IP address for client access is
switched to the standby, and the database application is restarted on the standby.
Reaction time:
Dependent on the number of volume groups and disks in the takeover, this is estimated at 2-3
minutes.
In a failover scenario, the active disks and active log of the failed server are brought back online. Uncommitted transactions at the time of failure must be reset; if the crash occurred during heavy load with many uncommitted transactions, this can take a considerable amount of time. A guesstimate is less than one hour.
Benefits:
Provides automated failover and generally a fast reestablishment of service.
Considerations:
Does not protect against disk failure. Rollback of non-committed transactions required.
Relatively complex landscape. For liveCache, the backup database is started new which
requires the memory cache to first be rebuilt before resuming work.
Shadow Database
Application: general to most database management systems
Concept:
Two servers with separate database instances (separate data); one active and the other standby.
The logs of the first database are “shipped” to the second via network. The 2nd database is in
constant recovery mode and continually applies all the changes recorded in the logs coming
from the active server. When the active server fails, the final logs must be applied to the
shadow database, the database taken offline and restarted in active mode using a copy of the
failed server’s online log in order to recover to the point in time of the failure. Some solution
must be provided to take over the IP address for client access. It is possible to automate these
steps using software like Libelle.
Reaction time:
Dependent on amount of data in the logs which still remains to be applied. Some shadow
databases are purposely run at 1 hr delay in order to protect from a logical error on the active
database. It is thought that this gives enough reaction time to protect the shadow.
If the scenario is automated, the time will include up to several hours of redo logging, and
then a stop and restart of the database.
Benefits:
Failover can be automated. Provides a disaster site recovery scenario. Protects from disk
and/or disk server failure. Rollback of failed transactions not required.
Considerations:
Complex landscape. If the solution is not automated, the reaction time can be greater and
more error prone due to manual intervention requirements. For liveCache, the database is
restarted and therefore the cache must be rebuilt before resuming work.
Hot Standby Database
Application: SAP liveCache 7.5 / MaxDB 7.5.
Concept:
Two servers each hold their own copy of the liveCache data and share a common log volume. The standby instance is initialized from the primary using flash-copy functionality and continuously applies the shared log, so its data cache is kept filled. After a failover, the former primary is recovered and re-established as the new standby by reversing the split mirror. HACMP provides the failover control, initiating the standby to primary switch as a result of either a hardware or application failure on the part of the primary.
Reaction time:
Very quick. <1 minute mostly due to HACMP having a small delay time in reacting.
Benefits:
Completely automated rotating failover. No rollbacks in liveCache; the roll forward is
seconds, and the liveCache data cache is already built and full of the most actual data.
Can protect against disk and disk server failure.
Considerations:
Supported for ESS with flash copy functionality, or SAN Volume Controller with either ESS
or FastT. Available with SCM 4.1 (liveCache and MaxDB 7.5).
Highly Available SCM Landscape Designs
In the following configurations the SAP SCM liveCache is generally protected using the hot-standby solution. The SAP Central Instance is always running on the database server, and a replicated enqueue server is always running on the complementary server. Consequently the subsequent scenarios only show variations in the deployment of the underlying database server.
Each scenario has advantages and disadvantages and thus needs to be selected depending on
the particular needs of customers.
Beyond the protection against server failure, a complete high availability environment needs to consider and protect against the loss of data (e.g. user or application error, disk system crash) or disaster (e.g. destruction of a complete data center). These issues exceed the scope of this document.
Depending on the application service requirements (e.g. response times), both servers may need to be sized larger than required for the intended SAP CI and DB server workload alone, so as to be able to handle the additional workload in case of a failover and still meet service requirements.
Workload management can be used to force the optimal resource distribution under the extra
load of a takeover.
Using a “Traditional” Failover Cluster
(Figure: two-node HACMP failover cluster: application servers, the SAP APO system with its liveCache (connected via CIF/DCOM), and an SAP CI / DB2 server on each node; the liveCache is protected by the hot-standby solution.)
This type of HA solution is chosen to protect against server failure (e.g. failure of the server
running the SAP Central Instance and DB2 server). It is not a protection against disk failure
(each database is stored only once and only accessed from the 2nd server in case of a failure)
nor is it a disaster recovery protection (usually both servers are located relatively close together).
This type of HA solution is the most commonly used implementation. It is sufficient to
address most customers’ high availability needs and it is also the most efficient and cost-
effective one. Depending on the setup of the cluster, usually only the application on the failing
server is affected by the failover but not the application on the other server. This HA
implementation supports most database management systems.
Using SAP Multiple Components in One Database (MCOD)
DB2 supports SAP’s MCOD by allowing each SAP system to be run as a separate DB2 partition, all sharing the same database. This makes it possible to run multiple DB2 partitions together on a common server, distributed over many servers, or even a combination thereof. The benefit for customers deploying MCOD with DB2 is better scalability (e.g. a customer can start “small”, running all partitions on one server, and later scale out onto many servers as the databases grow and require more resources). It also provides administration benefits: for example, all “logical” SAP databases (e.g. SAP R/3, SAP APO, …) can be backed up and restored at once, making it easy to restore all coupled systems to a consistent point in time.
The scenario below is basically equivalent to the previously shown traditional failover
scenario, except for the fact that the SAP R/3 and the SAP APO system share the same
database.
(Figure: HACMP failover cluster: the SAP R/3 and SAP APO DB2 partitions of the shared MCOD database, one partition per server, in a mutual takeover configuration.)
One server runs the DB2 partition for the SAP R/3 system while the other runs the DB2
partition for the SAP APO system. They are linked into an HACMP cluster and act as mutual
takeover.
Under Oracle RAC, there are two database engines with shared access to the same physical database. One database engine is dedicated to R/3 and the other dedicated to APO, but both are capable of doing either. In the case of a failure, HACMP will move the access service address to the remaining engine, and the application servers of the failed database engine will reconnect to the remaining engine. This solution provides a very fast failover as no database takeover or application restart is necessary.
Because all data resides in the same database, the SAP R/3 and the SAP APO system share the same database configuration, which is usually only suitable for small databases.
Even though not stated in the chart this scenario could also be implemented together with
SAP Enqueue Server replication.
The following configuration is again based on Oracle RAC. In this case there are two separate
RAC databases, one for ERP and one for APO. Two dynamic LPARs on separate machines
can each contain an engine for each database, or the databases can be separated into an LPAR
each on both machines. A failure of any database engine will cause the 2nd engine to take over
the load. Dynamic LPAR functionality (on-demand) can be used to activate and enable
additional resources to the remaining engine.
(Figure: two Oracle RAC databases, one for R/3 and one for APO, each spanning both machines over shared GPFS file systems.)
Using the Spare Failover Capacity for SAP Application Server Workload
The following scenario is basically identical to the scenario described in Using a “Traditional” Failover Cluster, except that the systems are each partitioned into two DLPARs. One partition on each server runs the SAP CI and DB server workload, and the other (using the additional spare capacity needed for the take-over) runs some of the application server workload.
(Figure: each server is split into two DLPARs: one runs a CI / DB2 server and the other an APO or R/3 application server; HACMP provides the failover between the servers, and the liveCache is protected by the hot-standby solution.)
In case of a failure of the database server in one DLPAR, the corresponding application server on the other server is stopped and its partition is then used to run the SAP CI and DB server workload. Dynamic LPAR functionality (on-demand) can be used to activate and enable additional resources or to steal resources from lower-priority workloads.
Overview of the Consolidated Colgate Asia Test-Box Project
This chapter introduces the Colgate Asia project and describes the environment in which the sub-projects (referred to as parallel project threads) were carried out. The main thread, which launched the joint project, was a functional upgrade of the Colgate SCM landscape to the
newest SAP versions which were in focus for the Asia system in 2004-2005. The parallel
project threads were designed to take advantage of the hardware and software landscape, to
explore new IBM and SAP technology. The parallel threads included the introduction of
storage virtualization into a production system landscape, and the migration of a production
SCM system to the new hot-standby solution for liveCache.
(Figure: the upgrade and migration steps, carried out on two pSeries p690 servers (6-32 CPUs at 1.3 GHz), including the first-ever SCM 4.1 implementation.)
The Functional Migration
The Asia landscape delivered by Colgate included the ERP and the SCM systems. These were
cloned onto the IBM hardware (described later) and re-established as a “production” pair in
Walldorf. With the help of Colgate, an SCM load scenario based on actual Colgate planning
jobs was designed which provided for repeatable functional and load testing. This test
scenario provided a reliable load generation which was then used for the comparison
performance tests throughout the upgrade. For each of the upgrade steps, a series of these test
batteries was run to be compared against the previous baseline. This was done to ensure that
each upgrade either maintained or improved the system performance. The test scenario is
documented in detail for the main project thread. The above diagram depicts most, but not all
of the migration steps. Each of these steps was documented to provide a roadmap for
customer migration including the solutions to problems found and recommendations for
performance or functional improvements.
Introducing Storage Virtualization
The current trend is moving in the direction of “On Demand” or “Adaptive Computing”. The philosophy behind this move is that systems should have the resources they need when they need them, and these resources should be present in a very flexible form such that they can “come and go” dynamically. IBM offers dynamic LPAR functionality which allows CPUs and memory to be dynamically moved to and from running systems. SAP is focusing on an architecture in which systems can be easily and quickly moved from one server to another should more resources be required, or new application servers can be spawned into production to help unburden an overloaded system. The SAP architecture is based on server and storage virtualization: a decoupling of the service from the server, and of the server from storage. IBM is working with SAP to implement the SAP adaptive computing architecture using the rich functionality of the IBM on-demand components and building blocks for virtualization. The first major requirement for adaptive computing is storage virtualization, and IBM offers two powerful new products in this area:
The SAN Volume Controller, which decouples the server’s “disk” from any dependency on an actual physical storage location. The disk can actually be moved from one storage server to another during active production, and the server’s access moves with it.
The SAN File-System, which provides a high performance shared file-system which
removes any obvious file-system to disk dependency and any need for the server to
own and activate disks. All application data is resident in the shared filespace, and the
server currently responsible for the application simply mounts the file-system.
These two products form the basis for the ongoing Adaptive Computing Initiative in the
ISICC and this test box provided the landscape in which to test the migration of a production
system to storage virtualization. The objective of this project thread was to answer the
following questions:
1 How do we migrate?
2 What do the steps cost in terms of performance and downtime?
3 What do we gain in new flexibility and functionality with our new infrastructure?
(Figure: storage virtualization roadmap for the Colgate test-box clones: (1) consistency-group flash copy on the ESS, (2) migration to the SVC, (3) heterogeneous consistency-group flash copy to the FAStT, (4) heterogeneous configuration under the SVC with image-mode disks, and (5) migration to the SAN File System (SANFS). The APO 3.0 / liveCache 7.4.2 clone resides under /ORACLE and /SAPDB on ESS and FastT600 disks; test points mark where comparison performance data was measured, and the arrows mark PoC activities.)
The diagram above depicts the steps taken to investigate the move to virtualization. Each of
the pink circles depicts a test point where comparison performance data was measured. Each
of the red arrows is a test-box activity. The following test scenarios are documented in detail
for this project:
Overview of Hardware Infrastructure
(Figure: network and SAN layout of the test-box: the LPARs on the production network (10.17.70.x / 10.17.69.x), an F80 providing browser and monitoring access (ibmcc51), the FC switch, the ESS with its Ethernet connection, and a SAN Data Gateway with an LTO tape library on the 192.168.100.x and 172.31.1.x service networks.)
Servers
The server capacity was provided by a p690 with 32 CPUs at 1.3 GHz and 64 GB of memory. The individual servers were implemented as dynamic LPARs. The DLPAR functionality allowed
resources to be moved between the systems as required by the various performance tests. All
performance tests on SCM systems were done using 6 CPUs and 16 GB of memory. As only
64 GB memory was available, functional tests being done on parallel threads used a smaller
footprint until such a time as a performance test was required. During the height of the project
activity, 7 logical partitions were active simultaneously with 4 APO clones, an ERP and a hot-
standby liveCache cluster.
Software Versions
Operating System: AIX 5.2 ML2
SCM Database: Oracle 9.2.0.4.0
SCM liveCache: 7.5.0.12
SAN Volume Controller: 1.1.4
SAN Environment
Hardware Environment
Storage Systems
ESS Model F20
The IBM ESS storage subsystem is a proven platform for very high availability, disaster tolerance, accessibility of mission-critical information, high performance, flexibility, high scalability, efficient manageability, data consolidation, and connectivity. The IBM ESS storage subsystem used for these tests was a model ESS F20 with a capacity of more than 3 TB for data, 2 clusters, internally built up with 16 ranks of 36 GB drives, read and write caches, fibre channel attachments, and a number of software functions such as 'consistency group flashcopy'. The older model ESS F20 used for the tests is not the fastest IBM ESS model. The newer ESS 800 Enterprise Storage Server is around twice as fast and is IBM’s most powerful disk storage server. Developed using IBM’s Seascape architecture, the ESS 800 provides unmatched functions for all the e-business servers of the new IBM server brand.
FastT600
The FAStT600 is a mid-level storage server that can scale to over sixteen terabytes of fibre channel disk. It uses the latest in storage networking
technology to provide an end-to-end 2 Gbps Fibre Channel solution. The
system used for this PoC supported the new FlashCopy with VolumeCopy,
a new function for a complete logical volume copy within the FAStT
Storage Server, and had storage capacity of 1.5 terabytes.
Server System
pSeries p690 High-End Unix Server
8-32way UNIX/Linux Server of the enterprise class offering high-
end performance and reliability as well as on-demand
functionality. The server can be used as a large SMP or logically
partitioned into multiple logical systems. The p690 supports
dynamic logical partitioning which allows resources to be
reconfigured in running partitions: adapters, storage, and CPUs
can be moved between partitions without disrupting running
applications.
Processor: POWER4
CPU Speed: 1.3 / 1.9 GHz
System Memory: 8 GB / 1 TB
Internal Storage: 72.6 GB / 18.7 TB
APPENDIX
RTE_config File
########## RTEHSS_config.txt ##########
# set environment vatriables req. by
# RTEHSS_init()
# created by Oliver Goos (oliver.goos@de.ibm.com)
# created on 05/19/03
#######################################
# Copy Server
## IP adress
CSaIP 192.168.100.10
## User ID (admin)
CSaUID sdb_rsa
## User password, blank for SVC
CSapwd
TermDataCST_001_002 sapdat_1_2
TermDataCST_001_003
TermDataCST_002_001 sapdat_2_1
TermDataCST_002_003
TermDataCST_003_001
TermDataCST_003_002
TermLogCST_001_002
TermLogCST_001_003
TermLogCST_002_001
TermLogCST_002_003
TermLogCST_003_001
TermLogCST_003_002
LiveCache Parameters
KERNELVERSION KERNEL 7.5.0 BUILD 012-123-071-164
INSTANCE_TYPE LVC
MCOD NO
_SERVERDB_FOR_SAP YES
_UNICODE YES
DEFAULT_CODE ASCII
DATE_TIME_FORMAT INTERNAL
CONTROLUSERID CONTROL
CONTROLPASSWORD
MAXLOGVOLUMES 2
MAXDATAVOLUMES 11
LOG_BACKUP_TO_PIPE NO
MAXBACKUPDEVS 2
BACKUP_BLOCK_CNT 8
LOG_MIRRORED NO
MAXVOLUMES 14
_MULT_IO_BLOCK_CNT 8
_DELAY_LOGWRITER 0
LOG_IO_QUEUE 50
_RESTART_TIME 600
MAXCPU 20
MAXUSERTASKS 50
_TRANS_RGNS 8
_TAB_RGNS 8
_OMS_REGIONS 8
_OMS_RGNS 33
OMS_HEAP_LIMIT 0
OMS_HEAP_COUNT 1
OMS_HEAP_BLOCKSIZE 10000
OMS_HEAP_THRESHOLD 100
OMS_VERS_THRESHOLD 2097152
HEAP_CHECK_LEVEL 0
_ROW_RGNS 8
_MIN_SERVER_DESC 21
MAXSERVERTASKS 21
_MAXTRANS 292
MAXLOCKS 2920
_LOCK_SUPPLY_BLOCK 100
DEADLOCK_DETECTION 0
SESSION_TIMEOUT 900
OMS_STREAM_TIMEOUT 30
REQUEST_TIMEOUT 180
_IOPROCS_PER_DEV 2
_IOPROCS_FOR_PRIO 0
_USE_IOPROCS_ONLY NO
_IOPROCS_SWITCH 2
LRU_FOR_SCAN NO
_PAGE_SIZE 8192
_PACKET_SIZE 36864
_MINREPLY_SIZE 4096
_MBLOCK_DATA_SIZE 32768
_MBLOCK_QUAL_SIZE 16384
_MBLOCK_STACK_SIZE 16384
_WORKSTACK_SIZE 8192
_WORKDATA_SIZE 8192
_CAT_CACHE_MINSIZE 262144
CAT_CACHE_SUPPLY 3264
INIT_ALLOCATORSIZE 245760
ALLOW_MULTIPLE_SERVERTASK_UKTS YES
_TASKCLUSTER_01 tw;al;ut;100*bup;10*ev,10*gc;
_TASKCLUSTER_02 ti,100*dw;3*us,2*sv;
_TASKCLUSTER_03 equalize
_DYN_TASK_STACK NO
_MP_RGN_QUEUE YES
_MP_RGN_DIRTY_READ YES
_MP_RGN_BUSY_WAIT YES
_MP_DISP_LOOPS 2
_MP_DISP_PRIO YES
XP_MP_RGN_LOOP 0
MP_RGN_LOOP 100
_MP_RGN_PRIO YES
MAXRGN_REQUEST 3000
_PRIO_BASE_U2U 100
_PRIO_BASE_IOC 80
_PRIO_BASE_RAV 80
_PRIO_BASE_REX 40
_PRIO_BASE_COM 10
_PRIO_FACTOR 80
_DELAY_COMMIT NO
_SVP_1_CONV_FLUSH NO
_MAXGARBAGE_COLL 10
_MAXTASK_STACK 1500
MAX_SERVERTASK_STACK 100
MAX_SPECIALTASK_STACK 100
_DW_IO_AREA_SIZE 50
_DW_IO_AREA_FLUSH 50
FBM_VOLUME_COMPRESSION 50
FBM_VOLUME_BALANCE 10
_FBM_LOW_IO_RATE 10
CACHE_SIZE 100000
_DW_LRU_TAIL_FLUSH 25
XP_DATA_CACHE_RGNS 0
_DATA_CACHE_RGNS 32
CONVERTER_REGIONS 8
MAXPAGER 32
SEQUENCE_CACHE 1
_IDXFILE_LIST_SIZE 2048
_SERVER_DESC_CACHE 74
_SERVER_CMD_CACHE 22
VOLUMENO_BIT_COUNT 8
OPTIM_MAX_MERGE 500
OPTIM_INV_ONLY YES
OPTIM_CACHE NO
OPTIM_JOIN_FETCH 0
JOIN_SEARCH_LEVEL 0
JOIN_MAXTAB_LEVEL4 16
JOIN_MAXTAB_LEVEL9 5
_READAHEAD_BLOBS 25
RUNDIRECTORY /sapdb/data/wrk/HOTSVC
OPMSG1 /dev/console
OPMSG2 /dev/null
_KERNELDIAGFILE knldiag
KERNELDIAGSIZE 800
_EVENTFILE knldiag.evt
_EVENTSIZE 0
_MAXEVENTTASKS 1
_MAXEVENTS 100
_KERNELTRACEFILE knltrace
TRACE_PAGES_TI 2
TRACE_PAGES_GC 20
TRACE_PAGES_LW 5
TRACE_PAGES_PG 3
TRACE_PAGES_US 10
TRACE_PAGES_UT 5
TRACE_PAGES_SV 5
TRACE_PAGES_EV 2
TRACE_PAGES_BUP 0
KERNELTRACESIZE 916
_AK_DUMP_ALLOWED YES
_KERNELDUMPFILE knldump
_RTEDUMPFILE rtedump
_UTILITY_PROTFILE dbm.utl
UTILITY_PROTSIZE 100
_BACKUP_HISTFILE dbm.knl
_BACKUP_MED_DEF dbm.mdf
_MAX_MESSAGE_FILES 64
_EVENT_ALIVE_CYCLE 0
_SHMCHUNK 256
_SHAREDDYNDATA 100301
_SHAREDDYNPOOL 22653
_SHMKERNEL 833002
LOG_VOLUME_NAME_001 /dev/rLCLOG1
LOG_VOLUME_TYPE_001 R
LOG_VOLUME_SIZE_001 262144
DATA_VOLUME_NAME_0001 /dev/rLCDATA1
DATA_VOLUME_TYPE_0001 R
DATA_VOLUME_SIZE_0001 1048576
DATA_VOLUME_GROUPS 1
__PARAM_CHANGED___ 0
__PARAM_VERIFIED__ 2004-05-19 14:31:52
DIAG_HISTORY_NUM 2
DIAG_HISTORY_PATH /sapdb/data/wrk/HOTSVC/DIAGHISTORY
_DIAG_SEM 1
SHOW_MAX_STACK_USE NO
LOG_SEGMENT_SIZE 87381
SUPPRESS_CORE YES
FORMATTING_MODE PARALLEL
FORMAT_DATAVOLUME YES
OFFICIAL_NODE HOTLC
LOAD_BALANCING_CHK 0
LOAD_BALANCING_DIF 10
LOAD_BALANCING_EQ 5
HS_STORAGE_DLL libHSSibm2145
HS_NODE_002 ENHOT1
HS_NODE_001 ENHOT2
HS_DELAY_TIME_002 0
HS_DELAY_TIME_001 0
HS_SYNC_INTERVAL 10
USE_OPEN_DIRECT NO
SYMBOL_DEMANGLING NO
EXPAND_COM_TRACE NO
JOIN_OPERATOR_IMPLEMENTATION YES
JOIN_TABLEBUFFER 128
SET_VOLUME_LOCK NO
SHAREDSQL YES
SHAREDSQL_EXPECTEDSTATEMENTCOUNT 1500
SHAREDSQL_COMMANDCACHESIZE 32768
MEMORY_ALLOCATION_LIMIT 0
USE_SYSTEM_PAGE_CACHE YES
USE_COROUTINES NO
USE_STACK_ON_STACK NO
USE_UCONTEXT YES
MIN_RETENTION_TIME 60
MAX_RETENTION_TIME 480
MAX_SINGLE_HASHTABLE_SIZE 512
MAX_HASHTABLE_MEMORY 5120
HASHED_RESULTSET NO
HASHED_RESULTSET_CACHESIZE 262144
AUTO_RECREATE_BAD_INDEXES NO