

IMS

Version 10

Application Programming Planning Guide

SC18-9697-02
Note
Before using this information and the product it supports, read the information in “Notices” on page 219.

This edition applies to IMS Version 10 (program number 5635-A01) and to all subsequent releases and modifications
until otherwise indicated in new editions. This edition replaces SC18-9697-01.
© Copyright IBM Corporation 1974, 2010.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

About this information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii


Prerequisite knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
IBM product names used in this information . . . . . . . . . . . . . . . . . . . . . . . . xiii
IMS function names used in this information . . . . . . . . . . . . . . . . . . . . . . . . xv
Accessibility features for IMS Version 10 . . . . . . . . . . . . . . . . . . . . . . . . . xv
Accessibility features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Keyboard navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Related accessibility information . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
IBM and accessibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
How to send your comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv

| Changes to the IMS library for IMS Version 10 . . . . . . . . . . . . . . . . . . xvii

Chapter 1. How application programs work with IMS Database Manager . . . . . . . . 1


IMS environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
DL/I and your application program . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
DL/I codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Status, return, and reason codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Exceptional condition status codes . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
High Availability Large Databases (HALDBs) . . . . . . . . . . . . . . . . . . . . . . . 3
Error routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
| Database descriptions (DBDs) and program specification blocks (PSBs) . . . . . . . . . . . . . . . . 4
| DL/I for CICS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
DL/I using the ODBA interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Database hierarchy examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
| Medical hierarchy example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Bank account hierarchy example . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Chapter 2. How application programs work with IMS Transaction Manager . . . . . . 15


Application program environments . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
DL/I elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
DL/I calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Message call functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
System service call functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Status, return, and reason codes . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Exceptional condition status code . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Error routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

Chapter 3. How CICS EXEC DLI application programs work with IMS . . . . . . . . . 19
Getting started with EXEC DLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

| Chapter 4. How Java application programs work with IMS . . . . . . . . . . . . . 23


| How Java application programs work with IMS databases. . . . . . . . . . . . . . . . . . . . 23
| Comparison of hierarchical and relational databases . . . . . . . . . . . . . . . . . . . . . 23
| Overview of the IMS hierarchical database interface for Java . . . . . . . . . . . . . . . . . . 27
| JDBC access to IMS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
| How Java application programs work with IMS transactions . . . . . . . . . . . . . . . . . . . 27
| Java message processing (JMP) regions . . . . . . . . . . . . . . . . . . . . . . . . . 29
| Java batch processing (JBP) regions . . . . . . . . . . . . . . . . . . . . . . . . . . 31

Chapter 5. Designing an application: Introductory concepts . . . . . . . . . . . . . 35
Storing and processing information in a database. . . . . . . . . . . . . . . . . . . . . . . 35
Storing data in separate files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Storing data in a combined file . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Storing data in a database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
| Database hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Your program's view of the data . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Processing a database record . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
| Tasks for developing an application . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Designing the application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Developing specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Implementing the design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

Chapter 6. Designing an application: Data and local views . . . . . . . . . . . . . 43


An overview of application design . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Identifying application data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Listing data elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Naming data elements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Documenting application data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Designing a local view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Analyzing data relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Local view examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

Chapter 7. Designing an application for APPC . . . . . . . . . . . . . . . . . . . 63


Overview of APPC and LU 6.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Application program types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Standard DL/I application program . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Modified standard DL/I application program . . . . . . . . . . . . . . . . . . . . . . . 64
CPI Communications driven program . . . . . . . . . . . . . . . . . . . . . . . . . 64
Application objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Choosing conversation attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Synchronous conversation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Asynchronous conversation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Asynchronous output delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
MSC synchronous and asynchronous conversation . . . . . . . . . . . . . . . . . . . . . 66
Conversation type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Conversation state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Synchronization level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Distributed sync point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Distributed sync point concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Impact on the network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Application programming interface for LU type 6.2 . . . . . . . . . . . . . . . . . . . . . . 72
Implicit API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Explicit API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
LU 6.2 partner program design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
LU 6.2 flow diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Integrity tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
DFSAPPC message switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Chapter 8. Analyzing IMS application processing requirements . . . . . . . . . . . 99


| Defining IMS application requirements . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Accessing databases with your IMS application program . . . . . . . . . . . . . . . . . . . . 100
Accessing data: the types of programs you can write for your IMS application . . . . . . . . . . . . 102
DB batch processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
TM batch processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Processing messages: MPPs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Processing messages: IFPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Batch message processing: BMPs . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Java message processing: JMPs . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
| Java batch processing: JBPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

IMS programming integrity and recovery considerations . . . . . . . . . . . . . . . . . . . . 110
How IMS protects data integrity: commit points . . . . . . . . . . . . . . . . . . . . . . 110
Planning for program recovery: checkpoint and restart . . . . . . . . . . . . . . . . . . . 113
Data availability considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Use of STAE or ESTAE and SPIE in IMS programs . . . . . . . . . . . . . . . . . . . . . 118
Dynamic allocation for IMS databases . . . . . . . . . . . . . . . . . . . . . . . . . . 119

Chapter 9. Analyzing CICS application processing requirements . . . . . . . . . . 121


| Defining CICS application requirements . . . . . . . . . . . . . . . . . . . . . . . . . 121
Accessing databases with your CICS application program . . . . . . . . . . . . . . . . . . . 123
Writing a CICS program to access IMS databases . . . . . . . . . . . . . . . . . . . . . . 124
Writing a CICS online program . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Writing an IMS batch program . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Writing a batch-oriented BMP program . . . . . . . . . . . . . . . . . . . . . . . . . 127
Using data sharing for your CICS program . . . . . . . . . . . . . . . . . . . . . . . . 128
Scheduling and terminating a PSB (CICS online programs only) . . . . . . . . . . . . . . . . . 129
Linking and passing control to other programs (CICS online programs only) . . . . . . . . . . . . . 129
| How CICS distributed transactions access IMS . . . . . . . . . . . . . . . . . . . . . . . 130
Maximizing the performance of your CICS system . . . . . . . . . . . . . . . . . . . . . . 130
Programming integrity and database recovery considerations for your CICS program . . . . . . . . . . 131
How IMS protects data integrity for your program (CICS online programs) . . . . . . . . . . . . 131
Recovering databases accessed by batch and BMP programs. . . . . . . . . . . . . . . . . . 131
Data availability considerations for your CICS program . . . . . . . . . . . . . . . . . . . . 135
Unavailability of a database . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Unavailability of some data in a database . . . . . . . . . . . . . . . . . . . . . . . . 136
The SETS or SETU and ROLS functions . . . . . . . . . . . . . . . . . . . . . . . . 137
Use of STAE or ESTAE and SPIE in IMS batch programs . . . . . . . . . . . . . . . . . . . . 137
Dynamic allocation for IMS databases . . . . . . . . . . . . . . . . . . . . . . . . . . 138

Chapter 10. Gathering requirements for database options . . . . . . . . . . . . . 139


Analyzing data access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Direct access. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Sequential access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Accessing z/OS files through IMS: GSAM . . . . . . . . . . . . . . . . . . . . . . . . 146
Accessing IMS data through z/OS: SHSAM and SHISAM . . . . . . . . . . . . . . . . . . 146
Understanding how data structure conflicts are resolved . . . . . . . . . . . . . . . . . . . . 147
Using different fields: field-level sensitivity . . . . . . . . . . . . . . . . . . . . . . . 147
Resolving processing conflicts in a hierarchy: secondary indexing . . . . . . . . . . . . . . . . 148
Creating a new hierarchy: logical relationships . . . . . . . . . . . . . . . . . . . . . . 152
Providing data security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Providing data availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
Keeping a program from accessing the data: data sensitivity. . . . . . . . . . . . . . . . . . 157
Preventing a program from updating data: processing options . . . . . . . . . . . . . . . . . 159
Read without integrity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
What read without integrity means . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Data set extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

Chapter 11. Gathering requirements for message processing options . . . . . . . . 163


Identifying online security requirements . . . . . . . . . . . . . . . . . . . . . . . . . 163
Limiting access to specific individuals: signon security . . . . . . . . . . . . . . . . . . . 163
Limiting access for specific terminals: terminal security . . . . . . . . . . . . . . . . . . . 164
Limiting access to the program: password security . . . . . . . . . . . . . . . . . . . . . 164
Allowing access to security data: authorization security . . . . . . . . . . . . . . . . . . . 164
How IMS security relates to DB2 for z/OS security. . . . . . . . . . . . . . . . . . . . . 164
Supplying security information . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Analyzing screen and message formats . . . . . . . . . . . . . . . . . . . . . . . . . . 165
An overview of MFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
An overview of basic edit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Editing considerations in your application . . . . . . . . . . . . . . . . . . . . . . . . 166
Gathering requirements for conversational processing . . . . . . . . . . . . . . . . . . . . . 168

What happens in a conversation . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Designing a conversation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Important points about the SPA . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Recovery considerations in conversations . . . . . . . . . . . . . . . . . . . . . . . . 170
Identifying output message destinations . . . . . . . . . . . . . . . . . . . . . . . . . 171
The originating terminal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
To other programs and terminals . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

Chapter 12. Testing an IMS application program . . . . . . . . . . . . . . . . . 175


What you need to test an IMS program . . . . . . . . . . . . . . . . . . . . . . . . . 175
Testing DL/I call sequences (DFSDDLT0) before testing your IMS program. . . . . . . . . . . . . . 175
Using BTS II to test your IMS program . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Tracing DL/I calls with image capture for your IMS program . . . . . . . . . . . . . . . . . . 176
Using image capture with DFSDDLT0 . . . . . . . . . . . . . . . . . . . . . . . . . 177
Restrictions on using image capture output . . . . . . . . . . . . . . . . . . . . . . . 178
Running image capture online. . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Running image capture as a batch job . . . . . . . . . . . . . . . . . . . . . . . . . 178
Retrieving image capture data from the log data set . . . . . . . . . . . . . . . . . . . . 179
Requests for monitoring and debugging your IMS program . . . . . . . . . . . . . . . . . . . 179
Retrieving database statistics: the STAT call . . . . . . . . . . . . . . . . . . . . . . . 180
Writing Information to the system log: the LOG request . . . . . . . . . . . . . . . . . . . 193
What to do when your IMS program terminates abnormally . . . . . . . . . . . . . . . . . . 193
Recommended actions after an abnormal termination of an IMS program . . . . . . . . . . . . . 193
Diagnosing an abnormal termination of an IMS program . . . . . . . . . . . . . . . . . . . 194

Chapter 13. Testing a CICS application program . . . . . . . . . . . . . . . . . 197


What you need to test a CICS program. . . . . . . . . . . . . . . . . . . . . . . . . . 197
Testing your CICS program. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Using the Execution Diagnostic Facility (command-level only) . . . . . . . . . . . . . . . . . 198
Using CICS dump control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Using CICS trace control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
Tracing DL/I calls with image capture . . . . . . . . . . . . . . . . . . . . . . . . . 199
Requests for monitoring and debugging your CICS program . . . . . . . . . . . . . . . . . . 202
What to do when your CICS program terminates abnormally . . . . . . . . . . . . . . . . . . 202
Recommended actions after an abnormal termination of CICS . . . . . . . . . . . . . . . . . 202
Diagnosing an abnormal termination of CICS . . . . . . . . . . . . . . . . . . . . . . 203

Chapter 14. Testing an ODBA application program . . . . . . . . . . . . . . . . 205


Tracing DL/I calls with image capture to test your ODBA program . . . . . . . . . . . . . . . . 206
Using image capture with DFSDDLT0 to test your ODBA program . . . . . . . . . . . . . . . . 206
Running image capture online. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Retrieving image capture data from the log data set . . . . . . . . . . . . . . . . . . . . . 207
Requests for monitoring and debugging your ODBA program . . . . . . . . . . . . . . . . . . 208
What to do when your ODBA program terminates abnormally . . . . . . . . . . . . . . . . . . 208
Recommended actions after an abnormal termination of an ODBA program . . . . . . . . . . . . 208
Diagnosing an abnormal termination of an ODBA program . . . . . . . . . . . . . . . . . . 209

Chapter 15. Documenting an application program. . . . . . . . . . . . . . . . . 211


Documentation for other programmers . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Documentation for users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212

Chapter 16. Managing the IMS Spool API overall design . . . . . . . . . . . . . . 213
IMS Spool API design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Sending data to the JES spool data sets . . . . . . . . . . . . . . . . . . . . . . . . . . 214
IMS Spool API performance considerations . . . . . . . . . . . . . . . . . . . . . . . . 214
JES initiator considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Application managed text units . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
BSAM I/O area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
IMS Spool API application coding considerations . . . . . . . . . . . . . . . . . . . . . . 215
Print data formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215

Message integrity options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216

Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Programming interface information . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
IMS Version 10 library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Supplementary publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Publication collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Accessibility titles cited in the IMS Version 10 library . . . . . . . . . . . . . . . . . . . . . 224

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

Figures
| 1. Organization of the IMS Version 10 library in the information center. . . . . . . . . . . . . . xviii
2. DL/I program elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
3. Normal relationship between programs, PSBs, PCBs, DBDs, and databases . . . . . . . . . . . . . 4
4. Relationship between programs and multiple PCBs (concurrent processing) . . . . . . . . . . . . 5
| 5. Structure of a call-level CICS program . . . . . . . . . . . . . . . . . . . . . . . . . 6
| 6. Medical hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
7. DL/I program elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
8. Structure of a command-level batch or BMP program . . . . . . . . . . . . . . . . . . . 20
| 9. Segments of the Dealership sample database . . . . . . . . . . . . . . . . . . . . . . 24
| 10. Relational representation of the Dealership sample database . . . . . . . . . . . . . . . . . 25
| 11. Segment occurrences in the Dealership sample database . . . . . . . . . . . . . . . . . . 26
| 12. Relational representation of segment occurrences in the Dealership database . . . . . . . . . . . . 26
| 13. JMP or JBP applications that use the Java class libraries for IMS . . . . . . . . . . . . . . . . 28
14. Accounting program's view of the database . . . . . . . . . . . . . . . . . . . . . . . 38
15. Patient illness program's view of the database . . . . . . . . . . . . . . . . . . . . . . 39
16. Current roster for technical education example . . . . . . . . . . . . . . . . . . . . . . 46
17. Current roster after step 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
18. Current roster after step 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
19. Current roster after step 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
20. Schedule of courses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
21. Course schedule after step 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
22. Instructor skills report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
23. Instructor skills after step 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
24. Instructor schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
25. Instructor schedules step 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
26. Instructor schedules step 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
27. Participants in resource recovery . . . . . . . . . . . . . . . . . . . . . . . . . . 69
28. Two-phase commit process with one resource manager . . . . . . . . . . . . . . . . . . . 70
29. Distributed resource recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
30. Flow of a local IMS synchronous transaction when Sync_level=None . . . . . . . . . . . . . . 75
31. Flow of a local IMS synchronous transaction when Sync_level=Confirm . . . . . . . . . . . . . 76
32. Flow of a local IMS asynchronous transaction when Sync_level=None . . . . . . . . . . . . . . 77
33. Flow of a local IMS asynchronous transaction when Sync_level=Confirm . . . . . . . . . . . . . 78
34. Flow of a local IMS conversational transaction when Sync_level=None. . . . . . . . . . . . . . 79
35. Flow of a local IMS command when Sync_level=None . . . . . . . . . . . . . . . . . . . 80
36. Flow of a local IMS asynchronous command when Sync_level=Confirm . . . . . . . . . . . . . 81
37. Flow of a message switch when Sync_level=None . . . . . . . . . . . . . . . . . . . . 82
38. Flow of a local CPI communications driven program when Sync_level=None . . . . . . . . . . . 83
39. Flow of a remote IMS synchronous transaction when Sync_level=None . . . . . . . . . . . . . 84
40. Flow of a remote IMS asynchronous transaction when Sync_level=None . . . . . . . . . . . . . 85
41. Flow of a remote IMS asynchronous transaction when Sync_level=Confirm . . . . . . . . . . . . 86
42. Flow of a remote IMS synchronous transaction when Sync_level=Confirm . . . . . . . . . . . . 87
43. Standard DL/I program commit scenario when Sync_Level=Syncpt. . . . . . . . . . . . . . . 88
44. CPI-C driven commit scenario when Sync_Level=Syncpt . . . . . . . . . . . . . . . . . . 89
45. Standard DL/I program U119 backout scenario when Sync_Level=Syncpt. . . . . . . . . . . . . 90
46. Standard DL/I program U0711 backout scenario when Sync_Level=Syncpt . . . . . . . . . . . . 91
47. Standard DL/I program ROLB scenario when Sync_Level=Syncpt . . . . . . . . . . . . . . . 92
48. Multiple transactions in same commit when Sync_Level=Syncpt . . . . . . . . . . . . . . . . 93
49. Documenting user task descriptions: current roster example . . . . . . . . . . . . . . . . . 100
50. Single mode and multiple mode . . . . . . . . . . . . . . . . . . . . . . . . . . 112
51. Current roster task description . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
52. Patient hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
53. Indexing a root segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
54. Indexing a dependent segment. . . . . . . . . . . . . . . . . . . . . . . . . . . 151
55. Patient and inventory hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . . 154

56. Logical relationships example . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
57. Supplies and purchasing hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . 156
58. Program B and program C hierarchies . . . . . . . . . . . . . . . . . . . . . . . . 156
59. Medical database hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
60. Sample hierarchy for key sensitivity example . . . . . . . . . . . . . . . . . . . . . . 159

Tables
1. Licensed program full names and short names . . . . . . . . . . . . . . . . . . . . . xiii
| 2. High-level user tasks and the IMS Version 10 books that support those tasks . . . . . . . . . . xix
| 3. PATIENT segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
| 4. ILLNESS segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
| 5. TREATMNT segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
| 6. BILLING segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
| 7. PAYMENT segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
| 8. HOUSHOLD segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
9. Teller segment in a fixed related MSDB . . . . . . . . . . . . . . . . . . . . . . . . 12
10. Branch summary segment in a dynamic related MSDB . . . . . . . . . . . . . . . . . . . 12
11. Account segment in a nonrelated MSDB . . . . . . . . . . . . . . . . . . . . . . . . 13
12. Entities and data elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
13. Example of data elements information form. . . . . . . . . . . . . . . . . . . . . . . 49
14. Single occurrence of class aggregate . . . . . . . . . . . . . . . . . . . . . . . . . 51
15. Data aggregates and keys for current roster after step 1 . . . . . . . . . . . . . . . . . . . 52
16. Multiple occurrences of class aggregate . . . . . . . . . . . . . . . . . . . . . . . . 53
17. Data aggregates and keys for current roster after step 3 . . . . . . . . . . . . . . . . . . . 55
18. Course schedule data elements . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
19. Data aggregates and keys for course schedule after step 1 . . . . . . . . . . . . . . . . . . 58
20. Instructor skills data elements . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
21. Instructor schedules data elements . . . . . . . . . . . . . . . . . . . . . . . . . . 60
22. Using application programs in APPC . . . . . . . . . . . . . . . . . . . . . . . . . 65
23. Message integrity of conversations . . . . . . . . . . . . . . . . . . . . . . . . . . 94
24. Results of processing when integrity is compromised . . . . . . . . . . . . . . . . . . . 94
25. Recovering APPC messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
26. Program and database options in IMS environments . . . . . . . . . . . . . . . . . . . 101
27. Processing modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
28. The data that your CICS program can access . . . . . . . . . . . . . . . . . . . . . . 123
29. Program and database options in the CICS environments . . . . . . . . . . . . . . . . . . 123
30. Physical employee segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
31. Employee segment with field-level sensitivity . . . . . . . . . . . . . . . . . . . . . . 148
32. Tools you can use for testing your program . . . . . . . . . . . . . . . . . . . . . . 198
33. Tools you can use for testing your program . . . . . . . . . . . . . . . . . . . . . . 205

About this information
This information provides guidance and planning information for application
programs that access IMS™ databases or messages. It also describes how to gather
and analyze program requirements, and how to design, test, and document an
IMS application program.

This information is available as part of the Information Management Software for
z/OS® Solutions Information Center at http://publib.boulder.ibm.com/infocenter/imzic.
A PDF version of this information is available in the information center.

Prerequisite knowledge
Before using this information, you should have knowledge of either IMS Database
Manager (DB) or IMS Transaction Manager (TM), including the access methods
used by IMS. You should also understand basic IMS concepts and your
installation’s IMS system, and have a general knowledge of the tasks involved in
project planning.

The following information from IBM® Press can help you gain an understanding of
basic IMS concepts: An Introduction to IMS by Dean H. Meltz, Rick Long, Mark
Harrington, Robert Hain, and Geoff Nicholls (ISBN 0-13-185671-5). Go to the IMS
Web site at www.ibm.com/ims for details.

IBM offers a wide variety of classroom and self-study courses to help you learn
IMS. For a complete list of courses available, go to the IMS home page on the Web
at www.ibm.com/ims and link to the Training and Certification page.

If you are a CICS® user, you should have a similar level of knowledge about
CICS. The IMS concepts explained in this information are limited to those that are
pertinent to designing application programs. You should also know how to use
COBOL, PL/I, assembler language, Pascal, or C language.

You can gain an understanding of basic IMS concepts by reading An Introduction to
IMS, an IBM Press publication written by Dean H. Meltz, Rick Long, Mark
Harrington, Robert Hain, and Geoff Nicholls (ISBN number 0-13-185671-5). An
excerpt from this publication is available in the Information Management Software
for z/OS Solutions Information Center.

IBM product names used in this information


In this information, the licensed programs shown in the following table are
referred to by their short names.
Table 1. Licensed program full names and short names
v IBM Application Recovery Tool for IMS and DB2®: Application Recovery Tool
v IBM CICS Transaction Server for OS/390®: CICS
v IBM CICS Transaction Server for z/OS: CICS
v IBM DB2 for z/OS: DB2 for z/OS
v IBM Data Facility Storage Management Subsystem Data Facility Product: Data Facility Product (DFSMSdfp)
v IBM Data Facility Storage Management Subsystem Data Set Services: Data Set Services (DFSMSdss)
v IBM Enterprise COBOL for z/OS and OS/390: Enterprise COBOL
v IBM Enterprise PL/I for z/OS and OS/390: Enterprise PL/I
v IBM High Level Assembler for MVS & VM & VSE: High Level Assembler
v IBM IMS Advanced ACB Generator: IMS Advanced ACB Generator
v IBM IMS Batch Backout Manager: IMS Batch Backout Manager
v IBM IMS Batch Terminal Simulator: IMS Batch Terminal Simulator
v IBM IMS Buffer Pool Analyzer: IMS Buffer Pool Analyzer
v IBM IMS Command Control Facility for z/OS: IMS Command Control Facility
v IBM IMS Connect for z/OS: IMS Connect
v IBM IMS Database Control Suite: IMS Database Control Suite
v IBM IMS Database Recovery Facility for z/OS: Database Recovery Facility
v IBM IMS Database Repair Facility: IMS Database Repair Facility
v IBM IMS DataPropagator for z/OS: IMS DataPropagator
v IBM IMS DEDB Fast Recovery: IMS DEDB Fast Recovery
v IBM IMS Extended Terminal Option Support: IMS ETO Support
v IBM IMS Fast Path Basic Tools: IMS Fast Path Basic Tools
v IBM IMS Fast Path Online Tools: IMS Fast Path Online Tools
v IBM IMS Hardware Data Compression-Extended: IMS Hardware Data Compression-Extended
v IBM IMS High Availability Large Database (HALDB) Conversion Aid for z/OS: IBM IMS HALDB Conversion Aid
v IBM IMS High Performance Change Accumulation Utility for z/OS: IMS High Performance Change Accumulation Utility
v IBM IMS High Performance Load for z/OS: IMS HP Load
v IBM IMS High Performance Pointer Checker for OS/390: IMS HP Pointer Checker
v IBM IMS High Performance Prefix Resolution for z/OS: IMS HP Prefix Resolution
v IBM Tivoli® NetView® for z/OS: Tivoli NetView for z/OS
v IBM WebSphere® Application Server for z/OS and OS/390: WebSphere Application Server for z/OS
v IBM WebSphere MQ for z/OS: WebSphere MQ
v IBM WebSphere Studio Application Developer Integration Edition: WebSphere Studio
v IBM z/OS: z/OS



IMS function names used in this information
In this information, the term HALDB Online Reorganization refers to the
integrated HALDB Online Reorganization function that is part of IMS Version 10,
unless otherwise indicated.

IMS provides an integrated IMS Connect function, which offers a functional
replacement for the IMS Connect tool (program number 5655-K52). In this
information, the term IMS Connect refers to the integrated IMS Connect function
that is part of IMS Version 10, unless otherwise indicated.

Accessibility features for IMS Version 10


Accessibility features help a user who has a physical disability, such as restricted
mobility or limited vision, to use information technology products successfully.

Accessibility features
The following list includes the major accessibility features in z/OS products,
including IMS Version 10. These features support:
v Keyboard-only operation.
v Interfaces that are commonly used by screen readers and screen magnifiers.
v Customization of display attributes such as color, contrast, and font size.

Note: The Information Management Software for z/OS Solutions Information
Center (which includes information for IMS Version 10) and its related
publications are accessibility-enabled for the IBM Home Page Reader. You
can operate all features by using the keyboard instead of the mouse.

Keyboard navigation
You can access IMS Version 10 ISPF panel functions by using a keyboard or
keyboard shortcut keys.

For information about navigating the IMS Version 10 ISPF panels using TSO/E or
ISPF, refer to the z/OS TSO/E Primer, the z/OS TSO/E User’s Guide, and the z/OS
ISPF User’s Guide. These guides describe how to navigate each interface, including
the use of keyboard shortcuts or function keys (PF keys). Each guide includes the
default settings for the PF keys and explains how to modify their functions.

Related accessibility information


Online documentation for IMS Version 10 is available in the Information
Management Software for z/OS Solutions Information Center.

IBM and accessibility


See the IBM Accessibility Center at www.ibm.com/able for more information about
the commitment that IBM has to accessibility.

How to send your comments


Your feedback is important in helping us provide the most accurate and highest
quality information. If you have any comments about this or any other IMS
information, you can take one of the following actions:



v From any topic in the information center at http://publib.boulder.ibm.com/
infocenter/imzic, click the Feedback link at the bottom of the topic and
complete the Feedback form.
v Send your comments by e-mail to imspubs@us.ibm.com. Be sure to include the
title, the part number of the title, the version of IMS, and, if applicable, the
specific location of the text on which you are commenting (for example, a page
number in the PDF or a heading in the information center).



|

| Changes to the IMS library for IMS Version 10


| The organization of the IMS Version 10 library is significantly different from that of
| earlier versions. The library has been reorganized and streamlined, and includes
| new and changed titles. Also, none of the IMS Version 10 information is licensed.

| The reasons for reorganizing and rearchitecting the IMS library so dramatically are
| to achieve the following goals:
| v Group similar information together in a more intuitive organization. For
| example, in the IMS Version 10 library, all messages and codes are in the
| messages and codes books, rather than distributed across multiple books, and all
| appear in the information center under Troubleshooting for IMS. As another
| example, all exit routines are now in one book, the IMS Version 10: Exit Routine
| Reference, and appear in the information center under IMS reference
| information ->Exit routines, rather than being distributed across six books as
| they were in the IMS Version 9 library.
| v Rewrite information to better support user tasks. Table 2 on page xix describes
| the high-level user tasks and the IMS Version 10 books that support those tasks.
| v Separate information into three basic types of topics: task, concept, and reference.
| v Utilize the DITA (Darwin Information Type Architecture) open source tagging
| language.

| Because IBM's strategy is to deliver product information in Eclipse information


| centers, IMS Version 10 is the final version of the IMS library that will be available
| in BookManager format. Information centers provide the following advantages
| over PDF and BookManager formats:
| v Improved retrievability: Users can search across the entire information center, or
| set filters to search categories of information (for example, search only IMS
| Version 10 information, or search only IMS Version 9 information). Users can
| also search for information center content via a search engine such as Google, or
| from www.ibm.com.
| v Improved information currency: Users can subscribe to information updates by
| using RSS feeds.
| v Accessibility support.
| v Multi-language support.

| There are known limitations with BookManager output. If you encounter problems
| in the BookManager information with Web addresses, syntax diagrams, wide
| examples, or tables, refer to the information in the information center or in a PDF
| book.

| The following figure illustrates the organization of the IMS Version 10 library,
| including how that information is organized within the information center, which
| is available at http://publib.boulder.ibm.com/infocenter/imzic.
|



|

|
| Figure 1. Organization of the IMS Version 10 library in the information center
|
| The following table describes high-level user tasks and the IMS Version 10 books
| that support those tasks. The IMS library also includes the IMS Version 10: Master
| Index and Glossary, which provides a central index for all of the IMS Version 10
| information, as well as a glossary of IMS terms. The combined IMS Version 10:
| Master Index and Glossary is available only in PDF format. The master index for
| IMS Version 10 and the IMS glossary are included in the information center.



| Table 2. High-level user tasks and the IMS Version 10 books that support those tasks
|
| Task: IMS overview
| v IMS Version 10: Licensed Programming Specifications. Summarizes the functions
|   and capabilities of IMS, lists hardware and software requirements, and provides
|   licensing and warranty details.
| v IMS Version 10: Fact Sheet. Provides an overview of the IMS Version 10 features.
| v Introduction to IMS. Provides the first five chapters of a retail publication
|   that introduces the IMS product and its features.
|
| Task: Release planning for IMS
| v IMS Version 10: Release Planning Guide (GC18-9717). Provides general information
|   to help you evaluate and plan for IMS Version 10. It describes the new features
|   and enhancements for IMS Version 10, the hardware and software requirements for
|   these new features and enhancements, considerations for migration and
|   coexistence for IMS Version 10, and an overview of the IMS Tools that are
|   enhanced to work with IMS Version 10. It also includes an overview of the
|   features and enhancements for IMS Version 9 and IMS Version 8.
|
| Task: Installing IMS
| v Program Directory for Information Management System Transaction and Database
|   Servers (GI10-8754). Provides information about the material and procedures
|   that are associated with the installation of IMS Version 10.
| v IMS Version 10: Installation Guide (GC18-9710). Provides guidance information
|   for preparing for an IMS installation and running the IMS installation
|   verification program (IVP). It also provides information about the sample
|   applications that are provided with IMS.
| v IMS Version 10: System Definition Guide (GC18-9998). Provides guidance
|   information for designing your IMS system, including information for defining
|   and tailoring the IMS system, IMS Common Queue Server (CQS), IMS Common
|   Service Layer (CSL), integrated IMS Connect, and IMS Transport Manager
|   Subsystem (TMS). Reference information for IMS system definition is in IMS
|   Version 10: System Definition Reference.
| v IMS Version 10: System Definition Reference (GC18-9966). Provides reference
|   information for defining and tailoring an IMS system, including descriptions
|   of the IMS macros, procedures, and members of the IMS.PROCLIB data set.
|   Guidance information for IMS system definition is in the IMS Version 10:
|   System Definition Guide.
|
| Task: IMS administration
| v IMS Version 10: Communications and Connections Guide (SC18-9703). Describes the
|   administration of IMS communications and connections: CPI Communications and
|   APPC/IMS, facilities for attaching to external subsystems, IMS Extended
|   Terminal Option (ETO), integrated IMS Connect function, external Java
|   application environments, Multiple Systems Coupling (MSC), IMS Open
|   Transaction Manager Access (OTMA), SLU P and Finance communication systems,
|   TCP/IP communications, and VTAM® networking.
| v IMS Version 10: Database Administration Guide (SC18-9704). Describes IMS
|   database types and concepts, and also describes how to design, implement,
|   maintain, modify, back up, and recover IMS databases.
| v IMS Version 10: IMSplex Administration Guide (SC18-9709). Provides guidance
|   information to manage the administration and operations of one or more IMS
|   systems working as a unit (an IMSplex). It includes conceptual information
|   about the IMS Base Primitive Environment (BPE), IMS Common Queue Server (CQS),
|   and IMS Common Service Layer (CSL), all of which can be part of an IMSplex, as
|   well as task information for sharing data and queues in an IMSplex.
| v IMS Version 10: Operations and Automation Guide (SC18-9716). Provides guidance
|   information for selecting tools and options for operating an IMS system, for
|   recovering an IMS system, and for automating IMS operations and recovery
|   tasks. It also describes how to develop procedures for the master terminal
|   operator and the end user.
| v IMS Version 10: System Administration Guide (SC18-9718). Provides guidance
|   information for designing, documenting, operating, maintaining, and recovering
|   a single IMS system. It also provides information about the Database Recovery
|   Control (DBRC) facility, Remote Site Recovery (RSR), and the Extended Recovery
|   Facility (XRF). Guidance information for administering an IMSplex is in the
|   IMS Version 10: IMSplex Administration Guide.
|
| Task: Programming for IMS
| v IMS Version 10: Application Programming Planning Guide (SC18-9697). Provides
|   guidance and planning information for application programs that access IMS
|   databases or messages. It also describes how to gather and analyze program
|   requirements, and how to design, test, and document an IMS application
|   program.
| v IMS Version 10: Application Programming Guide (SC18-9698). Provides guidance
|   information for writing application programs that access IMS databases or IMS
|   messages. It describes how to use different programming languages to issue
|   DL/I calls, and includes usage information about the Java class libraries and
|   the JDBC driver for IMS. It also describes how to use different programming
|   languages to issue EXEC DL/I calls. Application programming interface (API)
|   information is in the IMS Version 10: Application Programming API Reference.
| v IMS Version 10: Application Programming API Reference (SC18-9699). Provides
|   reference information for the IMS application programming interfaces (APIs),
|   including DL/I, EXEC DLI, and the Java class libraries for IMS. It also
|   provides reference information for the IMS Adapter for REXX, the DL/I test
|   program (DFSDDLT0), and the IMS Message Format Service (MFS). Guidance
|   information for writing IMS application programs is in the IMS Version 10:
|   Application Programming Guide.
| v IMS Version 10: System Programming API Reference (SC18-9967). Provides
|   reference information for IMS system application programming interface (API)
|   calls for IMS Common Queue Server (CQS), IMS Common Service Layer (CSL), IMS
|   data propagation with IMS DataPropagator for z/OS, IMS Database Resource
|   Adapter (DRA), and the IMS Database Recovery Control (DBRC) facility.


| Task: Troubleshooting for IMS
| v IMS Version 10: Diagnosis Guide (GC18-9706). Provides guidance information for
|   setting up an IMS system for diagnosis, collecting information to help
|   diagnose IMS problems, and searching problem-reporting databases, such as the
|   IBM Electronic Incident Submission (EIS) Web site. It also describes how to
|   use keywords to develop a failure description that you can use to search
|   problem-reporting databases and communicate with the IBM Support Line (or
|   equivalent services for your geography). Reference information for IMS
|   diagnosis service aids is in the IMS Version 10: Diagnosis Reference.
| v IMS Version 10: Diagnosis Reference (GC18-9707). Provides reference information
|   for diagnosis service aids that apply to all aspects of IMS. Guidance
|   information for IMS diagnosis tasks is in the IMS Version 10: Diagnosis Guide.
| v IMS: Messages and Codes Reference, Volume 1: DFS Messages (GC18-9712). Provides
|   reference information for the IMS messages that have the “DFS” prefix, along
|   with their associated return codes. It also provides diagnostic information
|   that helps programmers, operators, and system-support personnel diagnose
|   problems in IMS. IMS messages with other prefixes are in the IMS: Messages and
|   Codes Reference, Volume 2: Non-DFS Messages.
| v IMS: Messages and Codes Reference, Volume 2: Non-DFS Messages (GC18-9713).
|   Provides reference information for non-DFS prefixed IMS messages that are
|   associated with IMS Base Primitive Environment (BPE), IMS Common Queue Server
|   (CQS), IMS Common Service Layer (CSL), Database Recovery Control (DBRC), and
|   integrated IMS Connect. It also provides diagnostic reference information that
|   helps programmers, operators, and system-support personnel diagnose problems
|   in IMS. IMS messages that have the “DFS” prefix are in the IMS: Messages and
|   Codes Reference, Volume 1: DFS Messages.
| v IMS: Messages and Codes Reference, Volume 3: IMS Abend Codes (GC18-9714).
|   Provides reference information for all IMS abnormal termination (abend) codes,
|   including analysis, explanation, possible causes, and APAR processing
|   instructions.
| v IMS: Messages and Codes Reference, Volume 4: IMS Component Codes (GC18-9715).
|   Provides return, reason, sense, function, and status codes for IMS Base
|   Primitive Environment (BPE), IMS Common Queue Server (CQS), IMS Common Service
|   Layer (CSL), the Database Recovery Control (DBRC) facility, and integrated IMS
|   Connect. It also provides diagnostic reference information that helps
|   programmers, operators, and system-support personnel diagnose problems in IMS.
|   IMS abend codes are in the IMS: Messages and Codes Reference, Volume 3: IMS
|   Abend Codes.
| v IRLM Messages and Codes (GC19-2666). Provides reference information for the
|   IRLM messages and codes that are issued by the internal resource lock manager
|   (IRLM) to IMS.
|
| Task: IMS reference information
| v IMS Version 10: Command Reference, Volume 1 (SC18-9700). Provides reference
|   information for the IMS type-1 and type-2 commands (/ACTIVATE through
|   /MONITOR), including command syntax and usage. It also describes the IMS
|   command language and how to send commands to IMS in different environments.
|   Information about all non-type 1 and non-type 2 IMS commands is in the IMS
|   Version 10: Command Reference, Volume 3.
| v IMS Version 10: Command Reference, Volume 2 (SC18-9701). Provides reference
|   information for the IMS type-1 and type-2 commands (/MSACCESS through
|   /VUNLOAD), including command syntax and usage. It also describes the IMS
|   command language and how to send commands to IMS in different environments.
|   Information about all non-type 1 and non-type 2 IMS commands is in the IMS
|   Version 10: Command Reference, Volume 3.
| v IMS Version 10: Command Reference, Volume 3 (SC18-9702). Provides reference
|   information, including command syntax and usage, for the following IMS
|   commands: Base Primitive Environment (BPE), Common Service Layer (CSL),
|   Database Recovery Control (DBRC) facility, IMS Transport Manager Subsystem
|   (TMS), integrated IMS Connect, and the z/OS commands for IMS. Information
|   about IMS type-1 and type-2 commands is in the IMS Version 10: Command
|   Reference, Volume 1 and the IMS Version 10: Command Reference, Volume 2.
| v IMS Version 10: Exit Routine Reference (SC18-9708). Provides reference
|   information for the exit routines that you can use to customize IMS database,
|   system, transaction management, IMSplex, Base Primitive Environment (BPE),
|   Common Queue Server (CQS), and integrated IMS Connect environments.
| v IMS Version 10: Database Utilities Reference (SC18-9705). Provides reference
|   information for the utilities that you can use with IMS databases. The
|   utilities can help you migrate, reorganize, and recover a database. IMS system
|   utilities are described in the IMS Version 10: System Utilities Reference.
| v IMS Version 10: System Utilities Reference (SC18-9968). Provides reference
|   information for the utilities that you can use with the IMS system. The
|   utilities can help you generate IMS resources, analyze IMS activity, manage
|   IMS logging, run the IMS Database Recovery Control (DBRC) facility, maintain
|   IMS networking services, convert Security Maintenance utility (SMU) source,
|   and maintain the resource definition data sets (RDDSs). IMS database utilities
|   are described in the IMS Version 10: Database Utilities Reference.
| v IMS Version 10: Master Index and Glossary (SC18-9711). Provides a central index
|   for all of the IMS Version 10 information, as well as a glossary of IMS terms.
|   The combined IMS Version 10: Master Index and Glossary is available only in
|   PDF format. The master index is included in the information center under IMS
|   Version 10 -> Index for IMS Version 10. The IMS glossary terms are included in
|   the information center under IMS glossary.
|



|

Chapter 1. How application programs work with IMS Database Manager
Application programs use Data Language I (DL/I) to communicate with IMS.
This section gives an overview of the application programming techniques and the
application programming interface for IMS Database Manager (IMS DB).

Subsections:
v “IMS environments”
v “DL/I and your application program” on page 3
v “DL/I codes” on page 3
v “Database descriptions (DBDs) and program specification blocks (PSBs)” on
page 4
v “DL/I for CICS” on page 5
v “DL/I using the ODBA interface” on page 7
v “Database hierarchy examples” on page 8

Related Reading:
v If your installation uses the IMS Transaction Manager (IMS TM), see IMS Version
10: Communications and Connections Guide for information on transaction
management functions.
v Information on DL/I EXEC commands is in the IMS Version 10: Application
Programming Guide.

IMS environments
Your application program can execute in different IMS environments. The three
online environments are DB/DC, DBCTL, and DCCTL. The two batch
environments are DB batch and TM batch.

Related reading: For information on these environments, see IMS Version 10:
System Administration Guide.

The information in this section applies to all application programs that run in IMS.
The main elements in an IMS application program are:
v Program entry
v Program communication block (PCB) or application interface block (AIB)
definition
v I/O (input/output) area definition
v DL/I calls
v Program termination
Figure 2 on page 2 shows how these elements relate to each other. The numbers on
the right in Figure 2 on page 2 refer to the notes that follow.



Figure 2. DL/I program elements

Notes for Figure 2:


1. Program entry. IMS passes control to the application program with a list of
associated PCBs.
2. PCB or AIB. IMS describes the results of each DL/I call using the AIBTDLI
interface in the application interface block (AIB) and, when applicable, the
program communication block (PCB). To find the results of a DL/I call, your
program must use the PCB that is referenced in the call. To find the results of
the call using the AIBTDLI interface, your program must use the AIB.
Your application program can use the PCB address that is returned in the AIB
to find the results of the call. To use the PCB, the program defines a mask of
the PCB and can then reference the PCB after each call to determine the success
or failure of the call. An application program cannot change the fields in a PCB;
it can only check the PCB to determine what happened when the call was
completed.
3. I/O area. IMS passes segments to and from the program in the program's I/O
area.
4. DL/I calls. The program issues DL/I calls to perform the requested function.
5. Program termination. The program returns control to IMS DB when it has
finished processing. In a batch program, your program can set the return code
and pass it to the next step in the job.

Recommendation: If your program does not use the return code in this way,
set the return code to 0 as a programming convention. Your
program can use the return code for this same purpose in
Batch Message Processing (BMP) regions. Message
Processing Programs (MPPs) cannot pass return codes.
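The program elements described in the notes above can be sketched in C, one of the languages this guide covers. The PCB layout and the DL/I call below are stubbed, illustrative stand-ins, not the real IMS control blocks or language interface (a real C program would call the DL/I language interface instead of the stub):

```c
#include <string.h>

/* Hypothetical mask of a database PCB. The application defines this
   layout so it can inspect (never modify) the PCB after each call.  */
typedef struct {
    char dbname[9];       /* database name from the DBD              */
    char status_code[3];  /* two-character DL/I status code          */
} db_pcb;

/* Stub standing in for the real DL/I language interface; it
   simulates a successful GU (Get Unique) call.                      */
static void dli_call_stub(const char *func, db_pcb *pcb,
                          char *io_area, size_t io_len) {
    (void)func;
    strncpy(pcb->status_code, "  ", 3);            /* blanks = success  */
    strncpy(io_area, "EMPLOYEE SEGMENT", io_len);  /* simulated segment */
}

/* Walks the five elements: entry, I/O area, DL/I call, PCB check,
   and termination with a return code.                               */
int run_program_skeleton(char *io_area, size_t io_len) {
    db_pcb pcb;                                    /* 1. program entry  */
    strncpy(pcb.dbname, "PAYROLL ", 9);
    dli_call_stub("GU  ", &pcb, io_area, io_len);  /* 3, 4. I/O + call  */
    if (strncmp(pcb.status_code, "  ", 2) != 0)    /* 2. check the PCB  */
        return 8;                                  /* error routine     */
    return 0;                                      /* 5. return code 0  */
}
```

The skeleton returns 0 on success, following the convention the recommendation above describes for batch programs.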



DL/I and your application program
When an application program issues a call to IMS, control passes from the
application program to IMS. Standard subroutine linkage and parameter lists link
IMS to your application program. After control is passed, IMS examines the input
parameters and performs the requested functions.

DL/I codes
This section contains information about the different DL/I codes that you will
encounter when working with IMS Database Manager application programs.

Status, return, and reason codes


| To provide information about the results of each DL/I call that your application
| program issues when it uses the PCB, IMS™ places a two-character status code in
| the PCB. If you use the AIB, return and reason codes are placed in the AIB after
| certain DL/I calls. The AIB also receives the PCB address, which can be used to
| access the status code in the PCB.

| The status codes your application program should test for are those that indicate
| exceptional but valid conditions. Your application program should check for status
| codes that indicate that the call was successful, such as blanks. If IMS returns a
| status code that you did not expect, your program should branch to an error
| routine. For information about the status codes for the DL/I calls, see IMS:
| Messages and Codes Reference, Volume 4: IMS Component Codes.

Exceptional condition status codes


Some status codes do not mean that your call was successful or unsuccessful; they
just give information about the results of the call. Your program uses this
information to determine what to do next. The meaning of these status codes
depends on the call.

In a typical program, status codes that you should test for apply to the get calls.
Some status codes indicate exceptional conditions for other calls, and you should
provide routines other than error routines for these situations. For example, AH
means that a required segment search argument (SSA) is missing, and AT means
that the user I/O area is too long.
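The status-code checking that the last two topics describe can be sketched as a small classifier. Blanks, AH, and AT are the codes named in the text; GE (segment not found) and GB (end of database) are added as examples of exceptional get-call conditions. Which codes count as exceptional rather than erroneous is an application design choice, so this particular grouping is only an assumption:

```c
#include <string.h>

enum call_result { CALL_OK, CALL_EXCEPTION, CALL_ERROR };

/* Classifies a two-character DL/I status code: blanks mean success,
   a few codes describe exceptional but valid conditions handled by
   their own routines, and anything unexpected goes to the error
   routine. The "exceptional" set here is illustrative only.         */
enum call_result classify_status(const char *status) {
    if (strncmp(status, "  ", 2) == 0)
        return CALL_OK;                  /* call succeeded             */
    if (strncmp(status, "GE", 2) == 0 || /* segment not found          */
        strncmp(status, "GB", 2) == 0)   /* end of database reached    */
        return CALL_EXCEPTION;           /* valid; plan the next call  */
    return CALL_ERROR;                   /* e.g. AH, AT: error path    */
}
```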

High Availability Large Databases (HALDBs)


You need to be aware that the feedback on data availability at PSB schedule time
shows the availability of only the High Availability Large Database (HALDB)
master, not of the HALDB partitions. However, the error settings for data
unavailability of a HALDB partition are the same as those of a non-HALDB
database, namely status code 'BA' or pseudo abend U3303.

Also note that logical child segments cannot be loaded into a HALDB PHDAM or
PHIDAM database. Logical child segments must be inserted later in an update run.
Any attempt to load a logical child segment in either a PHDAM or PHIDAM
database results in status code LF.

Error routines
If your program detects an error after checking for blanks and exceptional
conditions in the status code, it should branch to an error routine and print as
much information as possible about the error before terminating. Determining
which call was being executed when the error occurred, what parameters were on
the IMS call, and the contents of the PCB will be helpful in understanding the
error. Print the status code to help with problem determination.

Two kinds of errors can occur in your program: programming errors and system or
I/O errors. Programming errors are usually your responsibility to find and fix.
These errors are caused by things like an invalid parameter, an invalid call, or an
I/O area that is too long. System or I/O errors are usually resolved by the system
programmer or the equivalent specialist at your installation.

Because every application program should have an error routine, and because each
installation has its own ways of finding and debugging program errors, you
probably have your own standard error routines.
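As a sketch of the diagnostic printing described above, an error routine might format the call function, the status code, and the database name from the PCB before terminating. The function name and field widths here are hypothetical illustrations, not a real IMS control-block layout:

```c
#include <stdio.h>

/* Illustrative error-routine helper: captures which DL/I call was
   being executed, its two-character status code, and the database
   named in the PCB, so the information can be printed before the
   program terminates. Names and field widths are assumptions.       */
int format_dli_error(char *buf, size_t len, const char *func,
                     const char *status, const char *dbname) {
    /* Fixed widths mirror the call function (4), status code (2),
       and database name (8) fields described in this chapter.       */
    return snprintf(buf, len,
                    "DL/I error: call=%.4s status=%.2s db=%.8s",
                    func, status, dbname);
}
```

A site-standard error routine would typically also dump the full PCB and the call parameter list before terminating.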

| Database descriptions (DBDs) and program specification blocks


| (PSBs)
| Application programs can communicate with databases without being aware of the
| physical location of the data they process. To do this, database descriptions (DBDs)
| and program specification blocks (PSBs) are used.

A DBD describes the content and hierarchic structure of the physical or logical
database. DBDs also supply information to IMS to help in locating segments.

A PSB specifies the database segments an application program can access and the
functions it can perform on the data, such as read only, update, or delete. Because
an application program can access multiple databases, PSBs are composed of one
or more program communication blocks (PCBs). The PSB describes the way a database is
viewed by your application program.

Figure 3 shows the normal relationship between application programs, PSBs, PCBs,
DBDs, and databases.

Figure 3. Normal relationship between programs, PSBs, PCBs, DBDs, and databases

Figure 4 on page 5 shows concurrent processing, which uses multiple PCBs for the
same database.
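The PSB/PCB relationship just described, including the concurrent-processing case in which several PCBs reference the same database, can be sketched with a pair of structures. The structure and field names, and the PROCOPT-style option values, are simplified assumptions for illustration, not the generated control-block formats:

```c
#include <string.h>

/* One PCB: a view of one database, named by its DBD, with the
   processing options the program is allowed (values illustrative). */
typedef struct {
    char dbdname[9];      /* DBD describing the physical database    */
    char proc_options[5]; /* e.g. "G" = read only, "A" = all         */
} pcb_def;

/* One PSB: the set of PCBs an application program is given.         */
typedef struct {
    char psbname[9];
    int  pcb_count;       /* concurrent processing: several PCBs     */
    pcb_def pcbs[4];      /*   may reference the same database       */
} psb_def;

/* Returns how many PCBs in the PSB reference the given DBD; a
   count greater than one models the Figure 4 arrangement.           */
int pcbs_for_dbd(const psb_def *psb, const char *dbdname) {
    int n = 0;
    for (int i = 0; i < psb->pcb_count; i++)
        if (strncmp(psb->pcbs[i].dbdname, dbdname, 8) == 0)
            n++;
    return n;
}
```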



Figure 4. Relationship between programs and multiple PCBs (concurrent processing)

| DL/I for CICS


| This topic applies to call-level CICS programs that use Database Control (DBCTL).
| DBCTL provides a database subsystem that runs in its own address space and
| gives one or more CICS systems access to IMS DL/I full-function databases and
| data entry databases (DEDBs).

| Figure 5 on page 6 shows the structure of a call-level CICS program. See the notes
| for Figure 5 on page 6 for a description of each program element depicted in the figure.



|

|
| Figure 5. Structure of a call-level CICS program
|
Notes to Figure 5:
1. I/O area. IMS passes segments to and from the program in the program's I/O
area.
2. PCB. IMS describes the results of each DL/I call in the database PCB mask.
| 3. One of the following:
| v Application interface block (AIB). If you chose to use the AIB, the AIB
| provides the program with addresses of the PCBs and return codes from
| the CICS-DL/I interface.
| v User interface block (UIB). If you chose not to use AIB, the UIB provides
| the program with addresses of the PCBs and return codes from the
| CICS-DL/I interface.
| The horizontal line between number 3 (UIB) and number 4 (Program entry) in
| Figure 5 represents the end of the declarations section and the start of the
| executable code section of the program.



4. Program entry. CICS passes control to the application program during
program entry. Do not use an ENTRY statement as you would in a batch
program.
5. Schedule the PSB. This identifies the PSB your program is to use and passes
the address of the AIB or UIB to your program.
6. Issue DL/I calls. Issue DL/I calls to read and update the database.
| 7. One of the following:
| v Check the return code in the AIB. If you chose to use the AIB, you should
| check the return code after issuing any DL/I call for database processing.
| Do this before checking the status code in the PCB.
| v Check the return code in the UIB. If you chose not to use the AIB, you
| should check the return code after issuing any DL/I call for database
| processing, including the PCB or TERM call. Do this before checking the
| status code in the PCB.
8. Check the status code in the PCB. You should check the status code after
issuing any DL/I call for database processing. The code gives you the results
of your DL/I call.
9. Terminate the PSB. This terminates the PSB and commits database changes.
PSB termination is optional, and if it is not done, the PSB is released when
your program returns control to CICS.
10. Return to CICS. This returns control to either CICS or the linking program. If
control is returned to CICS, database changes are committed, and the PSB is
terminated.
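The order of checks in notes 7 and 8 can be sketched in outline. The following Java sketch is illustrative only: the CallResult class, the evaluate method, and the simplified codes are invented stand-ins, not the CICS-DL/I interface; the point it demonstrates is that the UIB (or AIB) return code is examined before the PCB status code.

```java
// Illustrative sketch only: simulates the checking order in a call-level CICS
// program. None of these names are real CICS or IMS APIs.
public class CicsDliFlow {
    // Simulated result of one DL/I call: interface return code plus PCB status.
    static class CallResult {
        final int uibReturnCode;  // nonzero means the interface call itself failed
        final String pcbStatus;   // blanks mean the DL/I call succeeded
        CallResult(int rc, String status) { uibReturnCode = rc; pcbStatus = status; }
    }

    // Check the UIB/AIB return code first; only if the interface call worked is
    // the PCB status code meaningful.
    static String evaluate(CallResult r) {
        if (r.uibReturnCode != 0) return "INTERFACE-ERROR";
        if (r.pcbStatus.trim().isEmpty()) return "OK";
        return "STATUS-" + r.pcbStatus;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(new CallResult(0, "  ")));  // OK
        System.out.println(evaluate(new CallResult(0, "GE")));  // STATUS-GE
        System.out.println(evaluate(new CallResult(8, "  ")));  // INTERFACE-ERROR
    }
}
```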

DL/I using the ODBA interface


This topic applies to z/OS application programs that use database resources that
are managed by IMS DB. Open Database Access (ODBA) is an interface that
enables the z/OS application programs to access IMS DL/I full-function databases
and data entry databases (DEDBs).

| Application programs that access IMS DL/I through the ODBA interface share a
| common logic flow in single Resource Manager scenarios and multiple Resource
| Manager scenarios. The following steps describe the common logic flow for both scenarios:
| 1. I/O area. IMS passes segments to and from the application program in its I/O
| area.
| 2. PCB. IMS describes the results of each DL/I call in the database PCB mask.
| 3. Application interface block (AIB). The AIB provides the program with
| addresses of the PCBs and return codes from the ODBA to DL/I interface.
| 4. Program entry. Obtain and initialize the AIB.
| 5. Initialize the ODBA interface.
| 6. Schedule the PSB. This step identifies the PSB that your program will use and
| also provides a place for IMS to keep internal tokens.
| 7. Issue DL/I calls. Issue DL/I calls to read and update the database. The
| following calls are available:
| v Retrieve
| v Replace
| v Delete
| v Insert

| 8. Check the return code in the AIB. Check the return code after you issue any
| DL/I calls for database processing, and before you check the status code in the
| PCB.
| 9. Check the status code in the PCB. If the AIB return code is X'900', check the
| status code after you issue any DL/I calls for database processing. The status
| code gives you the results of your DL/I call.

| The logic flow for how the programmer commits changes for single Resource
| Manager scenarios follows. The programmer:
| 1. Commits database changes. No DL/I calls, including system service calls such
| as LOG or STAT, can be made between the commit and the deallocate PSB
| (DPSB) call.
| 2. Terminates the PSB.
| 3. Optional: Terminates the ODBA interface.
| 4. Returns to the environment that initialized the application program.

| The logic flow for how the programmer commits changes for multiple Resource
| Manager scenarios follows. The programmer:
| 1. Terminates the PSB.
| 2. Optional: Terminates the ODBA interface.
| 3. Commits changes.
| 4. Returns to the environment that initialized the application program.

| The programmer can make multiple allocate PSB (APSB) requests before
| terminating the ODBA interface. The ODBA interface needs to be initialized only
| once in the address space, and the program can repeat the schedule, commit, and
| end-schedule sequence as many times as needed.
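The essential difference between the two scenarios is the position of the commit relative to the DPSB call. The sketch below encodes the two orderings from the text as plain step lists; the step names are summaries invented for illustration, not real ODBA calls.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: the two commit orderings described in the text.
public class OdbaFlows {
    // Single Resource Manager: commit first, then deallocate the PSB.
    static List<String> singleRmFlow() {
        return Arrays.asList("COMMIT", "DPSB", "TERMINATE-ODBA", "RETURN");
    }

    // Multiple Resource Managers: deallocate the PSB before the commit.
    static List<String> multiRmFlow() {
        return Arrays.asList("DPSB", "TERMINATE-ODBA", "COMMIT", "RETURN");
    }

    // True when the commit happens before the DPSB call in a flow.
    static boolean commitBeforeDpsb(List<String> flow) {
        return flow.indexOf("COMMIT") < flow.indexOf("DPSB");
    }

    public static void main(String[] args) {
        System.out.println(commitBeforeDpsb(singleRmFlow())); // true
        System.out.println(commitBeforeDpsb(multiRmFlow()));  // false
    }
}
```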

Database hierarchy examples


In an IMS DB, a record is stored and accessed in a hierarchy. A hierarchy shows
how each piece of data in a record relates to other pieces of data in the record.

IMS connects the pieces of information in a database record by defining the
relationships between the pieces of information that relate to the same subject. The
result is a database hierarchy.

The examples in this information use the medical hierarchy shown in Figure 6 on
page 9 and the bank hierarchies shown in Table 9 on page 12, Table 10 on page 12,
and Table 11 on page 13. The hierarchies used in the medical hierarchy example are
used with full-function databases and Fast Path data entry databases (DEDBs). The
bank hierarchies are an example of an application program used with main storage
databases (MSDBs). To understand these examples, familiarize yourself with the
hierarchies and segments that each hierarchy contains.

| Medical hierarchy example


| The medical database shown in Figure 6 on page 9 contains information that a
| medical clinic keeps about its patients.
|

|

|
| Figure 6. Medical hierarchy
|
| Each piece of data represented in Figure 6 is called a segment in the hierarchy. Each
| segment contains one or more fields of information. The PATIENT segment, for
| example, contains all the information that relates strictly to the patient: the
| patient's identification number, name, and address.

| Definitions: A segment is the smallest unit of data that an application program can
| retrieve from the database. A field is the smallest unit of a segment.

| The PATIENT segment in the medical database is the root segment. The segments
| below the root segment are the dependents, or children, of the root. For example,
| ILLNESS, BILLING, and HOUSHOLD are all children of PATIENT. ILLNESS,
| BILLING, and HOUSHOLD are called direct dependents of PATIENT; TREATMNT
| and PAYMENT are also dependents of PATIENT, but they are not direct
| dependents, because they are at a lower level in the hierarchy.

| A database record is a single root segment (root segment occurrence) and all of its
| dependents. In the medical example, a database record is all of the information
| about one patient.

| Definitions: A root segment is the highest-level segment. A dependent is a segment
| below the root segment. A root segment occurrence, with all of its dependents,
| forms one database record.

| Each database record has only one root segment occurrence, but it might have
| several occurrences at lower levels. For example, the database record for a patient
| contains only one occurrence of the PATIENT segment type, but it might contain
| several ILLNESS and TREATMNT segment occurrences for that patient.
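A database record can be pictured as a small tree: one root occurrence with any number of dependent occurrences. The Segment class below is an invented teaching aid (not an IMS API) that models one medical database record; for brevity, the PAYMENT child of BILLING is omitted.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: one database record as a root segment occurrence plus
// its dependents, using the medical example's segment names.
public class MedicalRecord {
    static class Segment {
        final String type;
        final List<Segment> children = new ArrayList<>();
        Segment(String type) { this.type = type; }
        Segment add(Segment child) { children.add(child); return this; }
    }

    // Count occurrences of a segment type anywhere in one database record.
    static int occurrences(Segment seg, String type) {
        int n = seg.type.equals(type) ? 1 : 0;
        for (Segment c : seg.children) n += occurrences(c, type);
        return n;
    }

    // One PATIENT root, two ILLNESS occurrences, three TREATMNT occurrences.
    static Segment sampleRecord() {
        Segment ill1 = new Segment("ILLNESS")
                .add(new Segment("TREATMNT")).add(new Segment("TREATMNT"));
        Segment ill2 = new Segment("ILLNESS").add(new Segment("TREATMNT"));
        return new Segment("PATIENT").add(ill1).add(ill2)
                .add(new Segment("BILLING")).add(new Segment("HOUSHOLD"));
    }

    public static void main(String[] args) {
        Segment record = sampleRecord();
        System.out.println(occurrences(record, "PATIENT"));  // 1
        System.out.println(occurrences(record, "ILLNESS"));  // 2
        System.out.println(occurrences(record, "TREATMNT")); // 3
    }
}
```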

| The tables that follow show the layouts of each segment in the hierarchy.

| The segment’s field names are in the first row of each table. The number below
| each field name is the length in bytes that has been defined for that field.
| v PATIENT Segment
| Table 3 on page 10 shows the PATIENT segment.
| It has three fields:
| – The patient’s number (PATNO)
| – The patient’s name (NAME)
| – The patient's address (ADDR)
| PATIENT has a unique key field: PATNO. PATIENT segments are stored in
| ascending order based on the patient number. The lowest patient number in the
| database is 00001 and the highest is 10500.

| Table 3. PATIENT segment
| Field name Field length
| PATNO 5
| NAME 10
| ADDR 30
|
| v ILLNESS Segment
| Table 4 shows the ILLNESS segment.
| It has two fields:
| – The date when the patient came to the clinic with the illness (ILLDATE)
| – The name of the illness (ILLNAME)
| The key field is ILLDATE. Because it is possible for a patient to come to the
| clinic with more than one illness on the same date, this key field is nonunique:
| there can be more than one ILLNESS segment with the same key field value.
| Usually during installation, the database administrator (DBA) decides the order
| in which to place the database segments with equal or no keys. The DBA can
| use the RULES keyword of the SEGM statement of the DBD to specify the order
| of the segments.
| For segments with equal keys or no keys, RULES determines where the segment
| is inserted. With RULES=LAST, ILLNESS segments that have equal keys are stored
| on a first-in, first-out basis. ILLNESS segments with unique keys are stored in
| ascending order on the date field, regardless of RULES. ILLDATE is specified in
| the format YYYYMMDD.
| Table 4. ILLNESS segment
| Field name Field length
| ILLDATE 8
| ILLNAME 10
|
| v TREATMNT Segment
| Table 5 shows the TREATMNT segment.
| It contains four fields:
| – The date of the treatment (DATE)
| – The medicine that was given to the patient (MEDICINE)
| – The quantity of the medicine that the patient received (QUANTITY)
| – The name of the doctor who prescribed the treatment (DOCTOR)
| The TREATMNT segment’s key field is DATE. Because a patient may receive
| more than one treatment on the same date, DATE is a nonunique key field.
| TREATMNT, like ILLNESS, has been specified as having RULES=LAST.
| TREATMNT segments are also stored on a first-in-first-out basis. DATE is
| specified in the same format as ILLDATE—YYYYMMDD.
| Table 5. TREATMNT segment
| Field name Field length
| DATE 8
| MEDICINE 10
| QUANTITY 4
| DOCTOR 10

|
| v BILLING Segment
| Table 6 shows the BILLING segment. It has only one field: the amount of the
| current bill. BILLING has no key field.
| Table 6. BILLING segment
| Field name Field length
| BILLING 6
|
| v PAYMENT Segment
| Table 7 shows the PAYMENT segment. It has only one field: the amount of
| payments for the month. The PAYMENT segment has no key field.
| Table 7. PAYMENT segment
| Field name Field length
| PAYMENT 6
|
| v HOUSHOLD Segment
| Table 8 shows the HOUSHOLD segment.
| It contains two fields:
| – The names of the members of the patient's household (RELNAME)
| – How each member of the household is related to the patient (RELATN)
| The HOUSHOLD segment’s key field is RELNAME.
| Table 8. HOUSHOLD segment
| Field name Field length
| RELNAME 10
| RELATN 8
|
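Because each field in these tables has a fixed declared length, the byte offset of every field follows directly from the lengths of the fields before it. The helper below is an invented teaching aid that computes those offsets from the tables above; it is not how fields are actually declared in a DBD.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: fixed-length fields laid out back to back, as in the
// segment tables above. Computes each field's byte offset from the declared
// lengths, plus the total segment length.
public class SegmentLayout {
    static Map<String, Integer> offsets(String[] names, int[] lengths) {
        Map<String, Integer> out = new LinkedHashMap<>();
        int offset = 0;
        for (int i = 0; i < names.length; i++) {
            out.put(names[i], offset);
            offset += lengths[i];
        }
        out.put("TOTAL", offset); // total segment length in bytes
        return out;
    }

    public static void main(String[] args) {
        // ILLNESS: ILLDATE (8 bytes, YYYYMMDD) followed by ILLNAME (10 bytes)
        System.out.println(offsets(new String[] {"ILLDATE", "ILLNAME"},
                                   new int[] {8, 10}));
        // prints {ILLDATE=0, ILLNAME=8, TOTAL=18}

        // HOUSHOLD: RELNAME (10 bytes) followed by RELATN (8 bytes)
        System.out.println(offsets(new String[] {"RELNAME", "RELATN"},
                                   new int[] {10, 8}));
    }
}
```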

| Bank account hierarchy example


The bank account hierarchy is an example of an application program that is used
with main storage databases (MSDBs). In the medical hierarchy example, the
database record for a particular patient comprises the PATIENT segment and all of
the segments underneath the PATIENT segment. In an MSDB, such as the one in
the bank account example, the segment is the whole database record. The database
record contains only the fields that the segment contains.

The two types of MSDBs are related and nonrelated. In related MSDBs, each segment
is “owned” by one logical terminal. An owned segment can be updated only by
the terminal that owns it. In nonrelated MSDBs, the segments are not owned by
logical terminals. “Related MSDBs” and “Nonrelated MSDBs” on page 12 illustrate
the differences between these types of databases.

Related MSDBs
Related MSDBs can be fixed or dynamic. In a fixed related MSDB, you can store
summary data about a particular teller at a bank. For example, you can have an
identification code for the teller's terminal. Then you can keep a count of that
teller's transactions and balance for the day. This type of application requires a
segment with three fields:
TELLERID A two-character code that identifies the teller

TRANCNT The number of transactions the teller has processed
TELLBAL The balance for the teller

Table 9 shows what the segment for this type of application program looks like.
Table 9. Teller segment in a fixed related MSDB
TELLERID TRANCNT TELLBAL

Some of the characteristics of fixed related MSDBs include:


v You can only read and replace segments. You cannot delete or insert segments.
In the bank teller example, the teller can change the number of transactions
processed, but cannot add or delete any segments; the application never needs to.
v Each segment is assigned to one logical terminal. Only the owning terminal can
change a segment, but other terminals can read the segment. In the bank teller
example, you do not want tellers to update the information about other tellers,
but you allow the tellers to view each other’s information. Tellers are responsible
for their own transactions.
v The name of the logical terminal that owns the segment is the segment's key.
Unlike non-MSDB segments, the MSDB key is not a field of the segment. It is
used as a means of storing and accessing segments.
v A logical terminal can only own one segment in any one MSDB.
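The two central rules above (any terminal can read; only the owning logical terminal can replace) can be simulated in a few lines. The TellerSegment class and replace method below are invented for illustration; they are not an IMS interface.

```java
// Illustrative sketch of the fixed related MSDB rules: any terminal can read a
// teller segment, but only the owning logical terminal can replace it, and
// segments are never inserted or deleted.
public class FixedRelatedMsdb {
    static class TellerSegment {
        final String ownerLterm; // the logical terminal name is the segment's key
        int trancnt;             // transactions processed by this teller
        TellerSegment(String ownerLterm, int trancnt) {
            this.ownerLterm = ownerLterm;
            this.trancnt = trancnt;
        }
    }

    // A replace is honored only when it comes from the owning terminal.
    static boolean replace(TellerSegment seg, String requestingLterm, int newCount) {
        if (!seg.ownerLterm.equals(requestingLterm)) return false; // not the owner
        seg.trancnt = newCount;
        return true;
    }

    public static void main(String[] args) {
        TellerSegment t = new TellerSegment("LTERM01", 5);
        System.out.println(replace(t, "LTERM02", 99)); // false: not the owner
        System.out.println(replace(t, "LTERM01", 6));  // true
        System.out.println(t.trancnt);                 // 6
    }
}
```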

In a dynamic related MSDB, you can store data summarizing the activity of all
bank tellers at a single branch. For example, this segment contains:
BRANCHNO The identification number for the branch
TOTAL The bank branch's current balance
TRANCNT The number of transactions for the branch on that day
DEPBAL The deposit balance, giving the total dollar amount of deposits for
the branch
WTHBAL The withdrawal balance, giving the dollar amount of the
withdrawals for the branch

Table 10 shows what the branch summary segment looks like in a dynamic related
MSDB.
Table 10. Branch summary segment in a dynamic related MSDB
BRANCHNO TOTAL TRANCNT DEPBAL WTHBAL

How dynamic related MSDBs differ from fixed related MSDBs:


v The owning logical terminal can delete and insert segments in a dynamic related
MSDB.
v The MSDB can have a pool of unassigned segments. This kind of segment is
assigned to a logical terminal when the logical terminal inserts it, and is
returned to the pool when the logical terminal deletes it.

Nonrelated MSDBs
A nonrelated MSDB is used to store data that is updated by several terminals
during the same time period. For example, you might store data about an
individual's bank account in a nonrelated MSDB segment, so that the information
can be updated by a teller at any terminal. Your program might need to access the
data in the following segment fields:
ACCNTNO The account number
BRANCH The name of the branch where the account is
TRANCNT The number of transactions for this account this month
BALANCE The current balance

Table 11 shows what the account segment in a nonrelated MSDB application
program looks like.
Table 11. Account segment in a nonrelated MSDB
ACCNTNO BRANCH TRANCNT BALANCE

The characteristics of nonrelated MSDBs include:


v Segments are not owned by terminals as they are in related MSDBs. Therefore,
IMS programs and Fast Path programs can update these segments. Updating
segments is not restricted to the owning logical terminal.
v Your program cannot delete or insert segments.
v Segment keys can be the name of a logical terminal. A nonrelated MSDB can
exist with terminal-related keys; the segments are not owned by the logical
terminals, and the logical terminal name is used only to identify the segment.
v If the key is not the name of a logical terminal, it can be any value, and it is in
the first field of the segment. Segments are loaded in key sequence.
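The last point, that segments are loaded in key sequence on the first field, can be shown with a short sort. The loadInKeySequence helper and the string-array segment layout are invented for illustration; a real nonrelated MSDB is loaded by IMS, not by application code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch: nonrelated MSDB segments ordered by the key in the
// first field of the segment (ACCNTNO in the account example).
public class NonrelatedMsdbLoad {
    static List<String[]> loadInKeySequence(List<String[]> segments) {
        List<String[]> sorted = new ArrayList<>(segments);
        // The key is the first field of the segment.
        sorted.sort(Comparator.comparing(seg -> seg[0]));
        return sorted;
    }

    public static void main(String[] args) {
        // Fields: ACCNTNO, BRANCH, TRANCNT, BALANCE (invented sample values)
        List<String[]> segs = Arrays.asList(
                new String[] {"30021", "DOWNTOWN", "7", "1500"},
                new String[] {"10005", "UPTOWN",   "2", "250"},
                new String[] {"20010", "DOWNTOWN", "4", "975"});
        for (String[] s : loadInKeySequence(segs)) {
            System.out.println(s[0]); // 10005, 20010, 30021
        }
    }
}
```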

Chapter 2. How application programs work with IMS
Transaction Manager
Application programs use Data Language I (DL/I) to communicate with IMS. This
section gives an overview of the application programming techniques and the
application programming interface for IMS Transaction Manager.

Related Reading:
v If your installation uses IMS Database Manager, see IMS Version 10:
Communications and Connections Guide for information on writing applications
that access IMS databases.
v Information on DL/I EXEC commands is in the IMS Version 10: Application
Programming Guide.

Subsections:
v “Application program environments”
v “DL/I elements”
v “DL/I calls” on page 17

Application program environments


Your application program can run in different IMS environments. The three online
environments are database/data communication (DB/DC), database control
(DBCTL), and data communications control (DCCTL). The two batch environments
are database manager batch (DBB) and transaction manager batch (TMB). This
information explains the DB/DC, DCCTL, and TM batch environments.

Related reading: For additional information on DCCTL and TM Batch
environments, see IMS Version 10: System Administration Guide.

DL/I elements
The information in this section applies to all application programs that run in IMS.
The main elements in an IMS application program consist of the following:
v Program entry
v Program Communication Block (PCB) or Application Interface Block (AIB)
definition
v I/O area definition
v DL/I calls
v Program termination
Figure 7 on page 16 shows how these elements relate to each other. The numbers
on the right in Figure 7 on page 16 refer to the notes that follow.

© Copyright IBM Corp. 1974, 2010 15


Figure 7. DL/I program elements

Notes to Figure 7:
1. Program entry. IMS passes control to the application program with a list of
associated PCBs.
2. PCB or AIB. IMS describes the results of each DL/I call using the AIBTDLI
interface in the application interface block (AIB) and, when applicable, the
program communication block (PCB). To find the results of a DL/I call, your
program must use the PCB that is referenced in the call. To find the results of
the call using the AIBTDLI interface, your program must use the AIB.
Your application program can use the PCB address that is returned in the AIB
to find the results of the call. To use the PCB, the program defines a mask of
the PCB and can then reference the PCB after each call to determine the success
or failure of the call. An application program cannot change the fields in a PCB;
it can only check the PCB to determine what happened when the call was
completed.
3. Input/output (I/O) area. IMS passes segments to and from the program in the
program's I/O area.
4. DL/I calls. The program issues DL/I calls to perform the requested function.
5. Program Termination. The program returns control to IMS DB when it has
finished processing. In a batch program, your program can set the return code
and pass it to the next step in the job.
Recommendation: If your program does not use the return code in this way, it
is a good idea to set it to 0 as a programming convention. Your program can
use the return code for this same purpose in BMPs. (MPPs cannot pass return
codes.)

DL/I calls
A DL/I call consists of a call statement and a list of parameters. The parameters
for the call provide information IMS needs to execute the call. This information
consists of the call function, the name of the data structure IMS uses for the call,
the data area in the program into which IMS returns data, and any condition the
retrieved data must meet.

You can issue calls to perform transaction management functions (message calls)
and to obtain IMS TM system services (system service calls):

Message call functions


The IMS TM message processing calls are:
AUTH Authorization
CHNG Change
CMD Command
GCMD Get Command
GN Get Next
GU Get Unique
ISRT Insert
PURG Purge
SETO Set Options

System service call functions


The IMS TM system service calls are:
APSB Allocate PSB
CHKP Checkpoint (Basic)
CHKP Checkpoint (Symbolic)
DPSB Deallocate PSB
GMSG Get Message
GSCD Get System Contents Directory
ICMD Issue Command
INIT Initialize
INQY Inquiry
LOG Log
RCMD Retrieve Command
ROLB Roll Back
ROLL Roll
ROLS Roll Back to SETS
SETS Set Synchronization Point
SETU Set Synchronization Point (Unconditional)
SYNC Synchronization
XRST Extended Restart

Note: GSCD is a Product-sensitive programming interface.

Related reading: The DL/I calls are discussed in detail in IMS Version 10:
Application Programming Guide.

Status, return, and reason codes


| To provide information about the results of each DL/I call that your application
| program issues when it uses the PCB, IMS™ places a two-character status code in
| the PCB. If you use the AIB, return and reason codes are placed in the AIB after
| certain DL/I calls. The AIB also receives the PCB address, which can be used to
| access the status code in the PCB.

| The status codes your application program should test for are those that indicate
| exceptional but valid conditions. Your application program should check for status
| codes that indicate that the call was successful, such as blanks. If IMS returns a
| status code that you did not expect, your program should branch to an error
| routine. For information about the status codes for the DL/I calls, see IMS:
| Messages and Codes Reference, Volume 4: IMS Component Codes.

Exceptional condition status code


Some status codes do not mean that your call was successful or unsuccessful; they
just give you information about the results of the call. Your program uses this
information to determine what to do next. The meanings of these status codes
depend on the call.

In a typical program, you should test for status codes that apply only to Get calls.
Some status codes indicate exceptional conditions for other calls. When your
program is retrieving messages, there are situations that you should expect and for
which you should provide routines other than error routines. For example, QC
means that no additional input messages are available for your program in the
message queue, and QD means that no additional segments are available for this
message.
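The checking order described above (blanks mean success, expected exceptional codes such as QC and QD get their own routines, anything else goes to the error routine) can be sketched as a dispatch. The dispatch method and result strings below are invented for illustration, not IMS code.

```java
// Illustrative sketch of status-code checking: success, expected exceptional
// conditions, and everything else routed to an error routine.
public class StatusCodeCheck {
    static String dispatch(String statusCode) {
        if (statusCode.trim().isEmpty()) return "PROCESS-MESSAGE"; // blanks: success
        switch (statusCode) {
            case "QC": return "NO-MORE-MESSAGES"; // no more input messages: end normally
            case "QD": return "NO-MORE-SEGMENTS"; // no more segments for this message
            default:   return "ERROR-ROUTINE";    // unexpected status code
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch("  ")); // PROCESS-MESSAGE
        System.out.println(dispatch("QC")); // NO-MORE-MESSAGES
        System.out.println(dispatch("AD")); // ERROR-ROUTINE
    }
}
```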

Error routines
If, after checking for blanks and exceptional conditions in the status code, you find
that there has been an error, your program should branch to an error routine and
print as much information as possible about the error before terminating. Print the
status code as well. Determining which call was being executed when the error
occurred, the parameter of the IMS call, and the contents of the PCB will be
helpful in understanding the error.

Two kinds of errors can occur. First, programming errors are usually your
responsibility; they are the ones you can find and fix. These errors are caused by
things like an invalid parameter, an invalid call, or an I/O area that is too long.
The other kind of error is something you cannot usually fix; this is a system or I/O
error. When your program has this kind of error, the system programmer or the
equivalent specialist at your installation should be able to help.

Because every application program should have an error routine available to it,
and because each installation has its own ways of finding and debugging program
errors, installations usually provide their own standard error routines.

Chapter 3. How CICS EXEC DLI application programs work
with IMS
| This topic describes the components of your CICS program.

Your EXEC DLI application uses EXEC DLI commands to read and update DL/I
databases. These applications can execute as pure batch, as a BMP program
running with DBCTL or DB/DC, or as an online CICS program using DBCTL.
Your EXEC DLI program can also issue system service commands when using
DBCTL.

IMS DB/DC can provide the same services as DBCTL.

| Subsections:
| v “Getting started with EXEC DLI”

Getting started with EXEC DLI


Figure 8 on page 20 shows the main elements of programs that use EXEC DLI
commands to access DL/I databases. The main differences between a CICS
program and a command-level batch or BMP program (represented by Figure 8 on
page 20) are that you do not schedule a PSB for a batch program, and that you do
not issue checkpoints for a CICS program. The numbers to the left of the figure
correspond to the notes that follow Figure 8 on page 20.

Figure 8. Structure of a command-level batch or BMP program

Notes to Figure 8:
1. I/O areas. DL/I passes segments to and from the program in the I/O areas.
You may use a separate I/O area for each segment.
2. Key feedback area. DL/I passes, on request, the concatenated key of the
lowest-level segment retrieved to the key feedback area.
3. DL/I Interface Block (DIB). DL/I and CICS place the results of each
command in the DIB. The DIB contains most of the same information returned
in the DB PCB for programs using the call-level interface.

Note: The horizontal line between 3 and 4 represents the end of the
declarations section and the start of the executable code section of the
program.
4. Program entry. Control is passed to your program during program entry.
5. Issue EXEC DLI commands. Commands read and update information in the
database.
6. Check the status code. To find out the results of each command you issue,
you should check the status code in the DIB after issuing an EXEC DLI
command for database processing and after issuing a checkpoint command.
7. Issue checkpoint. Issue checkpoints as needed to establish places from which
to restart. Issuing a checkpoint commits database changes and releases
resources.

8. Terminate. This returns control to the operating system, commits database
changes, and releases resources.

Requirement: CICS Transaction Server for z/OS runs with this version of IMS.
Unless a distinction needs to be made, all supported versions are referred to as CICS.
For a complete list of supported software, see the IMS Version 10: Release Planning
Guide.

|

| Chapter 4. How Java application programs work with IMS


| Java class libraries for IMS allow you to write Java application programs that
| process IMS transactions and access IMS database resources. These Java class
| libraries provide, at a minimum, all of the existing IMS functionality that the
| traditional IMS programming languages provide.

| For additional information about the Java class libraries for IMS, see the topic
| “Hardware and software requirements” of the IMS Version 10: Release Planning
| Guide.

| Subsections:
| v “How Java application programs work with IMS databases”
| v “How Java application programs work with IMS transactions” on page 27
|
| How Java application programs work with IMS databases
| You can write Java application programs that access IMS database resources using
| either the IMS hierarchical database interface for Java or the JDBC interface.

| For additional information about programming Java applications to work with IMS
| databases, see the IMS Version 10: Application Programming Guide.

| Subsections:
| v “Comparison of hierarchical and relational databases”
| v “Overview of the IMS hierarchical database interface for Java” on page 27
| v “JDBC access to IMS” on page 27

| Comparison of hierarchical and relational databases


| A database segment definition defines the fields for a set of segment instances
| similar to the way a relational table defines columns for a set of rows in a table. In
| this way, segments relate to relational tables, and fields in a segment relate to
| columns in a relational table.

| The name of an IMS segment becomes the table name in an SQL query, and the
| name of a field becomes the column name in the SQL query.

| A fundamental difference between segments in a hierarchical database and tables
| in a relational database is that, in a hierarchical database, segments are implicitly
| joined with each other. In a relational database, you must explicitly join two tables.
| A segment instance in a hierarchical database is already joined with its parent
| segment and its child segments, which are all along the same hierarchical path. In
| a relational database, this relationship between tables is captured by foreign keys
| and primary keys.
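The implicit join along a hierarchical path has to be recreated in a relational model with an explicit join on a foreign key. The sketch below is invented for illustration: it joins a tiny Dealer "table" to a Model "table" on the dealer number, the way a Model segment is implicitly joined to its Dealer parent; the data values are made up.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: the parent-child relationship recreated as an explicit
// join on a foreign key, using the Dealership example's names.
public class ExplicitJoin {
    // dealers maps DealerNumber -> DealerName; each model row is
    // {dealerNumber (foreign key), modelTypeCode}.
    static List<String> joinDealerModel(Map<String, String> dealers,
                                        List<String[]> models) {
        List<String> out = new ArrayList<>();
        for (String[] m : models) {
            String dealerName = dealers.get(m[0]); // join on DealerNumber
            if (dealerName != null) out.add(dealerName + ":" + m[1]);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> dealers = new LinkedHashMap<>();
        dealers.put("D001", "MAIN ST MOTORS");
        List<String[]> models = Arrays.asList(
                new String[] {"D001", "062579"},
                new String[] {"D999", "031842"}); // no matching dealer row
        System.out.println(joinDealerModel(dealers, models));
        // prints [MAIN ST MOTORS:062579]
    }
}
```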

| This section compares the Dealership sample database, which is shipped with the
| Java API for IMS DB, to a relational representation of the database.

| Important: This information provides only a comparison between relational and
| hierarchical databases; IMS is not translated into SQL for Java
| application programming.

| The Dealership sample database contains five segment types, which are shown in
| the following figure. The root segment is the Dealer segment. Under the Dealer
| segment is its child segment, the Model segment. Under the Model segment are its
| children: the segments Order, Sales, and Stock.
|
|

|
| Figure 9. Segments of the Dealership sample database
|
| The Dealer segment identifies a dealer that sells cars. The segment contains a
| dealer name in the field DLRNAME, and a unique dealer number in the field
| DLRNO.

| Dealers carry car types, each of which has a corresponding Model segment. A
| Model segment contains a type code in the field MODTYPE.

| Each car that is ordered for the dealership has an Order segment. A Stock segment
| is created for each car that is available for sale in the dealer’s inventory. When the
| car is sold, a Sales segment is created.

| The following shows a relational representation of the IMS database record shown
| in Figure 9.

| Important: This figure is provided to help you understand how to use JDBC calls
| in a hierarchical environment. The Java API for IMS DB does not
| change the structure of IMS data in any way.
|

|

|
| Figure 10. Relational representation of the Dealership sample database
|
| If a segment does not have a unique key, which is similar to a primary key in
| relational databases, view the corresponding relational table as having a generated
| primary key added to its column (field) list. An example of a generated primary
| key is in the Model table (segment) of Figure 10. Similar to referential integrity in
| relational databases, you cannot insert, for example, an Order (child) segment to
| the database without it being a child of a specific Model (parent) segment.

| Also note that the field (column) names have been renamed. You can rename
| segments and fields to more meaningful names by using the DLIModel utility.

| An occurrence of a segment in a hierarchical database corresponds to a row (or
| tuple) of a table in a relational database. The following figure shows three
| Dealership database records.
|
|

|
| Figure 11. Segment occurrences in the Dealership sample database
|
| The Dealer segment occurrences have dependent Model segment occurrences. The
| following figure shows the relational representation of the dependent model
| segment occurrences.
|
|

|
| Figure 12. Relational representation of segment occurrences in the Dealership database
|
| The following example shows the SELECT statement of an SQL call. Model is a
| segment name that is used as a table name in the query:
| SELECT * FROM Model

| In the following example, ModelTypeCode is the name of a field that is contained
| in the Model segment and it is used in the SQL query as a column name:
| SELECT * FROM Model WHERE ModelTypeCode = '062579'

| In both of the preceding examples, Model and ModelTypeCode are alias names
| that you assign by using the DLIModel utility. These names likely will not be the
| same 8-character names that are used in the database description (DBD) for IMS.
| Alias names act as references to the 8-character names that are described in the
| DBD.

| See the IMS Version 10: Application Programming Guide for the database description
| (DBD) of the Dealership sample database.

| Overview of the IMS hierarchical database interface for Java


| The IMS hierarchical database interface for Java is closely related to the standard
| DL/I database call interface that is used with other languages, and provides a
| lower-level access to IMS database functions than the JDBC interface. Using the
| IMS hierarchical database interface for Java, you can build segment search
| arguments (SSAs) and call the functions of the DLIConnection object to read, insert,
| update, or delete segments. The application has full control to navigate the
| segment hierarchy.

| You can use either the IMS hierarchical database interface for Java or the JDBC
| interface to access IMS data. However, the IMS hierarchical database interface for
| Java offers more controlled access than the higher-level JDBC interface package
| provides.
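
The general shape of such a program is sketched below. The method names used to
build and apply the SSAs are illustrative assumptions, not exact signatures; see
the Java API specification for IMS for the actual classes and methods:

```
connection = DLIConnection.createInstance(...);  //Obtain a DLIConnection
                                                 //(creation details omitted)
ssaList = ...;                                   //Build an SSA list that
                                                 //qualifies the path from the
                                                 //Dealer segment down to the
                                                 //Model segment (illustrative)

connection.getUniqueSegment(segment, ssaList);   //Retrieve the first segment
...                                              //that satisfies the SSAs
connection.getNextSegment(segment, ssaList);     //Navigate the hierarchy under
                                                 //full application control
```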

| Related Reading: For detailed information about the classes in the IMS hierarchical
| database interface for Java, see the Java API specification for IMS under
| “Application programming APIs” in the Information Management Software for
| z/OS Solutions Information Center at http://publib.boulder.ibm.com/infocenter/
| imzic.

| JDBC access to IMS


| JDBC is the SQL-based standard interface for database access. The IMS
| implementation of JDBC, also known as the JDBC driver for IMS, supports a
| selected subset of the full facilities of the JDBC 2.1 API.

| To maintain performance, the IMS JDBC driver is designed to support a subset of
| SQL keywords that allow the IMS JDBC driver to perform only certain operations.
| Some of these SQL keywords have specific IMS usage requirements.

| For information about the subset of SQL keywords and SQL keyword usage, see
| the topic “SQL keywords and extensions for the JDBC driver for IMS” in the IMS
| Version 10: Application Programming API Reference.

| This information uses the Dealership sample applications that are shipped with the
| Java API for IMS DB to describe how to use the IMS JDBC driver to access an IMS
| database.
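
For example, a program might obtain a JDBC connection to the Dealership sample
database as sketched below. The driver class name and connection URL shown here
are illustrative assumptions; the URL ordinarily identifies the database view
metadata that the DLIModel utility generates:

```
Class.forName("com.ibm.ims.db.DLIDriver");       //Load the IMS JDBC driver
                                                 //(illustrative class name)
conn = DriverManager.getConnection(
    "jdbc:dli:dealership.DealerDatabaseView");   //Illustrative URL naming the
                                                 //DLIModel-generated view
statement = conn.createStatement();
results = statement.executeQuery("SELECT * FROM Model");
...
conn.close();                                    //Close DB connection
```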
|
| How Java application programs work with IMS transactions
| You can write Java application programs that process IMS transactions using Java
| message processing (JMP) regions and Java batch processing (JBP) regions. These
| two IMS dependent regions provide a Java Virtual Machine (JVM) environment for
| Java applications.

| JMP regions and JBP regions operate like any other IMS dependent regions. A JMP
| region is analogous to an MPP region, and a JBP region is analogous to a
| non-message-driven BMP region. The fundamental difference is that JMP and JBP
| regions

| have a built-in JVM, which is initialized and maintained by the IMS dependent
| region. When the dependent region is initialized, so is the JVM. Also, the JVM runs
| as long as the dependent region is running, which means that it is not stopped and
| re-initialized between transactions.

| All IMS dependent regions are designed to support program switching, which
| means that a program can call another program, regardless of what region the
| program is stored in. For example, a program in an MPP region can call a program
| in a JMP region. Likewise, a program in a JMP region can call a program in an
| MPP region.

| Important: JMP and JBP regions are not necessary if your application runs in
| WebSphere Application Server, DB2 for z/OS, or CICS. JMP or JBP
| regions are needed only if your application will run in an IMS
| dependent region.

| The following figure shows a Java application that is running in a JMP or a JBP
| region. Calls from the JDBC interface or the IMS hierarchical database interface for
| Java are passed to the Java class libraries for IMS, which convert the calls to DL/I
| calls.
|
|

|
| Figure 13. JMP or JBP applications that use the Java class libraries for IMS
|
| JMP regions and JBP regions can run applications that are written in Java,
| object-oriented COBOL, or a combination of the two.

| JMP applications and JBP applications can access DB2 for z/OS databases in
| addition to IMS databases.

| For additional information about JMP and JBP regions, see the IMS Version 10:
| Application Programming Guide.

| Subsections:
| v “Java message processing (JMP) regions” on page 29
| v “Java batch processing (JBP) regions” on page 31

| Java message processing (JMP) regions
| In Java, JMP regions have the same functionality as MPP regions, but JMP regions
| allow the scheduling of only Java message-processing applications. JMP
| applications are executed through transaction codes that are submitted by users at
| terminals and from other applications. A JMP application starts when there is a
| message that contains the user submitted transaction codes in the IMS message
| queue. After the application starts, IMS schedules the message to be processed.
| Each transaction code that is submitted represents a transaction that the JMP
| application processes. A single application can also be started from multiple
| transaction codes.

| JMP applications are flexible in how they process transactions and where they send
| the output. JMP applications send any output messages back to the message
| queues and process the next message with the same transaction code. The program
| runs until there are no more messages with the same transaction code. JMP
| applications share the following characteristics:
| v They are small.
| v They can produce output that is needed immediately.
| v They can access IMS or DB2 for z/OS data in a DB/DC environment and DB2
| for z/OS data in a DCCTL environment.

| JMP programming models


| JMP applications get input messages from the IMS message queue, access IMS
| databases, commit transactions, and can send output messages.

| JMP applications are started when IMS receives a message with a transaction code
| for the JMP application and schedules the message. JMP applications end when
| there are no more messages with that transaction code to process.

| Applications are not required to issue a commit before reading subsequent
| messages from the IMS message queue, nor are they required to commit before
| ending. For applications that do not commit before either reading subsequent
| messages or ending, IMS automatically commits on behalf of the application.

| Basic JMP application: A transaction begins when the application gets an input
| message. To get an input message, the application calls the getUniqueMessage
| method. After a message is processed, IMS commits and ends the transaction on
| behalf of the application. Subsequent calls to the getUniqueMessage method can
| then be made.

| The following skeleton code is for a basic JMP application.


| public static void main(String args[]) {
|
| conn = DriverManager.getConnection(...); //Establish DB connection
|
|
| while(MessageQueue.getUniqueMessage(...)){ //Get input message, which
| //starts transaction
|
| results=statement.executeQuery(...); //Perform DB processing
| ...
| MessageQueue.insertMessage(...); //Send output messages
| ...
| IMSTransaction.getTransaction().commit(); //Commit and end transaction
| //(optional)
| }

|
| conn.close(); //Close DB connection
| return;
| }

| JMP application with rollback: A JMP application can roll back database
| processing and output messages any number of times during a transaction. A
| rollback call backs out all database processing and output messages to the most
| recent commit. The transaction must end with a commit call when the program
| issues a rollback call, even if no further database or message processing occurs
| after the rollback call.

| The following skeleton code is for a JMP application with rollback.


| public static void main(String args[]) {
|
| conn = DriverManager.getConnection(...); //Establish DB connection
|
| while(MessageQueue.getUniqueMessage(...)){ //Get input message, which
| //starts transaction
|
| results=statement.executeQuery(...); //Perform DB processing
| ...
| MessageQueue.insertMessage(...); //Send output messages
| ...
| IMSTransaction.getTransaction().rollback(); //Roll back DB processing
| //and output messages
|
| results=statement.executeQuery(...); //Perform more DB processing
| //(optional)
| ...
| MessageQueue.insertMessage(...); //Send more output messages
| //(optional)
| ...
| IMSTransaction.getTransaction().commit(); //Commit and end transaction
| //(optional)
| }
|
| conn.close(); //Close DB connection
| return;
| }

| JMP application that accesses IMS or DB2 for z/OS data: When a JMP
| application accesses only IMS data, it needs to open a database connection only
| once to process multiple transactions, as shown in “Basic JMP application” on page
| 29. However, a JMP application that accesses DB2 for z/OS data must open and
| close a database connection for each message that is processed.

| The following skeleton code is valid for DB2 for z/OS database access, IMS
| database access, or both DB2 for z/OS and IMS database access.
| public static void main(String args[]) {
|
| while(MessageQueue.getUniqueMessage(...)){ //Get input message, which
| //starts transaction
|
| conn = DriverManager.getConnection(...); //Establish DB connection
|
| results=statement.executeQuery(...); //Perform DB processing
| ...
| MessageQueue.insertMessage(...); //Send output messages
| ...
| conn.close(); //Close DB connection
| ...
| IMSTransaction.getTransaction().commit(); //Commit and end transaction

| //(optional)
| }
|
| return;
| }

| Related Reading: For more information about accessing DB2 for z/OS data from a
| JMP application, see IMS Version 10: Application Programming Guide.

| Java batch processing (JBP) regions


| JBP regions run flexible programs that perform batch processing online and can
| access the IMS message queues for output, similar to non-message-driven BMP
| applications. JBP applications are started from TSO or by submitting a job with
| JCL. JBP applications are like BMP applications, except that they cannot read input
| messages from the IMS message queue. For example, there is no IN= parameter in
| the startup procedure. Similarly to BMP applications, JBP applications can use
| symbolic checkpoint calls and restart calls to restart the application after an abend.
| JBP applications can access IMS or DB2 for z/OS data in a DB/DC or DBCTL
| environment and DB2 for z/OS data in a DCCTL environment.

| JBP programming models


| JBP applications have functionality that is similar to non-message-driven BMP
| applications: they do not receive input messages from the IMS message queue.
| Except for JBP applications that have the PSB PROCOPT=GO parameter specified,
| JBP applications should periodically issue commit calls.

| Basic JBP application: A JBP application connects to a database, performs
| database processing, periodically commits, and disconnects from the database at
| the end of the program. The program must issue a final commit before ending.

| The following skeleton code is for a basic JBP application.


| public static void main(String args[]) {
|
| conn = DriverManager.getConnection(...); //Establish DB connection
|
| repeat {
|
| repeat {
| results=statement.executeQuery(...); //Perform DB processing
| ...
| MessageQueue.insertMessage(...); //Send output messages
| ...
| }
|
| IMSTransaction.getTransaction().commit(); //Periodic commits
| } //divide work
|
| conn.close(); //Close DB connection
| return;
| }

| JBP application with symbolic checkpoint and restart: A JBP application
| connects to a database, makes a restart call, performs database processing,
| periodically checkpoints, and disconnects from the database at the end of the
| program. The program must issue a final commit before ending.

| The following skeleton code is for a JBP application with checkpoint and restart.

| public static void main(String args[]) {
|
| conn = DriverManager.getConnection(...); //Establish DB connection
|
| IMSTransaction.getTransaction().restart(); //Restart application
| //after abend from last
| //checkpoint
| repeat {
|
| repeat {
| results=statement.executeQuery(...); //Perform DB processing
| ...
| MessageQueue.insertMessage(...); //Send output messages
| ...
| }
|
| IMSTransaction.getTransaction().checkpoint(); //Periodic checkpoints
| //divide work
| }
|
| conn.close(); //Close DB connection
| return;
| }

| JBP application with rollback: Similarly to JMP applications, JBP applications can
| also roll back database processing and output messages. A final commit call is
| required before the application can end, even if no further database processing
| occurs or output messages are sent after the last rollback call.

| The following skeleton code is for a JBP application with rollback.


| public static void main(String args[]) {
|
| conn = DriverManager.getConnection(...); //Establish DB connection
|
| repeat {
|
| repeat {
|
| results=statement.executeQuery(...); //Perform DB processing
| ...
| MessageQueue.insertMessage(...); //Send output messages
| ...
| IMSTransaction.getTransaction().rollback(); //Roll back DB
| //processing and output
| //messages
|
| results=statement.executeQuery(...); //Perform more DB
| //processing (optional)
| ...
| MessageQueue.insertMessage(...); //Send more output
| //messages (optional)
| ...
| }
|
| IMSTransaction.getTransaction().commit(); //Periodic commits
| } //divide work
|
| conn.close(); //Close DB connection
| return;
| }

| JBP application that accesses DB2 for z/OS or IMS data: Like a JBP application
| that accesses IMS data, a JBP application that accesses DB2 for z/OS data connects
| to a database, performs database processing, periodically commits, and disconnects

| from the database at the end of the application. However, the application must also
| issue a final commit after closing the database connection.

| Related Reading: For more information about accessing DB2 for z/OS data from a
| JBP application, see IMS Version 10: Application Programming Guide.

| The following skeleton code is valid for DB2 for z/OS database access, IMS
| database access, or both DB2 for z/OS and IMS database access.
| public void doBegin() ... { //Application logic runs
| //doBegin method
| conn = DriverManager.getConnection(...); //Establish DB connection
| repeat {
| repeat {
| results=statement.executeQuery(...); //Perform DB processing
| ...
| MessageQueue.insertMessage(...); //Send output messages
| ...
| }
| IMSTransaction.getTransaction().commit(); //Periodic commits divide
| } //work
|
| conn.close(); //Close DB connection
|
| IMSTransaction.getTransaction().commit(); //Commit the DB
| //connection close
| return;
| }

| JBP application that accesses GSAM data: A JBP application that accesses GSAM
| data can connect to a database, perform database processing, periodically
| commit, and disconnect from the database at the end of the application. GSAM
| data is frequently referred to as z/OS data sets or, more commonly, as flat files.
| This kind of data is non-hierarchical in structure.

| The following skeleton code is for a JBP application that accesses GSAM data.
| GSAMConnection connection = GSAMConnection.createInstance(...); //Establish DB
| //connection
| repeat {
| GSAMRecord record = connection.getNext(...); //Perform DB processing
| }
| connection.close(); //Close DB connection
|
| IMSTransaction.getTransaction().commit(); //Commit the DB connection close

Chapter 5. Designing an application: Introductory concepts
This section provides an introduction to designing application programs. It
explains some basic concepts about processing a database, and gives an overview
of the tasks covered in this information.

Subsections:
v “Storing and processing information in a database”
v “Tasks for developing an application” on page 40

Storing and processing information in a database


This section describes how storing data in a database is different from other ways
of storing data. The advantages of storing and processing data in a database are
that all of the data needs to appear only once and that each program must process
only the data that it needs. One way to understand this is to compare three ways
of storing data: in separate files, in a combined file, and in a database.

Storing data in separate files


If you keep separate files of data for each part of your organization, you can
ensure that each program uses only the data it needs, but you must store a lot of
data in multiple places simultaneously. Problems with keeping separate files are:
v Redundant data takes up space that could be put to better use
v Maintaining separate files can be difficult and complex

Example: Suppose that a medical clinic keeps separate files for each of its
departments, such as the clinic department, the accounting department, and the
ophthalmology department:
v The clinic department keeps data about each patient who visits the clinic, such
as:
Identification number
Name
Address
Illnesses
Date of each illness
Date patient came to clinic for treatment
Treatment given for each illness
Doctor that prescribed treatment
Charge for treatment
v The accounting department also keeps information about each patient. The
information that the accounting department might keep for each patient is:
Identification number
Name
Address
Charge for treatment
Amount of payments

v The information that the ophthalmology department might keep for each patient
is:
Identification number
Name
Address
Illnesses relating to ophthalmology
Date of each illness
Names of members in patient's household
Relationship between patient and each household member

If each of these departments keeps separate files, each department uses only the
data that it needs, but much of the data is redundant. For example, every
department in the clinic uses at least the patient's number, name, and address.
Updating the data is also a problem, because if a department changes a piece of
data, the same data must be updated in each separate file. Therefore, it is difficult
to keep the data in each department's files current. Current data might exist in one
file while defunct data remains in another file.

Storing data in a combined file


Another way to store data is to combine all the files into one file for all
departments to use. In the medical example, the patient record that would be used
by each department would contain these fields:
Identification number
Name
Address
Illnesses
Date of each illness
Date patient came to clinic for treatment
Treatment given for each illness
Doctor that prescribed treatment
Charge for treatment
Amount of payments
Names of members in patient's household
Relationship between patient and each household member

Using a combined file solves the updating problem, because all the data is in one
place, but it creates a new problem: the programs that process this data must
access the entire file record to get to the part that they need. For example, to
process only the patient's number, charges, and payments, an accounting program
must access all of the other fields also. In addition, changing the format of any of
the fields within the patient's record affects all the application programs, not just
the programs that use that field.

Using combined files can also involve security risks, because all of the programs
have access to all of the fields in a record.

Storing data in a database


Storing data in a database gives you the advantages of both separate files and
combined files: all the data appears only once, and each program has access to the
data that it needs. This means that:

v When you update a field, you do it in one place only.
v Because you store each piece of information only in one place, you cannot have
an updated version of the information in one place and an out-of-date version in
another place.
v Each program accesses only the data it needs.
v You can prevent programs from accessing private or secured information.

In addition, storing data in a database has two advantages that neither of the other
ways has:
v If you change the format of part of a database record, the change does not affect
the programs that do not use the changed information.
v Programs are not affected by how the data is stored.

Because the program is independent of the physical data, a database can store all
the data only once and yet make it possible for each program to use only the data
that it needs. In a database, what the data looks like when it is stored is different
from what it looks like to an application program.

| Database hierarchies
| The examples in this information use the medical hierarchy shown in “Database
| hierarchy examples” on page 8.

| Example: In the medical database shown in “Medical hierarchy example” on page
| 8, the data being kept contains information about a particular patient. Information
| that is not associated with a particular patient is meaningless. For example,
| keeping information about a treatment given for a particular illness is meaningless
| if the illness is not associated with a patient. To be meaningful, ILLNESS,
| TREATMNT, BILLING, PAYMENT, and HOUSHOLD must always be associated
| with one of the clinic's patients.

| In the medical database, information is kept meaningful by keeping five kinds of
| hierarchical information about each patient. The information about the patient's
| illnesses, billings, and household depends directly on the patient, while
| information about the patient's treatment depends on the patient's illness, and
| information about the patient's payments depends on the patient's billings.

Your program's view of the data


| IMS uses two kinds of control blocks to enable application programs to be
| independent of your method of storing data in the database: the database
| description (DBD) and the database program communication block (DB PCB).

Database Description (DBD)


A database description (DBD) is a control block that describes the physical structure
of the database. The DBD also defines the appearance and contents, or fields, that
make up each of the segment types in the database.

For example, the DBD for the medical database hierarchy shown in Figure 6 on
page 9 describes the physical structure of the hierarchy and each of the six
segment types in the hierarchy: PATIENT, ILLNESS, TREATMNT, BILLING,
PAYMENT, and HOUSHOLD.

Related Reading: For more information on generating DBDs, see IMS Version 10:
Database Utilities Reference.

Database Program Communication Block (DB PCB)
A database program communication block (DB PCB) defines an application program's
view of the database. An application program often needs to process only some of
the segments in a database. A PCB defines which of the segments in the database
the program is allowed to access—which segments the program is sensitive to.

The data structures that are available to the program contain only segments that
the program is sensitive to. The PCB also defines how the application program is
allowed to process the segments in the data structure: whether the program can
only read the segments, or whether it can also update them.

To obtain the highest level of data availability, your PCBs should request the
fewest number of sensitive segments and the least capability needed to complete
the task.

All the DB PCBs for a single application program are contained in a program
specification block (PSB). A program might use only one DB PCB (if it processes only
one data structure) or it might use several DB PCBs, one for each data structure.

Related Reading: For more information on generating PSBs, see IMS Version 10:
Database Utilities Reference.

Figure 14 illustrates the concept of defining a view for an application program. An
accounting program that calculates and prints bills for the clinic's patients would
need only the PATIENT, BILLING, and PAYMENT segments. You could define the
data structure shown in Figure 14 in a DB PCB for this program.

Figure 14. Accounting program's view of the database

A program that updates the database with information on patients' illnesses and
treatments, in contrast, would need to process the PATIENT, ILLNESS, and
TREATMNT segments. You could define the data structure shown in Figure 15 on
page 39 for this program.

Figure 15. Patient illness program's view of the database

Sometimes a program needs to process all of the segments in the database. When
this is true, the program's view of the database as defined in the DB PCB is the
same as the database hierarchy that is defined in the DBD.

An application program processes only the segments in a database that it requires;
therefore, if you change the format of a segment that is not processed, you do not
change the program. A program is affected only by the segments that it accesses. In
addition to being sensitive to only certain segments in a database, a program can
also be sensitive to only certain fields within a segment. If you change a segment
or field that the program is not sensitive to, it does not affect the program. You
define segment and field-level sensitivity during PSBGEN.

Definition: Field-level sensitivity means that a program is sensitive to only certain
fields within a segment.

Related Reading: For more information, see IMS Version 10: Database Administration
Guide.

Processing a database record


To process the information in the database, your application program
communicates with IMS in three ways:
v Passing control—IMS passes control to your application program through an
entry statement in your program. Your program returns control to IMS when it
has finished its processing.
When you are running a CICS online program, CICS passes control to your
application program, and your program schedules a PSB to make IMS requests.
Your program returns control to CICS. If you are running a batch or BMP
program, IMS passes control to your program with an existing PSB scheduled.
v Communicating processing requests—You communicate processing requests to
IMS in one of two ways:
– In IMS, you issue DL/I calls to process the database.
– In CICS, you can issue either DL/I calls or EXEC DLI commands. EXEC DLI
commands more closely resemble a higher-level language than do DL/I calls.

v Exchanging information using DL/I calls—Your program exchanges information
in two areas:
– A DL/I call reports the results of your request in a control block and the AIB
communication block when using one of the AIB interfaces. For programs
written using DL/I calls, this control block is the DB PCB. For programs
written using EXEC DLI commands, this control block is the DLI interface
block (DIB). The contents of the DIB reflect the status of the last DL/I
command executed in the program. Your program includes a mask of the
appropriate control block and uses this mask to check the results of the
request.
– When you request a segment from the database, IMS returns the segment to
your I/O area. When you want to update a segment in the database, you
place the new value of the segment in the I/O area.

An application program can read and update a database. When you update a
database, you can replace, delete, or add segments. In IMS, you indicate in the
DL/I call the segment you want to process, and whether you want to read or
update it. In CICS, you can indicate what you want using either a DL/I call or an
EXEC DLI command.
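
As a sketch of the two request styles, the following COBOL fragments show how a
program might read a PATIENT segment. The data area and field names (IO-AREA,
PATIENT-SSA, PATNO-IN) are illustrative assumptions:

```
* DL/I call interface: the program calls DL/I directly; GU-FUNC
* holds the 4-character function code 'GU  '
     CALL 'CBLTDLI' USING GU-FUNC, PATIENT-PCB, IO-AREA, PATIENT-SSA.

* EXEC DLI command interface (CICS): the same request as a command
     EXEC DLI GU USING PCB(1)
          SEGMENT(PATIENT) INTO(IO-AREA)
          WHERE(PATNO=PATNO-IN)
     END-EXEC.
```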

| Tasks for developing an application


The tasks in these topics are involved in developing an IMS application, and the
programs that are part of the application.

Designing the application


Application program design varies from place to place, and from one application
to another. Therefore, this information does not try to cover the early tasks that are
part of designing an application program. Instead, it covers only the tasks that you
are concerned with after the early specifications for the application have been
developed. The tasks for designing the application are:
Analyzing Application Data Requirements
Two important parts of application design are defining the data that each of the
business processes in the application requires and designing a local view for
each of the business processes. Chapter 6, “Designing an application: Data and
local views,” on page 43 explains these tasks.
Analyzing Application Processing Requirements
When you understand the business processes that are part of the application,
you can analyze the requirements of each business process in terms of the
processing that is available with different types of application programs.
Chapter 8, “Analyzing IMS application processing requirements,” on page 99
and Chapter 9, “Analyzing CICS application processing requirements,” on page
121 explain the processing and application requirements that each type of
program satisfies.
Gathering Requirements for Database Options
You then need to look at the database options that can most efficiently meet the
requirements, and gather information about your application's data
requirements that relates to each of the options. Chapter 10, “Gathering
requirements for database options,” on page 139 explains these options and
helps you gather information about your application that will be helpful to the
database administrator in making informed decisions about database options.
Gathering Requirements for Message Processing Options

If your application communicates with terminals and other application
programs, look at the message processing options and the requirements they
satisfy. Chapter 11, “Gathering requirements for message processing options,”
on page 163 explains the IMS message processing options and helps you to
gather information about your application that is helpful in choosing message
processing options.
Related Reading:
– For more information about designing a CICS application, see CICS/ESA
Application Programming Guide.
– For more information about designing a Java application, see IMS Version 10:
Application Programming Guide.

Developing specifications
Developing specifications involves defining what your application will do, and
how it will be done. The task of developing specifications is not described in this
information because it depends entirely on the specific application and your
standards.

Implementing the design


When the specifications for each of the programs in the application are developed,
you can structure and code the programs according to those specifications. The
tasks of implementing the design are:
Writing the Database Processing Part of the Program
When the program design is complete, you can structure and code your
requests and data areas based on the programming specifications that have
been developed.
Related Reading: See IMS Version 10: Application Programming Guide for
information about writing a program’s database processing.
Writing the Message Processing Part of the Program
If you are writing a program that communicates with terminals and other
programs, you need to structure and code the message processing part of the
program.
Related Reading: For more information about writing programs for message
processing, see IMS Version 10: Application Programming Guide.
Analyzing APPC/IMS Requirements
The LU 6.2 feature of IMS TM enables your application to be distributed
throughout the network. Chapter 7, “Designing an application for APPC,” on
page 63 tells how to use LU 6.2 and the IMS TM application programs. This
section describes the considerations for modifying these application programs
to communicate with other application programs and shows the results of
conversations.
Testing an Application Program
When you finish coding your program, test it by itself and then as part of a
system. Chapter 12, “Testing an IMS application program,” on page 175 and
Chapter 13, “Testing a CICS application program,” on page 197 give you some
guidelines.
Documenting an Application Program
Documenting a program continues throughout the project and is most effective
when done incrementally. When the program is completely tested, information
must be supplied to those who use and maintain your program. Chapter 15,
“Documenting an application program,” on page 211 gives you some
suggestions about the information you should record about your program.

Chapter 5. Designing an application: Introductory concepts 41

42 Application Programming Planning Guide


Chapter 6. Designing an application: Data and local views
Designing an application that meets the requirements of end users involves a
variety of tasks and, usually, people from several departments. Application design
begins when a department or business area communicates a need for some type of
processing. Application design ends when each of the parts of the application
system—for example, the programs, the databases, the display screens, and the
message formats—have been designed.

Subsections:
v “An overview of application design”
v “Identifying application data” on page 45
v “Designing a local view” on page 50

An overview of application design


The application design process varies from place to place and from application to
application. The overview that is given in this section and the suggestions about
documenting application design and converting existing applications are not the
only way that these tasks are performed.

The purpose of this overview is to give you a frame of reference so that you can
understand where the techniques and guidelines explained in this section fit into
the process. The order in which you perform the tasks described here, and the
importance you give to each one, depend on the practices at your site. Also, the individuals
involved in each task, and their titles, might differ depending on the site. The tasks
are as follows:
v Establish your standards
Throughout the design process, be aware of your established standards. Some of
the areas that standards are usually established for are:
– Naming conventions (for example, for databases and terminals)
– Formats for screens and messages
– Control of and access to the database
– Programming standards and conventions (for common routines and macros)
Setting up standards in these areas is usually an ongoing task that is the
responsibility of database and system administrators.
v Follow your security standards
Security protects your resources from unauthorized access and use. As with
defining standards, designing an adequate security system is often an ongoing
task. As an application is modified or expanded, often the security must be
changed in some way also. Security is an important consideration in the initial
stages of application design.
Establishing security standards and requirements is usually the responsibility of
system administration. These standards are based on the requirements of your
applications.
Some security concerns are:
– Access to and use of the databases
– Access to terminals

© Copyright IBM Corp. 1974, 2010 43


– Distribution of application output
– Control of program modification
– Transaction and command entry
Related reading: “Providing data security” on page 157 and “Identifying online
security requirements” on page 163 give some suggestions about the kind of
information that you can gather concerning the security requirements for your
application. This information can be helpful to database administration and
system administration in implementing database and data communications
security.
v Define application data
Identifying the data that an application requires is a major part of application
design. One of the tasks of data definition is learning from end users what
information will be required to perform the required processing. After you have
listed the required data, you can name the data and document it. “Identifying
application data” on page 45 describes these parts of data definition.
v Provide input for database design
To design a database that meets the requirements of all the applications that will
process it, the database administrator (DBA) needs information about the data
requirements of each application. One way to gather and supply this
information is to design a local view for each of the business processes in your
application. A local view is a description of the data that a particular business
process requires.
Related reading: “Designing a local view” on page 50 explains how you can
develop a conceptual data structure and analyze the relationships between the
pieces of data in the structure for each business process in the application.
v Design application programs
When the overall application flow and system externals have been defined, you
define the programs that will perform the required processing. Some of the most
important considerations involved in this task are: standards, security
requirements, privacy requirements, and performance requirements. The
specifications you develop for the programs should include:
– Security requirements
– Input and output data formats and volumes
– Data verification and validation requirements
– Logic specifications
– Performance requirements
– Recovery requirements
– Linkage requirements and conventions
– Data availability considerations
In addition, you might be asked to provide some information about your
application to the people responsible for network and user interface design.
v Document the application design process
Recording information about the application design process is valuable to others
who work with the application now and in the future. One kind of information
that is helpful is information about why you designed the application the way
you did. This information can be helpful to people who are responsible for the
database, your IMS system, and the programs in the application—especially if
any part of the application must be changed in the future. Documenting
application design is done most thoroughly when it is done during the design
process, instead of at the end of it.



v Convert an existing application
One of the main aspects in converting an existing application to IMS is to know
what already exists. Before starting to convert the existing system, find out
everything you can about the way it works currently. For example, the following
information can be of help to you when you begin the conversion:
– Record layouts of all records used by the application
– Number of data element occurrences for each data element
– Structure of any existing related databases

Identifying application data


Two important aspects of application design are identifying the application data
and describing the data that a particular business process requires.

One of the steps of identifying application data is to thoroughly understand the
processing the user wants performed. You need to understand the input data and
the required output data in order to define the data requirements of the
application. You also need to understand the business processes that are involved
in the user's processing needs. Three of the tasks involved in identifying
application data are:
v Listing the data required by the business process
v Naming the data
v Documenting the data

When analyzing the required application data, you can categorize the data as
either an entity or a data element.

Definitions: An entity is anything about which information can be stored. A data
element is the smallest named unit of data pertaining to an entity. It is information
that describes the entity.

Example: In an education application, “students” and “courses” are both entities;
these are two subjects about which you collect and process data. Table 12 shows
some data elements that relate to the student and course entities. The entity is
listed with its related data elements.
Table 12. Entities and data elements
Entity Data elements
Student Student Name
Student Number
Course Course Name
Course Number
Course Length

When you store this data in an IMS database, groups of data elements are potential
segments in the hierarchy. Each data element is a potential field in that segment.

Subsections:
v “Listing data elements” on page 46
v “Naming data elements” on page 47
v “Documenting application data” on page 48



Listing data elements
Example: To identify application data, consider a company that provides technical
education to its customers. The education company has one headquarters office,
called Headquarters, and several local education centers, called Ed Centers.

A class is a single offering of a course on a specific date at a particular Ed Center.
One course might have several offerings at different Ed Centers; each of these is a
separate class. Headquarters is responsible for developing all the courses that will
be offered, and each Ed Center is responsible for scheduling classes and enrolling
students for its classes.

Suppose that one of the education company's requirements is for each Ed Center to
print weekly current rosters for all classes at the Ed Center. The current roster is to
give information about the class and the students enrolled in the class.
Headquarters wants the current rosters to be in the format shown in Figure 16.

CHICAGO 01/04/04

TRANSISTOR THEORY 41837 10 DAYS
INSTRUCTOR(S): BENSON, R.J. DATE: 01/14/04

STUDENT CUST LOCATION STATUS ABSENT GRADE
1.ADAMS, J.W. XYZ SOUTH BEND, IND CONF
2.BAKER, R.T. ACME BENTON HARBOR, MICH WAIT
3.DRAKE, R.A. XYZ SOUTH BEND, IND CANC
.
.
.
33.WILLIAMS, L.R. BEST CHICAGO, ILL CONF

CONFIRMED = 30
WAIT-LISTED = 1
CANCELED = 2

Figure 16. Current roster for technical education example

To list the data elements for a particular business process, look at the required
output. The current roster shown in Figure 16 is the roster for the class, “Transistor
Theory” to be given in the Chicago Ed Center, starting on January 14, 2004, for ten
days. Each course has a course code associated with it—in this case, 41837. The
code for a particular course is always the same. For example, if Transistor Theory
is also offered in New York, the course code is still 41837. The roster also gives the
names of the instructors who are teaching the course. Although the example only
shows one instructor, a course might require more than one instructor.

For each student, the roster keeps the following information: a sequence number
for each student, the student's name, the student's company (CUST), the company's
location, the student's status in the class, and the student's absences and grade. All
the above information on the course and the students is input information.

The current date (the date that the roster is printed) is displayed in the upper right
corner (01/04/04). The current date is an example of data that is output only data;
it is generated by the operating system and is not stored in the database.

The bottom-left corner gives a summary of the class status. This data is not
included in the input data. These values are determined by the program during
processing.



When you list the data elements, abbreviating them is helpful, because you will be
referring to them frequently when you design the local view.

The data elements list for current roster is:


EDCNTR Name of Ed Center giving class
DATE Date class starts
CRSNAME Name of course
CRSCODE Course code
LENGTH Length of course
INSTRS Names of instructors teaching class
STUSEQ# Student's sequence number
STUNAME Student's name
CUST Name of student's company
LOCTN Location of student's company
STATUS Student's status in class—confirmed, wait list, or cancelled
ABSENCE Number of days student was absent
GRADE Student's grade for the course

After you have listed the data elements, choose the major entity that these
elements describe. In this case, the major entity is class. Although a lot of
information exists about each student and some information exists about the
course in general, together all this information relates to a specific class. If the
information about each student (for example, status, absence, and grade) is not
related to a particular class, the information is meaningless. This holds true for the
data elements at the top of the list as well: The Ed Center, the date the class starts,
and the instructor mean nothing unless you know what class they describe.

Naming data elements


Some of the data elements your application uses might already exist and be
named. After you have listed the data elements, find out if any of them exist by
checking with your database administrator (DBA).

Before you begin naming data elements, be aware of the naming standards that
you are subject to. When you name data elements, use the most descriptive names
possible. Remember that, because other applications probably use at least some of
the same data, the names should mean the same thing to everyone. Try not to limit
the name's meaning only to your application.

Recommendation: Use global names rather than local names. A global name is a
name whose meaning is clear outside of any particular application. A local name is
a name that, to be understood, must be seen in the context of a particular
application.

One of the problems with using local names is that you can develop synonyms,
two names for the same data element.

Example: In the current roster example, suppose the student's company was
referred to simply as “company” instead of “customer”. But suppose the
accounting department for the education company used the same piece of data in



a billing application—the name of the student's company—and referred to it as
“customer”. This would mean that two business processes were using two different
names for the same piece of data. At worst, this could lead to redundant data if no
one realized that “customer” and “company” contained the same data. To solve
this, use a global name that is recognized by both departments using this data
element. In this case, “customer” is more easily recognized and the better choice.
This name uniquely identifies the data element and has a specific meaning within
the education company.

When you choose data element names, use qualifiers so that each name can mean
only one thing.

Example: Suppose Headquarters, for each course that is taught, assigns a number
to the course as it is developed and calls this number the “sequence number”. The
Ed Centers, as they receive student enrollments for a particular class, assign a
number to each student as a means of identification within the class. The Ed
Centers call this number the “sequence number”. Thus Headquarters and the Ed
Centers are using the same name for two separate data elements. This is called a
homonym. You can solve the homonym problem by qualifying the names. The
number that Headquarters assigns to each course can be called “course code”
(CRSCODE), and the number that the Ed Centers assign to their students can be
called “student sequence number” (STUSEQ#).

Definition: A homonym is one word for two different things.

Choose data element names that identify the element and describe it precisely.
Make your data element names:
Unique The name is clearly distinguishable from other
names.
Self-explanatory The name is easily understood and recognized.
Concise The name is descriptive in a few words.
Universal The name means the same thing to everyone.

Documenting application data


After you have determined what data elements a business process requires, record
as much information about each of the data elements as possible. This information
is useful to the DBA. Be aware of any standards that you are subject to regarding
data documentation. Many places have standards concerning what information
should be recorded about data and how and where that information should be
recorded. The amount and type of this information varies from place to place. The
following list is the type of information that is often recorded.
The descriptive name of the data element
Data element names should be precise, yet they should be meaningful both
to people who are familiar with the application and to those who are not.
The length of the data element
The length of the data element determines segment size and segment
format.
The character format
The programmer needs to know if the data is alphanumeric, hexadecimal,
packed decimal, or binary.



The range of possible values for the element
The range of possible values for the element is important for validity
checking.
The default value
The programmer also needs the default value.
The number of data element occurrences
The number of data element occurrences helps the DBA to determine the
required space for this data, and it affects performance considerations.
How the business process affects the data element
Whether the data element is read or updated determines the processing
option that is coded in the PSB for the application program.

You should also record control information about the data. Such information
should address the following questions:
v What action should the program take when the data it attempts to access is not
available?
v If the format of a particular data element changes, which business processes
does that affect? For example, if an education database has as one of its data
elements a five-digit code for each course, and the code is changed to six digits,
which business processes does this affect?
v Where is the data now? Know the sources of the data elements required by the
application.
v Which business processes make changes to a particular data element?
v Are there security requirements for the data in your application? For example,
you would not want information such as employees' salaries to be available to
everyone.
v Which department owns and controls the data?

One way to gather and record this information is to use a form similar to the one
shown in Table 13. The amount and type of data that you record depends on the
standards that you are subject to. For example, Table 13 lists the ID number, data
element name, length, character format, allowed values, null values, default value,
and the number of occurrences.
Table 13. Example of data elements information form

ID #: 5
Data element name: Course Code
Length: 5 bytes
Char. format: Hexadecimal
Allowed values: 00100-90000
Null values: 00000
Default value: N/A
Number of occurrences: There are 200 courses in the curriculum. An average
of 10 are new or revised per year. An average of 5 are dropped per year.

ID #: 25
Data element name: Status
Length: 4 bytes
Char. format: Alphanumeric
Allowed values: CONF, WAIT, CANC
Null values: blanks
Default value: WAIT
Number of occurrences: 1 per student

ID #: 36
Data element name: Student Name
Length: 20 bytes
Char. format: Alphanumeric
Allowed values: Alpha only
Null values: blanks
Default value: N/A
Number of occurrences: There are 3 to 100 students per class with an
average of 40 per class.



A data dictionary is a good place to record the facts about the application's data.
When you are analyzing data, a dictionary can help you find out whether a
particular data element already exists, and if it does, its characteristics. With the
DB/DC Data Dictionary, and its successor, IBM DataAtlas for OS/2 (a part of the
IBM VisualGen Team Suite), you can determine online what segments exist in a
particular database and what fields those segments contain. You can use either tool
to create reports involving the same information.

Related Reading: For information on these products, see:
v OS/VS DB/DC Data Dictionary Applications Guide
v VisualGen V2R0.0 Introducing
v VisualGen: Running Application on MVS

Designing a local view


A local view is a description of the data that an individual business process
requires. It includes the following:
v A list of the data elements
v A conceptual data structure that shows how you have grouped data elements by
the entities that they describe
v The relationships between each of the groups of data elements

Definitions: A data aggregate is a group of data elements. When you have grouped
data elements by the entity they describe, you can determine the relationships
between the data aggregates. These relationships are called mappings. Based on the
mappings, you can design a conceptual data structure for the business process. You
should document this process as well.

Analyzing data relationships


When you analyze data relationships, you are developing conceptual data
structures for the business processes in your application. This process, called data
structuring, is a way to analyze the relationships among the data elements a
business process requires, not a way to design a database. The decisions about
segment formats and contents belong to the DBA. The information you develop is
input for designing a database.

Data structuring can be done in many different ways. The method explained in
this section is one example.

Subsections:
v “Grouping data elements into hierarchies”
v “Determining mappings” on page 56

Grouping data elements into hierarchies


The data elements that describe a data aggregate, the student, might be
represented by the descriptive names STUSEQ#, STUNAME, CUST, LOCTN,
STATUS, ABSENCE, and GRADE. We call this group of data elements the student
data aggregate.

Data elements have values and names. In the student data elements example, the
values are a particular student's sequence number, the student's name, company,
company location, the student's status in the class, the student's absences, and
grade. The names of the data aggregate are not unique—they describe all the



students in the class in the same terms. The combined values, however, of a data
aggregate occurrence are unique. No two students can have the same values in
each of these fields.

As you group data elements into data aggregates and data structures, look at the
data elements that make up each group and choose one or more data elements that
uniquely identify that group. This is the data aggregate's controlling key, which is
the data element or group of data elements in the aggregate that uniquely
identifies the aggregate. Sometimes you must use more than one data element for
the key in order to uniquely identify the aggregate.

By following the three steps explained in this section, you can develop a
conceptual data structure for a business process's data. However, you are not
developing the logical data structure for the program that performs the business
process. The three steps are:
1. Separate repeating data elements in a single occurrence of the data aggregate.
2. Separate duplicate values in multiple occurrences of the data aggregate.
3. Group each data element with its controlling keys.

Step 1. separating repeating data elements: Look at a single occurrence of the
data aggregate. Table 14 shows what this looks like for the class aggregate; the data
element is listed with the class aggregate occurrence.
Table 14. Single occurrence of class aggregate
Data element Class aggregate occurrence
EDCNTR CHICAGO
DATE(START) 1/14/04
CRSNAME TRANSISTOR THEORY
CRS CODE 41837
LENGTH 10 DAYS
INSTRS multiple
STUSEQ# multiple
STUNAME multiple
CUST multiple
LOCTN multiple
STATUS multiple
ABSENCE multiple
GRADE multiple

The data elements defined as multiple are the elements that repeat. Separate the
repeating data elements by shifting them to a lower level. Keep data elements with
their controlling keys.

The data elements that repeat for a single class are: STUSEQ#, STUNAME, CUST,
LOCTN, STATUS, ABSENCE, and GRADE. INSTRS is also a repeating data
element, because some classes require two instructors, although this class requires
only one.

When you separate repeating data elements into groups, you have the structure
shown in Figure 17 on page 52.



In Figure 17, the data elements in each box form an aggregate. The entire figure
depicts a data structure. The aggregates are the Course aggregate, the Student
aggregate, and the Instructor aggregate.

Figure 17 shows these aggregates with the keys indicated with leading asterisks (*).

Figure 17. Current roster after step 1

The keys for the data aggregates are shown in Table 15.
Table 15. Data aggregates and keys for current roster after step 1
Data aggregate Keys
Course aggregate EDCNTR, DATE, CRSCODE
Student aggregate EDCNTR, DATE, CRSCODE, STUSEQ#
Instructor aggregate EDCNTR, DATE, CRSCODE, INSTRS

The asterisks in Figure 17 identify the key data elements. For the Course aggregate,
it takes multiple data elements to identify the course, so multiple data elements
make up the key. The data elements that comprise the Student aggregate are:
v Controlling key element, STUSEQ#
v STUNAME
v CUST
v LOCTN
v STATUS
v ABSENCE
v GRADE

Along with these keys inherited from the root segment, the Course aggregate:
v EDCNTR
v DATE
v CRSCODE

The data elements that comprise the Instructor aggregate are:


v Key element, INSTRS

Along with these keys inherited from the root segment, the Course aggregate:
v EDCNTR
v DATE
v CRSCODE

After you have shifted repeating data elements, make sure that each element is in
the same group as its controlling key. INSTRS is separated from the group of data
elements describing a student because the information about instructors is
unrelated to the information about the students. The student sequence number
does not control who the instructor is.

In the example shown in Figure 17 on page 52, the Student aggregate and
Instructor aggregate are both dependents of the Course aggregate. A dependent
aggregate's key includes the concatenated keys of all the aggregates above the
dependent aggregate. This is because a dependent's controlling key does not mean
anything if you don't know the keys of the higher aggregates. For example, if you
knew that a student's sequence number was 4, you would be able to find out all
the information about the student associated with that number. This number
would be meaningless, however, if it were not associated with a particular course.
But, because the key for the Student aggregate is made up of Ed Center, date, and
course code, you can deduce which class the student is in.
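The concatenated-key idea can be sketched in code. The following Python fragment is illustrative only (the dictionary-based representation is not how IMS stores data); it keys each student occurrence by the full concatenation of Ed Center, start date, course code, and student sequence number, using values from the roster example.

```python
# Sketch only: a dependent aggregate's key concatenates the keys of all
# aggregates above it, plus its own controlling key.
students = {}  # full concatenated key -> student data aggregate occurrence

def add_student(ed_cntr, date, crs_code, stu_seq, **fields):
    # (EDCNTR, DATE, CRSCODE) identify the course offering; STUSEQ# alone
    # is meaningless without them.
    key = (ed_cntr, date, crs_code, stu_seq)
    if key in students:
        raise ValueError("duplicate key: %r" % (key,))
    students[key] = fields

add_student("CHICAGO", "1/14/04", "41837", 1,
            stuname="ADAMS, J.W.", status="CONF")
add_student("CHICAGO", "1/14/04", "41837", 2,
            stuname="BAKER, R.T.", status="WAIT")
```

Looking up sequence number 1 by itself would be ambiguous across classes; the full concatenated key identifies exactly one occurrence.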

Step 2. isolating duplicate aggregate values: Look at multiple occurrences of the
aggregate—in this case, the values you might have for two classes. Table 16 shows
multiple occurrences (2) of the same data elements. As you look at this table, check
for duplicate values. Remember that both occurrences describe one course.
Table 16. Multiple occurrences of class aggregate
Data element list Occurrence 1 Occurrence 2
EDCNTR CHICAGO NEW YORK
DATE(START) 1/14/04 3/10/04
CRSNAME TRANS THEORY TRANS THEORY
CRSCODE 41837 41837
LENGTH 10 DAYS 10 DAYS
INSTRS multiple multiple
STUSEQ# multiple multiple
STUNAME multiple multiple
CUST multiple multiple
LOCTN multiple multiple
STATUS multiple multiple
ABSENCE multiple multiple
GRADE multiple multiple



The data elements defined as multiple are the data elements that repeat. The
values in these elements are not the same. The aggregate is always unique for a
particular class.

In this step, compare the two occurrences and shift the fields with duplicate values
(TRANS THEORY and so on) to a higher level. If you need to, choose a controlling
key for aggregates that do not yet have keys.

In Table 16 on page 53, CRSNAME, CRSCODE, and LENGTH are the fields that
have duplicate values. Much of this process is intuitive. Student status and grade,
although they can have duplicate values, should not be separated because they are
not meaningful values by themselves. These values would not be used to identify a
particular student. This becomes clear when you remember to keep data elements
with their controlling keys. When you separate duplicate values, you have the
structure shown in Figure 18.

Figure 18. Current roster after step 2
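Step 2 can be sketched in Python as follows. This fragment is illustrative only; the occurrence values are taken from the class aggregate example. Fields whose values duplicate across occurrences of one course (CRSNAME, CRSCODE, LENGTH) are promoted into a single course-level record, and the remaining fields stay at the class level.

```python
# Sketch only of step 2: promote duplicate-valued fields to a higher level.
occurrences = [
    {"EDCNTR": "CHICAGO",  "DATE": "1/14/04", "CRSNAME": "TRANS THEORY",
     "CRSCODE": "41837", "LENGTH": "10 DAYS"},
    {"EDCNTR": "NEW YORK", "DATE": "3/10/04", "CRSNAME": "TRANS THEORY",
     "CRSCODE": "41837", "LENGTH": "10 DAYS"},
]

course_fields = ("CRSCODE", "CRSNAME", "LENGTH")
courses = {}   # one record per course, keyed by its controlling key CRSCODE
classes = []   # class-level occurrences keep only non-course fields

for occ in occurrences:
    courses[occ["CRSCODE"]] = {f: occ[f] for f in course_fields}
    classes.append({k: v for k, v in occ.items() if k not in course_fields})
```

After the split, the duplicated course values collapse into one course record while each class occurrence remains distinct, mirroring the structure in Figure 18.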

Step 3. grouping data elements with their controlling keys: This step is often a
check on the first two steps. (Sometimes the first two steps have already done
what this step instructs you to do.)

At this stage, make sure that each data element is in the group that contains its
controlling key. The data element should depend on the full key. If the data
element depends only on part of the key, separate the data element along with the
partial (controlling) key on which it depends.

In this example, CUST and LOCTN do not depend on the STUSEQ#. They are
related to the student, but they do not depend on the student. They identify the
company and company address of the student.

CUST and LOCTN are not dependent on the course, the Ed Center, or the date,
either. They are separate from all of these things. Because a student is only
associated with one CUST and LOCTN, but a CUST and LOCTN can have many
students attending classes, the CUST and LOCTN aggregate should be above the
student aggregate.
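A minimal Python sketch of this separation follows; it is illustrative only and the representation is hypothetical. Because CUST and LOCTN depend on their own key rather than on the full student key, they move into a customer aggregate that many student occurrences can share.

```python
# Sketch only of step 3: CUST and LOCTN form their own aggregate;
# each student refers to one customer, but one customer can have
# many students attending classes.
customers = {}   # (CUST, LOCTN) -> customer aggregate occurrence
students = []

def enroll(cust, loctn, stu_seq, stu_name):
    key = (cust, loctn)
    # Create the customer aggregate once, no matter how many students share it.
    customers.setdefault(key, {"CUST": cust, "LOCTN": loctn})
    students.append({"STUSEQ#": stu_seq, "STUNAME": stu_name,
                     "CUSTOMER": key})

enroll("XYZ", "SOUTH BEND, IND", 1, "ADAMS, J.W.")
enroll("XYZ", "SOUTH BEND, IND", 3, "DRAKE, R.A.")
```

Here two student occurrences share a single customer aggregate, which is why the customer aggregate sits above the student aggregate in the structure.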

Figure 19 shows these aggregates and keys indicated with leading asterisks (*).
Figure 19 shows what the structure looks like when you separate CUST and
LOCTN.

Figure 19. Current roster after step 3

The keys for the data aggregates are shown in Table 17.
Table 17. Data aggregates and keys for current roster after step 3
Data aggregate Keys
Course aggregate CRSCODE
Class aggregate CRSCODE, EDCNTR, DATE
Customer aggregate CUST, LOCTN



Table 17. Data aggregates and keys for current roster after step 3 (continued)
Data aggregate Keys
Student aggregate (when viewed from the customer aggregate in
Figure 19 on page 55 instead of from the course aggregate in
Figure 18 on page 54) CUST, LOCTN, STUSEQ#,
CRSCODE, EDCNTR, DATE
Instructor aggregate CRSCODE, EDCNTR, DATE, INSTRS

Deciding on the arrangement of the customer and location information is part of
designing a database. Data structuring should separate any inconsistent data
elements from the rest of the data elements.

Determining mappings
When you have arranged the data aggregates into a conceptual data structure, you
can examine the relationships between the data aggregates. A mapping between
two data aggregates is the quantitative relationship between the two. The reason
you record mappings is that they reflect relationships between segments in the
data structure that you have developed. If you store this information in an IMS
database, the DBA can construct a database hierarchy that satisfies all the local
views, based on the mappings. In determining mappings, it is easier to refer to the
data aggregates by their keys, rather than by their collected data elements.

The two possible relationships between any two data aggregates are:
v One-to-many
For each segment A, one or more occurrences of segment B exist. For example,
each class maps to one or more students.
Mapping notation shows this in the following way:

Class ──────── Student


v Many-to-many
Segment B has many A segments associated with it and segment A has many B
segments associated with it. In a hierarchic data structure, a parent can have one
or more children, but each child can be associated with only one parent. The
many-to-many association does not fit into a hierarchy, because in a
many-to-many association each child can be associated with more than one
parent.
Related Reading: For more information about analyzing data requirements, see
IMS Version 10: Database Administration Guide.
Many-to-many relationships occur between segments in two business processes.
A many-to-many relationship indicates a conflict in the way that two business
processes need to process those data aggregates. If you use the IMS full-function
database, you can solve this kind of processing conflict by using secondary
indexing or logical relationships. “Understanding how data structure conflicts
are resolved” on page 147 explains how to use these tools.

The mappings for the current roster are:


v Course ──────── Class
For each course, there might be several classes scheduled, but a class is
associated with only one course.
v Class ──────── Student
A class has many students enrolled in it, but a student might be in only one
class offering of this course.
56 Application Programming Planning Guide
v Class ──────── Instructor
A class might have more than one instructor, but an instructor only teaches one
class at a time.
v Customer/location ──────── Student
A customer might have several students attending a particular class, but each
student is only associated with one customer and location.
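The mapping notation above can be made concrete by modeling each one-to-many mapping as a dictionary from a parent occurrence to its list of children. The following sketch is illustrative only; the class occurrences and student keys are invented, and nothing here is an IMS facility:

```python
# Hypothetical model of two of the roster's one-to-many mappings.
# Course -> Class: one course, many scheduled classes.
course_to_classes = {
    "41837": [("CHICAGO", "1/14/96"), ("NEW YORK", "3/10/96")],
}

# Class -> Student: one class offering, many enrolled students.
class_to_students = {
    ("41837", "CHICAGO", "1/14/96"): ["STU01", "STU02", "STU03"],
}

def children(mapping, parent):
    """Return the 'many' side for a given parent occurrence."""
    return mapping.get(parent, [])

def parent_of(mapping, child):
    """In a one-to-many mapping each child has exactly one parent;
    more than one owner would indicate a many-to-many mapping."""
    owners = [p for p, kids in mapping.items() if child in kids]
    assert len(owners) <= 1, "many-to-many: child has multiple parents"
    return owners[0] if owners else None
```

A many-to-many mapping is exactly the case where `parent_of` would find more than one owner, which is why it cannot be represented directly in a hierarchy.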

Local view examples


This topic presents three more examples of designing a local view:
v The schedule of courses
v The instructor skills report
v The instructor schedules
This topic does not explain how to design a local view; it simply takes you
through the examples. Each example shows the following parts of designing a local
view:
1. Gather the data. For each example, the data elements are listed and two
occurrences of the data aggregate are shown. Two occurrences are shown
because you need to look at both occurrences when you look for repeating
fields and duplicate values.
2. Analyze the data relationships. First, group the data elements into a conceptual
data structure using these three steps:
a. Separate repeating data elements in a single occurrence of the data
aggregate by shifting them to a lower level. Keep data elements with their
keys.
b. Separate duplicating values in two occurrences of the data aggregate by
shifting those data elements to a higher level. Again, keep data elements
with their keys.
c. Group data elements with their keys. Make sure that all the data elements
within one aggregate have the same key. Separate any that do not.
3. Determine the mappings between the data aggregates in the data structure you
have developed.
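Step 2a, separating repeating data elements by shifting them to a lower level, can be sketched in code. The `split_repeating` helper below is invented for this illustration and is not part of IMS; the field names follow the course-schedule example:

```python
def split_repeating(occurrence, keys):
    """Split one aggregate occurrence into a parent aggregate (the
    single-valued elements) and child aggregates (one per position in
    the repeating fields), keeping data elements with their keys."""
    parent = {k: v for k, v in occurrence.items() if not isinstance(v, list)}
    repeats = {k: v for k, v in occurrence.items() if isinstance(v, list)}
    parent_keys = {k: occurrence[k] for k in keys}
    n = max((len(v) for v in repeats.values()), default=0)
    # Each position across the repeating fields becomes one child
    # aggregate, carried together with the parent's key fields.
    children = [
        dict(parent_keys, **{k: v[i] for k, v in repeats.items()})
        for i in range(n)
    ]
    return parent, children

course = {
    "CRSCODE": "41837", "CRSNAME": "TRANS THEORY",
    "LENGTH": "10 DAYS", "PRICE": "$280",
    "DATE": ["APRIL 14", "APRIL 21"], "EDCNTR": ["BOSTON", "CHICAGO"],
}
parent, classes = split_repeating(course, keys=["CRSCODE"])
```

Here the parent keeps CRSCODE, CRSNAME, LENGTH, and PRICE, while each child aggregate keeps CRSCODE together with one DATE and EDCNTR pair, matching the course/class structure developed in example 1.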

Example 1: schedule of courses


Headquarters keeps a schedule of all the courses given each quarter and
distributes it monthly. Headquarters wants the schedule to be sorted by course
code and printed in the format shown in Figure 20.

COURSE SCHEDULE

COURSE: TRANSISTOR THEORY COURSE CODE: 41837


LENGTH: 10 DAYS PRICE: $280

DATE LOCATION

APRIL 14 BOSTON
APRIL 21 CHICAGO
.
.
.
NOVEMBER 18 LOS ANGELES

Figure 20. Schedule of courses

1. Gather the data. Table 18 on page 58 lists the data elements and two
occurrences of the data aggregate.

Chapter 6. Designing an application: Data and local views 57


Table 18. Course schedule data elements
Data elements Occurrence 1 Occurrence 2
CRSNAME TRANS THEORY MICRO PROG
CRSCODE 41837 41840
LENGTH 10 DAYS 5 DAYS
PRICE $280 $150
DATE multiple multiple
EDCNTR multiple multiple

2. Analyze the data relationships. First, group the data elements into a conceptual
data structure.
a. Separate repeating data elements in one occurrence of the data aggregate by
shifting them to a lower level, as shown in Figure 21.

Figure 21. Course schedule after step 1

b. Next, separate duplicate values in two occurrences of the data aggregate by


shifting the data elements to a higher level.
This data aggregate does not contain duplicate values.
c. Group data elements with their controlling keys.
Data elements are grouped with their keys in the present structure. No
changes are necessary for this step.
The keys for the data aggregates are shown in Table 19.
Table 19. Data aggregates and keys for course schedule after step 1
Data aggregate Keys
Course aggregate CRSCODE
Class aggregate CRSCODE, EDCNTR, DATE

3. When you have developed a conceptual data structure, determine the


mappings for the data aggregates.



The mapping for this local view is:
Course ──────── Class

Example 2: instructor skills report


Each Ed Center needs to print a report showing the courses that its instructors are
qualified to teach. The report format is shown in Figure 22.

INSTRUCTOR SKILLS REPORT

INSTRUCTOR COURSE CODE COURSE NAME

BENSON, R. J. 41837 TRANS THEORY


MORRIS, S. R. 41837 TRANS THEORY
41850 CIRCUIT DESIGN
41852 LOGIC THEORY
.
.
.
REYNOLDS, P. W. 41840 MICRO PROG
41850 CIRCUIT DESIGN

Figure 22. Instructor skills report

1. Gather the data. Table 20 lists the data elements and two occurrences of the
data aggregate.
Table 20. Instructor skills data elements
Data elements Occurrence 1 Occurrence 2
INSTR REYNOLDS, P.W. MORRIS, S. R.
CRSCODE multiple multiple
CRSNAME multiple multiple

2. Analyze the data relationships. First, group the data elements into a conceptual
data structure.
a. Separate repeating data elements in one occurrence of the data aggregate by
shifting them to a lower level, as shown in Figure 23.

Figure 23. Instructor skills after step 1

b. Separate any duplicate values in the two occurrences of the data aggregate.
No duplicate values exist in this data aggregate.
c. Group data elements with their keys.



All data elements are grouped with their keys in the current data structure.
There are no changes to this data structure.
3. Determine the mappings for the data aggregates.
The mapping for this local view is:
Instructor ──────── Course

Example 3: instructor schedules


Headquarters wants to produce a report showing the schedules for all the
instructors. Figure 24 shows the report format.

INSTRUCTOR SCHEDULES

INSTRUCTOR COURSE CODE ED CENTER DATE

BENSON, R. J. TRANS THEORY 41837 CHICAGO 1/14/96


MORRIS, S. R. TRANS THEORY 41837 NEW YORK 3/10/96
LOGIC THEORY 41852 BOSTON 3/27/96
CIRCUIT DES 41850 CHICAGO 4/21/96
REYNOLDS, P. W. MICRO PROG 41840 NEW YORK 2/25/96
CIRCUIT DES 41850 LOS ANGELES 3/10/96

Figure 24. Instructor schedules

1. Gather the data. Table 21 lists the data elements and two occurrences of the
data aggregate.
Table 21. Instructor schedules data elements
Data elements Occurrence 1 Occurrence 2
INSTR BENSON, R. J. MORRIS, S. R.
CRSNAME multiple multiple
CRSCODE multiple multiple
EDCNTR multiple multiple
DATE(START) multiple multiple

2. Analyze the data relationships. First, group the data elements into a conceptual
data structure.
a. Separate repeating data elements in one occurrence of the data aggregate by
shifting data elements to a lower level as shown in Figure 25.

Figure 25. Instructor schedules step 1



b. Separate duplicate values in two occurrences of the data aggregate by
shifting data elements to a higher level as shown in Figure 26.
In this example, CRSNAME and CRSCODE can be duplicated for one
instructor or for many instructors, for example, 41837 for Benson and 41850
for Morris and Reynolds.

Figure 26. Instructor schedules step 2

c. Group data elements with their keys.


All data elements are grouped with their controlling keys in the current
data structure. No changes to the current data structure are required.
3. Determine the mappings for the data aggregates.
The mappings for this local view are:
Instructor ──────── Course
Course ──────── Class
An analysis of data requirements is necessary to combine the requirements of
the three examples presented in this section and to design a hierarchic structure
for the database based on these requirements.
Related Reading: For more information on analyzing data requirements, see
IMS Version 10: Database Administration Guide.



Chapter 7. Designing an application for APPC
Advanced Program-to-Program Communication (APPC) is IBM's preferred protocol
for program-to-program communication. Application programs can be distributed
throughout the network and communicate with each other in many hardware
architectures and software environments.

This section describes the APPC function of IMS TM.

Subsections:
v “Overview of APPC and LU 6.2”
v “Application program types”
v “Application objectives” on page 65
v “Choosing conversation attributes” on page 65
v “Conversation type” on page 66
v “Conversation state” on page 67
v “Synchronization level” on page 67
v “Distributed sync point” on page 68
v “Application programming interface for LU type 6.2” on page 72
v “LU 6.2 partner program design” on page 73

Related Reading: For more information on APPC, see:


v IMS Version 10: Application Programming Guide, which includes specific
information on APPC such as the application programming interface (API) and
descriptions of the APSB and DPSB calls.
v IMS Version 10: Communications and Connections Guide, which includes an
overview of APPC for LU 6.2 devices and CPI Communications concepts.

Overview of APPC and LU 6.2


APPC allows application programs using APPC protocols to enter IMS transactions
from LU 6.2 devices. The LU 6.2 application program runs on an LU 6.2 device
supporting APPC.

APPC creates an environment that allows:


v Remote LU 6.2 devices to enter IMS local and remote transactions
v IMS application programs to insert transaction output to LU 6.2 devices with no
coding changes to existing application programs
v New application programs to make full use of LU 6.2 device facilities
v Data integrity provided by IMS, even in LU 6.2 environments that do not have a
distributed sync-point function

Application program types


APPC/IMS is part of IMS TM that uses the CPI communications interface to
communicate with application programs. APPC/IMS supports the following types
of application programs for LU 6.2 processing:
v Standard DL/I

© Copyright IBM Corp. 1974, 2010 63


v Modified standard DL/I
v CPI Communications driven

Standard DL/I application program


A standard DL/I application program does not issue any CPI Communications
calls or establish any CPI-C conversations. This application program can
communicate with LU 6.2 products that replace other LU-type terminals using the
IMS API. A standard DL/I application program does not need to be modified,
recompiled, or bound, and it executes as it currently does.

Modified standard DL/I application program


A modified standard DL/I application program is a standard DL/I online IMS TM
application program that uses both DL/I calls and CPI Communications calls. It
can be an MPP, BMP, or IFP that can access full-function databases, DEDBs,
MSDBs, and DB2 for z/OS databases.

A modified standard DL/I application program uses CPI Communications (CPI-C)


calls to provide support for an LU 6.2 and non-LU 6.2 mixed network. The same
application program can be a standard DL/I on one execution, when the CPI
Communications ALLOCATE verb is not issued, and a modified standard DL/I on a
different execution when the CPI Communications ALLOCATE verb is issued.

A modified standard DL/I application program receives its messages using DL/I
GU calls to the I/O PCB and issues output responses using DL/I ISRT calls. CPI
Communications calls can also be used to allocate new conversations and to send
and receive data for them.

Related Reading: For a list of the CPI Communications calls, see Common
Programming Interface Communications Reference.

Use a modified standard DL/I application program when you want to use an
existing standard DL/I application program to establish a conversation with
another LU 6.2 device or the same network destination. The standard DL/I
application program is optionally modified and uses new functions, new
application and transaction definitions, and modified DL/I calls to initiate LU 6.2
application programs. Program calls and parameters are available to use the
IMS-provided implicit API and the CPI Communications explicit API.

CPI Communications driven program


A CPI Communications driven application program uses Commit and Backout calls,
and CPI Communications interface calls or LU 6.2 verbs for input and output
message processing. This application program uses the CPI Communications
explicit API, and can access full-function databases, DEDBs, MSDBs, and DB2 for
z/OS databases. An LU 6.2 device can activate a CPI Communications driven
application program only by allocating a conversation.

Unlike a standard DL/I or modified standard DL/I application program, input


and output message processing for a CPI Communications driven program uses
APPC/MVS buffers and bypasses IMS message queueing. Because these
application programs do not use the IMS message queue, they can control their
own execution with the partner LU 6.2 system. An IMS APSB call enables you to
allocate a PSB for accessing IMS databases and alternate PCBs.



The application program uses the Common Programming Interface Resource
Recovery (CPI-RR) SRRCMIT verb to initiate an IMS sync point and the CPI-RR
SRRBACK verb for backout. CPI Communications driven application programs use
the CPI-RR calls to initiate IMS sync point processing prior to program
termination.

A CPI Communications driven application program is able to:


v Access any type of database
v Receive and send large messages like the standard DL/I and modified standard
DL/I application programs
v Control the flow of input and output with CPI Communications calls
v Allocate multiple conversations with partner LU 6.2 devices
v Cause synchronization with conversation partners
v Use the IMS implicit API (for example, IMS queue services)
v Use IMS services (for example, sync point at program termination) regardless of
the API that is used

Application objectives
Each application type has a different purpose, and its ease-of-use varies depending
on whether the program is a standard DL/I, modified standard DL/I, or a CPI
Communications driven application program. Table 22 on page 65 lists the purpose
and ease-of-use for each application type (standard DL/I, modified standard DL/I,
and CPI-C driven). This information must be balanced with IMS resource use.
Table 22. Using application programs in APPC
                                      Ease of use
Purpose of             Standard DL/I   Modified standard   CPI-C driven
application program    program         DL/I program        program
Inquiry                Easy            Neutral             Very Difficult
Data Entry             Easy            Easy                Difficult
Bulk Transfer          Easy            Easy                Neutral
Cooperative            Difficult       Difficult           Desirable
Distributed            Difficult       Neutral             Desirable
High Integrity         Neutral         Neutral             Desirable
Client Server          Easy            Neutral             Very Difficult

Choosing conversation attributes


The LU 6.2 transaction program indicates how the transaction is to be processed by
IMS. Two processing modes are available: synchronous and asynchronous.

Synchronous conversation
A conversation is synchronous if the partner waits for the response on the same
conversation used to send the input data.

Synchronous processing is requested by issuing the RECEIVE_AND_WAIT verb after


the SEND_DATA verb. Use this mode for IMS response-mode transactions and IMS
conversational-mode transactions.

Chapter 7. Designing an application for APPC 65


Example:
MC_ALLOCATE TPN(MYTXN)
MC_SEND_DATA ’THIS CAN BE A RESPONSE MODE’
MC_SEND_DATA ’OR CONVERSATIONAL MODE’
MC_SEND_DATA ’IMS TRANSACTION’
MC_RECEIVE_AND_WAIT

For examples of transaction flow, see “LU 6.2 flow diagrams” on page 74.

Asynchronous conversation
A conversation is asynchronous if the partner program normally deallocates a
conversation after sending the input data. Output is sent to the TP name of
DFSASYNC.

Asynchronous processing is requested by issuing the DEALLOCATE verb after the


SEND_DATA verb. Use asynchronous processing for IMS commands, message
switches, and non-response, non-conversational transactions.

Example:
MC_ALLOCATE TPN(OTHERTXN)
MC_SEND_DATA ’THIS MUST BE A MESSAGE SWITCH, IMS COMMAND’
MC_SEND_DATA ’OR A NON-RESP NON-CONV TRANSACTION’
MC_DEALLOCATE

For examples of transaction flow, see “LU 6.2 flow diagrams” on page 74.

Asynchronous output delivery


Asynchronous output is held on the IMS message queue for delivery. When the
output is enqueued, IMS attempts to allocate a conversation to send this output. If
this fails, IMS holds the output for later delivery. This delivery can be initiated by
an operator command (/ALLOC), or by the enqueue of a new message for this LU
6.2 destination.

MSC synchronous and asynchronous conversation


MSC remote application messages from both synchronous and asynchronous APPC
conversations can be queued on the multiple systems coupling (MSC) link. These
messages can then be sent across the MSC link to a remote IMS for processing.

For examples of transaction flow, see “LU 6.2 flow diagrams” on page 74.

Conversation type
The APPC conversation type defines how data is passed to and retrieved from
APPC verbs. It is similar in concept to file blocking and affects both ends of the
conversation.

APPC supports two types of conversations:


Basic conversation
This low-level conversation allows programs to exchange data in a standardized
format. This format is a stream of data containing 2-byte length fields
(referred to as LLs) that specify the amount of data to follow before the
next length field. The typical data pattern is:
LL, data, LL, data



Each grouping of LL, data is referred to as a logical record. A basic
conversation is used to send multiple segments with one verb and to
receive maximum data with one verb.
Mapped conversation
This high-level conversation allows programs to exchange arbitrary data records
in data formats agreed upon by the application programmers. One send verb
results in one receive verb, and z/OS and VTAM handle the buffering.

Related Reading: For more information on basic and mapped conversations, see
v Systems Network Architecture: LU 6.2 Reference: Peer Protocols and
v Systems Network Architecture: Transaction Programmer's Reference Manual for LU
Type 6.2
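The LL,data stream of a basic conversation can be sketched as follows. This is an illustrative encoder/decoder, not an APPC API; for simplicity the LL value here counts only the data bytes that follow, whereas SNA defines the LL of a logical record to include the 2-byte length field itself, so adjust by 2 when comparing against a real trace:

```python
import struct

def pack_records(records):
    """Build an LL,data,LL,data... stream from a list of byte strings.
    Each 2-byte LL is big-endian, as on the wire."""
    out = b""
    for data in records:
        out += struct.pack(">H", len(data)) + data
    return out

def unpack_records(stream):
    """Split an LL-prefixed stream back into its logical records."""
    records, i = [], 0
    while i < len(stream):
        (ll,) = struct.unpack_from(">H", stream, i)
        records.append(stream[i + 2 : i + 2 + ll])
        i += 2 + ll
    return records
```

This mirrors why a basic conversation can send multiple logical records with one verb: the receiver can always rescan the stream record by record from the length fields alone.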

Conversation state
CPI Communications uses conversation state to determine what the next set of
actions will be. Examples of conversation states are:
RESET The initial state before communications begin.
SEND The program can send or optionally receive.
RECEIVE The program must receive or abort.
CONFIRM The program must respond to a partner.

The basic rules for APPC verbs are:


v The program that initiates the conversation speaks first.
v Only one APPC verb can be outstanding at a time.
v Programs take turns sending and receiving.
v The state of the conversation determines the verbs a program can issue.
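These rules can be sketched as a table-driven state check. The states and transitions below are a simplified subset invented for illustration, not the full APPC state table:

```python
# Verbs a program may issue in each conversation state (simplified).
ALLOWED = {
    "RESET":   {"ALLOCATE"},
    "SEND":    {"SEND_DATA", "RECEIVE_AND_WAIT", "DEALLOCATE"},
    "RECEIVE": {"RECEIVE_AND_WAIT"},
    "CONFIRM": {"CONFIRMED", "SEND_ERROR"},
}

# State changes caused by a verb; verbs absent here leave the state as-is.
NEXT = {
    ("RESET", "ALLOCATE"): "SEND",
    ("SEND", "RECEIVE_AND_WAIT"): "RECEIVE",
    ("SEND", "DEALLOCATE"): "RESET",
    ("CONFIRM", "CONFIRMED"): "RECEIVE",
}

def issue(state, verb):
    """Reject verbs that are not valid in the current conversation state."""
    if verb not in ALLOWED.get(state, set()):
        raise ValueError(f"{verb} is not valid in state {state}")
    return NEXT.get((state, verb), state)
```

The sketch captures the last rule directly: the current state, not the program's intent, determines which verb is accepted next.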

Synchronization level
The APPC synchronization level defines the protocol that is used when changing
conversation states. APPC and IMS support the following sync_level values:
NONE Specifies that the programs do not issue calls or recognize returned
parameters relating to synchronization.
CONFIRM Specifies that the programs can perform confirmation processing
on the conversation.
SYNCPT Specifies that the programs participate in coordinated commit
processing on resources that are updated during the conversation
under the RRS/MVS recovery platform. A conversation with this
level is also called a protected conversation.

Allocating a conversation with SYNCLVL=SYNCPT requires the Resource Recovery


Services (RRS/MVS) as the sync-point manager (SPM). RRS/MVS controls the
commitment of protected resources by coordinating the commit or backout request
with the participating owners of the updated resources, the resource managers.
IMS is the resource manager for DL/I, Fast Path data, and the IMS message
queues. The application program decides whether the data is to be committed or
aborted and communicates this decision to the SPM. The SPM then coordinates the
actions in support of this decision among the resource managers.



Related reading: For more information on SYNCLVL=SYNCPT, see IMS Version 10:
Communications and Connections Guide.

Distributed sync point


The Distributed Sync Point support enables IMS and remote application programs
(APPC or OTMA) to participate in protected conversations with coordinated
resource updates and recoveries. Before this support, IMS acted as the sync-point
manager. In this new scenario, z/OS manages the sync-point process on behalf of
the conversation participants: the application program and IMS (now acting as a
resource manager).

z/OS implements a system resource recovery platform, the Resource Recovery


Services/MVS (RRS/MVS). RRS/MVS supports the Common Programming
Interface - Resource Recovery (CPI-RR), an element of the SAA Common
Programming Interface that defines resource recovery and provides for the
coordinated management of resource recovery for both local and distributed
resources. In addition to RRS/MVS, a communications resource manager (called
APPC/PC for APPC/Protected Conversations) provides distribution of the
recovery.

In the APPC environment, a protected conversation is initiated when the


application program allocates an APPC conversation with SYNC_LEVEL=SYNCPT. Both
IMS and APPC are resource managers in this scenario. In the OTMA environment,
some additional code is required because OTMA is not a resource manager. The
additional code needed is an OTMA adapter, IBM supplied or equivalent. This
adapter indicates to IMS (in the OTMA message prefix) that this message is part of
a protected conversation, and thus IMS and the adapter are participants in the
coordinated commit process as managed by RRS/MVS.

Application programmers can now develop APPC application programs (local and
remote) and remote OTMA application programs that use RRS/MVS as the
sync-point manager, rather than IMS. This enhancement enables resources across
multiple platforms to be updated and recovered in a coordinated manner.

Distributed sync point concepts


The Distributed Sync Point support entails:
v Changes in IMS that allow it to function as a resource manager under RRS/MVS
v Changes to the application program environment that support using applications
in protected conversations
v Changes to some commands that aid the user

Introduction to resource recovery


Most customers maintain computer resources that are essential to the survival of
their businesses. When these resources are updated in a controlled and
synchronized manner, they are said to be protected resources or recoverable
resources. These resources can all reside locally (on the same system) or be
distributed (across nodes in the network). The protocols and mechanisms for
regulating the updating of multiple protected resources in a consistent manner is
provided in z/OS with Resource Recovery Services/MVS (RRS/MVS).

Participants in resource recovery: As shown in Figure 27 on page 69 the Resource


Recovery environment is composed of three participants:
v Sync-point manager



v Resource managers
v Application program

RRS/MVS is the sync-point manager, also known as the coordinator. The


sync-point manager controls the commitment of protected resources by
coordinating the commit request (or backout request) with the resource managers,
the participating owners of the updated resources. These resource managers are
known as participants in the sync-point process. IMS participates as a resource
manager for DL/I, Fast Path, and DB2 for z/OS data if this data has been updated
in such an environment.

The final participant in this resource recovery protocol is the application program,
the program accessing and updating protected resources. The application program
decides whether the data is to be committed or aborted and relates this decision to
the sync-point manager. The sync-point manager then coordinates the actions in
support of this decision among the resource managers.

Figure 27. Participants in resource recovery

Two-phase commit protocol: As shown in Figure 28 on page 70, the two-phase


commit protocol is a process involving the sync-point manager and the resource
manager participants to ensure that of the updates made to a set of resources by a
third participant, the application program, either all updates occur or none. In
simple terms, the application program decides to commit its changes to some
resources; this commit request is made to the sync-point manager, which then
polls all of the resource managers on the feasibility of the commit. This is the prepare
phase, often called phase one. Each resource manager votes yes or no to the
commit.

After the sync-point manager has gathered all the votes, phase two begins. If all
votes are to commit the changes, then the phase two action is commit. Otherwise,
phase two becomes a backout. System failures, communication failures, resource
manager failures, or application failures are not barriers to the completion of the
two-phase commit process.



The work done by various resource managers is called a unit of recovery (UOR) and
spans the time from one consistent point of the work to another consistent point,
usually from one commit point to another. It is the unit of recovery that is the
object of the two-phase commit process.

Figure 28. Two-phase commit process with one resource manager

Notes:
1. The application and IMS make a connection.
2. IMS expresses protected interest in the work started by the application. This
tells RRS/MVS that IMS will participate in the 2-phase commit process.
3. The application makes a read request to an IMS resource.
4. Control is returned to the application following its read request.
5. The application updates a protected resource.
6. Control is returned to the application following its update request.
7. The application requests that the update be made permanent by way of the
SRRCMIT call.
8. RRS/MVS calls IMS to do the prepare (phase 1) process.
9. IMS returns to RRS/MVS with its vote to commit.
10. RRS/MVS calls IMS to do the commit (phase 2) process.
11. IMS informs RRS/MVS that it has completed phase 2.
12. Control is returned to the application following its commit request.
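The numbered flow above reduces to a small simulation of the two-phase commit protocol. The classes and behavior here are illustrative only; real RRS/MVS also logs its decisions, survives failures, and resolves in-doubt work:

```python
class ResourceManager:
    """Stand-in for a participant such as IMS; votes in phase 1 and
    commits or backs out in phase 2."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.outcome = name, can_commit, None

    def prepare(self):            # phase 1: vote yes or no
        return self.can_commit

    def commit(self):             # phase 2, all votes were yes
        self.outcome = "committed"

    def backout(self):            # phase 2, at least one vote was no
        self.outcome = "backed out"

def sync_point(resource_managers):
    """Coordinate one unit of recovery, as the sync-point manager does:
    either all participants commit or none do."""
    votes = [rm.prepare() for rm in resource_managers]
    action = "commit" if all(votes) else "backout"
    for rm in resource_managers:
        rm.commit() if action == "commit" else rm.backout()
    return action
```

A single "no" vote in phase 1 turns phase 2 into a backout for every participant, which is the all-or-nothing property the protocol guarantees for a unit of recovery.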

Local versus distributed: The residence of the participants involved in the


recovery process determines whether that recovery is considered local or



distributed. In a local recovery scenario, all the participants reside on the same
single system. In a distributed recovery scenario, the participants are scattered
over multiple systems. Figure 29 shows the communication between Resource
Manager participants in a distributed resource recovery. There is no conceptual
difference between a local and distributed recovery in the functions provided by
RRS/MVS. However, to distribute the original sync-point manager's function to
involve remote sync-point managers, a special resource manager is required. The
APPC communications resource manager provides this support in the distributed
environment.

Figure 29. Distributed resource recovery

Summary of RRS/MVS support


The objective of RRS/MVS is to provide a system resource recovery platform such
that applications executing on MVS can have access to local and distributed
resources and have system coordinated recovery management of these resources.
The support includes:
v A sync-point manager to coordinate the two-phase commit process



v Implementation of the SAA Commit and Backout callable services for use by
application programs
v A mechanism to associate resources with an application instance
v Services for resource manager registration and participation in the two-phase
commit process with RRS/MVS
v Services to allow resource managers to express interest in an application instance
and be informed of commit and backout requests
v Services to enable resource managers to obtain system data to restore their
resources to a consistent state
v A communications resource manager (called APPC/PC for APPC/Protected
Conversations) so that distributed applications can coordinate their recovery
with participating local resource managers

Restriction:
v Extended Recovery Facility (XRF)
Running protected conversations in an IMS-XRF environment does not
guarantee that the alternate system can resume and resolve any unfinished work
started by the active system. This process is not guaranteed because a failed
resource manager must re-register with its original RRS system if the RRS is still
available when the resource manager restarts. Only if the RRS on the active
system is not available can an XRF alternate register with another RRS in the
sysplex and obtain the incomplete unit of recovery data of the failing active.
Recommendation: Because IMS retains indoubt units-of-recovery indefinitely
until they are resolved, a switch back to the original active system should be done
as soon as possible to pick up the unit-of-recovery information and to resolve and
complete all the work of the resource managers involved. If this is not possible,
the indoubt units-of-recovery can be resolved using commands.
v Remote Site Recovery (RSR)
Active systems tracked by a remote system in an RSR environment can
participate in protected conversations, although any indoubt units-of-recovery
that remain after a takeover to the remote site must be resolved using
commands. This is because the remote site is probably not part of the active
sysplex, so the new IMS cannot acquire unfinished
unit-of-recovery information from RRS. IMS provides commands to interrogate
protected conversation work and to resolve the unfinished unit-of-recovery if
necessary.
v Batch and Non-Message-Driven BMPs in a DBCTL Environment
Distributed Sync Point does not support the IMS batch environment. In a
DBCTL environment, there are no inbound protected conversations possible.
However, a BMP in a DBCTL environment can allocate an outbound protected
conversation, which will be supported by Distributed Sync Point and RRS/MVS.

Impact on the network


Network traffic will increase as a result of the conversation participants and the
sync-point manager communicating with each other.

Application programming interface for LU type 6.2


IMS application programs can use the IMS implicit LU 6.2 API to access LU 6.2
devices. This API provides compatibility with non-LU 6.2 device types so that the
same application program can be used from both LU 6.2 and non-LU 6.2 devices.
The API adds to the APPC interface by supplying IMS-provided processing for the



application program. You can use the explicit CPI Communications interface for
APPC functions and facilities for new or rewritten IMS application programs.

Implicit API
The implicit API accesses an APPC conversation indirectly. This API uses the
standard DL/I calls (GU, ISRT, PURG) to send and receive data. It allows application
programs that are not specific to LU 6.2 protocols to use LU 6.2 devices. The API
uses new and changed DL/I calls (CHNG, INQY, SETO) to utilize LU 6.2. Using the
existing IMS application programming base, you can write specific applications for
LU 6.2 using this API and not using the CPI Communications calls. Although the
implicit API uses only some of the LU 6.2 capabilities, it can be a useful
simplification for many applications. The implicit API also provides function
outside of LU 6.2, like message queueing and automatic asynchronous message
delivery.

IMS generates all CPI Communications calls under the implicit API. The
application interaction is strictly with the IMS message queue.

The remote LU 6.2 system must be able to handle the LU 6.2 flows. APPC/MVS
generates these flows from the CPI Communications calls issued by the IMS
application program using the implicit API. An IMS application program can use
the explicit API to issue CPI Communications calls directly. This is useful with
remote LU 6.2 systems that have incomplete LU 6.2 implementations, or that are
incompatible with the IMS implicit API support. See the LU 6.2 data flow
examples under “LU 6.2 partner program design.”

The existing API is extended so that:


v Asynchronous LU 6.2 output is created by using alternate PCBs that reference
LU 6.2 destinations. The DL/I CHNG call can supply parameters to specify an LU
6.2 destination. Default values are used for omitted parameters.
v An application program can retrieve the current conversation attributes such as
the conversation type (basic or mapped), the sync_level (NONE, CONFIRM, or
SYNCPT), and asynchronous or synchronous conversation.
v A terminal message switch can be used to and from LU 6.2 devices. See “LU 6.2
partner program design” for a description of the message switch.
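The implicit-API pattern described above (GU to retrieve input from the message
queue, CHNG to point an alternate PCB at an LU 6.2 destination, ISRT and PURG to
build and complete the output message) can be sketched as follows. This is a
simulation only: the functions, the in-memory queue, and the destination string
are illustrative stand-ins, not the real DL/I call interface.

```python
# Simulated sketch of the implicit-API call sequence described above.
# The "calls" below only mimic the roles of the DL/I GU, CHNG, ISRT,
# and PURG calls against an in-memory queue; none of this is real IMS.

from collections import deque

input_queue = deque(["ORDER 123"])     # messages queued by IMS
output_messages = []                   # what ISRT/PURG would send

class AltPCB:
    """Stand-in for an alternate PCB that can name an LU 6.2 destination."""
    def __init__(self):
        self.destination = None

def GU(queue):
    """Retrieve the next input message (like GU on the I/O PCB)."""
    return queue.popleft() if queue else None

def CHNG(pcb, destination):
    """Point the alternate PCB at an LU 6.2 destination."""
    pcb.destination = destination

def ISRT(pcb, segment):
    """Insert an output segment for the current destination."""
    output_messages.append((pcb.destination, segment))

def PURG(pcb):
    """Mark the end of the message (asynchronous send point)."""
    output_messages.append((pcb.destination, "<end of message>"))

alt = AltPCB()
msg = GU(input_queue)                  # 1. retrieve the transaction input
CHNG(alt, "LUNAME.TPNAME")             # 2. select an LU 6.2 destination
ISRT(alt, "REPLY FOR " + msg)          # 3. build the reply
PURG(alt)                              # 4. complete the asynchronous message
print(output_messages)
```

In a real MPP these calls are issued against IMS-supplied PCBs, and IMS itself
generates the underlying CPI Communications calls, as the text above notes.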

Explicit API
The explicit API (the CPI Communications API) can be used by any IMS
application program to access an APPC conversation directly. IMS resources are
available to the CPI Communications driven application program only if the
application issues the APSB (Allocate PSB) call. The CPI Communications driven
application program must use the CPI-RR SRRCMIT and SRRBACK verbs to initiate an
IMS sync point or backout, or if SYNCLVL=SYNCPT is specified, to communicate
the sync point decision to the RRS/MVS sync point manager.

Related Reading: For a description of the SRRCMIT and SRRBACK verbs, see SAA CPI
Resource Recovery Reference.
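As a hedged illustration of the responsibility described above, the sketch
below models the commit-or-backout decision that a CPI Communications driven
program must make explicitly. The `srrcmit`/`srrback` functions here are
simulated stand-ins for the CPI-RR SRRCMIT and SRRBACK verbs, not the actual
SAA resource-recovery interface.

```python
# Sketch of the commit/backout decision in a CPI-C driven program.
# srrcmit/srrback are simulated stand-ins for the CPI-RR verbs.

def srrcmit(log):
    log.append("SRRCMIT")      # would initiate the IMS sync point
    return "RC_OK"

def srrback(log):
    log.append("SRRBACK")      # would initiate IMS backout
    return "RC_BACKED_OUT"

def finish_unit_of_work(updates_succeeded, log):
    """A CPI-C driven program must end its protected work explicitly:
    commit with SRRCMIT on success, back out with SRRBACK on failure."""
    return srrcmit(log) if updates_succeeded else srrback(log)

log = []
print(finish_unit_of_work(True, log), finish_unit_of_work(False, log), log)
```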

LU 6.2 partner program design


The flow of a transaction that is sent from an LU 6.2 device differs, depending on
the conversation attributes and synchronization levels. Different results occur, and
the partner system takes actions accordingly. The flow diagrams and the integrity
tables in this section present these differences.

Chapter 7. Designing an application for APPC 73


LU 6.2 flow diagrams
Figure 30 on page 75 through Figure 38 on page 83 show the flow between a
synchronous or asynchronous LU 6.2 application program and an IMS application
program in a single (local) IMS system.

Figure 39 on page 84 through Figure 42 on page 87 show the flow between a
synchronous or asynchronous LU 6.2 application program in a single (local) IMS
system and an IMS application program in a remote IMS system across a multiple
systems coupling (MSC) link.

Figure 43 on page 88 and Figure 44 on page 89 show commit scenarios with
SYNC_LEVEL=SYNCPT. Figure 45 on page 90 shows a backout scenario with
SYNC_LEVEL=SYNCPT.

Differences in buffering and in the encapsulation of control data with user data
can cause variations in these flows. The control data are the three fields
returned by the APPC Receive verb: Status_received, Data_received, and
Request_to_send_received. Variations based on these differences do not affect
the function or use of the flows.

Figure 30. Flow of a local IMS synchronous transaction when Sync_level=None

Figure 31 on page 76 shows the flow of a local synchronous transaction when
Sync_level is Confirm.

Figure 31. Flow of a local IMS synchronous transaction when Sync_level=Confirm

Figure 32 on page 77 shows the flow of a local asynchronous transaction when
Sync_level is None.

Figure 32. Flow of a local IMS asynchronous transaction when Sync_level=None

Figure 33 on page 78 shows the flow of a local asynchronous transaction when
Sync_level is Confirm.

Figure 33. Flow of a local IMS asynchronous transaction when Sync_level=Confirm

Figure 34 on page 79 shows the flow of a local conversational transaction when
Sync_level is None.

Figure 34. Flow of a local IMS conversational transaction when Sync_level=None

Figure 35 on page 80 shows the flow of a local IMS command when Sync_level is
None.

Figure 35. Flow of a local IMS command when Sync_level=None

Figure 36 on page 81 shows the flow of a local asynchronous command when
Sync_level is Confirm.

Figure 36. Flow of a local IMS asynchronous command when Sync_level=Confirm

Figure 37 on page 82 shows the flow of a message switch when Sync_level is
None.

Figure 37. Flow of a message switch when Sync_level=None

A synchronous conversation is used to verify that no error occurred while
processing DFSAPPC. If an error occurs, the error message is returned before the
DEALLOCATE.

Figure 38 on page 83 shows the flow of a CPI-C driven program when Sync_level
is None.

Figure 38. Flow of a local CPI communications driven program when Sync_level=None

Figure 39 on page 84 shows the flow of a remote synchronous transaction when
Sync_level is None.

Figure 39. Flow of a remote IMS synchronous transaction when Sync_level=None

Figure 40 on page 85 shows the flow of a remote asynchronous transaction when
Sync_level is None.

Figure 40. Flow of a remote IMS asynchronous transaction when Sync_level=None

Figure 41 on page 86 shows the flow of a remote asynchronous transaction when
Sync_level is Confirm.

Figure 41. Flow of a remote IMS asynchronous transaction when Sync_level=Confirm

Figure 42 on page 87 shows the flow of a remote synchronous transaction when
Sync_level is Confirm.

Figure 42. Flow of a remote IMS synchronous transaction when Sync_level=Confirm

The scenarios shown in Figure 43 on page 88, Figure 44 on page 89, Figure 45 on
page 90, Figure 46 on page 91, and Figure 47 on page 92 provide examples of the
two-phase process for the supported application program types. The LU 6.2 verbs
are used to illustrate supported functions and interfaces between the components.
Only parameters pertinent to the examples are included. This does not imply that
other parameters are not supported.
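As background for the scenarios that follow, the two-phase process can be
sketched minimally as a sync-point manager that collects prepare votes from all
participants and then commits only if every vote is yes. This is a generic
illustration of the technique, not the RRS/MVS implementation; all names are
invented for the example.

```python
# Minimal two-phase commit sketch: the coordinator asks every
# participant to prepare, then commits only if all vote yes.
# Generic illustration only -- not the RRS/MVS protocol itself.

def two_phase_commit(participants):
    """participants: list of (name, prepare_ok) pairs.
    Returns (outcome, log of actions taken)."""
    log = []
    # Phase 1: prepare -- collect votes from every participant.
    all_ok = True
    for name, prepare_ok in participants:
        log.append(("prepare", name, "yes" if prepare_ok else "no"))
        if not prepare_ok:
            all_ok = False
    # Phase 2: commit if unanimous, otherwise back out everyone.
    action = "commit" if all_ok else "backout"
    for name, _ in participants:
        log.append((action, name))
    return action, log

outcome, log = two_phase_commit([("IMS", True), ("remote LU", True)])
print(outcome)
```

A single "no" vote in phase 1 turns phase 2 into a backout for every
participant, which is the behavior the backout scenarios below rely on.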

Figure 43 shows a standard DL/I program commit scenario when
Sync_Level=Syncpt.

Figure 43. Standard DL/I program commit scenario when Sync_Level=Syncpt

Notes:
1. Sync_Level=Syncpt triggers a protected resource update.
2. This application program inserts output for the remote application to the
   IMS message queue.
3. The GU initiates the transfer of the output.
4. The remote application sends a Confirmed after receiving data (output).
5. IMS issues ATRCMIT (equivalent to SRRCMIT) to start the two-phase process.

Figure 44 on page 89 shows a CPI-C driven commit scenario when
Sync_Level=Syncpt.

Figure 44. CPI-C driven commit scenario when Sync_Level=Syncpt

Notes:
1. Sync_Level=Syncpt triggers a protected resource update.
2. The programs send and receive data.
3. The remote application decides to commit the updates.
4. The CPI-C program issues SRRCMIT to commit the changes.
5. The commit return code is returned to the remote application.

Figure 45 on page 90 shows a standard DL/I program backout scenario when
Sync_Level=Syncpt.

Figure 45. Standard DL/I program U119 backout scenario when Sync_Level=Syncpt

Notes:
1. Sync_Level=Syncpt triggers a protected-resource update.
2. This application program inserts output for the remote application to the
   IMS message queue.
3. The GU initiates the transfer of the output.
4. The remote application decides to back out any updates.
5. IMS abends the application with a U119 to back out the application.
6. The backout return code is returned to the remote application.

Figure 46 on page 91 shows a standard DL/I program U0711 backout scenario when
Sync_Level=Syncpt.

Figure 46. Standard DL/I program U0711 backout scenario when Sync_Level=Syncpt

Notes:
1. Sync_Level=Syncpt triggers a protected-resource update.
2. This application program inserts output for the remote application to the
   IMS message queue.
3. The GU initiates the transfer of the output.
4. The remote application sends a Confirmed after receiving data (output).
5. IMS issues ATBRCVW on behalf of the DL/I application to wait for a commit
   or backout.
6. The remote application decides to back out any updates.
7. IMS abends the application with U0711 to back out the application.
8. The backout return code is returned to the remote application.

Figure 47 on page 92 shows a standard DL/I program ROLB scenario when
Sync_Level=Syncpt.

Figure 47. Standard DL/I program ROLB scenario when Sync_Level=Syncpt

Notes:
1. Sync_Level=Syncpt triggers a protected-resource update.
2. This application program inserts output for the remote application to the
   IMS message queue.
3. The DL/I program issues a ROLB. ABENDU0711 with Return Code X'20' is
   issued.

Figure 48 on page 93 shows multiple transactions in the same commit when
Sync_Level=Syncpt.

Figure 48. Multiple transactions in same commit when Sync_Level=Syncpt

Notes:
1. An allocate with Sync_Level=Syncpt triggers a protected resource update
   with Conversation 1.
2. The first transaction provides the output for Conversation 1.
3. An allocate with Sync_Level=Syncpt triggers a protected resource update
   with Conversation 2.
4. The second transaction provides the output for Conversation 2.
5. The remote application issues SRRCMIT to commit both transactions.
6. IMS issues ATRCMIT to start the two-phase process on behalf of each DL/I
   application.

Integrity tables
Table 23 shows the results, from the viewpoint of the IMS partner system, of
normal conversation completion, abnormal conversation completion due to a
session failure, and abnormal conversation completion due to non-session failures.
These results apply to asynchronous and synchronous conversations and both
input and output. This table also shows the outcome of the message, and the
action that the partner system takes when it detects the failure. An example of an
action, under “LU 6.2 Session Failure,” is a programmable work station (PWS)
resend.
Table 23. Message integrity of conversations

Conversation attributes  Normal            LU 6.2 session failure (1)  Other failure (2)
Synchronous              Input: Reliable   Input: PWS resend           Input: Reliable
Sync_level=NONE          Output: Reliable  Output: PWS resend          Output: Reliable
Synchronous              Input: Reliable   Input: PWS resend           Input: Reliable
Sync_level=CONFIRM       Output: Reliable  Output: Reliable            Output: Reliable
Synchronous              Input: Reliable   Input: PWS resend           Input: Reliable
Sync_level=SYNCPT        Output: Reliable  Output: Reliable            Output: Reliable
Asynchronous             Input: Ambiguous  Input: Undetectable         Input: Undetectable
Sync_level=NONE          Output: Reliable  Output: Reliable            Output: Reliable
Asynchronous             Input: Reliable   Input: PWS resend           Input: Reliable
Sync_level=CONFIRM       Output: Reliable  Output: Reliable            Output: Reliable
Asynchronous             Input: Reliable   Input: PWS resend           Input: Reliable
Sync_level=SYNCPT        Output: Reliable  Output: Reliable            Output: Reliable

Notes:
1. A session failure is a network-connectivity breakage.
2. A non-session failure is any other kind of failure, such as invalid security
   authorization.
3. IMS resends asynchronous output if the CONFIRM is lost; therefore, the PWS must
   tolerate duplicate output.
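The outcomes in Table 23 can also be encoded as a small lookup structure, which
is sometimes convenient when planning error handling on the partner (PWS) side.
The following Python sketch simply transcribes the table; the structure and
function names are illustrative, not any IMS-supplied interface.

```python
# Message-integrity outcomes transcribed from Table 23, keyed by
# (conversation mode, sync level) and failure condition. Each value
# is an (input outcome, output outcome) pair. Illustrative only.

OUTCOMES = {
    ("synchronous", "NONE"): {
        "normal":  ("Reliable", "Reliable"),
        "session": ("PWS resend", "PWS resend"),
        "other":   ("Reliable", "Reliable"),
    },
    ("synchronous", "CONFIRM"): {
        "normal":  ("Reliable", "Reliable"),
        "session": ("PWS resend", "Reliable"),
        "other":   ("Reliable", "Reliable"),
    },
    ("synchronous", "SYNCPT"): {
        "normal":  ("Reliable", "Reliable"),
        "session": ("PWS resend", "Reliable"),
        "other":   ("Reliable", "Reliable"),
    },
    ("asynchronous", "NONE"): {
        "normal":  ("Ambiguous", "Reliable"),
        "session": ("Undetectable", "Reliable"),
        "other":   ("Undetectable", "Reliable"),
    },
    ("asynchronous", "CONFIRM"): {
        "normal":  ("Reliable", "Reliable"),
        "session": ("PWS resend", "Reliable"),
        "other":   ("Reliable", "Reliable"),
    },
    ("asynchronous", "SYNCPT"): {
        "normal":  ("Reliable", "Reliable"),
        "session": ("PWS resend", "Reliable"),
        "other":   ("Reliable", "Reliable"),
    },
}

def message_integrity(mode, sync_level, failure):
    """Return the (input, output) outcome for a conversation."""
    return OUTCOMES[(mode, sync_level)][failure]

print(message_integrity("synchronous", "NONE", "session"))
```

For example, a partner program can check whether a given failure condition
requires it to resend, as in the "PWS resend" cells of the table.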

Table 24 shows the specifics of the processing windows when integrity is
compromised (the message is either lost or its state is ambiguous). The table
indicates the relative probability of an occurrence of each window and whether
output is lost or duplicated.

A Sync_level value of NONE does not apply to asynchronous output, because IMS
always uses Sync_level=CONFIRM for such output.
Table 24. Results of processing when integrity is compromised

                      State of window (1)
Conversation          before accepting     Probability of  Possible action while  Probability of action
attributes            transaction          window state    sending response       while sending response
Synchronous           ALLOCATE to          Medium          Can lose or send       Medium
Sync_level=NONE       PREPARE_TO_RECEIVE                   duplicate output.
                      return
Synchronous           PREPARE_TO_RECEIVE   Small           CONFIRM to IMS         Small
Sync_level=CONFIRM    to PREPARE_TO_                       receipt. Can cause
                      RECEIVE return                       duplicate output.
Synchronous           PREPARE_TO_RECEIVE   Small           CONFIRM to IMS         Small
Sync_level=SYNCPT     to PREPARE_TO_                       receipt. Can cause
                      RECEIVE return                       duplicate output.
Asynchronous          ALLOCATE to          High            CONFIRMED to IMS       Small
Sync_level=NONE       DEALLOCATE                           receipt. Can cause
                                                           duplicate output.
Asynchronous          PREPARE_TO_RECEIVE   Small (2)       CONFIRMED to IMS       Small
Sync_level=CONFIRM    to PREPARE_TO_                       receipt. Can cause
                      RECEIVE return                       duplicate output.
Asynchronous          PREPARE_TO_RECEIVE   Small (2)       CONFIRMED to IMS       Small
Sync_level=SYNCPT     to PREPARE_TO_                       receipt. Can cause
                      RECEIVE return                       duplicate output.

Notes:
1. The term window refers to a period of time when certain events can occur, such as the
   consequences described in this table.
2. Can be recoverable.

Table 25 indicates how IMS recovers APPC transactions across IMS warm starts,
XRF takeovers, APPC session failures, and MSC link failures.
Table 25. Recovering APPC messages

                                   IMS warm start                     APPC (LU 6.2)    MSC link
Message type                       (NRE or ERE)      XRF takeover     session failure  failure
Local Recoverable Tran.,
Non Resp., Non Conversation
- APPC Sync. Conv. Mode            Discarded (2)     Discarded (4)    Discarded (6)    N/A (9)
- APPC Async. Conv. Mode           Recovered         Recovered        Recovered (1)    N/A (9)
Local Recoverable Tran.,
Conv. or Resp. Mode
- APPC Sync. Conv. Mode            Discarded (2)     Discarded (4)    Discarded (6)    N/A (9)
- APPC Async. Conv. Mode           N/A (8)           N/A (8)          N/A (8)          N/A (8, 9)
Local Non Recoverable Tran.
- APPC Sync. Conv. Mode            Discarded (2)     Discarded (4)    Discarded (6)    N/A (9)
- APPC Async. Conv. Mode           Discarded (2)     Discarded (4)    Recovered (1)    N/A (9)
Remote Recoverable Tran.,
Non Resp., Non Conv.
- APPC Sync. Conv. Mode            Discarded (2, 5)  Discarded (3, 5) Recovered (1)    Recovered (7)
- APPC Async. Conv. Mode           Recovered         Recovered        Recovered (1)    Recovered (7)
Remote Recoverable Tran.,
Conv. or Resp. Mode
- APPC Sync. Conv. Mode            Discarded (2, 5)  Discarded (3, 5) Recovered (1)    Recovered (7)
- APPC Async. Conv. Mode           N/A (8)           N/A (8)          N/A (8)          N/A (8)
Remote Non Recoverable Tran.
- APPC Sync. Conv. Mode            Discarded (2, 5)  Discarded (3, 5) Recovered (1)    Recovered (7)
- APPC Async. Conv. Mode           Discarded (2, 5)  Discarded (3, 5) Recovered (1)    Recovered (7)

Notes:
1. This recovery scenario assumes the message was enqueued before the failure; otherwise,
   the message is discarded.
2. The message is discarded during IMS warm-start processing.
3. The message is discarded when the MSC link is restarted and when the message is taken
   off the queue (for sending across the link).
4. The message is discarded when the message region is started and when the message is
   taken off the queue (for processing by the application program).
5. For all remote MSC APPC transactions, if the message has already been sent across the
   MSC link to the remote system when the failure occurs in the local IMS, the message is
   processed. After the message is processed by the remote application program and a
   response message is sent back to the local system, it is enqueued to the DFSASYNC TP
   name of the LU 6.2 device or program that submitted the original transaction.
6. At sync point, the User Message Control Error exit routine (DFSCMUX0) can prevent the
   transaction from being aborted, and the output message can be rerouted (recovered).
   For more information about this exit routine, see IMS Version 10: Exit Routine Reference.
7. The standard MSC link recovery protocol recovers all messages that are queued or are in
   the process of being sent across the MSC link when the link fails.
8. IMS conversational-mode and response-mode transactions cannot be submitted from APPC
   asynchronous conversation sessions. APPC synchronous conversation mode must be used.
9. MSC link failures do not affect local transactions.

DFSAPPC message switch


DFSAPPC is an LU 6.2 descriptor that provides an IMS system service. It allows
LU 6.2 application programs to send messages to the following:
v Application programs (transactions)
v IMS-managed local or remote LTERMs (message switches)
v LU name and TP name

Messages sent with the LTERM= option are directed to IMS-managed local or
remote LTERMs. Messages sent without the LTERM= option are sent to the
appropriate LU 6.2 application or IMS application program.

Because the LTERM can be an LU 6.2 descriptor name, the message is sent to the
LU 6.2 application program as if an LU 6.2 device had been explicitly selected.

With DFSAPPC, message delivery is asynchronous. If a message is allocated and
the allocate fails, the message is held on the IMS message queue until it can
be successfully delivered.

Example: In the LU 6.2 conversation example, an IMS application issues a
DFSAPPC message switch to its partner with the LU name FRED and TPN name
REPORT. REP1 is the user data.
DFSAPPC (TPN=REPORT LU=FRED) REP1

You can use a 17-byte network-qualified name in the LU= field.
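For illustration only, the message-switch format shown above can be built by a
small helper like the following Python sketch. The helper name and its
validation rules are assumptions based on this description (for example, the
17-byte limit applies to a network-qualified LU name); DFSAPPC itself is an IMS
LU 6.2 descriptor, not a client-side API.

```python
def build_dfsappc_message(user_data, tpn=None, lu=None, lterm=None):
    """Build a DFSAPPC message-switch string such as
    'DFSAPPC (TPN=REPORT LU=FRED) REP1'.

    Illustrative only: this just formats the message text that an
    LU 6.2 program would send; it is not an IMS-supplied function.
    """
    if lterm is not None:
        # LTERM= directs the message to an IMS-managed local or
        # remote LTERM instead of an LU name/TP name destination.
        options = "LTERM=%s" % lterm
    else:
        if tpn is None or lu is None:
            raise ValueError("need TPN and LU when LTERM is not used")
        if len(lu) > 17:
            # A network-qualified LU name can be up to 17 bytes.
            raise ValueError("LU name longer than 17 bytes")
        options = "TPN=%s LU=%s" % (tpn, lu)
    return "DFSAPPC (%s) %s" % (options, user_data)

print(build_dfsappc_message("REP1", tpn="REPORT", lu="FRED"))
# DFSAPPC (TPN=REPORT LU=FRED) REP1
```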

Restriction: LU 6.2 architecture prohibits the use of the ALTRESP PCB on a CHNG
call in an LU 6.2 conversation. The LU 6.2 conversation can only be associated
with the IOPCB. The application sends a message on the existing LU 6.2
conversation (synchronous) or has IMS create a new conversation (asynchronous)
using the IOPCB. Since there is no LTERM associated with an LU 6.2 conversation,
only the IOPCB represents the original LU 6.2 conversation.

Related Reading: For more information about DFSAPPC, see IMS Version 10:
Communications and Connections Guide.

Chapter 8. Analyzing IMS application processing requirements
This section assumes you are writing application programs for IMS environments
and explains the kinds of application programs that IMS supports and the
requirements that each satisfies.

Subsections:
v “Defining IMS application requirements”
v “Accessing databases with your IMS application program” on page 100
v “Accessing data: the types of programs you can write for your IMS application”
on page 102
v “IMS programming integrity and recovery considerations” on page 110
v “Dynamic allocation for IMS databases” on page 119

Related reading: For information on writing CICS application programs, see
Chapter 9, “Analyzing CICS application processing requirements,” on page 121.

Defining IMS application requirements


One of the steps of application design is to decide how the business processes, or
tasks, that the end user wants performed can be best grouped into a set of
programs that efficiently performs the required processing. To analyze processing
requirements, consider:
When the task must be performed
– Will the task be scheduled unpredictably (for example, on terminal demand)
or periodically (for example, weekly)?
How the program that performs the task is executed
– Will the program be executed online, where response time is crucial, or by
batch job submission, where a slower response time is acceptable?
The consistency of the processing components
– Does the action the program is to perform involve more than one type of
program logic? For example, does it involve mostly retrievals and only one
or two updates? If so, you should consider separating the updates into a
separate program.
– Does this action involve several large groups of data? If it does, it might be
more efficient to separate the programs by the data they access.
Any special requirements about the data or processing
Security      Should access to the program be restricted?
Recovery      Are there special recovery considerations in the program's
              processing?
Availability  Does your application require high data availability?
Integrity     Do other departments use the same data?

Answers to questions like these can help you decide on the number of application
programs that the processing will require, and on the types of programs that

© Copyright IBM Corp. 1974, 2010 99


perform the processing most efficiently. Although rules dealing with how many
programs can most efficiently do the required processing do not exist, here are
some suggestions:
v As you look at each programming task, examine the data and processing that
each task involves. If a task requires different types of processing and has
different time limitations (for example, daily as opposed to different times
throughout the month), that task might be more efficiently performed by several
programs.
v As you define each program, it is a good idea for maintenance and recovery
reasons to keep it as simple as possible. The simpler a program is—the less it
does—the easier it is to maintain, and to restart after a program or system
failure. The same is true with data availability—the less data that is accessed, the
more likely the data is to be available. The more limited the access requested,
the more likely the data is to be available.
Similarly, if the data that the application requires is physically in one place, it
might be more efficient to have one program do more of the processing than
usual. These are considerations that depend upon the processing and the data of
each application.
v Documenting each of the user tasks is helpful during the design process, and in
the future when others will work with your application. Be sure you are aware
of standards in this area. The kind of information that is typically kept is when
the action is to be executed, a functional description, and requirements for
maintenance, security, and recovery.
Example: For the current roster process described in “Listing data elements” on
page 46, you might record the information shown in Figure 49. How frequently
the program is run is determined by the number of classes (20) needed by the
Education Center each week.

USER TASK DESCRIPTION

NAME: Current Roster


ENVIRONMENT: Batch FREQUENCY: 20 per week

INVOKING EVENT OR DOCUMENT: Time period (one week)

REQUIRED RESPONSE TIME: 24 hours

FUNCTION DESCRIPTION: Print weekly, a current student roster, in student
number sequence for each class offered at the Education Center.

MAINTENANCE: Included in Education DB maintenance.

SECURITY: None.

RECOVERY: After a failure, the ability to start printing a particular
class roster starting from a particular sequential student number.
Figure 49. Documenting user task descriptions: current roster example

Accessing databases with your IMS application program


When designing your program, consider the type of database it must access. The
type of database depends on the operating environment. The program types you
can run and the different types of databases you can access in a DB batch, TM
batch, DB/DC, DBCTL, or DCCTL environment are shown in Table 26 on page 101.

Table 26. Program and database options in IMS environments

Environment  Type of program you can run  Type of database that can be accessed
DB/DC        BMP                          DB2 for z/OS; DEDB and MSDB; full function; z/OS files
             IFP                          DB2 for z/OS; DEDB; full function
             JBP                          DB2 for z/OS; DEDB; full function
             JMP                          DB2 for z/OS; DEDB; full function
             MPP                          DB2 for z/OS; DEDB and MSDB; full function
DB Batch     DB Batch                     DB2 for z/OS; full function; GSAM; z/OS files
DBCTL        BMP (batch-oriented)         DB2 for z/OS; DEDB; full function; GSAM; z/OS files
             JBP                          DB2 for z/OS; DEDB; full function
DCCTL        BMP                          DB2 for z/OS; GSAM
             IFP                          DB2 for z/OS
             JMP                          DB2 for z/OS
             MPP                          DB2 for z/OS
TM Batch     TM Batch                     DB2 for z/OS; GSAM; z/OS files

The types of databases that can be accessed are:

v IMS Databases
There are two types of IMS databases: full-function and Fast Path.

– Full-function databases
Full-function databases are hierarchic databases that are accessed through
Data Language I (DL/I) call interface and can be processed by these types of
application programs: IFP, JMP, JBP, MPP, BMP, and DB batch. DL/I calls
make it possible for IMS application programs to retrieve, replace, delete, and
add segments to full-function databases.
JMP and JBP applications use JDBC to access full-function databases in
addition to DL/I.
If you use data sharing, online programs and batch programs can access the
same full-function database concurrently.
Full-function database types include: HDAM, HIDAM, HSAM, HISAM,
PHDAM, PHIDAM, SHSAM, and SHISAM.
– Fast Path databases
Fast Path databases are of two types: MSDBs and DEDBs.
- Main storage databases (MSDBs) are root-segment-only databases that
reside in virtual storage during execution.
- Data entry databases (DEDBs) are hierarchic databases that provide a high
level of availability for, and efficient access to, large volumes of detailed
data.
MPP, BMP, and IFP programs can access Fast Path databases. In the DBCTL
environment, BMP programs can access DEDBs but not MSDBs. JMP and JBP
programs can access DEDBs but not MSDBs.
v DB2 for z/OS databases
DB2 for z/OS databases are relational databases that can be processed by IMS
batch, BMP, IFP, JBP, JMP, and MPP programs. An IMS application program
might access only DL/I databases, both DL/I and DB2 for z/OS databases, or
only DB2 for z/OS databases. Relational databases are represented to application
programs and users as tables, and are processed using a relational data language
called Structured Query Language (SQL).

Note: JMP and JBP programs cannot access DB2 for z/OS databases.
Related Reading: For information on processing DB2 for z/OS databases, see
DB2 for z/OS and OS/390 Application Programming and SQL Guide.
v z/OS Files
BMPs (in both the DB/DC and DBCTL environment) are the only type of online
application program that can access z/OS files for their input or output. Batch
programs can also access z/OS files.
v GSAM Databases (Generalized Sequential Access Method)
Generalized Sequential Access Method (GSAM) is an access method that makes
it possible for BMPs and batch programs to access a sequential z/OS data set as
a simple database. A GSAM database can be accessed by z/OS or by IMS.

Accessing data: the types of programs you can write for your IMS
application
You must decide what type of program to use: batch programs, message
processing programs (MPPs), IMS Fast Path (IFP) applications, batch message
processing (BMP) applications, Java Message Processing (JMP) applications, or Java
Batch Processing (JBP) applications. As Table 26 on page 101 shows, the types of
programs you can use depend on whether you are running in the batch, DB/DC,
or DBCTL environment.

These topics explain the types of databases that the programs access, when the
programs are used, and how to recover the programs.

DB batch processing
These topics describe DB batch processing and can help you decide if this batch
program is appropriate for your application.

Data that a DB batch program can access


A DB batch program can access full-function databases, DB2 for z/OS databases,
GSAM databases, and z/OS files. A DB batch program cannot access DEDBs or
MSDBs.

Using DB batch processing


Batch programs are typically longer-running programs than online programs. You
use a batch program when you have a large number of database updates to do or
a report to print. Because a batch program runs by itself—it does not compete with
any other programs for resources like databases—it can run independently of the
control region. If you use data sharing, DB batch programs and online programs
can access full-function databases concurrently. Batch programs:
v Typically produce a large amount of output, such as reports.
v Are not executed by another program or user. They are usually scheduled at
specific time intervals (for example, weekly) and are started with JCL.
v Produce output that is not needed right away. The turnaround time for batch
output is not crucial, as it usually is for online programs.

Recovering a DB batch program


Include checkpoints in your batch program to restart it in case of failure.

Issuing checkpoints: Issue checkpoints in a batch program to commit database
changes and provide places from which to restart your program. Issuing
checkpoints in a batch program is important, because commit points do not occur
automatically, as they do in MPPs, transaction-oriented BMPs, and IFPs.

Issuing checkpoints is particularly important in a batch program that
participates in data sharing with your online system. Checkpoints free up
resources for use by online programs. You should initially include checkpoints
in all batch programs that you write. Even though the checkpoint support might
not be needed then, it is easier to incorporate checkpoints initially than to
try to fit them in later. And it is possible that you might want to convert
your batch program to a BMP or participate in data sharing. For more
information on issuing checkpoints, see “Checkpoints in batch programs” on
page 116.
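The checkpoint pattern described above can be sketched as a loop that takes a
checkpoint every N records. This is an illustration only: the
`take_checkpoint` callback stands in for the DL/I CHKP call (which a real
program issues through the I/O PCB), and the interval value is a tuning choice,
not an IMS requirement.

```python
# Illustrative sketch of the batch checkpoint pattern described above.
# take_checkpoint stands in for the DL/I CHKP call; it is not a real
# IMS API. Each checkpoint commits changes and creates a restart point.

CHECKPOINT_INTERVAL = 500  # records between checkpoints (tuning choice)

def process_batch(records, take_checkpoint, process_record):
    """Process records, taking a checkpoint every N records and once
    more at the end so the final updates are committed."""
    since_checkpoint = 0
    checkpoints = 0
    for record in records:
        process_record(record)
        since_checkpoint += 1
        if since_checkpoint >= CHECKPOINT_INTERVAL:
            take_checkpoint()        # commits DB changes; restart point
            checkpoints += 1
            since_checkpoint = 0
    if since_checkpoint:
        take_checkpoint()            # final commit before ending
        checkpoints += 1
    return checkpoints

print(process_batch(range(1200), lambda: None, lambda r: None))
# 3
```

Frequent checkpoints matter most under data sharing, because each one frees
resources for online programs, as noted above.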

To issue checkpoints (or other system service calls), you must specify an I/O PCB
for your program. To obtain an I/O PCB, use the compatibility option by
specifying CMPAT=YES in the PSBGEN statement in your program's PSB.

Related Reading: For more information on obtaining an I/O PCB, see IMS Version
10: Application Programming Guide.

Recommendation: For PSBs used by DB batch programs, always specify CMPAT=YES.

Backing out database changes: The type of storage medium for the system log
determines what happens when a DB batch program terminates abnormally. You
can specify that the system log be stored on either DASD (direct access storage
device) or tape.

System log on DASD: If the system log is stored on DASD, you can use the BKO
execution parameter to specify that IMS is to dynamically back out the changes
that the program has made to the database since its last commit point.

Related Reading: For information on using the BKO execution parameter, see IMS
Version 10: System Definition Reference.

Dynamically backing out database changes has the following advantages:
v Data accessed by the program that failed is available to other programs
immediately. If batch backout is used, other programs cannot access the data
until the IMS Batch Backout utility has been run to back out the database
changes.
v If data sharing is being used and two programs are deadlocked, one of the
programs can continue processing. Otherwise, if batch backout is used, both
programs fail.

IMS performs dynamic backout for a batch program when an IMS-detected failure
occurs, for example, when a deadlock is detected. Logging to DASD makes it
possible for batch programs to issue the SETS, ROLB, and ROLS system service calls.
These calls cause IMS to dynamically back out changes that the program has made.

Related Reading: For information on the SETS, ROLB, and ROLS calls, see the
information about recovering databases and maintaining database integrity in
either of the following books:
v IMS Version 10: Application Programming Guide
v IMS Version 10: Database Administration Guide

System log on tape: If a batch application program terminates abnormally and the
batch system log is stored on tape, you must use the IMS Batch Backout utility to
back out the program's changes to the database.

TM batch processing
A TM batch program acts like a DB batch program with the following differences:
v It cannot access full-function databases, but it can access DB2 for z/OS
databases, GSAM databases, and z/OS files.
v To issue checkpoints for recovery, you need not specify CMPAT=YES in your
program's PSB. (The CMPAT parameter is ignored in TM batch.) The I/O PCB is
always the first PCB in the list.
v You cannot dynamically back out a database because IMS does not own the
databases.

| The IEFRDER log DD statement is required in order to enable log synchronization
| with other external subsystems, such as DB2 for z/OS.

Processing messages: MPPs


These topics describe the message processing program (MPP) and can help you
decide if this online program is appropriate for your application.

104 Application Programming Planning Guide


Data that an MPP can access
An MPP is an online program that can access full-function databases, DEDBs,
MSDBs, and DB2 for z/OS databases. Unlike BMPs and batch programs, MPPs
cannot access GSAM databases. MPPs can only run in DB/DC and DCCTL
environments.

Using an MPP
The primary purpose of an MPP is to process requests from users at terminals and
from other application programs. Ideally, MPPs are very small, and the processing
they perform is tailored to respond to requests quickly. They process messages as
their input, and send messages as responses.

Definition: A message is data that is transmitted between any two terminals,
application programs, or IMS systems. Each message has one or more segments.

MPPs are executed through transaction codes. When you define an MPP, you
associate it with one or more transaction codes. Each transaction code represents a
transaction the MPP is to process. To process a transaction, a user at a terminal
enters a code for that transaction. IMS then schedules the MPP associated with that
code, and the MPP processes the transaction. The MPP might need to access the
database to do this. Generally, an MPP goes through these five steps to process a
transaction:
1. Retrieve a message from IMS.
2. Process the message and access the database as necessary.
3. Respond to the message.
4. Repeat the process until no messages are forthcoming.
5. Terminate.
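The five steps above amount to a retrieve-process-respond loop. The following sketch simulates that loop in Java; it is illustrative only, and the queue types and method names are hypothetical stand-ins for the DL/I calls (GU to retrieve, ISRT to respond) that a real MPP would issue, not an IMS API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the five-step MPP loop. Message queues are
// simulated with java.util.Queue; none of these names are a real IMS API.
public class MppLoopSketch {

    // Steps 1-5: retrieve a message, process it, respond, repeat until no
    // more messages are forthcoming, then terminate. Returns the number of
    // transactions processed.
    static int processAll(Queue<String> inputQueue, Queue<String> outputQueue) {
        int processed = 0;
        while (true) {
            String message = inputQueue.poll();     // step 1: retrieve a message
            if (message == null) {                  // no more messages (QC status)
                break;                              // step 5: terminate
            }
            String reply = "processed:" + message;  // step 2: process, access DB as needed
            outputQueue.add(reply);                 // step 3: respond
            processed++;                            // step 4: repeat
        }
        return processed;
    }

    public static void main(String[] args) {
        Queue<String> in = new ArrayDeque<>();
        in.add("TRANA:order-1");
        in.add("TRANA:order-2");
        Queue<String> out = new ArrayDeque<>();
        System.out.println(processAll(in, out)); // prints 2
    }
}
```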

When an MPP is defined, a system administrator makes decisions about the
program's scheduling and processing. For each MPP, a system administrator
specifies:
v The transaction's priority
v The number of messages for a particular transaction code that the MPP can
process in a single scheduling
v The amount of time (in seconds) in which the MPP is allowed to process a single
transaction

Defining priorities and processing limits gives system administration some control
over load balancing and processing.

Although the primary purpose of an MPP is to process and reply to messages
quickly, it is flexible in how it processes a transaction and where it can send output
messages. For example, an MPP can send output messages to other terminals and
application programs. See Chapter 10, “Gathering requirements for database
options,” on page 139 for a description of some of the options available to MPPs.

Processing messages: IFPs


These topics describe IMS Fast Path (IFP) programs and can help you decide if this
online program is appropriate for your application.

Data that an IFP can access


An IFP is similar to an MPP: Its main purpose is to quickly process and reply to
messages from terminals.


Like an MPP, an IFP can access full-function databases, DEDBs, MSDBs, and DB2
for z/OS databases. IFPs can only be run in DB/DC and DCCTL environments.

Using an IFP
You should use an IFP if you need quick processing and can accept the
characteristics and constraints associated with IFPs.

The main differences between IFPs and MPPs are as follows:


v Messages processed by IFPs must consist of only one segment. Messages that are
processed by MPPs can consist of several segments.
v IFPs bypass IMS queuing, allowing for more efficient processing. Transactions
that are processed by Fast Path's EMH (expedited message handler) are
processed on a first-in, first-out basis.

IFPs also have the following characteristics:


v They run in transaction response mode. This means that they must respond to
the terminal that sent the message before the terminal can enter any more
requests.
v They process only wait-for-input transactions. When you define a program as
processing wait-for-input transactions, the program remains in virtual storage,
even when no additional messages are available for it to process.

Restrictions:
v An IMS program cannot send messages to an IFP transaction unless it is in
another IMS system that is connected using Intersystem Communication (ISC).
v MPPs cannot pass conversations to an IFP transaction.

Recovering an IFP
IFPs must be defined as single mode. This means that a commit point occurs each
time the program retrieves a message. Because of this, you do not need to issue
checkpoint calls.

Batch message processing: BMPs


BMPs are application programs that can perform batch-type processing online and
access the IMS message queues for their input and output. Because of this and
because of the data available to them, BMPs are the most flexible of the IMS
application programs.

The two types of BMPs are: batch-oriented and transaction-oriented.

Batch processing online: batch-oriented BMPs


These topics describe the batch message processing program and can help you
decide if this batch program is appropriate for your application.

Data a batch-oriented BMP can access: A batch-oriented BMP performs
batch-type processing in any online environment. When run in the DB/DC or
DCCTL environment, a batch-oriented BMP can send its output to the IMS
message queue to be processed later by another application program. Unlike a
transaction-oriented BMP, a batch-oriented BMP cannot access the IMS message
queue for input.

In the DBCTL environment, a batch-oriented BMP can access full-function
databases, DB2 for z/OS databases, DEDBs, z/OS files, and GSAM databases. In
the DB/DC environment, a batch-oriented BMP can access all of these types of
databases, as well as Fast Path MSDBs. In the DCCTL environment, this program
can access DB2 for z/OS databases, z/OS files, and GSAM databases.

Using a batch-oriented BMP: A batch-oriented BMP can be simply a batch
program that runs online. (Online requests are processed by the IMS DB/DC,
DBCTL, or DCCTL system rather than by a batch system.) You can even run the
same program as a BMP or as a batch program.

Recommendation: If the program performs a large number of database updates
without issuing checkpoints, consider running it as a batch program so that it does
not degrade the performance of the online system.

To use batch-oriented BMPs most efficiently, avoid a large amount of batch-type
processing online. If you have a BMP that performs time-consuming processing
such as report writing and database scanning, schedule it during non-peak hours
of processing. This will prevent it from degrading the response time of MPPs.

Because BMPs can degrade response times, your response time requirements
should be the main consideration in deciding the extent to which you will use
batch message processing. Therefore, use BMPs accordingly.

Recovering a batch-oriented BMP: Issuing checkpoint calls is an important part
of batch-oriented BMP processing, because commit points do not occur
automatically, as they do in MPPs, transaction-oriented BMPs, and IFPs. Unlike
most batch programs, a BMP shares resources with MPPs. In addition to
committing database changes and providing places from which to restart (as for a
batch program), checkpoints release resources that are locked for the program. For
more information on issuing checkpoints, see “Checkpoints in batch-oriented
BMPs” on page 115.

If a batch-oriented BMP fails, IMS and DB2 for z/OS back out the database
updates the program has made since the last commit point. You then restart the
program with JCL. If the BMP processes z/OS files, you must provide your own
method of taking checkpoints and restarting.

Converting a batch program to a batch-oriented BMP: If you have IMS TM or
are running in the DBCTL environment, you can convert a batch program to a
batch-oriented BMP.
v If you have IMS TM, you might want to convert your programs for these
reasons:
– BMPs can send output to the message queues.
– BMPs can access DEDBs and MSDBs.
– BMPs simplify program recovery because logging goes to a single system log.
If you use DASD for the system log in batch, you can specify that you want
dynamic backout for the program. In that case, batch recovery is similar to
BMP recovery, except, of course, with batch you need to manage multiple
logs.
– Restart can be done automatically from the last checkpoint without changing
the JCL.
v If you are using DBCTL, you might want to convert your programs for these
reasons:
– BMPs can access DEDBs.
– BMPs simplify program recovery because logging goes to a single system log.
If you use DASD for the system log in batch, you can specify that you want
dynamic backout for the program. In that case, batch recovery is similar to
BMP recovery, except, of course, with batch you need to manage multiple
logs.
v If you are running sysplex data sharing and you either have IMS TM or are
using DBCTL, you might want to convert your program. This is because using
batch-oriented BMPs helps you stay within the sysplex data-sharing limit of 32
connections for each OSAM or VSAM structure.
If you use data sharing, you can run batch programs concurrently with online
programs. If you do not use data sharing, converting a batch program to a BMP
makes it possible to run the program with BMPs and other online programs.
Also, if you plan to run your batch programs offline, converting them to BMPs
enables you to run them with the online system, instead of waiting until the
online system is not running. Running a batch program as a BMP can also keep
the data more current.
v If you have IMS TM or are using DBCTL, you can have a program that runs as
either a batch program or a BMP.
Recommendation: Code your checkpoints in a way that makes them easy to
modify. Converting a batch program to a BMP or converting a batch program to
use data sharing requires more frequent checkpoints. Also, if a program fails
while running in a batch region, you must restart it in a batch region. If a
program fails in a BMP region, you must restart it in a BMP region.
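One way to code checkpoints so that they are easy to modify is to isolate the checkpoint frequency in a single value. The sketch below illustrates this; it is a simplified model, and `issueCheckpoint` is a hypothetical stand-in for a CHKP call, with the interval value chosen only for illustration.

```java
// Sketch of an easily modified checkpoint frequency. issueCheckpoint() is
// a stand-in for a real CHKP call; the interval is a hypothetical value.
public class CheckpointFrequencySketch {

    // Kept in one place so that converting the program from batch to BMP
    // (which needs more frequent checkpoints) changes only this value.
    static int checkpointInterval = 500;

    static int checkpointsTaken;

    // Stand-in for issuing a CHKP call to IMS.
    static void issueCheckpoint() {
        checkpointsTaken++;
    }

    // Processes recordCount records, checkpointing every checkpointInterval
    // records; returns how many checkpoints were taken.
    static int processRecords(int recordCount) {
        checkpointsTaken = 0;
        for (int i = 1; i <= recordCount; i++) {
            // ... database update for record i would go here ...
            if (i % checkpointInterval == 0) {
                issueCheckpoint(); // commit changes, release locks, set a restart point
            }
        }
        return checkpointsTaken;
    }

    public static void main(String[] args) {
        System.out.println(processRecords(1600)); // prints 3 (records 500, 1000, 1500)
        checkpointInterval = 100;                 // BMP conversion: only this changes
        System.out.println(processRecords(1600)); // prints 16
    }
}
```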

The requirements for converting a batch program to a BMP are:


v The program must have an I/O PCB. You can obtain an I/O PCB in batch by
specifying the compatibility (CMPAT) option in the program specification block
(PSB) for the program.
Related Reading: For more information on the CMPAT option in the PSB, see
IMS Version 10: System Utilities Reference.
v BMPs must issue checkpoint calls more frequently than batch programs.

Batch message processing: transaction-oriented BMPs


These topics describe a transaction-oriented BMP and can help you decide if this
batch program is appropriate for your application.

Data a transaction-oriented BMP can access: Transaction-oriented BMPs can
access z/OS files, GSAM databases, DB2 for z/OS databases, full-function
databases, DEDBs, and MSDBs.

Unlike a batch-oriented BMP, a transaction-oriented BMP can access the IMS
message queue for input and output, and it can only run in the DB/DC and
DCCTL environments.

Using a transaction-oriented BMP: Unlike MPPs, transaction-oriented BMPs are
not scheduled by IMS. You schedule them as needed and start them with JCL. For
example, an MPP, as it processes each message, might send an output message
giving details of the transaction to the message queue. A transaction-oriented BMP
could then access the message queue to produce a daily activity report.

Typically, you use a transaction-oriented BMP to simulate direct update online:
Instead of updating the database while processing its transactions, an MPP sends
its updates to the message queue. A transaction-oriented BMP then performs the
updates for the MPP. You can run the BMP as needed, depending on the number
of updates. This improves response time for the MPP, and it keeps the data
current. This can be more efficient than having the MPP process its transactions if
the response time of the MPP is very important. One disadvantage in doing this,
however, is that it splits the transaction into two parts, which is not necessary.

If you have a BMP perform an update for an MPP, design the BMP so that, if the
BMP terminates abnormally, you can reenter the last message as input for the BMP
when you restart it. For example, suppose an MPP gathers database updates for
three BMPs to process, and one of the BMPs terminates abnormally. You would
need to reenter the message that the terminating BMP was processing to one of the
other BMPs for reprocessing.

BMPs can process transactions defined as wait-for-input (WFI). This means that
IMS allows the BMP to remain in virtual storage after it has processed the
available input messages. IMS returns a QC status code, indicating that the
program should terminate when one of the following occurs:
v The program reaches its time limit.
v The master terminal operator enters a command to stop processing.
v IMS is terminated with a checkpoint shutdown.

You specify WFI for a transaction on the WFI parameter of the TRANSACT macro
during IMS system definition.

A batch message processing region (BMP) scheduled against WFI transactions
returns a QC status code (no more messages) only for the following commands:
/PSTOP REGION, /DBD, /DBR, or /STA.

Like MPPs, BMPs can send output messages to several destinations, including
other application programs. See “Identifying output message destinations” on page
171 for more information.

Recovering a transaction-oriented BMP: Like MPPs, with transaction-oriented
BMPs, you can choose where commit points occur in the program. You can specify
that a transaction-oriented BMP be single or multiple mode, just as you can with
an MPP. If the BMP is single mode, issuing checkpoint calls is not as critical as in a
multiple mode BMP. In a single mode BMP, a commit point occurs each time the
program retrieves a message. For more information on issuing checkpoints in a
BMP, see “Checkpoints in MPPs and transaction-oriented BMPs” on page 114.

Java message processing: JMPs


A JMP application program is similar to an MPP application program, except that
JMP applications must be written in Java or object-oriented COBOL. Like an MPP
application, a JMP application is started when there is a message in the message
queue for the JMP application and IMS schedules the message for processing.

JMP applications can access IMS data or DB2 for z/OS data using JDBC. JMP
applications run in JMP regions which have JVMs (Java Virtual Machines). For
more information about JMPs, see the IMS Version 10: Application Programming
Guide.

| Java batch processing: JBPs


| A JBP application program is similar to a non-message-driven BMP application
| program, except that JBP applications must be written in Java or object-oriented
| COBOL.


| JBP applications can access IMS data or DB2 for z/OS data using JDBC. JBP
| applications run in JBP regions which have JVMs. For more information about
| JBPs, see the IMS Version 10: Application Programming Guide.

IMS programming integrity and recovery considerations


This section explains how IMS protects data integrity, and how you can plan ahead
for program recovery. These topics assume some knowledge of IMS application
programming. You might want to read this section after reading the IMS
application programming information that is applicable for your environment.

How IMS protects data integrity: commit points


When an online program accesses the database, it is not necessarily the only
program doing so. IMS and DB2 for z/OS make it possible for more than one
application program to access the data concurrently without endangering the
integrity of the data.

To access data concurrently while protecting data integrity, IMS and DB2 for z/OS
prevent other application programs from accessing segments that your program
deletes, replaces, or inserts, until your program reaches a commit point. A commit
point is the place in the program's processing at which it completes a unit of work.
When a unit of work is completed, IMS and DB2 for z/OS commit the changes
that your program made to the database. Those changes are now permanent and
the changed data is now available to other application programs.

What happens at a commit point


When an application program finishes processing one distinct unit of work, IMS
and DB2 for z/OS consider that processing to be valid, even if the program later
encounters problems. For example, an application program that is retrieving,
processing, and responding to a message from a terminal constitutes a unit of work.
If the program encounters problems while processing the next input message, the
processing it has done on the first input message is not affected. These input
messages are separate pieces of processing.

A commit point indicates to IMS that a program has finished a unit of work, and
that the processing it has done is accurate. At that time:
v IMS releases segments it has locked for the program since the last commit point.
Those segments are then available to other application programs.
v IMS and DB2 for z/OS make the program's changes to the database permanent.
v The current position in all databases except GSAM is reset to the start of the
database.

If the program terminates abnormally before reaching the commit point:


v IMS and DB2 for z/OS back out all of the changes the program has made to the
database since the last commit point. (This does not apply to batch programs
that write their log to tape.)
v IMS discards any output messages that the program has produced since the last
commit point.
Until the program reaches a commit point, IMS holds the program's output
messages so that, if the program terminates abnormally, users at terminals and
other application programs do not receive inaccurate information from the
abnormally terminating application program.
If the program is processing an input message and terminates abnormally, the
input message is not discarded if both of the following conditions exist:

1. You are not using the Non-Discardable Messages (NDM) exit routine.
2. IMS terminates the program with one of the following abend codes: U0777,
U2478, U2479, U3303. The input message is saved and processed later.
Exception: The input message is discarded if the program is not terminated
by one of the previously referenced abend codes. When the program is
restarted, IMS gives the program the next message.
If the program is processing an input message when it terminates abnormally,
and you use the NDM exit routine, the input message might be discarded from
the system regardless of the abend. Whether the input message is discarded
from the system depends on how you have written the NDM exit routine.
Related Reading: For more information about the NDM exit routine, see IMS
Version 10: Exit Routine Reference.
v IMS notifies the MTO that the program terminated abnormally.
v IMS and DB2 for z/OS release any locks that the program has held on data it
has updated since the last commit point. This makes the data available to other
application programs and users.

Where commit points occur


A commit point can occur in a program for any of the following reasons:
v The program terminates normally. Except for a program that accesses Fast Path
resources, normal program termination is always a commit point. A program
that accesses Fast Path resources must reach a commit point before terminating.
v The program issues a checkpoint call. Checkpoint calls are a program's means of
explicitly indicating to IMS that it has reached a commit point in its processing.
v If a program processes messages as its input, a commit point might occur when
the program retrieves a new message. IMS considers this commit point the start
of a new unit of work in the program. Retrieving a new message is not always a
commit point. This depends on whether the program has been defined as single
mode or multiple mode.
– If you specify single mode, a commit point occurs each time the program
issues a call to retrieve a new message. Specifying single mode can simplify
recovery, because you can restart the program from the most recent call for a
new message if the program terminates abnormally. When IMS restarts the
program, the program begins by processing the next message.
– If you specify multiple mode, a commit point occurs when the program issues
a checkpoint call or when it terminates normally. At those times, IMS sends
the program's output messages to their destinations. Because multiple-mode
programs contain fewer commit points than do single mode programs,
multiple mode programs might offer slightly better performance than
single-mode programs. When a multiple mode program terminates
abnormally, IMS can only restart it from a checkpoint. Instead of reprocessing
only the most recent message, a program might have several messages to
reprocess, depending on when the program issued the last checkpoint call.
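The recovery trade-off described above can be made concrete by counting the messages that must be reprocessed after a failure. The following is a deliberately simplified model; the method names are illustrative only, not an IMS API.

```java
// Simplified model of the recovery trade-off between single mode and
// multiple mode. The method names are illustrative only, not an IMS API.
public class ReprocessSketch {

    // Single mode: each message retrieval is a commit point, so after an
    // abnormal termination only the message being processed is redone.
    static int singleModeReprocess() {
        return 1;
    }

    // Multiple mode: everything since the last CHKP call is backed out,
    // so up to one full checkpoint interval of messages must be redone.
    static int multipleModeReprocess(int messagesSinceLastCheckpoint) {
        return messagesSinceLastCheckpoint;
    }

    public static void main(String[] args) {
        System.out.println(singleModeReprocess());      // prints 1
        System.out.println(multipleModeReprocess(25));  // prints 25
    }
}
```

This is the performance-versus-recovery choice in miniature: multiple mode commits less often, but pays for it with a larger unit of reprocessing after an abnormal termination.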

Table 27 lists the modes in which the programs can run. Because processing mode
is not applicable to batch programs and batch-oriented BMPs, they are not listed in
the table. The program type is listed, and the table indicates which mode is
supported.
Table 27. Processing modes

Program type               Single mode only   Multiple mode only   Either mode
MPP                                                                     X
IFP                               X
Transaction-oriented BMP                                                X

You specify single or multiple mode on the MODE parameter of the TRANSACT
macro.

Related Reading: For information on the TRANSACT macro, see IMS Version 10:
System Definition Reference.

See Figure 50 for an illustration of the difference between single-mode and
multiple-mode programs. A single-mode program gets and processes messages,
sends output, looks for more messages, and terminates if there are no more. A
multiple-mode program gets and processes messages, sends output, but has a
checkpoint before looking for more messages and terminating. For a single-mode
program, the commit points are when the message is obtained and the program
terminates. For multiple-mode, the commit point is at the checkpoint and when the
program terminates.

Figure 50. Single mode and multiple mode

DB2 for z/OS does some processing with multiple- and single-mode programs that
IMS does not. When a multiple-mode program issues a call to retrieve a new
message, DB2 for z/OS performs an authorization check. If the authorization check
is successful, DB2 for z/OS closes any SQL cursors that are open. This affects the
design of your program.

The DB2 for z/OS SQL COMMIT statement causes DB2 for z/OS to make permanent
changes to the database. However, this statement is valid only in TSO application
programs. If an IMS application program issues this statement, it receives a
negative SQL return code.


Planning for program recovery: checkpoint and restart
Recovery in an IMS application program that accesses DB2 for z/OS data is
handled by both IMS and DB2 for z/OS. IMS coordinates the process, and DB2 for
z/OS handles recovery of DB2 for z/OS data.

Introducing checkpoint calls


Checkpoint calls indicate to IMS that the program has reached a commit point.
They also establish places in the program from which the program can be
restarted. IMS has symbolic checkpoint calls and basic checkpoint calls.

A program can use only one type of checkpoint call:


v MPPs and IFPs must use basic checkpoint calls.
v BMP, JMP, and batch programs can use either symbolic checkpoint calls or basic
checkpoint calls.

Programs that issue symbolic checkpoint calls can specify as many as seven data
areas in the program to be checkpointed. When IMS restarts the program, the
Restart call restores these areas to the condition they were in when the program
issued the symbolic checkpoint call. Because symbolic checkpoint calls do not
support z/OS files, if your program accesses z/OS files, you must supply your
own method of establishing checkpoints.

You can use symbolic checkpoint for either Normal Start or Extended Restart
(XRST).

Example: Typical calls for a Normal start would be as follows:


v XRST (I/O area is blank)
v CHKP (I/O area has checkpoint ID)
v Database Calls (including checkpoints)
v CHKP (final checkpoint)

Example: Typical calls for an Extended Restart (XRST) would be as follows:


v XRST (I/O area has checkpoint ID)
v CHKP (I/O area has new checkpoint ID)
v Database Calls (including checkpoints)
v CHKP (final checkpoint)
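Both call sequences share the same startup decision: a blank I/O area returned by XRST means a normal start, and a checkpoint ID means IMS has restored the program's data areas and the program should resume from that checkpoint. The sketch below models only that decision; `xrst` is a hypothetical stand-in, not the real DL/I interface.

```java
// Sketch of restart-aware startup logic shared by both call sequences.
// xrst() is a stand-in for the XRST call, not the real DL/I interface:
// it returns a blank I/O area on a normal start, or the checkpoint ID
// that IMS restored on an extended restart.
public class RestartSketch {

    // Stand-in for XRST: null means IMS found no checkpoint to restore.
    static String xrst(String restoredCheckpointId) {
        return restoredCheckpointId == null ? "" : restoredCheckpointId;
    }

    // Decide where the program resumes based on the returned I/O area.
    static String startingPoint(String ioArea) {
        if (ioArea.trim().isEmpty()) {
            return "beginning";            // normal start: blank I/O area
        }
        return "checkpoint " + ioArea;     // extended restart: resume here
    }

    public static void main(String[] args) {
        System.out.println(startingPoint(xrst(null)));       // prints: beginning
        System.out.println(startingPoint(xrst("CHKP0007"))); // prints: checkpoint CHKP0007
    }
}
```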

Related Reading: For more information on checkpoint calls, see IMS Version 10:
Application Programming Guide.

The restart call, which you must use with symbolic checkpoint calls, provides a
way of restarting a program after an abnormal termination. It restores the
program's data areas to the way they were when the program issued the symbolic
checkpoint call. It also restarts the program from the last checkpoint the program
established before terminating abnormally.

All programs can use basic checkpoint calls. Because you cannot use the restart call
with the basic checkpoint call, you must provide program restart. Basic checkpoint
calls do not support either z/OS or GSAM files. IMS programs cannot use z/OS
checkpoint and restart. If you access z/OS files, you must supply your own
method of establishing checkpoints and restarting.

Chapter 8. Analyzing IMS application processing requirements 113


In addition to the actions that occur at a commit point, issuing a checkpoint call
causes IMS to:
v Inform DB2 for z/OS that the changes your program has made to the database
can be made permanent. DB2 for z/OS makes the changes to DB2 for z/OS data
permanent, and IMS makes the changes to IMS data permanent.
v Write a log record containing the checkpoint identification given in the call to
the system log, but only if the PSB contains a DB PCB. You can print checkpoint
log records by using the IMS File Select and Formatting Print program
(DFSERA10). With this utility, you can select and print log records based on
their type, the data they contain, or their sequential positions in the data set.
Checkpoint records are X'18' log records.
Related Reading: For more information about the DFSERA10 program, see IMS
Version 10: System Utilities Reference.
v Send a message containing the checkpoint identification that was given in the
call to the system console operator and to the IMS master terminal operator.
v Return the next input message to the program's I/O area, if the program
processes input messages. In MPPs and transaction-oriented BMPs, a checkpoint
call acts like a call for a new message.

Restriction: Do not specify CHKPT=EOV on any DD statement to take an
IMS checkpoint, because the results are unpredictable.

When to use checkpoint calls


Issuing checkpoint calls is most important in programs that do not have built-in
commit points. The decision about whether your program should issue
checkpoints, and if so, how often, depends on your program. Generally, these
programs should issue checkpoint calls:
v Multiple-mode programs
v Batch-oriented BMPs (which can issue either SYNC or CHKP calls)
v Most batch programs
v Programs that run in a data sharing environment
v JMP applications
You do not need to issue checkpoint calls in:
v Single-mode BMP or MPP programs
v Database load programs
v Programs that access the database in read-only mode, as defined with the
PROCOPT=GO option (during a PSBGEN), and are short enough to restart from
the beginning
v Programs that have exclusive use of the database

Checkpoints in MPPs and transaction-oriented BMPs: The mode type of the
program is specified on the MODE keyword of the TRANSACT macro during IMS
system generation. The modes are single and multiple.
v In single-mode programs
In single mode programs (MODE=SNGL was specified on the TRANSACT
macro during IMS system definition), a Get Unique to the message queue causes
an implicit commit to be performed.
v In multiple-mode programs
In multiple-mode BMPs and MPPs, the only commit points are those that result
from the checkpoint calls that the program issues and from normal program
termination. If the program terminates abnormally and it has not issued

114 Application Programming Planning Guide


checkpoint calls, IMS backs out the program's database updates and cancels the
messages it created since the beginning of the program. If the program has
issued checkpoint calls, IMS backs out the program's changes and cancels the
output messages it has created since the most recent checkpoint.
Consider the following when issuing checkpoint calls in multiple-mode
programs:
– How long it would take to back out and recover that unit of processing. The
program should issue checkpoints frequently enough to make the program
easy to back out and recover.
– How you want the output messages grouped. Checkpoint calls establish how
a multiple-mode program's output messages are grouped. Programs should
issue checkpoint calls frequently enough to avoid building up too many
output messages.
Depending on the database organization, issuing a checkpoint call might reset
your position in the database.
Related Reading: For more information about losing your position when a
checkpoint is issued, see IMS Version 10: Database Administration Guide.

Checkpoints in batch-oriented BMPs: Issuing checkpoint calls in a batch-oriented
BMP is important for several reasons:
v In addition to committing changes to the database and establishing places from
which the program can be restarted, checkpoint calls release resources that IMS
has locked for the program.
v A batch-oriented BMP that uses DEDBs or MSDBs might terminate with abend
U1008 if a SYNC or CHKP call is not issued before the application program
terminates.
v If a batch-oriented BMP does not issue checkpoints frequently enough, it can be
abnormally terminated, or it can cause another application program to be
abnormally terminated by IMS for any of these reasons:
– If a BMP retrieves and updates many database records between checkpoint
calls, it can tie up large portions of the databases and cause long waits for
other programs needing those segments.
Exception: For a BMP with a processing option of GO or exclusive, IMS does
not lock segments for programs. Issuing checkpoint calls releases the
segments that the BMP has locked and makes them available to other
programs.
– The space needed to maintain lock information about the segments that the
program has read and updated exceeds what has been defined for the IMS
system. If a BMP locks too many segments, the amount of storage needed for
the locked segments can exceed the amount of available storage. If this
happens, IMS terminates the program abnormally. You must increase the
program's checkpoint frequency before rerunning the program. The available
storage is specified during IMS system definition.
Related Reading: For more information on specifying storage, see IMS Version
10: System Definition Reference.
You can limit the number of locks for the BMP by using the LOCKMAX=n
parameter on the PSBGEN statement. For example, a specification of
LOCKMAX=5 means the application cannot obtain more than 5000 locks at
any time. The value of n must be between 0 and 255. The default is 0, which
means that no maximum lock limit exists. If the BMP tries to acquire more than the
specified number of locks, IMS terminates the application with abend U3301.
Related Reading: For more information about this abend, see IMS: Messages
and Codes Reference, Volume 3: IMS Abend Codes.
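Because n is specified in units of 1000 locks, the limit is easy to misread. The following Python sketch is illustrative only (lockmax_limit is a helper written for this example, not an IMS API); it converts a LOCKMAX value into the lock ceiling described above:

```python
def lockmax_limit(n):
    """Return the lock ceiling implied by LOCKMAX=n on the PSBGEN statement.

    n is specified in units of 1000 locks; 0 (the default) means that no
    maximum lock limit exists. Helper written for this example only.
    """
    if not 0 <= n <= 255:
        raise ValueError("LOCKMAX must be between 0 and 255")
    if n == 0:
        return None  # no maximum lock limit
    return n * 1000
```

For example, lockmax_limit(5) yields 5000, matching the LOCKMAX=5 case described above.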
Chapter 8. Analyzing IMS application processing requirements 115
Checkpoints in batch programs: Batch programs that update databases should
issue checkpoint calls. The main consideration in deciding how often to take
checkpoints in a batch program is the time required to back out and reprocess the
program after a failure. A general recommendation is to issue one checkpoint call
every 10 or 15 minutes.

If you might need to back out the entire batch program, the program should issue
the checkpoint call at the beginning of the program. IMS backs out the program to
the checkpoint you specify, or to the most recent checkpoint, if you do not specify
a checkpoint. If the database is updated after the beginning of the program and
before the first checkpoint, IMS is not able to back out these database updates.

For a batch program to issue checkpoint calls, it must specify the compatibility
option in its PSB (CMPAT=YES). This generates an I/O PCB for the program,
which IMS uses as an I/O PCB in the checkpoint call.
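For example, the PSBGEN for such a batch program might look like the following fragment (the DBD, segment, and PSB names here are hypothetical; CMPAT=YES is the point of the example):

```
         PCB    TYPE=DB,DBDNAME=ACCTDBD,PROCOPT=A,KEYLEN=12
         SENSEG NAME=ACCOUNT,PARENT=0
         PSBGEN LANG=COBOL,PSBNAME=ACCTBTCH,CMPAT=YES
         END
```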

Another important reason for issuing checkpoint calls in batch programs is that,
although they may currently run in an IMS batch region, they might later need to
access online databases. This would require converting them to BMPs. Issuing
checkpoint calls in a BMP is important for reasons other than recovery—for
example, to release database resources for other programs. So, you should initially
include checkpoints in all batch programs that you write. Although the checkpoint
support might not be needed then, it is easier to incorporate checkpoint calls
initially than to try to fit them in later.

To free database resources for other programs, batch programs that run in a
data-sharing environment should issue checkpoint calls more frequently than those
that do not run in a data-sharing environment.

Specifying checkpoint frequency


You should specify checkpoint frequency in your program so that you can easily
modify it when the frequency needs to be adjusted. You can do this by:
v Using a counter in your program to keep track of elapsed time, and issuing a
checkpoint call after a certain time interval.
v Using a counter to keep track of the number of root segments your program
accesses, and issuing a checkpoint call after a certain number of root segments.
v Using a counter to keep track of the number of updates your program performs,
and issuing a checkpoint call after a certain number of updates.
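The three counting strategies above can be combined in one small helper so that the frequency is easy to adjust later. The following Python sketch is illustrative only: issue_checkpoint stands in for a real CHKP call, and the default thresholds are hypothetical tuning values.

```python
import time

class CheckpointScheduler:
    """Issue a checkpoint when any of three counters passes its threshold.

    Illustrative sketch only: issue_checkpoint stands in for a real CHKP
    call, and the default thresholds are hypothetical tuning values.
    """

    def __init__(self, issue_checkpoint, max_seconds=600,
                 max_roots=500, max_updates=200, clock=time.monotonic):
        self.issue_checkpoint = issue_checkpoint
        self.max_seconds = max_seconds
        self.max_roots = max_roots
        self.max_updates = max_updates
        self.clock = clock
        self._reset()

    def _reset(self):
        # Counters restart from zero after every checkpoint.
        self.started = self.clock()
        self.roots = 0
        self.updates = 0

    def note_root_accessed(self):
        self.roots += 1
        self._maybe_checkpoint()

    def note_update(self):
        self.updates += 1
        self._maybe_checkpoint()

    def _maybe_checkpoint(self):
        if (self.clock() - self.started >= self.max_seconds
                or self.roots >= self.max_roots
                or self.updates >= self.max_updates):
            self.issue_checkpoint()   # in a real program: the CHKP call
            self._reset()
```

Because all three thresholds are checked in one place, adjusting checkpoint frequency later means changing only the constructor arguments.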

Data availability considerations


Your program might be unable to access data in a full-function database. This
section describes the conditions for an unavailable database and the program calls
that allow your program to manage data under these conditions.

Dealing with unavailable data


The conditions that make the database totally unavailable for both read and update
are:
v The /LOCK command for a database was issued.
v The /STOP command for a database was issued.
v The /DBRECOVERY command was issued.
v Authorization for a database failed.

The conditions that make the database available only for read and not for update
are:

v The /DBDUMP command has been issued.
v Database ACCESS value is RD (read).

In addition to unavailability of an entire database, other situations involving unavailability of a limited amount of data can also inhibit program access. One
such example would be a failure situation involving data sharing. The active IMS
system knows which locks were held by a sharing IMS system at the time the
sharing IMS system failed. Although the active IMS system continues to use the
database, it must reject access to the data which the failed IMS system locked upon
failure. This situation occurs for both full-function and DEDB databases.

The two situations where the program might encounter unavailable data are:
v The program makes a call requiring access to a database that was unavailable at
the time the program was scheduled.
v The database was available when the program was scheduled, but limited
amounts of data are unavailable. The current call has attempted to access the
unavailable data.

Regardless of the condition causing the data to be unavailable, the program has
two possible approaches when dealing with unavailable data. The program can be
insensitive or sensitive to data unavailability.
v When the program is insensitive, IMS takes appropriate action when the
program attempts to access unavailable data.
v When the program is sensitive, IMS informs the program that the data it is
attempting to access is not available.

If the program is insensitive to data unavailability and attempts to access unavailable data, IMS aborts the program (3303 pseudo-abend) and backs out any
updates the program has made. The input message that the program was
processing is suspended, and the program is scheduled to process the input
message when the data becomes available. However, if the database is unavailable
because dynamic allocation failed, a call results in an AI (unable to open) status
code.

If the program is sensitive to data unavailability and attempts to access unavailable data, IMS returns a status code indicating that it could not process the call. The
program then takes the appropriate action. A facility exists for the program to
initiate the same action that IMS would have taken if the program had been
insensitive to unavailable data.

IMS does not schedule batch programs if the data that the program can access is
unavailable. If the batch program is using block-level data sharing, it might
encounter unavailable data if the sharing system fails and the batch system
attempts to access data that was updated but not committed by the failed system.

The following conditions alone do not cause a batch program to fail during
initialization:
v A PCB refers to a HALDB.
v The use of DBRC is suppressed.
However, without DBRC, a database call using a PCB for a HALDB is not allowed.
If the program is sensitive to unavailable data, such a call results in the status code
BA; otherwise, such a call results in message DFS3303I, followed by ABENDU3303.

Scheduling and accessing unavailable databases
By using the INIT, INQY, SETS, SETU, and ROLS calls, the program can manage a data
environment where the program is scheduled with unavailable databases.

The INIT call informs IMS that the program is sensitive to unavailable data and
can accept the status codes that are issued when the program attempts to access
such data. The INIT call can also be used to determine the data availability for
each PCB.

The INQY call is operable in both batch and online IMS environments. IMS
application programs can use the INQY call to request information regarding output
destination, session status, the current execution environment, the availability of
databases, and the PCB address based on the PCBNAME. The INQY call is only
supported by way of the AIB interface (AIBTDLI or CEETDLI using the AIB rather
than the PCB address).

The SETS, SETU, and ROLS calls enable the application to define multiple points at
which to preserve the state of full-function (except HSAM) databases and message
activity. The application can then return to these points at a later time. By issuing a
SETS or SETU call before initiating a set of DL/I calls to perform a function, the
program can later issue the ROLS call if it cannot complete a function due to data
unavailability.

The ROLS call allows the program to roll back its IMS full-function database activity
to the state that it was in prior to a SETS or SETU call being issued. If the PSB
contains an MSDB or a DEDB, the SETS and ROLS (with token) calls are invalid. Use
the SETU call instead of the SETS call if the PSB contains a DEDB, MSDB, or GSAM
PCB.
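Conceptually, SETS with a token and ROLS behave like a stack of named savepoints: rolling back to a token also discards any savepoints set after it. The following Python sketch models only that behavior; it is not the DL/I interface.

```python
class SavepointStack:
    """Hypothetical model of SETS/ROLS semantics, not the DL/I interface.

    sets(token) records the current (uncommitted) state under a token;
    rols(token) restores that state and discards the token together with
    any tokens set after it, mirroring the behavior described above.
    """

    def __init__(self, state):
        self.state = dict(state)   # stands in for uncommitted DB changes
        self._points = []          # (token, snapshot), oldest first

    def sets(self, token):
        self._points.append((token, dict(self.state)))

    def apply_update(self, key, value):
        self.state[key] = value

    def rols(self, token):
        for i, (tok, snapshot) in enumerate(self._points):
            if tok == token:
                self.state = snapshot
                del self._points[i:]   # later savepoints are discarded too
                return
        raise KeyError("unknown SETS token: %r" % (token,))
```

After a ROLS to an earlier token, a later token no longer exists, which is why a function should bracket its own updates with its own SETS call.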

Related Reading: For more information on using the SETS and SETU calls with the
ROLS call, see IMS Version 10: Application Programming Guide.

The ROLS call can also be used to undo all update activity (database and messages)
since the last commit point and to place the current input message on the suspend
queue for later processing. This action is initiated by issuing the ROLS call without
a token or I/O area.

Restriction: With DB2 for z/OS, you cannot use ROLS (with a token) or SETS.

Use of STAE or ESTAE and SPIE in IMS programs


IMS uses STAE or ESTAE routines in the control region, the dependent (MPP, IFP,
BMP) regions, and the batch regions. In the control region, STAE or ESTAE
routines ensure that database logging and various resource cleanup functions are
complete. In the dependent region, STAE or ESTAE routines are used to notify the
control region of any abnormal termination of the application program or the
dependent region itself. If the control region is not notified of the dependent region
termination, resources are not properly released and normal checkpoint shutdown
might be prevented.

In the batch region, STAE or ESTAE routines ensure that database logging and
various resource cleanup functions are complete. If the batch region is not notified
of the application program termination, resources might not be properly released.

Two important aspects of the STAE or ESTAE facility are that:

v IMS relies on its STAE or ESTAE facility to ensure database integrity and
resource control.
v The STAE or ESTAE facility is also available to the application program.
Because of these two factors, be sure you clearly understand the relationship
between the program and the STAE or ESTAE facility.

Generally, do not use the STAE or ESTAE facility in your application program.
However, if you believe that the STAE or ESTAE facility is required, you must
observe the following basic rules:
v When the environment supports STAE or ESTAE processing, the application
program STAE or ESTAE routines always get control before the IMS STAE or
ESTAE routines. Therefore, you must ensure that the IMS STAE or ESTAE exit
routines receive control by observing the following procedures in your
application program:
– Establish the STAE or ESTAE routine only once and always before the first
DL/I call.
– When using the STAE or ESTAE facility, the application program should not
alter the IMS abend code.
– Do not use the RETRY option when exiting from the STAE or ESTAE routine.
Instead, return a CONTINUE-WITH-TERMINATION indicator at the end of
the STAE or ESTAE processing. If your application program specifies the
RETRY option, be aware that IMS STAE or ESTAE exit routines will not get
control to perform cleanup. Therefore, system and database integrity might be
compromised.
– For PL/I for MVS and VM use of STAE and SPIE, see the description of IMS
considerations in Enterprise PL/I for z/OS and OS/390 Programming Guide.
– For PL/I for MVS and VM, COBOL for z/OS, and C/C++ for MVS/ESA, if
you are using the AIBTDLI interface in a non-Language Environment enabled
system, you must specify NOSTAE and NOSPIE. However, in Language
Environment® for MVS and VM Version 1.2 or later enabled environment, the
NOSTAE and NOSPIE restriction is removed.
v The application program STAE or ESTAE exit routine must not issue DL/I calls
(DB or TM) because the original abend might have been caused by a problem
between the application and IMS. A problem between the application and IMS
could result in recursive entry to STAE or ESTAE with potential loss of database
integrity, or in problems taking a checkpoint. This also could result in a hang
condition or an ABENDU0069 during termination.
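The ordering rule above can be pictured as a LIFO chain of exit routines: the most recently established exit (the application's) runs first, and a RETRY prevents every older exit, including IMS cleanup, from running. The following Python sketch is a hypothetical model of that chain, not of the actual z/OS STAE/ESTAE services.

```python
def run_abend_exits(exits, abend_code):
    """Drive a LIFO chain of STAE/ESTAE-style exit routines.

    Hypothetical model of the ordering described above: exits is listed
    oldest-first (IMS establishes its exit before the application does).
    Each routine returns "CONTINUE" to pass control to the next, older
    exit, or "RETRY" to resume the program, in which case the older
    exits (including IMS cleanup) never run.
    """
    ran = []
    for exit_routine in reversed(exits):   # most recently established first
        ran.append(exit_routine.__name__)
        if exit_routine(abend_code) == "RETRY":
            break                          # older exits are skipped
    return ran
```

The model shows why the text requires a continue-with-termination return: an application exit that retries leaves the IMS exit, and its database cleanup, unexecuted.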

Dynamic allocation for IMS databases


Use the dynamic allocation function to specify the JCL information for IMS
databases in a library instead of in the JCL of each batch or online job.

Related Reading: For additional information on the definitions for dynamic allocation, see the description of the DFSMDA macro in IMS Version 10: System Definition Reference.

If you use dynamic allocation, do not include JCL DD statements for any database
data sets that have been defined for dynamic allocation. Check with the DBA or
comparable specialist to determine which databases have been defined for dynamic
allocation.
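For reference, the DBA defines such databases with DFSMDA statements similar to the following fragment (the database name, DDNAME, and data set name here are hypothetical):

```
         DFSMDA TYPE=INITIAL
         DFSMDA TYPE=DATABASE,DBNAME=ACCTDBD
         DFSMDA TYPE=DATASET,DDNAME=ACCTDD1,DSNAME=IMS.ACCTDB.DATA,DISP=SHR
         DFSMDA TYPE=FINAL
         END
```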

Chapter 9. Analyzing CICS application processing
requirements
This section provides information for writing application programs in a CICS
environment. See Chapter 8, “Analyzing IMS application processing requirements,”
on page 99 for the corresponding information on IMS application programming.
This section explains the kinds of programs CICS supports and the requirements that each satisfies.

Subsections:
v “Defining CICS application requirements”
v “Accessing databases with your CICS application program” on page 123
v “Writing a CICS program to access IMS databases” on page 124
v “Using data sharing for your CICS program” on page 128
v “Scheduling and terminating a PSB (CICS online programs only)” on page 129
v “Linking and passing control to other programs (CICS online programs only)”
on page 129
v “How CICS distributed transactions access IMS” on page 130
v “Maximizing the performance of your CICS system” on page 130
v “Programming integrity and database recovery considerations for your CICS
program” on page 131
v “Data availability considerations for your CICS program” on page 135
v “Use of STAE or ESTAE and SPIE in IMS batch programs” on page 137
v “Dynamic allocation for IMS databases” on page 138

| Defining CICS application requirements


One of the steps of application design is to decide how the business processes, or
tasks can be best grouped into a set of programs that will efficiently perform the
required processing. Some of the considerations in analyzing processing
requirements are:
When the task must be performed
– Will it be scheduled unpredictably (for example on terminal demand) or
periodically (for example, weekly)?
How the program that performs the task is executed
– Will it be executed online, where response time is more important, or by
batch job submission, where a slower response time is acceptable?
The consistency of the processing components
– Does the action that the program is to perform involve more than one type of
program logic? For example, does it involve mostly retrievals, and only one
or two updates? If so, you should consider separating the updates into a
separate program.
– Does this action involve several large groups of data? If it does, it might be
more efficient to separate the programs by the data they access.
Any special requirements about the data or processing
Security Should access to the program be restricted?

Recovery Are there special recovery considerations in the program's
processing?
Integrity Do other departments use the same data?

Answers to questions like these can help you decide on the number of application
programs that the processing will require, and on the types of programs that
perform the processing most efficiently. Although rules dealing with how many
programs can most efficiently do the required processing do not exist, here are
some suggestions:
v As you look at each programming task, examine the data and processing that
each task involves. If a task requires different types of processing and has
different time limitations (for example, weekly as opposed to monthly), that task
may be more efficiently performed by several programs.
v As you define each program, it is a good idea for maintenance and recovery
reasons to keep programs as simple as possible. The simpler a program is—the
less it does—the easier it is to maintain, and to restart after a program or system
failure. The same is true with data availability: the less data that is accessed, the more likely the data is to be available.
Similarly, if the data that the application requires is physically in one place, it
might be more efficient to have one program do more of the processing than
usual. These are considerations that depend on the processing and the data of
each application.
v Documenting each of the user tasks is helpful during the design process, and in
the future when others will work with your application. Be sure you are aware
of the standards in this area. The kind of information that is typically kept is
when the task is to be executed, a functional description, and requirements for
maintenance, security, and recovery.
Example: For the Current Roster process described under “Listing data
elements” on page 46, you might record the information shown in Figure 51.
How frequently the program is run is determined by the number of classes (20)
for which the Ed Center will print current rosters each week.

USER TASK DESCRIPTION

NAME: Current Roster


ENVIRONMENT: Batch FREQUENCY: 20 per week

INVOKING EVENT OR DOCUMENT: Time period (one week)

REQUIRED RESPONSE TIME: 24 hours

FUNCTION DESCRIPTION: Print weekly, a current student roster, in student
number sequence for each class offered at the Education Center.

MAINTENANCE: Included in Education DB maintenance.

SECURITY: None.

RECOVERY: After a failure, the ability to start printing a particular
class roster starting from a particular sequential student number.

Figure 51. Current roster task description

Accessing databases with your CICS application program
When designing your program, consider the type of data it must access. The type
of data depends on the operating environment. The data from IMS and DB2 for
z/OS databases, and z/OS files, that is available to CICS online and IMS batch
programs is shown in Table 28. Usage notes are also included.
Table 28. The data that your CICS program can access

Type of program    IMS databases    DB2 for z/OS databases    z/OS files
CICS online        Yes (1)          Yes (2)                   Yes (3)
DB batch           Yes              Yes (2)                   Yes

Notes:
1. Except for Generalized Sequential Access Method (GSAM) databases. GSAM enables
batch programs to access a sequential z/OS data set as a simple database.
2. IMS does not participate in the call process.
3. Access through CICS file control or transient data services.

Also, consider the type of database your program must access. As shown in
Table 29, the type of program you can write and database that can be accessed
depends on the operating environment. Table 29 also includes usage notes.
Table 29. Program and database options in the CICS environments

Environment (1)  Type of program you can write  Type of database that can be accessed
DB batch         DB batch                       DB2 for z/OS (2), DL/I full-function,
                                                GSAM, z/OS files
DBCTL            BMP                            DB2 for z/OS, DEDBs, full-function,
                                                GSAM, z/OS files
                 CICS online                    DB2 for z/OS (2), DEDBs, full-function,
                                                z/OS files (access through CICS file
                                                control or transient data services)
Notes:
| 1. A CICS environment, or CICS remote DL/I environment also exists and is also referred
| to as function shipping. In this environment, a CICS system supports applications that
| issue DL/I calls but the CICS system does not service the requests itself.
| The CICS environment “function ships” the DL/I calls to another CICS system that is
| using DBCTL. For more information on remote DL/I, see CICS IMS Database Control
| Guide.
2. IMS does not participate in the call process.

The types of databases that can be accessed are:

Full-Function Databases
Full-function databases are hierarchic databases that are accessed through Data
Language I (DL/I). DL/I calls enable application programs to retrieve, replace,
delete, and add segments to full-function databases. CICS online and BMP
programs can access the same database concurrently (if participating in IMS
data sharing); an IMS batch program must have exclusive access to the
database (if not participating in IMS data sharing). See “Using data sharing for
your CICS program” on page 128 for more details about when to use this
environment.
All types of programs (batch, BMPs, and online) can access full-function
databases.
Fast Path DEDBs
Data entry databases (DEDBs) are hierarchic databases that provide efficient storage for, and access to, large volumes of detailed data. In the DBCTL environment, CICS online and
BMP programs can access DEDBs.
DB2 for z/OS Databases
DB2 for z/OS databases are relational databases. Relational databases are
represented to application programs and users as tables and are processed
using a relational data language called Structured Query Language (SQL). DB2
for z/OS databases can be processed by CICS online transactions, and by IMS
batch and BMP programs.
Related Reading: For information on processing DB2 for z/OS databases, see
DB2 for z/OS and OS/390 Application Programming and SQL Guide.
GSAM Databases
Generalized Sequential Access Method (GSAM) is an access method that
enables BMPs and batch programs to access a “flat” sequential z/OS data set as
a simple database. A GSAM database can be accessed by z/OS or CICS.
z/OS Files
CICS online and IMS batch programs can access z/OS files for their input,
processing, or output. Batch programs can access z/OS files directly; online
programs must access them through CICS file control or transient data services.

Writing a CICS program to access IMS databases


This section explains the following kinds of application programs that CICS users
can write to process IMS databases:
v CICS online programs
v IMS batch programs
v IMS batch message processing (BMP) programs that are batch-oriented
As shown in Table 29 on page 123, the types of programs you can use depend on
whether you are running in the DBCTL environment. Within the different
environments, the type of program you write depends on the processing your
application requires. Each type of program answers different application
requirements.

Writing a CICS online program


These topics describe a CICS online program and can help you decide if an online
program is appropriate for your application.

Data that a CICS online program can access
CICS online programs run in the DBCTL environment and can access IMS
full-function databases, Fast Path DEDBs, DB2 for z/OS databases, and z/OS files.

Online programs that access IMS databases are executed in the same way as other
CICS programs.

Using a CICS online program


An online program runs under the control of CICS, and it accesses resources
concurrently with other online programs. Some of the application requirements
online programs can answer are:
v Information in the database must be available to many users.
v Program needs to communicate with terminals and other programs.
v Programs must be available to users at remote terminals.
v Response time is important.

The structure of an online program, and the way it receives status information,
depend on whether it is a call- or command-level program. However, both
command- and call-level online programs:
v Schedule a PSB (for CICS online programs). A PSB is automatically scheduled
for batch or BMP programs.
v Issue either commands or calls to access the database. Online programs cannot
mix commands and calls in one logical unit of work (LUW).
v Optionally, terminate a PSB for CICS online programs.
v Issue an EXEC CICS RETURN statement when they have finished their processing.
This statement returns control to the linking program. When the highest-level
program issues the RETURN statement, CICS regains control and terminates the
PSB if it has not yet been terminated.
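The protocol in the list above can be modeled as a small state machine. The following Python sketch is hypothetical (the real interfaces are the PCB or SCHD request, the TERM request, and EXEC CICS RETURN); it illustrates only the ordering rules.

```python
class OnlinePsbSession:
    """Hypothetical model of the PSB protocol for a CICS online program.

    A PSB must be scheduled before any DL/I call, terminating it is
    optional, and CICS terminates it at RETURN if the program has not
    already done so.  This sketch only illustrates the ordering rules.
    """

    def __init__(self):
        self.scheduled = False
        self.calls = 0

    def schedule_psb(self, name):      # the PCB call or SCHD command
        if self.scheduled:
            raise RuntimeError("PSB already scheduled in this task")
        self.psb_name = name
        self.scheduled = True

    def dli_call(self, func):          # e.g. a GU, REPL, or ISRT call
        if not self.scheduled:
            raise RuntimeError("DL/I call issued before the PSB was scheduled")
        self.calls += 1

    def terminate_psb(self):           # the TERM request (optional)
        self.scheduled = False

    def cics_return(self):             # EXEC CICS RETURN
        if self.scheduled:
            self.terminate_psb()       # CICS terminates the PSB itself
```

The model makes the two failure modes visible: a DL/I call before scheduling is rejected, and a PSB left scheduled at RETURN is cleaned up by CICS rather than leaking.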

Because an online application program can be used concurrently by several tasks, it must be quasi-reentrant.

An online program in the DBCTL environment can use many IMS system service
requests.

Related Reading:
v For more information on writing these types of programs, see
– IMS Version 10: Application Programming Guide or
– IMS Version 10: Application Programming API Reference
v For more details about programming techniques and restrictions, see CICS
Application Programming Reference.
v For a summary of the calls and commands an online program can issue, see
– IMS Version 10: Application Programming Guide or
– IMS Version 10: Application Programming API Reference

DL/I database or system service requests must refer to one of the program
communication blocks (PCBs) from the list of PCBs passed to your program by
IMS. The PCB that must be used for making system service requests is called the
I/O PCB. When present, it is the first PCB in the list of PCBs.

For an online program in the DBCTL environment, the I/O PCB is optional. To use
the I/O PCB, you must indicate this in the application program when it schedules
the PSB.

| Before you run your program, use the IMS ACBGEN utility to convert the program
| specification blocks (PSBs) and database descriptions (DBDs) to the internal control
| block format. PSBs describe the application program's characteristics and use of
| data and terminals. DBDs describe a database's physical and logical characteristics.

Related Reading: For more information on performing an ACBGEN and a PSBGEN, see IMS Version 10: System Utilities Reference.

Because an online program shares a database with other online programs, it may
affect the performance of your online system. For more information on what you
can do to minimize the effect your program has on performance, see “Maximizing
the performance of your CICS system” on page 130.

Writing an IMS batch program


The topics describe a batch program and can help you decide if this program is
appropriate for your application.

Data that a batch program can access


A batch program can access DL/I full-function, DB2 for z/OS, and GSAM
databases, and z/OS files. A batch program cannot access DEDBs or MSDBs, and it
can run in the DBCTL environment.

Using a batch program


Batch programs typically run longer than online programs. If it is not participating
in IMS data sharing, a batch program runs by itself and does not compete with
other programs for database resources. Use a batch program to do a large number
of database updates or when you want to print a report. Batch programs:
v Typically produce a large amount of output—for example, reports.
v Are not executed by another program or user. They are usually scheduled at
specific time intervals (for example, weekly) and are started with JCL.
v Produce output that is not needed right away. The response time for batch
output is not as important as it usually is for online programs.
The structure of a batch program and the way it receives status information
depend on whether it is a command- or call-level program.

Related Reading: For more information on this topic, see:


v IMS Version 10: Application Programming Guide
v IMS Version 10: Application Programming API Reference

Unlike online programs, batch programs do not schedule or terminate PSBs. This is
done automatically.

Batch programs can issue system service requests (such as checkpoint, restart, and
rollback) to perform functions such as dynamically backing out database changes
made by your program.

Related Reading: For a summary of the commands and calls that you can use in a
batch program, see:
v IMS Version 10: Application Programming Guide

v IMS Version 10: Application Programming API Reference

When performing a PSBGEN, you must define the language of the program that
will schedule the PSB. For your program to be able to successfully issue certain
system service requests, such as a checkpoint or a rollback request, an I/O PCB
must be available for your program. To obtain an I/O PCB, specify CMPAT=YES in
the PSBGEN statement. Make all batch programs sensitive to the I/O PCB so that
checkpoints are easily introduced. Design all batch programs with checkpoint and
restart in mind. Although the checkpoint support may not be needed initially, it is
easier to incorporate checkpoints initially than to try to fit them in later. With
checkpoints, it will be easier to convert batch programs to BMP programs or to
batch programs that use data sharing.

Related Reading: For more information about obtaining an I/O PCB, see
“Requesting an I/O PCB in batch programs” on page 132. For information on how
to perform a PSBGEN, see IMS Version 10: System Utilities Reference.

Converting a batch program to a batch-oriented BMP


If you are running in the DBCTL environment, you can convert a batch program to
a batch-oriented BMP. Conversion to a BMP can be advantageous for these reasons:
v Logging is to the IMS log, which means that multiple logs are unnecessary.
v Automatic backout is available.
v Restart can be done automatically from the last checkpoint without changing the
JCL.
v Concurrent access to databases is possible. If you currently run your batch
programs offline, converting them to BMPs enables you to run them with the
online system, instead of having to wait until the online system is not running.
Running a batch program as a BMP can also keep the data more current.
v BMPs can access DEDBs.
v You can have a program that runs as either a batch or BMP program. However,
because batch programs require fewer checkpoint calls than BMPs (except when
data sharing), code checkpoint calls in a way that makes them easy to modify.
Also, if a program fails while running in a batch region, you must restart it in a
batch region. If a program fails in a BMP region, you must restart it in a BMP
region.
v If you are running sysplex data sharing, use of batch-oriented BMPs helps you
stay within the sysplex data sharing limit of 32 connections for each OSAM or
VSAM structure.

Requirements for converting a batch program to a BMP are:


v A BMP must have an I/O PCB. You can obtain an I/O PCB in batch by
specifying the compatibility option in the program specification block (PSB) for
the program.
Related Reading: For more information on the compatibility option in the PSB,
see IMS Version 10: System Utilities Reference.
v BMPs should issue checkpoint calls more frequently than batch programs.
However, batch programs in a data-sharing environment must also issue
checkpoint calls frequently.

Writing a batch-oriented BMP program


These topics describe a batch-oriented BMP program and can help you decide if
this program is appropriate for your application.

Data that a batch-oriented BMP can access
Batch-oriented batch message processing (BMP) programs can access full-function,
DEDB, DB2 for z/OS, and GSAM databases and z/OS files. Batch-oriented BMPs
can be run only in a DBCTL environment.

Using a batch-oriented BMP


A batch-oriented BMP performs batch processing online. A batch-oriented BMP can
be simply a batch program that runs online. You can even run the same program
as a BMP or as a batch program.

Recommendation: If the program performs a large number of database updates without issuing checkpoint calls, it may be more efficient to run it as a batch program so that it does not degrade the performance of the online system.

To use batch-oriented BMPs most efficiently, avoid a large amount of batch-type
processing online. If you have a BMP that performs time-consuming processing
such as report writing and database scanning, schedule it during non-peak hours
of processing.

Because BMPs can degrade response times, carefully consider the response time
requirements as you decide on the extent to which you will use batch message
processing. You should examine the trade-offs in using BMPs and use them
accordingly.

Recovering a batch-oriented BMP


Issuing checkpoint calls is an important part of batch-oriented BMP processing,
because commit points do not occur automatically as they do in some other types
of programs.

Unlike most batch programs, a BMP can share resources with CICS online
programs using DBCTL. In addition to committing database changes and
providing places from which to restart (as for a batch program), checkpoint calls
release resources locked for the program. For more information on issuing
checkpoint calls, see “Checkpoints in batch-oriented BMPs” on page 115.

If a batch-oriented BMP fails, IMS backs out the database updates the program has
made since the last commit point. You must restart the program with JCL. If the
BMP processes z/OS files, you must provide your own method of taking
checkpoints and restarting.

Using data sharing for your CICS program


With IMS data sharing, CICS online and BMP programs can access the same DL/I
database concurrently.

Batch programs in a data-sharing environment can access databases used by other
batch programs, and by CICS and IMS online programs. With data sharing, you
can share data directly and your program's requests need not go through a mirror
transaction.

Related Reading: For more information on sharing a database with an IMS system,
see IMS Version 10: System Administration Guide.

128 Application Programming Planning Guide


Scheduling and terminating a PSB (CICS online programs only)
Before your online program issues any DL/I calls, it must indicate to IMS its intent
to use a particular PSB by issuing either a PCB call or a SCHD command. In addition
to indicating which PSB your program will use, the PCB call obtains the address of
the PCBs in the PSB. When you no longer need a PSB, you can terminate it using
the TERM request. The rest of this section describes the use of the TERM request and
how it can affect your system.

In a CICS online program, you use a PCB call or SCHD command (for
command-level programs) to obtain the PSB for your program. Because CICS
releases the PSB your program uses when the transaction ends, your program need
not explicitly terminate the PSB. Only use a terminate request if you want to:
v Use a different PSB
v Commit all the database updates and establish a logical unit of work for backing
out updates
v Free IMS resources for use by other CICS tasks

A terminate request causes a CICS sync point, and a CICS sync point terminates
the PSB. For more information about CICS recovery concepts, see the appropriate
CICS publication.

Do not use terminate requests for other reasons because:


v A terminate request forces a CICS sync point. This sync point releases all
recoverable resources and IMS database resources that were enqueued for this
task.
If the program continues to update other CICS resources after the terminate
request and then terminates abnormally, only those resources that were updated
after the terminate request are backed out. Any IMS changes made by the
program are not backed out.
v IMS lock management detects deadlocks that occur when two transactions are
each waiting for segments held by the other.
When a deadlock is detected, one transaction is abnormally terminated.
Database changes are backed out to the last TERM request. If a TERM request or
CICS sync point was issued prior to the deadlock, CICS does not restart the
transaction.
Related Reading: For a complete description of transaction restart
considerations, see CICS Recovery and Restart Guide.
v Issuing a terminate request causes additional logging.
v If terminal output requests are issued after a terminate request and the
transaction then fails, the terminal operator does not receive the message.
The terminal operator may assume that the entire transaction failed, and reenter
the input, thus repeating the updates that were made before the terminate
request. These updates were not backed out.

Linking and passing control to other programs (CICS online programs only)
Use CICS to link your program to other programs without losing access to the
facilities acquired in the linking program, as in the following examples:
v You could schedule a PSB and then link to another program using a LINK
command. On return from that program, the PSB is still scheduled.
v Similarly, you could pass control to another program using the XCTL command,
and the PSB remains scheduled until that program issues an EXEC CICS
RETURN statement. However, when you pass control to another program using
XCTL, the working storage of the program passing control is lost. If you want to
retain the working storage for use by the program being linked to, you must
pass the information in the COMMAREA.

Recommendation: To simplify your work, instead of linking to another program,
you can issue all DL/I requests from one program module. This helps to keep the
programming simple and easy to maintain.

Terminating a PSB or issuing a sync point affects the linking program. For
example, a terminate request or sync point that is issued in the program that was
linked causes the release of CICS resources enqueued in the linking program.

How CICS distributed transactions access IMS


CICS can divide a single, logical unit of work into separate CICS transactions and
coordinate the sync point globally. If such CICS transactions access DBCTL, locking
and buffer management issues might occur. To IMS, the transactions are separate
units of work, on different DBCTL threads, and they do not share locks or buffers.
For example, if a global transaction runs, obtains a database lock, and reaches the
commit point, CICS does not process the synchronization point until the other
transactions in the CICS unit of recovery (UOR) are ready to commit. If a second
transaction in the same CICS UOR requests the same lock as that held by the first
transaction, the second transaction is held in a lock wait state. The first transaction
cannot complete the sync point and release the lock until the second transaction
also reaches the commit point, but this cannot happen because the second
transaction is in a lock wait state. You must ensure that this type of collision does
not occur with CICS distributed transactions that access IMS.

Maximizing the performance of your CICS system


When you write programs that share data with other programs (for example, a
program that will participate in IMS data sharing or a BMP), be aware of how
your program affects the performance of the online system. This section explains
some things you can do to minimize the effect your program has on that
performance.

A BMP program, in particular, can affect the performance of the CICS online
transactions. This is because BMP programs usually make a larger number of
database updates than CICS online transactions, and a BMP program is more likely
to hold segments that CICS online programs need. Limit the number of segments
held by a BMP program, so CICS online programs need not wait to acquire them.

One way to limit the number of segments held by a BMP or batch program that
participates in IMS data sharing is to issue checkpoint requests in your program to
commit database changes and release segments held by the program. When
deciding how often to issue checkpoint requests, you can use one or more of the
following techniques:
v Divide the program into small logical units of work, and issue a checkpoint call
at the end of each unit.
v Issue a checkpoint call after a certain number of DL/I requests have been issued,
or after a certain number of transactions are processed.

In CICS online programs, release segments for use by other transactions to
maximize the performance of your online system. (Ordinarily, database changes are
committed and segments are released only when control is returned to CICS.) To
more quickly free resources for use by other transactions, you can issue a TERM
request to terminate the PSB. However, less processing overhead generally occurs
if the PSB is terminated when control is returned to CICS.

Programming integrity and database recovery considerations for your CICS program
This section explains how IMS and CICS protect data integrity for CICS online
programs, and how you can plan ahead for recovering batch and BMP programs.

How IMS protects data integrity for your program (CICS online
programs)
IMS protects the integrity of the database for programs that share data by:
v Preventing other application programs with update capability from accessing
any segments in the database record your program is processing, until your
program finishes with that record and moves to a new database record in the
same database.
v Preventing other application programs from accessing segments that your
program deletes, replaces, or inserts, until your program reaches a sync point.
When your program reaches a sync point, the changes your program has made
to the database become permanent, and the changed data becomes available to
other application programs.
Exception: If PROCOPT=GO has been defined during PSBGEN for your
program, your program can access segments that have been updated but not
committed by another program.
v Backing out database updates made by an application program that terminates
abnormally.

You may also want to protect the data your program accesses by retaining
segments for the sole use of your program until your program reaches a sync
point—even if you do not update the segments. (Ordinarily, if you do not update
the segments, IMS releases them when your program moves to a new database
record.) You can use the Q command code to reserve segments for the exclusive
use of your program. You should use this option only when necessary because it
makes data unavailable to other programs and can have an impact on
performance.

Recovering databases accessed by batch and BMP programs


This section describes the planning you must do for recovering databases accessed
by batch or BMP programs. CICS recovers databases accessed by CICS online
programs in the same way it handles other recoverable CICS resources. For
example, if an IMS transaction terminates abnormally, CICS and IMS back out all
database updates to the last sync point.

For batch or BMP programs, do the following:
v Take checkpoints in your program to commit database changes and provide
places from which your program can be restarted.
v Provide the code for or issue a request to restart your program.
You may also want to back out the database changes that have been made by a
batch program that has not yet committed these changes.

To perform these tasks, you use system service calls, described in more detail in
the appropriate application programming information for your environment.

Requesting an I/O PCB in batch programs


For your program to successfully issue any system service request, an I/O PCB
must have been previously requested. See IMS Version 10: Application Programming
Guide for details on how to request an I/O PCB in your program.

Taking checkpoints in batch and BMP programs


Taking checkpoints in batch and BMP programs is important for two reasons:
Recovery
Checkpoints establish places in your program from which your program could
be restarted, in the event of a program or system failure. If your program
abnormally terminates after issuing a checkpoint request, database changes will
be backed out to the point at which the checkpoint request was issued.
Integrity
Checkpoints also commit the changes your program has made to the database.

In addition to providing places from which to restart your program and
committing database changes, issuing checkpoint calls in a BMP program or in a
program participating in IMS data sharing releases database segments for use by
other programs.

When a batch or BMP program issues a checkpoint request, IMS writes a record
containing a checkpoint ID to the IMS/ESA® system log.

When your application program reaches a point during its execution where you
want to make sure that all changes made to that point have been physically
entered in the database, issue a checkpoint request. If some condition causes your
program to fail before its execution is complete, the database must be restored to
its original state. The changes made to the database must be backed out so that the
database is not left in a partially updated condition for access by other application
programs.

If your program runs a long time, you can reduce the number of changes that
must be backed out by taking checkpoints in your program. Then, if your program
terminates abnormally, only the database updates that occurred after the
checkpoint must be backed out. You can also restart the program from the point at
which you issued the checkpoint request, instead of having to restart it from the
beginning.

Issuing a checkpoint call cancels your position in the database.

Issue a checkpoint call just before issuing a Get Unique call, which reestablishes
your position in the database record after the checkpoint is taken.

The kinds of checkpoints you can use: The two kinds of checkpoint calls are
basic and symbolic. Both kinds commit your program's changes to the database
and establish places from which your program can be restarted.

Batch and BMP programs can issue basic checkpoint calls using the CHKP call.
When you use basic checkpoint calls, you must provide the code for restarting the
program after an abnormal termination.

Batch and BMP programs can also issue symbolic checkpoint calls. You can issue a
symbolic checkpoint call by using the CHKP call. Like the basic checkpoint call, the
symbolic checkpoint call commits changes to the database and establishes places
from which the program can be restarted. In addition, the symbolic checkpoint call:
v Works with the Extended Restart call to simplify program restart and recovery.
v Lets you specify as many as seven data areas in the program to be checkpointed.
When you restart the program, the restart call restores these areas to the way
they were when the program terminated abnormally.

Specifying a checkpoint ID: Each checkpoint call your program issues must have
an identification, or ID. Checkpoint IDs must be 8 bytes in length and should
contain printable EBCDIC characters.

When you want to restart your program, you can supply the ID of the checkpoint
from which you want the program to be started. This ID is important because
when your program is restarted, IMS then searches for checkpoint information
with an ID matching the one you have supplied. The first matching ID that IMS
encounters becomes the restart point for your program. This means that checkpoint
IDs must be unique both within each application program and among application
programs. If checkpoint IDs are not unique, you cannot be sure that IMS will
restart your program from the checkpoint you specified.

One way to make sure that checkpoint IDs are unique within and among programs
is to construct IDs in the following order:
v Three bytes of information that uniquely identifies your program.
v Five bytes of information that serves as the ID within the program, for example,
a value that is increased by 1 for each checkpoint command or call, or a portion
of the system time obtained at program start by issuing the TIME macro.
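The ID construction described above can be sketched as follows. This Python sketch is illustrative only; the function name and the zero-padded decimal counter layout are assumptions that satisfy the stated guidelines (8 printable characters: a 3-character program prefix plus a 5-character per-program sequence).

```python
def make_checkpoint_id(program_prefix: str, sequence: int) -> str:
    """Build an 8-character checkpoint ID: a 3-character program
    prefix (unique to the program) followed by a 5-digit sequence
    number that increases with each checkpoint. All characters are
    printable, as the guidelines require."""
    if len(program_prefix) != 3:
        raise ValueError("program prefix must be exactly 3 characters")
    if not 0 <= sequence <= 99999:
        raise ValueError("sequence must fit in 5 digits")
    return f"{program_prefix}{sequence:05d}"

# IDs for a hypothetical program "PAY" stay unique as the counter grows.
print(make_checkpoint_id("PAY", 1))    # PAY00001
print(make_checkpoint_id("PAY", 147))  # PAY00147
```

Using a portion of the system time as the 5-character part, as the text also suggests, would keep IDs unique across reruns of the same program.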

Specifying checkpoint frequency: To determine the frequency of checkpoint
requests, you must consider the type of program and its performance
characteristics.

In batch programs: When deciding how often to issue checkpoint requests in a
batch program, you should consider the time required to back out and reprocess
the program after a failure. For example, if you anticipate that the processing your
program performs will take a long time to back out, you should establish
checkpoints more frequently.

If you might back out of the entire program, issue the checkpoint request at the
very beginning of the program. IMS backs out the database updates to the
checkpoint you specify. If the database is updated after the beginning of the
program and before the first checkpoint, IMS is not able to back out these database
updates.

In a data-sharing environment, also consider the impact of sharing resources with
other programs on your online system. You should issue checkpoint calls more
frequently in a batch program that shares data with online programs, to minimize
resource contention.

It is a good idea to design all batch programs with checkpoint and restart in mind.
Although the checkpoint support may not be needed initially, it is easier to
incorporate checkpoint calls initially than to try to fit them in later. If the
checkpoint calls are incorporated, it is easier to convert batch programs to BMP
programs or to batch programs that use data sharing.

In BMP programs: When deciding how often to issue checkpoint requests in a
BMP program, consider the performance of your CICS online system. Because
these programs share resources with CICS online transactions, issue checkpoint
requests to release segments so CICS online programs need not wait to acquire
them. “Maximizing the performance of your CICS system” on page 130 explains
this in more detail.

Printing checkpoint log records: You can print checkpoint log records by using
the IMS File Select and Formatting Print Program (DFSERA10). With this utility,
you can select and print log records based on their type, the data they contain, or
their sequential positions in the data set. Checkpoint records are type 18 log
records. IMS Version 10: System Utilities Reference describes this program.

Backing out database changes


If your program terminates abnormally, the database must be restored to its
previous state and uncommitted changes must be backed out. Changes made by a
BMP or CICS online program are automatically backed out. Database changes
made by a batch program might or might not be backed out, depending on
whether your system log is on DASD.

For a batch program: What happens when a batch program terminates
abnormally and how you recover the database depend on the storage medium for
the system log. You can specify that the system log is to be stored on either DASD
or on tape.
When the system log is on DASD
You can specify that IMS is to dynamically back out the changes that a batch
program has made to the database since its last commit point by coding
BKO=Y in the JCL. IMS performs dynamic backout for a batch program when
an IMS-detected failure occurs, such as when a deadlock is detected (for batch
programs that share data).
DASD logging also makes it possible for batch programs to issue the rollback
(ROLB) system service request, in addition to ROLL. The ROLB request causes IMS
to dynamically back out the changes the program has made to the database
since its last commit point, and then to return control to the application
program.
Dynamically backing out database changes has the following advantages:
– Data accessed by the program that failed is immediately available to other
programs. Otherwise, if batch backout is not used, data is not available to
other programs until the IMS Batch Backout utility has been run to back out
the database changes.
– If two programs are deadlocked, one of the programs can continue
processing. Otherwise, if batch backout is not used, both programs will fail.
(This applies only to batch programs that share data.)

Instead of using dynamic backout, you can run the IMS Batch Backout utility to
back out changes.
When the system log is on tape
If a batch application program terminates abnormally and the system log is
stored on tape, you must use the IMS Batch Backout utility to back out the
program's changes to the database.

Related Reading: For more information, see IMS Version 10: Database Utilities
Reference.
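The decision logic described above can be summarized in a short sketch. This Python function is illustrative only and is not an IMS interface; it simply restates the rules from the preceding text.

```python
def batch_backout_options(log_on_dasd: bool, bko_y: bool) -> dict:
    """Summarize batch-program backout behavior, per the rules above."""
    return {
        # IMS dynamically backs out to the last commit point on an
        # IMS-detected failure only when the system log is on DASD
        # and BKO=Y is coded in the JCL.
        "dynamic_backout": log_on_dasd and bko_y,
        # DASD logging also makes the ROLB request available to
        # batch programs, in addition to ROLL.
        "rolb_available": log_on_dasd,
        # With the system log on tape, the IMS Batch Backout utility
        # must be used to back out the program's changes.
        "batch_backout_utility_required": not log_on_dasd,
    }

print(batch_backout_options(True, True)["dynamic_backout"])  # True
```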

For BMP programs: If your program terminates abnormally, the changes the
program has made since the last commit point are backed out. If a system failure
occurs, or if the CICS control region or DBCTL terminates abnormally, DBCTL
emergency restart backs out all changes made by the program since the last
commit point. You need not use the IMS Batch Backout utility because DBCTL
backs out the changes. If you need to back out all changes, you can use the ROLL
system service call to dynamically back out database changes.

Restarting your program


If you issue symbolic checkpoint calls (for batch and BMP programs), you can use
the Extended Restart system service request (XRST) to restart your program after an
abnormal termination. The XRST call restores the program's data areas to the way
they were when the program terminated abnormally, and it restarts the program
from the last checkpoint request the program issued before terminating abnormally.

If you use basic checkpoint calls (for batch and BMP programs), you must provide
the necessary code to restart the program from the latest checkpoint in the event
that it terminates abnormally.

One way to restart the program from the latest checkpoint is to store repositioning
data in an HDAM database. Your program writes a database record containing
repositioning information to the HDAM database. It updates this record at
intervals. When the program terminates, the database record is deleted.

At the completion of the XRST call, the I/O area always contains a checkpoint ID
used by the restart. Normally, XRST will return the 8-byte symbolic checkpoint ID,
followed by 4 blanks. If the 8-byte ID consists of all blanks, then XRST will return
the 14-byte time-stamp ID. Also, check the status code in the PCB. The only
successful status code for an XRST call is a row of blanks.
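Based on the description above, a restart routine can extract the checkpoint ID from the XRST I/O area as follows. This Python sketch is illustrative only; the function name is an invention, and the field widths (8-byte symbolic ID plus 4 blanks, or 14-byte time-stamp ID) come from the paragraph above.

```python
def checkpoint_id_from_xrst(supplied_id: str, io_area: str) -> str:
    """Extract the checkpoint ID that XRST placed in the I/O area.
    Per the description above: normally the area holds the 8-byte
    symbolic checkpoint ID followed by 4 blanks; if the 8-byte ID
    supplied to XRST was all blanks, the area instead holds a
    14-byte time-stamp ID."""
    if supplied_id.strip() == "":      # restarted by time stamp
        return io_area[:14]
    return io_area[:8]                 # symbolic checkpoint ID

# Hypothetical symbolic ID "PAY00147" followed by 4 blanks:
print(checkpoint_id_from_xrst("PAY00147", "PAY00147    "))  # PAY00147
```

Remember that the PCB status code must also be checked; only a row of blanks indicates a successful XRST.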

Data availability considerations for your CICS program


Unfortunately, the data that a program needs to access may sometimes be
unavailable. This section describes the situations where data is unavailable,
whether a program is scheduled in these situations, and the functions your
program might need to use when data is not available.

Unavailability of a database
The conditions that make an entire database unavailable for both read and update
are the following:
v A STOP command has been issued for the database.
v A DBRECOVERY (DBR) command has been issued for the database.
v DBRC authorization for the database has failed.
The conditions that make a database available for read but not for update are:
v A DBDUMP command has been issued for the database.
v The database access value is RD (read).

In a data-sharing environment, the command or error that created any of these
conditions may have originated on the other system which is sharing data.

Whether a program is scheduled or whether an executing program can schedule a
PSB when the database is unavailable depends on the type of program and the
environment:
v A batch program
IMS does not schedule a batch program when one of the databases that the
program can access is not available.
In a non-data sharing environment, DBRC authorization for a database may fail
because the database is currently authorized to a DB/DC environment. In a
data-sharing environment, a CICS or a DBCTL master terminal global command
to recover a database or to dump a database may make the database unavailable
to a batch program.
The following conditions alone do not cause a batch program to fail during
initialization:
– A PCB refers to a HALDB.
– The use of DBRC is suppressed.
However, without DBRC, a database call using a PCB for a HALDB is not
allowed. If the program is sensitive to unavailable data, such a call results in the
status code BA; otherwise, such a call results in message DFS3303I, followed by
ABENDU3303.
v An online or BMP program in the DBCTL environment.
When a program executing in this environment attempts to schedule with a PSB
containing one or more full-function databases that are unavailable, the
scheduling is allowed. If the program does not attempt to access the unavailable
database, it can function normally. If it does attempt to access the database, the
result is the same as when the database is available but some of the data in it is
not available.

Unavailability of some data in a database


In addition to the situation where the entire database is unavailable, there are other
situations where a limited amount of data is unavailable. One example is a failure
situation involving data sharing where the IMS system knows which locks were
held by a sharing IMS at the time the sharing IMS system failed. This IMS system
continues to use the database but rejects access to the data that the failed IMS
system held locked at the time of failure.

A batch program, an online program, or a BMP program can be operating in the
DBCTL environment. If so, the online or BMP programs may have been scheduled
when an entire database was not available. The following options apply to these
programs when they attempt to access data and either the entire database is
unavailable or only some of the data in the database is unavailable.

Programs executing in these environments have an option of being sensitive or
insensitive to data unavailability.
v When the program is insensitive to data unavailability and attempts to access
unavailable data, the program fails with a 3303 abend. For online programs, this
is a pseudo-abend. For batch programs, it is a real abend. However, if the
database is unavailable because dynamic allocation failed, a call results in an AI
(unable to open) status code.
v When the program is sensitive to data unavailability and attempts to access
unavailable data, IMS returns a status code indicating that it could not process
the request. The program can then take the appropriate action. A facility exists
for the program to then initiate the same action that IMS would have taken if
the program had been insensitive to unavailable data.

The program issues the INIT call or ACCEPT STATUS GROUP A command to inform
IMS that it is sensitive to unavailable data and can accept the status codes issued
when the program attempts to access such data. The INIT request can also be used
to determine data availability for each PCB in the PSB.

The SETS or SETU and ROLS functions


The SETS or SETU and ROLS requests allow an application to define multiple points
at which to preserve the state of full-function databases. The application can then
return to these points at a later time. By issuing a SETS or SETU request before
initiating a set of DL/I requests to perform a function, the program can later issue
the ROLS request if it cannot complete the function, possibly because of data
unavailability.

ROLS allows the program to roll back its IMS activity to the state prior to the SETS
or SETU call.

Restriction: SETS or SETU and ROLS only roll back the IMS updates. They do not roll
back the updates made using CICS file control or transient data.

Additionally, you can use the ROLS call or command to undo all database update
activity since the last checkpoint.
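The save-and-restore pattern that SETS and ROLS provide can be illustrated with a toy in-memory analogue. This Python sketch is not IMS code; SETS and ROLS are IMS requests, and the class here only models the idea of marking a point before a unit of work and rolling database state back to it.

```python
class ToyDatabase:
    """In-memory stand-in illustrating the SETS/ROLS pattern.
    Purely a teaching sketch; not an IMS interface."""

    def __init__(self):
        self.data = {}
        self.save_points = {}

    def sets(self, token: str):
        """Like SETS: remember the current state under a token."""
        self.save_points[token] = dict(self.data)

    def rols(self, token: str):
        """Like ROLS: discard updates made since sets(token)."""
        self.data = dict(self.save_points[token])

db = ToyDatabase()
db.data["A"] = 1
db.sets("P1")        # mark a point before a unit of work
db.data["A"] = 99    # updates that may need undoing
db.data["B"] = 2
db.rols("P1")        # data unavailable: back out to the point
print(db.data)       # {'A': 1}
```

Note the restriction stated above: real SETS/ROLS requests roll back only IMS updates, not changes made through CICS file control or transient data.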

Use of STAE or ESTAE and SPIE in IMS batch programs


This section describes using STAE or ESTAE and SPIE in an IMS batch program.
For information on using these routines in CICS online programs, refer to CICS
manuals.

IMS uses STAE or ESTAE routines in the IMS batch regions to ensure that database
logging and various resource cleanup functions are completed. Two important
aspects of the STAE or ESTAE facility are that:
v IMS relies on its STAE or ESTAE facility to ensure database integrity and
resource control.
v The STAE or ESTAE facility is also available to the application program.
Because of these two factors, be sure you clearly understand the relationship
between the program and the STAE or ESTAE facility.

Generally, do not use the STAE or ESTAE facility in your batch application
program. However, if you believe that the STAE or ESTAE facility is required, you
must observe the following basic rules:
v When the environment supports STAE or ESTAE processing, the application
program STAE or ESTAE routines always get control before the IMS STAE or
ESTAE routines. Therefore, you must ensure that the IMS STAE or ESTAE exit
routines receive control by observing the following procedures in your
application program:
– Establish the STAE or ESTAE routine only once and always before the first
DL/I call.
– When using the STAE or ESTAE facility, the application program must not
alter the IMS abend code.
– Do not use the RETRY option when exiting from the STAE or ESTAE routine.
Instead, return a CONTINUE-WITH-TERMINATION indicator at the end of
the STAE or ESTAE processing. If your application program does specify the
RETRY option, be aware that IMS STAE or ESTAE exit routines will not get
control to perform cleanup. Therefore, system and database integrity may be
compromised.
– For information about the use of STAE and SPIE with PL/I for MVS and VM,
see the description of IMS considerations in Enterprise PL/I for z/OS and
OS/390 Programming Guide.
– For PL/I for MVS and VM, COBOL for z/OS, and C/C++ for MVS/ESA, if
you are using the AIBTDLI interface in a non-Language Environment enabled
environment, you must specify NOSTAE or NOSPIE. However, in a Language
Environment for MVS and VM Version 1.2 or later enabled environment, the
NOSTAE and NOSPIE restriction is removed.
v The application program STAE/ESTAE exit routine must not issue DL/I calls
because the original abend may have been caused by a problem between the
application and IMS. This would result in recursive entry to STAE/ESTAE with
potential loss of database integrity or in problems taking a checkpoint.

Dynamic allocation for IMS databases


Use the dynamic allocation function to specify the JCL information for IMS
databases in a library instead of in the JCL of each batch job or in the JCL for
DBCTL. If you use dynamic allocation, do not include JCL DD statements for any
database data sets that have been defined for dynamic allocation. Check with the
database administrator (DBA) or comparable specialist to determine which
databases have been defined for dynamic allocation.

Related Reading: For more information on the definitions for dynamic allocation,
see the DFSMDA macro in IMS Version 10: System Definition Reference.

Chapter 10. Gathering requirements for database options
This section guides you in gathering information that the database administrator
(DBA) can use in designing a database and implementing that design. After
designing hierarchies for the databases that your application will access, the DBA
evaluates database options in terms of which options will best meet application
requirements. Whether these options are used depends on the collected
requirements of the applications. To design an efficient database, the DBA needs
information about the individual applications. This section describes the type of
information that can be helpful to the DBA, how the information you are gathering
relates to different database options, and the different aspects of your application
that you need to examine.

Subsections:
v “Analyzing data access”
v “Understanding how data structure conflicts are resolved” on page 147
v “Providing data security” on page 157
v “Read without integrity” on page 161

Analyzing data access


The DBA chooses a type of database, based on how the majority of programs that
use the database will access the data. IMS databases are categorized according to
the access method used. The following is a list of the types of databases that can
be defined:
HDAM (Hierarchical Direct Access Method)
PHDAM (Partitioned Hierarchical Direct Access Method)
HIDAM (Hierarchical Indexed Direct Access Method)
PHIDAM (Partitioned Hierarchical Indexed Direct Access Method)
MSDB (Main Storage Database)
DEDB (Data Entry Database)
HSAM (Hierarchical Sequential Access Method)
HISAM (Hierarchical Indexed Sequential Access Method)
GSAM (Generalized Sequential Access Method)
SHSAM (Simple Hierarchical Sequential Access Method)
SHISAM (Simple Hierarchical Indexed Sequential Access Method)

Important: PHDAM and PHIDAM are the partitioned versions of the HDAM and
HIDAM database types, respectively. The corresponding descriptions of the HDAM
and HIDAM database types therefore apply to PHDAM and PHIDAM in these
sections.

Some of the information that you can gather to help the DBA with this decision
answers questions like the following:
v To access a database record, a program must first access the root of the record.
How will each program access root segments?
Directly
Sequentially

© Copyright IBM Corp. 1974, 2010 139


Both
v The segments within the database record are the dependents of the root
segment. How will each program access the segments within each database
record?
Directly
Sequentially
Both
It is important to note the distinction between accessing a database record and
accessing segments within the record. A program could access database records
sequentially, but after the program is within a record, the program might access
the segments directly. These are different, and can influence the choice of access
method.
v To what extent will the program update the database?
By adding new database records?
By adding new segments to existing database records?
By deleting segments or database records?

Again, note the difference between updating a database record and updating a
segment within the database record.

Subsections:
v “Direct access”
v “Sequential access” on page 144
v “Accessing z/OS files through IMS: GSAM” on page 146
v “Accessing IMS data through z/OS: SHSAM and SHISAM” on page 146

Direct access
The advantage of direct access processing is that you can get good results for both
direct and sequential processing. Direct access means that by using a randomizing
routine or an index, IMS can find any database record that you want, regardless of
the sequence of database records in the database.

IMS full function has four direct access methods.


v HDAM and PHDAM process data directly by using a randomizing routine to
store and locate root segments.
v HIDAM and PHIDAM use an index to help them provide direct processing of
root segments.

The direct access methods use pointers to maintain the hierarchic relationships
between segments of a database record. By following pointers, IMS can access a
path of segments without passing through all the segments in the preceding paths.

Some of the requirements that direct access satisfies are:


v Fast direct processing of roots using an index or a randomizing routine
v Sequential processing of database records with HIDAM and PHIDAM using the
index
v Fast access to a path of segments using pointers

In addition, when you delete data from a direct-access database, the new space is
available almost immediately. This gives you efficient space utilization; therefore,



reorganization of the database is often unnecessary. Direct access methods
internally maintain their own pointers and addresses.

A disadvantage of direct access is that you have a larger IMS overhead because of
the pointers. But if direct access fulfills your data access requirements, it is more
efficient than using a sequential access method.

Subsections:
v “Primarily direct processing: HDAM”
v “Direct and sequential processing: HIDAM” on page 142
v “Main storage database: MSDB” on page 143
v “Data entry database: DEDB” on page 144

Primarily direct processing: HDAM


Important: PHDAM is the partitioned version of the HDAM database type. The
corresponding descriptions of the HDAM database type therefore apply to
PHDAM in these sections.

HDAM is efficient for a database that is usually accessed directly but sometimes
sequentially. HDAM uses a randomizing routine to locate its root segments and
then chains dependent segments together according to the pointer options chosen.
The z/OS access methods that HDAM can use are Virtual Storage Access Method
(VSAM) and Overflow Storage Access Method (OSAM).

The requirements that HDAM satisfies are:


v Direct access of roots by root keys because HDAM uses a randomizing routine
to locate root segments
v Direct access of paths of dependents
v Adding new database records and new segments because the new data goes into
the nearest available space
v Deleting database records and segments because the space created by a deletion
can be used by any new segment

HDAM characteristics: An HDAM database:


v Can store root segments anywhere. Root segments do not need to be in sequence
because the randomizing routine locates them.
v Uses a randomizing routine to locate the relative block number and root anchor
point (RAP) within the block that points to the root segment.
v Accesses the RAPs from which the roots are chained in physical sequence. Then
the root segments that are chained from the root anchors are returned. Therefore,
sequential retrieval of root segments from HDAM is not based on the results of
the randomizing routine and is not in key sequence unless the randomizing
routine put them into key sequence.
v May not give the desired result for some calls unless the randomizing module
causes the physical sequence of root segments to be in the key sequence. For
example, a GU call for a root segment that is qualified as less than or equal to a
root key value would scan in physical sequence, starting with the first RAP of the
first block. This may result in a not-found condition, even though segments
meeting the qualification do exist.

For dependent segments, an HDAM database:


v Can store them anywhere
v Chains all segments of one database record together with pointers

Chapter 10. Gathering requirements for database options 141


An overview of how HDAM works: This section contains diagnosis,
modification, or tuning information.

When a database record is stored in an HDAM database, HDAM keeps one or
more RAPs at the beginning of each physical block. The RAP points to a root
segment. HDAM also keeps a pointer at the beginning of each physical block that
points to any free space in the block. When you insert a segment, HDAM uses this
pointer to locate free space in the physical block. To locate a root segment in an
HDAM database, you give HDAM the root key. The randomizing routine gives
HDAM the relative physical block number and the RAP that points to the root
segment. The specified RAP number gives HDAM the location of the root within a
physical block.

Although HDAM can place roots and dependents anywhere in the database, it is
better to choose HDAM options that keep roots and dependents close together.

HDAM performance depends largely on the randomizing routine you use.


Performance can be very good, but it also depends on other factors such as:
v The block size you use
v The number of RAPs per block
v The pattern for chaining together different segments. You can chain segments of
a database record in two ways:
– In hierarchic sequence, starting with the root
– In parent-to-dependent sequence, with parents having pointers to each of
their paths of dependents

To use HDAM for sequential access of database records by root key, you need to
use a secondary index or a randomizing routine that stores roots in physical key
sequence.
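In DBD terms, the randomizing module and the RAP layout are specified on the
RMNAME parameter of the DBD statement. The following sketch assumes the
IBM-supplied general randomizing module DFSHDC40; the database, data set,
segment, and field names, and the RAP and block counts, are all hypothetical:

```
       DBD     NAME=PAYDB,ACCESS=(HDAM,OSAM),RMNAME=(DFSHDC40,2,500)
       DATASET DD1=PAYDD,BLOCK=4096
       SEGM    NAME=EMPLOYEE,PARENT=0,BYTES=80
       FIELD   NAME=(EMPNO,SEQ,U),BYTES=6,START=1
       DBDGEN
       FINISH
       END
```

In this sketch, RMNAME names the randomizing module, the number of RAPs per
block (2), and the number of blocks in the root addressable area (500).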

Direct and sequential processing: HIDAM


Important: PHIDAM is the partitioned version of the HIDAM database type. The
corresponding descriptions of the HIDAM database type therefore apply to
PHIDAM in these sections.

HIDAM is the access method that is most efficient for an approximately equal
amount of direct and sequential processing. The z/OS access methods it can use
are VSAM and OSAM. The specific requirements that HIDAM satisfies are:
v Direct and sequential access of records by their root keys
v Direct access of paths of dependents
v Adding new database records and new segments because the new data goes into
the nearest available space
v Deleting database records and segments because the space created by a deletion
can be used by any new segment

HIDAM can satisfy most processing requirements that involve an even mixture of
direct and sequential processing. However, HIDAM is not very efficient with
sequential access of dependents.

HIDAM characteristics: For root segments, a HIDAM database:


v Initially loads them in key sequence
v Can store new root segments wherever space is available



v Uses an index to locate a root that you request and identify by supplying the
root's key value

For dependent segments, a HIDAM database:


v Can store segments anywhere, preferably fairly close together
v Chains all segments of a database record together with pointers

An overview of how HIDAM works: This section contains diagnosis,
modification, or tuning information.

HIDAM uses two databases. The primary database holds the data. An index
database contains entries for all of the root segments in order by their key fields.
For each key entry, the index database contains the address of that root segment in
the primary database.

When you access a root, you supply the key to the root. HIDAM looks the key up
in the index to find the address of the root and then goes to the primary database
to find the root.

HIDAM chains dependent segments together so that when you access a dependent
segment, HIDAM uses the pointer in one segment to locate the next segment in the
hierarchy.

When you process database records directly, HIDAM locates the root through the
index and then locates the segments from the root. HIDAM locates dependents
through pointers.

If you plan to process database records sequentially, you can specify special
pointers in the DBD for the database so that IMS does not need to go to the index
to locate the next root segment. These pointers chain the roots together. If you do
not chain roots together, HIDAM always goes to the index to locate a root
segment. When you process database records sequentially, HIDAM accesses roots
in key sequence in the index. This only applies to sequential processing; if you
want to access a root segment directly, HIDAM uses the index, and not pointers in
other root segments, to find the root segment you have requested.
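As a sketch, both the root pointers and the link to the index database are declared
in the primary DBD: PTR=TB on the root segment requests the twin forward and
backward pointers that chain roots together, and the LCHILD statement names the
index. This is an assumption-laden outline, not a complete definition, and all
names shown are hypothetical:

```
       DBD     NAME=PATHDB,ACCESS=(HIDAM,OSAM)
       DATASET DD1=PATDD,BLOCK=4096
       SEGM    NAME=PATIENT,PARENT=0,BYTES=100,PTR=TB
       FIELD   NAME=(PATNO,SEQ,U),BYTES=5,START=1
       LCHILD  NAME=(INDXSEG,PATINDX),PTR=INDX
       DBDGEN
       FINISH
       END
```

A separate index DBD (PATINDX here) would define the index database itself.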

Main storage database: MSDB


Use MSDBs to store the most frequently accessed data. MSDBs are suitable for
applications such as general ledger applications in the banking industry.

MSDB characteristics: MSDBs reside in virtual storage, enabling application
programs to avoid the I/O activity that is required to access them. The two kinds
of MSDBs are terminal-related and non-terminal-related.

In a terminal-related MSDB, each segment is owned by one terminal, and each
terminal owns only one segment. One use for this type of MSDB is an application
in which each segment contains data associated with a logical terminal. In this type
of application, the program can read the data (perhaps for reporting purposes), but
cannot update it. A non-terminal-related MSDB stores data that is needed by many
users during the same time period. It can be updated and read from all terminals
(for example, a real time inventory control application, where reduction of
inventory can be noted from many cash registers).



An overview of how MSDBs work:

Diagnosis, Modification or Tuning Information

MSDB segments are stored as root segments only. Only one type of pointer, the
forward chain pointer, is used. This pointer connects the segment records in the
database.
End of Diagnosis, Modification or Tuning Information

Data entry database: DEDB


DEDBs are designed to provide access to and efficient storage for large volumes of
data. The primary requirement a DEDB satisfies is a high level of data availability.

DEDB characteristics: DEDBs are hierarchic databases that can have as many as
15 hierarchic levels, and as many as 127 segment types. They can contain both
direct and sequential dependent segments. Because the sequential dependent
segments are stored in chronological order as they are committed to the database,
they are useful in journaling applications.

DEDBs support a subset of functions and options that are available for a HIDAM
or HDAM database. For example, a DEDB does not support indexed access
(neither primary index nor secondary index), or logically related segments.

An overview of how DEDBs work:

Diagnosis, Modification or Tuning Information

A DEDB can be partitioned into multiple areas, with each area containing a
different collection of database records. The data in a DEDB area is stored in a
VSAM data set. Root segments are stored in the root-addressable part of an area,
with direct dependents stored close to the roots for fast access. Direct dependents
that cannot be stored close to their roots are stored in the independent overflow
portion of the area. Sequential dependents are stored in the sequential dependent
portion at the end of the area so that they can be quickly inserted. Each area data
set can have up to seven copies, making the data easily available to application
programs.
End of Diagnosis, Modification or Tuning Information

Sequential access
When you use a sequential access method, the segments in the database are stored
in hierarchic sequence, one after another, with no pointers.

IMS full function has two sequential access methods. Like the direct access
methods, one has an index and the other does not:
v HSAM only processes root segments and dependent segments sequentially.
v HISAM processes data sequentially but has an index so that you can access
records directly. HISAM is primarily for sequentially processing dependents, and
directly processing database records.

Some of the general requirements that sequential access satisfies are:


v Fast sequential processing
v Direct processing of database records with HISAM



v Small IMS overhead on storage because sequential access methods relate
segments by adjacency rather than with pointers

The three disadvantages of using sequential access methods are:


v Sequential access methods give slower access to the right-most segments in the
hierarchy, because HSAM and HISAM must read through all other segments to
get to them.
v HISAM requires frequent reorganization to reclaim space from deleted segments
and to keep the logical records of a database record physically adjoined.
v You cannot update HSAM databases. You must create a new database to change
any of the data.

Sequential processing only: HSAM


HSAM is a hierarchic access method that can handle only sequential processing.
You can retrieve data from HSAM databases, but you cannot update any of the
data. The z/OS access methods that HSAM can use are QSAM and BSAM.

HSAM is ideal for the following situations:


v You are using the database to collect (but not update) data or statistics.
v You only plan to process the data sequentially.

HSAM characteristics: HSAM stores database records in the sequence in which
you submit them. You can process records and dependent segments only
sequentially, that is, in the order in which you loaded them. HSAM stores
dependent segments in hierarchic sequence.

An overview of how HSAM works:

Diagnosis, Modification or Tuning Information

HSAM databases are very simple databases. The data is stored in hierarchic
sequence, one segment after the other, and no pointers or indexes are used.
End of Diagnosis, Modification or Tuning Information

Primarily sequential processing: HISAM


HISAM is an access method that stores segments in hierarchic sequence with an
index to locate root segments. It also has an overflow data set. Segments are
stored in a logical record until the end of the logical record is reached. When the
logical record runs out of space but more segments belong to the database record,
the remaining segments are stored in an overflow data set. The z/OS access
methods that HISAM can use are VSAM and OSAM.
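A minimal DBD sketch for HISAM follows, showing where the primary and
overflow data sets are named (DD1 and OVFLW on the DATASET statement). All
names and lengths here are hypothetical:

```
       DBD     NAME=SKILLDB,ACCESS=(HISAM,VSAM)
       DATASET DD1=SKLPRIM,OVFLW=SKLOVFL
       SEGM    NAME=SKILL,PARENT=0,BYTES=50
       FIELD   NAME=(SKLCODE,SEQ,U),BYTES=4,START=1
       DBDGEN
       FINISH
       END
```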

HISAM is well-suited for:


v Direct access of records by root keys
v Sequential access of records
v Sequential access of dependent segments

Even when your processing has some of these characteristics, HISAM is not
necessarily a good choice when:
v You must access dependents directly.
v You have a high number of inserts and deletes.



v Many of the database records exceed average size and must use the overflow
data set. The segments that overflow into the overflow data set require
additional I/O.

HISAM characteristics: For database records, HISAM databases:


v Store records in key sequence
v Can locate a particular record with a key value by using the index

For dependent segments, HISAM databases:


v Start each HISAM database record in a new logical record in the primary data
set
v Store the remaining segments in one or more logical records in the overflow
data set if the database record does not fit in the primary data set

An overview of how HISAM works:

Diagnosis, Modification or Tuning Information

HISAM does not immediately reuse space. When you insert a new segment,
HISAM databases shift data to make room for the new segment, and this leaves
unused space after deletions. HISAM space is reclaimed when you reorganize a
HISAM database.
End of Diagnosis, Modification or Tuning Information

Accessing z/OS files through IMS: GSAM


GSAM enables IMS batch application programs and BMPs to access a sequential
z/OS data set as a simple database. The z/OS access methods that GSAM can use
are BSAM and VSAM. In a GSAM database, a z/OS data set record is defined as
a database record. The record is handled as one unit; it contains no segments or
fields, and the structure is not hierarchic. GSAM databases can be accessed by
z/OS, IMS, and CICS.

In a CICS environment, an application program can access a GSAM database from
either a Call DL/I (or EXEC DLI) batch or batch-oriented BMP program. A CICS
application cannot, however, use EXEC DLI to process GSAM databases; it must
use IMS calls.

You commonly use GSAM to send input to and receive output from batch-oriented
BMPs or batch programs. To process a GSAM database, an application program
issues calls similar to the ones it issues to process a full-function database. The
program can read data sequentially from a GSAM database, and it can send output
to a GSAM database.

GSAM is a sequential access method. You can only add records to an output
database sequentially.
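In the PSB, a GSAM database is described by its own PCB. As a hedged sketch,
PROCOPT=GS requests sequential input and PROCOPT=LS requests sequential
output (load); the DBD and PSB names below are hypothetical:

```
       PCB    TYPE=GSAM,DBDNAME=REPORTDB,PROCOPT=LS
       PSBGEN LANG=COBOL,PSBNAME=RPTBMP
       END
```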

Accessing IMS data through z/OS: SHSAM and SHISAM


Two database access methods, SHSAM and SHISAM, give you simple hierarchic
databases that z/OS can use as data sets.

These access methods can be particularly helpful when you are converting data
from z/OS files to an IMS database. SHISAM is indexed and SHSAM is not.



When you use these access methods, you define an entire database record as one
segment. The segment does not contain any IMS control information or pointers;
the data format is the same as it is in z/OS data sets. The z/OS access methods
that SHSAM can use are BSAM and QSAM. SHISAM uses VSAM.

SHSAM and SHISAM databases can be accessed by z/OS access methods without
IMS, which is useful during transitions.

Understanding how data structure conflicts are resolved


The order in which application programs need to process fields and segments
within hierarchies is frequently not the same for each application. When the DBA
finds a conflict in the way that two or more programs need to access the data,
three options are available to solve these problems. Each of the following options
solves a different kind of conflict.
v When an application program does not need access to all the fields in a segment,
or if the program needs to access them in a different order, the DBA can use
field level sensitivity for that program. Field-level sensitivity makes it possible for
an application program to access only a subset of the fields that a segment
contains, or for an application program to process a segment's fields in an order
that is different from their order in the segment.
v When an application program needs to access a particular segment by a field
other than the segment's key field, the DBA can use a secondary index for that
database.
v When the application program needs to relate segments from different
hierarchies, the DBA can use logical relationships. Using logical relationships
can give the application program a logical hierarchy that includes segments from
several hierarchies.

Subsections:
v “Using different fields: field-level sensitivity”
v “Resolving processing conflicts in a hierarchy: secondary indexing” on page 148
v “Creating a new hierarchy: logical relationships” on page 152

Using different fields: field-level sensitivity


Field-level sensitivity applies the same kind of security for fields within a segment
that segment sensitivity does for segments within a hierarchy: An application
program can access only those fields within a segment, and those segments within
a hierarchy to which it is sensitive.

Field-level sensitivity also makes it possible for an application program to use a
subset of the fields that make up a segment, or to use all the fields in the segment
but in a different order. If a segment contains fields that the application program
does not need to process, using field-level sensitivity enables the program not to
process them.

Example of field-level sensitivity


Suppose that a segment containing data about an employee contains the fields
shown in Table 30 on page 148. These fields are:
v Employee number: EMPNO
v Employee name: EMPNAME
v Birthdate: BIRTHDAY
v Salary: SALARY



v Address: ADDRESS
Table 30. Physical employee segment
EMPNO EMPNAME BIRTHDAY SALARY ADDRESS

A program that printed mailing labels for employees' checks each week would not
need all the data in the segment. If the DBA decided to use field-level sensitivity
for that application, the program would receive only the fields it needed in its I/O
area. The I/O area would contain the EMPNAME and ADDRESS fields. Table 31
shows what the program's I/O area would contain.
Table 31. Employee segment with field-level sensitivity
EMPNAME ADDRESS

Field-level sensitivity makes it possible for a program to receive a subset of the
fields that make up a segment, the same fields but in a different order, or both.

Another situation in which field-level sensitivity is very useful is when new uses
of the database involve adding new fields of data to an existing segment. In this
situation, you want to avoid re-coding programs that use the current segment. By
using field-level sensitivity, the old programs can see only the fields that were in
the original segment. The new program can see both the old and the new fields.

Specifying field-level sensitivity


You specify field-level sensitivity in the PSB for the application program by using a
sensitive field (SENFLD) statement for each field to which you want the
application program to be sensitive.
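Continuing the mailing-label example, the following is a sketch of the relevant PSB
statements: the SENSEG statement names the sensitive segment, and each SENFLD
statement names a sensitive field and its starting position in the program's I/O
area. The DBD name, key length, and positions are hypothetical:

```
       PCB    TYPE=DB,DBDNAME=EMPDB,PROCOPT=G,KEYLEN=6
       SENSEG NAME=EMPLOYEE,PARENT=0
       SENFLD NAME=EMPNAME,START=1
       SENFLD NAME=ADDRESS,START=21
       PSBGEN LANG=COBOL,PSBNAME=LABELPGM
       END
```

With this PSB, the program's I/O area contains only EMPNAME followed by
ADDRESS, regardless of the physical layout of the segment.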

Resolving processing conflicts in a hierarchy: secondary indexing
Sometimes a database hierarchy does not meet all the processing requirements of
the application programs that will process it. Secondary indexing can be used to
solve two kinds of processing conflicts:
v When an application program needs to retrieve a segment in a sequence other
than the one that has been defined by the segment's key field
v When an application program needs to retrieve a segment based on a condition
that is found in a dependent of that segment

To understand these conflicts and how secondary indexing can resolve them,
consider the examples of two application programs that process the patient
hierarchy, shown in Figure 52 on page 149. Three segment types in this hierarchy
are:
v PATIENT contains three fields: the patient's identification number, name, and
address. The patient number field is the key field.
v ILLNESS contains two fields: the date of the illness and the name of the illness.
The date of the illness is the key field.
v TREATMNT contains four fields: the date the medication was given; the name of
the medication; the quantity of the medication that was given; and the name of
the doctor who prescribed the medication. The date that the medication was
given is the key field.



Figure 52. Patient hierarchy

Retrieving segments based on a different key


When an application program retrieves a segment from the database, the program
identifies the segment by the segment's key field. But sometimes an application
program needs to retrieve a segment in a sequence other than the one that has
been defined by the segment's key field. Secondary indexing makes this possible.

Note: A new database type, the Partitioned Secondary Index (PSINDEX), is
supported by the High Availability Large Database (HALDB). PSINDEX is the
partitioned version of the secondary index database type. The corresponding
descriptions of the secondary index database type therefore apply to PSINDEX in
these sections.

Example: Suppose you have an online application program that processes requests
about whether an individual has ever been to the clinic. If you are not sure
whether the person has ever been to the clinic, you will not be able to supply the
identification number for the person. But the key field of the PATIENT segment is
the patient's identification number.

Segment occurrences of a segment type (for example, the segments for each of the
patients) are stored in a database in order of their keys (in this case, by their
patient identification numbers). If you issue a request for a PATIENT segment and
identify the segment you want by the patient's name instead of the patient's
identification number, IMS must search through all of the PATIENT segments to
find the PATIENT segment you have requested. IMS does not know where a
particular PATIENT segment is just by having the patient's name.

To make it possible for this application program to retrieve PATIENT segments in
the sequence of patients' names (rather than in the sequence of patients'
identification numbers), you can index the PATIENT segment on the patient name
field and store the index entries in a separate database. The separate database is
called a secondary index database.



Then, if you indicate to IMS that it is to process the PATIENT segments in the
patient hierarchy in the sequence of the index entries in the secondary index
database, IMS can locate a PATIENT segment if you supply the patient's name.
IMS goes directly to the secondary index and locates the PATIENT index entry
with the name you have supplied; the PATIENT index entries are in alphabetical
order of the patient names. The index entry is a pointer to the PATIENT segment
in the patient hierarchy. IMS can determine whether a PATIENT segment for the
name you have supplied exists, and then it can return the segment to the
application program if the segment exists. If the requested segment does not exist,
IMS indicates this to the application program by returning a not-found status code.

Related reading: For more information on HALDB, see IMS Version 10: Database
Administration Guide.
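In DBD terms, this relationship is declared on the target segment: an LCHILD
statement names the pointer segment and the secondary index DBD that contains
it, and an XDFLD statement names the search field in the source segment. A
hedged sketch for the patient example follows; the index segment name, index
DBD name, and field names and positions are all hypothetical:

```
       SEGM   NAME=PATIENT,PARENT=0,BYTES=100
       FIELD  NAME=(PATNO,SEQ,U),BYTES=5,START=1
       FIELD  NAME=PATNAME,BYTES=20,START=6
       LCHILD NAME=(PATNDX,PATSIDB),PTR=INDX
       XDFLD  NAME=XPATNAME,SRCH=PATNAME
```

An application program would then qualify its calls on XPATNAME to retrieve
PATIENT segments in patient-name sequence.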

Definitions: Three terms involved in secondary indexing are:


v The pointer segment is the index entry in the secondary index database that IMS
uses to find the segment you have requested. In the previous example, the
pointer segment is the index entry in the secondary index database that points
to the PATIENT segment in the patient hierarchy.
v The source segment is the segment that contains the field that you are indexing.
In the previous example, the source segment is the PATIENT segment in the
patient hierarchy, because you are indexing on the name field in the PATIENT
segment.
v The target segment is the segment in the database that you are processing to
which the secondary index points; it is the segment that you want to retrieve.

In the previous example, the target segment and the source segment are the same
segment—the PATIENT segment in the patient hierarchy. When the source segment
and the target segment are different segments, secondary indexing solves the
processing conflict.

The PATIENT segment that IMS returns to the application program's I/O area
looks the same as it would if secondary indexing had not been used.

The key feedback area is different. When IMS retrieves a segment without using a
secondary index, IMS places the concatenated key of the retrieved segment in the
key feedback area. The concatenated key contains all the keys of the segment's
parents, in order of their positions in the hierarchy. The key of the root segment is
first, followed by the key of the segment on the second level in the hierarchy, then
the third, and so on—with the key of the retrieved segment last.

But when you retrieve a segment from an indexed database, the contents of the
key feedback area after the request are a little different. Instead of placing the key
of the root segment in the left-most bytes of the key feedback area, DL/I places the
key of the pointer segment there. Note that the term “key of the pointer segment,”
as used here, refers to the key as perceived by the application program—that is,
the key does not include subsequence fields.

Example: Suppose that root segment A, shown in Figure 53 on page 151, is
indexed on a field in segment C. Segment A is the target segment, and segment C
is the source segment.



Figure 53. Indexing a root segment

When you use the secondary index to retrieve one of the segments in this
hierarchy, the key feedback area contains one of the following:
v If you retrieve segment A, the key feedback area contains the key of the pointer
segment from the secondary index.
v If you retrieve segment B, the key feedback area contains the key of the pointer
segment, concatenated with the key of segment B.
v If you retrieve segment C, the key of the pointer segment, the key of segment B,
and the key of segment C are concatenated in the key feedback area.

Although this example creates a secondary index for the root segment, you can
index dependent segments as well. If you do this, you create an inverted structure:
the segment you index becomes the root segment, and its parent becomes a
dependent.

Example: Suppose you index segment B on a field in segment C. In this case,
segment B is the target segment, and segment C is the source segment. Figure 54
shows the physical database structure and the structure that is created by the
secondary index.

Figure 54. Indexing a dependent segment

When you retrieve the segments in the secondary index data structure on the right,
IMS returns the following to the key feedback area:



v If you retrieve segment B, the key feedback area contains the key of the pointer
segment in the secondary index database.
v If you retrieve segment A, the key feedback area contains the key of the pointer
segment, concatenated with the key of segment A.
v If you retrieve segment C, the key feedback area contains the key of the pointer
segment, concatenated with the key of segment C.

Retrieving segments based on a dependent's qualification


Sometimes an application program needs to retrieve a segment, but only if one of
the segment's dependents meets a certain qualification.

Example: Suppose that the medical clinic wants to print a monthly report of the
patients who have visited the clinic during that month. If the application program
that processes this request does not use a secondary index, the program has to
retrieve each PATIENT segment, and then retrieve the ILLNESS segment for each
PATIENT segment. The program tests the date in the ILLNESS segment to
determine whether the patient has visited the clinic during the current month, and
prints the patient's name if the answer is yes. The program continues retrieving
PATIENT segments and ILLNESS segments until it has retrieved all the PATIENT
segments.

With a secondary index, however, you can simplify the program's processing.
To do this, you index the PATIENT segment on the date field in the ILLNESS
segment. When you define the PATIENT segment in the DBD, you give IMS the
name of the field on which you are indexing the PATIENT segment, and the name
of the segment that contains the index field. The application program can then
request a PATIENT segment and qualify the request with the date in the ILLNESS
segment. The PATIENT segment that is returned to the application program looks
just as it would if you were not using a secondary index.

In this example, the PATIENT segment is the target segment; it is the segment that
you want to retrieve. The ILLNESS segment is the source segment; it contains the
information that you want to use to qualify your request for PATIENT segments.
The index segment in the secondary database is the pointer segment. It points to
the PATIENT segments.
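In DBD terms, this relationship is established with LCHILD and XDFLD statements in the DBD of the indexed database. The following sketch is illustrative only; the database, segment, and field names (MEDDBD, PATIXDB, XPAT, XDATE, ILLDATE) are assumptions, not names defined by this example.

```
* In the medical (indexed) DBD: relate the PATIENT target segment
* to the secondary index database PATIXDB, and define the indexed
* field XDATE, whose value comes from the ILLDATE field of the
* ILLNESS source segment.
         SEGM   NAME=PATIENT,PARENT=0,BYTES=160
         LCHILD NAME=(XPAT,PATIXDB),PTR=INDX
         XDFLD  NAME=XDATE,SEGMENT=ILLNESS,SRCH=ILLDATE
```

An application program could then qualify a GU call for PATIENT on the XDATE field and receive the PATIENT segment exactly as it would without a secondary index.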

Creating a new hierarchy: logical relationships


When an application program needs to associate segments from different
hierarchies, logical relationships can make that possible. Logical relationships can
solve the following conflicts:
v When two application programs need to process the same segment, but they
need to access the segment through different hierarchies
v When a segment's parent in one application program's hierarchy acts as that
segment's child in another application program

Accessing a segment through different paths


Sometimes an application program needs to process the data in a different order
than the way it is arranged in the hierarchy.

Example: An application program that processes data in a purchasing database
also requires access to a segment in a patient database:
v Program A processes information in the patient database about the patients at a
medical clinic: the patients' illnesses and their treatments.



v Program B is an inventory program that processes information in the purchasing
database about the medications that the clinic uses: the item, the vendor,
information about each shipment, and information about when and under what
circumstances each medication is given.

Figure 55 on page 154 shows the hierarchies that Program A and Program B
require for their processing. Their processing requirements conflict: they both need
to have access to the information that is contained in the TREATMNT segment in
the patient database. This information is:
v The date that a particular medication was given
v The name of the medication
v The quantity of the medication given
v The doctor that prescribed the medication

To Program B this is not information about a patient's treatment; it is information
about the disbursement of a medication. To the purchasing database, this is the
disbursement segment (DISBURSE).

Figure 55 on page 154 shows the hierarchies for Program A and Program B.
Program A needs the PATIENT segment, the ILLNESS segment, and the
TREATMNT segment. Program B needs the ITEM segment, the VENDOR segment,
the SHIPMENT segment, and the DISBURSE segment. The TREATMNT segment
and the DISBURSE segment contain the same information.



Figure 55. Patient and inventory hierarchies

Instead of storing this information in both hierarchies, you can use a logical
relationship. A logical relationship solves the problem by storing a pointer from
where the segment is needed in one hierarchy to where the segment exists in the
other hierarchy. In this case, you can have a pointer in the DISBURSE segment to
the TREATMNT segment in the medical database. When IMS receives a request for
information in a DISBURSE segment in the purchasing database, IMS goes to the
TREATMNT segment in the medical database that is pointed to by the DISBURSE
segment. Figure 56 on page 155 shows the physical hierarchy that Program A
would process and the logical hierarchy that Program B would process. DISBURSE
is a pointer segment to the TREATMNT segment in Program A's hierarchy.



Figure 56. Logical relationships example

To define a logical relationship between segments in different hierarchies, you use
a logical DBD. A logical DBD defines a hierarchy that does not exist in storage, but
can be processed as though it does. Program B would use the logical structure
shown in Figure 56 as though it were a physical structure.

Inverting a parent-child relationship


Another type of conflict that logical relationships can resolve occurs when a
segment's parent in one application program acts as that segment's child in another
application program:
v The inventory program, Program B, needs to process information about
medications using the medication as the root segment.
v A purchasing application program, Program C, processes information about
which vendors have sold which medications. Program C needs to process this
information using the vendor as the root segment.

Figure 57 on page 156 shows the hierarchies for each of these application
programs.



Figure 57. Supplies and purchasing hierarchies

Logical relationships can solve this problem by using pointers. Using pointers in
this example would mean that the ITEM segment in the purchasing database
would contain a pointer to the actual data stored in the ITEM segment in the
supplies database. The VENDOR segment, on the other hand, would actually be
stored in the purchasing database. The VENDOR segment in the supplies database
would point to the VENDOR segment that is stored in the purchasing database.

Figure 58 shows the hierarchies of these two programs.

Figure 58. Program B and program C hierarchies

If you did not use logical relationships in this situation, you would:
v Keep the same data in both paths, which means that you would be keeping
redundant data.
v Have the same disadvantages as separate files of data:
– You would need to update multiple segments each time one piece of data
changed.
– You would need more storage.



Providing data security
If you find that some of the data in your application has a security requirement, an
IMS application can provide security for that data in two ways:
v Data sensitivity is a way of controlling what data a particular program can
access.
v Processing options are a way of controlling how a particular program can
process data that it can access.

Subsections:
v “Providing data availability”
v “Keeping a program from accessing the data: data sensitivity”
v “Preventing a program from updating data: processing options” on page 159

Providing data availability


Specifying segment sensitivity and processing options also affects data availability.
You should set the specifications so that the PCBs request the fewest SENSEGS and
limit the possible processing options. With data availability, a program can
continue to access and update segments in the database successfully, even though
some parts of the database are unavailable.

The SENSEG statement defines a segment type in the database to which the
application program is sensitive. A separate SENSEG statement must exist for each
segment type. The segments can physically exist in one database or they can be
derived from several physical databases. If an application program is sensitive to a
segment that is below the root segment, it must also be sensitive to all segments in
the path from the root segment to the sensitive segment.

Related Reading: For more information on using field-level sensitivity for data
security and using the SENSEG statement to limit the scope of the PCBs, see IMS
Version 10: Database Administration Guide.

Keeping a program from accessing the data: data sensitivity


An IMS program can only access data to which it is sensitive. You can control the
data to which your program is sensitive on three levels:
v Segment sensitivity can prevent an application program from accessing all the
segments in a particular hierarchy. Segment sensitivity tells IMS which segments
in a hierarchy the program is allowed to access.
v Field-level sensitivity can keep a program from accessing all the fields that
make up a particular segment. Field-level sensitivity tells IMS which fields
within a particular segment a program is allowed to access.
v Key sensitivity means that the program can access segments below a particular
segment, but it cannot access the particular segment. IMS returns only the key of
this type of segment to the program.

You define each of these levels of sensitivity in the PSB for the application
program. Key sensitivity is defined in the processing option for the segment.
Processing options indicate to IMS exactly what a particular program may or may
not do to the data. You specify a processing option for each hierarchy that the
application program processes; you do this in the DB PCB that represents each
hierarchy. You can specify one processing option for all the segments in the
hierarchy, or you can specify different processing options for different segments
within the hierarchy.



Segment sensitivity and field-level sensitivity are defined using special statements
in the PSB.

Segment sensitivity
You define what segments an application program is sensitive to in the DB PCB for
the hierarchy that contains those segments.

Example: Suppose that the patient hierarchy shown in Figure 52 on page 149
belongs to the medical database shown in Figure 59. The patient hierarchy is like a
subset of the medical database.

Figure 59. Medical database hierarchy

PATIENT is the root segment and the parent of the three segments below it:
ILLNESS, BILLING, and HOUSHOLD. Below ILLNESS is TREATMNT. Below
BILLING is PAYMENT.

To make it possible for an application program to view only the segments
PATIENT, ILLNESS, and TREATMNT from the medical database, you specify in
the DB PCB that the hierarchy you are defining has these three segment types, and
that they are from the medical database. You define the database hierarchy in the
DBD; you define the application program's view of the database hierarchy in the
DB PCB.
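As an illustrative sketch (the DBD name MEDDBD, the key length, and the PSB name are assumptions), the DB PCB for this three-segment view might be coded as:

```
* The PCB names only PATIENT, ILLNESS, and TREATMNT, so the
* BILLING, PAYMENT, and HOUSHOLD segments are invisible to
* the program.
         PCB    TYPE=DB,DBDNAME=MEDDBD,PROCOPT=G,KEYLEN=30
         SENSEG NAME=PATIENT,PARENT=0
         SENSEG NAME=ILLNESS,PARENT=PATIENT
         SENSEG NAME=TREATMNT,PARENT=ILLNESS
         PSBGEN LANG=COBOL,PSBNAME=PATVIEW
```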

Field-level sensitivity
In addition to providing data independence for an application program, field-level
sensitivity can also act as a security mechanism for the data that the program uses.

If a program needs to access some of the fields in a segment, but one or two of the
fields that the program does not need to access are confidential, you can use
field-level sensitivity. If you define that segment for the application program as
containing only the fields that are not confidential, you prevent the program from
accessing the confidential fields. Field-level sensitivity acts as a mask for the fields
to which you want to restrict access.
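Field-level sensitivity is specified with SENFLD statements under the SENSEG statement during PSB generation. In the following sketch, the field names and starting positions are assumptions for illustration:

```
* Only the patient's number and name are named, so a confidential
* field in the same segment is never presented to the program.
         SENSEG NAME=PATIENT,PARENT=0
         SENFLD NAME=PATNO,START=1
         SENFLD NAME=PATNAME,START=11,REPLACE=NO
```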

Key sensitivity
To access a segment, an application program must be sensitive to all segments at a
higher level in the segment's path. In other words, in Figure 60 on page 159, a
program must be sensitive to segment B in order to access segment C.

Example: Suppose that an application program needs segment C to do its
processing, but segment B contains confidential information (such as an
employee's salary) that the program must not access. Using key sensitivity lets
you withhold segment B from the application program while giving the program
access to the dependents of segment B.



When a sensitive segment statement has a processing option of K specified for it,
the program cannot access that segment, but the program can pass beyond that
segment to access the segment's dependents. When the program does access the
segment's dependents, IMS does not return that segment; IMS returns only the
segment's key with the keys of the other segments that are accessed.

Figure 60. Sample hierarchy for key sensitivity example
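In the PSB, key sensitivity for this hierarchy could be sketched as follows, with a processing option of K on the SENSEG statement for segment B:

```
* The program passes through B to reach C, but receives only
* B's key in the key feedback area, never B's data.
         SENSEG NAME=A,PARENT=0,PROCOPT=G
         SENSEG NAME=B,PARENT=A,PROCOPT=K
         SENSEG NAME=C,PARENT=B,PROCOPT=G
```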

Preventing a program from updating data: processing options


During PCB generation, you can use five options of the PROCOPT parameter (on
the PCB and SENSEG statements) to indicate to IMS whether your program can
read segments in the hierarchy, or whether it can also update segments. From most
restrictive to least restrictive, these options are:
G Your program can read segments.
R Your program can read and replace segments.
I Your program can insert segments.
D Your program can read and delete segments.
A Your program can perform all the processing options. It is equivalent to
specifying G, R, I, and D.

Related Reading: For a thorough description of the processing options, see IMS
Version 10: System Utilities Reference.

Processing options provide data security because they limit what a program can do
to the hierarchy or to a particular segment. Specifying only the processing options
the program requires ensures that the program cannot update any data it is not
supposed to. For example, if a program does not need to delete segments from a
database, the D option need not be specified.
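For example, a program that reads and replaces segments, but never inserts or deletes them, could be limited to those two options; the DBD name below is an assumed example:

```
* G and R only: get and get-hold calls and REPL are allowed; an
* ISRT or DLET call is rejected with a status code instead of
* changing the database.
         PCB TYPE=DB,DBDNAME=MEDDBD,PROCOPT=GR,KEYLEN=30
```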

When an application program retrieves a segment and has any of these
processing options, IMS locks the database record for that application. If
PROCOPT=G is specified, other programs with the same option can concurrently
access the database record. If an update processing option (R, I, D, or A) is
specified, no other program can concurrently access the same database record. If
no updates are performed, the lock is released when the application moves to
another database record or, in the case of HDAM, to another anchor point.

The following locking protocol allows IMS to make this determination. If the root
segment is updated, the root lock is held at update level until commit. If a
dependent segment is updated, it is locked at update level. When exiting the
database record, the root segment is demoted to read level. When a program enters
the database record and obtains the lock at either read or update level, the lock
manager provides feedback indicating whether or not another program has the
lock at read level. This determines if dependent segments will be locked when they
are accessed. For HISAM, the primary logical record is treated as the root, and the
overflow logical records are treated as dependent segments.

When using block-level or database-level data sharing for online and batch
programs, you can use additional processing options.

Related Reading:
v For a special case involving the HISAM delete byte with parameter ERASE=YES,
see IMS Version 10: Database Administration Guide.
v For more information on database and block-level data sharing, see IMS Version
10: System Administration Guide.

E option
With the E option, your program has exclusive access to the hierarchy or to the
segment you use it with. The E option is used in conjunction with the options G, I,
D, R, and A. While the E program is running, other programs cannot access that
data, but may be able to access segments that are not in the E program's PCB. No
dynamic enqueue by program isolation is done, but dynamic logging of database
updates will be done.

GO option
When your program retrieves a segment with the GO option, IMS does not lock
the segment. While the read-without-integrity program reads the segment, it
remains available to other programs. This is because your program can only read
the data (termed read-only); it is not allowed to update the database. No dynamic
enqueue is done by program isolation for calls against this database. Serialization
between the program with PROCOPT=GO and any other update program does not
occur; updates to the same data occur simultaneously.

If a segment has been deleted and another segment of the same type has been
inserted in the same location, the segment data and all subsequent data that is
returned to the application may be from a different database record.

A read-without-integrity program can also retrieve a segment even if another
program is updating the segment. This means that the program need not wait for
segments that other programs are accessing. If a read-without-integrity program
reads data that is being updated by another program, and that program terminates
abnormally before reaching the next commit point, the updated segments might
contain invalid pointers. If an invalid pointer is detected, the read-without-integrity
program terminates abnormally, unless the N or T option was specified with GO.
Pointers are updated during insert, delete, and backout functions.



N option
When you use the N option with GO to access a full-function database or a DEDB,
and the segment you are retrieving contains an invalid pointer, IMS returns a GG
status code to your program. Your program can then terminate processing,
continue processing by reading a different segment, or access the data using a
different path. The N option must be specified as PROCOPT=GON or GONP.

T option
When you use the T option with GO and the segment you are retrieving contains
an invalid pointer, the response from an application program depends on whether
the program is accessing a full-function or Fast Path database.

For calls to full-function databases, the T option causes DL/I to automatically retry
the operation. You can retrieve the updated segment, but only if the updating
program has reached a commit point or has had its updates backed out since you
last tried to retrieve the segment. If the retry fails, a GG status code is returned to
your program.

For calls to Fast Path DEDBs, the T option does not cause DL/I to retry the
operation; a GG status code is returned. The T option must be specified as
PROCOPT=GOT or GOTP.

GOx and data integrity


For a very small set of applications and data, PROCOPT=GOx offers some
performance and parallelism benefits. However, it does not offer application data
integrity. For example, using PROCOPT=GOT in an online environment on a
full-function database can cause performance degradation. The T option forces a
re-read from DASD, negating the advantage of very large buffer pools and VSAM
hiperspace for all currently running applications and shared data. For more
information on the GOx processing option for DEDBs, see IMS Version 10: System
Utilities Reference.

Read without integrity


Database-level sharing of IMS databases provides for sharing of databases between
a single update-capable batch or online IMS system and any number of other IMS
systems that are reading the data without integrity.

A GE status code might be returned to a program using PROCOPT=GOx for a
segment that exists in a HIDAM database during control interval (CI) splits.

In IMS, programs that use database-level sharing include PROCOPT=GOx in their
DBPCBs for that data. For batch jobs, the DBPCB PROCOPTs establish the batch
job's access level for the database. That is, a batch job uses the highest declared
intent for a database as the access level for DBRC database authorization. In an
online IMS environment, database ACCESS is specified on the DATABASE macro
during IMS system definition, and it can be changed using the /START DB
ACCESS=RO command. Online IMS systems schedule programs with data availability
determined by the PROCOPTs within those program PSBs being scheduled. That
data availability is therefore limited by the online system's database access.

The PROCOPT=GON and GOT options (described in "N option" and "T option")
provide certain limited PCB status code retry for some recognizable pointer errors
within the data that is being read without integrity. In some cases, dependent
segment updates, occurring asynchronously to the read-without-integrity IMS
instance, do not interfere with the program that is reading that data without
integrity. However, update activity to an average database does not always allow a
read-without-integrity IMS system to recognize a data problem.

What read without integrity means


Each IMS batch or online instance has OSAM and VSAM buffer pools defined for
it. Without locking to serialize concurrent updates that are occurring in another
IMS instance, a read without integrity from a database data set fetches a copy of a
block or CI into the buffer pool in storage. Blocks or CIs in the buffer pool can
remain there a long time. Subsequent read without integrity of other blocks or CIs
can then fetch more recent data. Data hierarchies and other data relationships
between these different blocks or CIs can be inconsistent.

For example, consider an index database (VSAM KSDS), which has an index
component and a data component. The index component contains only hierarchic
control information, relating to the data component CI where a given keyed record
is located. Think of this as the way that the index component CI maintains the
high key in each data component CI. Inserting a keyed record into a KSDS data
component CI that is already full causes a CI split. That is, some portion of the
records in the existing CI are moved to a new CI, and the index component is
adjusted to point to the new CI.

Example: Suppose the index CI shows the high key in the first data CI as KEY100,
and a split occurs. The split moves keys KEY051 through KEY100 to a new CI; the
index CI now shows the high key in the first data CI as KEY050, and another entry
shows the high key in the new CI as KEY100.

A program that is reading without integrity, and that already read the "old" index
component CI into its buffer pool (high key KEY100), does not point to the newly
created data CI and does not attempt to access it. More specifically, keyed records
that exist in a KSDS at the time a read-without-integrity program starts might
never be seen. In this example, KEY051 through KEY100 are no longer in the first
data CI, even though the "old" copy of the index CI in the buffer pool still
indicates that any existing keys up to KEY100 are in the first data CI.

Hypothetical cases also exist where the deletion of a dependent segment and the
insertion of that same segment type under a different root, placed in the same
physical location as the deleted segment, can cause simple Get Next processing to
give the appearance of only one root in the database. For example, accessing the
segments under the first root in the database down to a level-06 segment (which
had been deleted from the first root and is now logically under the last root)
would then reflect data from the other root. The next and subsequent Get Next
calls retrieve segments from the other root.

Read-only (PROCOPT=GO) processing does not provide data integrity.

Data set extensions


IMS instances with database-level sharing can open a database for read without
integrity. After the database is opened, another program that is updating that
database can make changes to the data. These changes might result in logical and
physical extensions to the database data set. Because the read-without-integrity
program is not aware of these extensions, problems with the RBA (beyond
end-of-data) can occur.



Chapter 11. Gathering requirements for message processing
options
One of the tasks of application design is providing information about your
application's requirements to the people in charge of designing and administering
your IMS system. This section describes the information you should provide, and
why this information is important.

Restriction: This section applies to DB/DC and DCCTL environments only.

Subsections:
v “Identifying online security requirements”
v “Analyzing screen and message formats” on page 165
v “Gathering requirements for conversational processing” on page 168
v “Identifying output message destinations” on page 171

Identifying online security requirements


Security in an online system means protecting the data from unauthorized use
through terminals. It also means preventing unauthorized use of both the IMS
system and the application programs that access the database. For example, you do
not want a program that processes paychecks to be available to everyone who can
access the system.

The security mechanisms that IMS provides are signon, terminal, and password
security.

Related reading: For an explanation of how to establish these types of security, see
IMS Version 10: System Administration Guide.

Limiting access to specific individuals: signon security


Signon security is available through Resource Access Control Facility (RACF®) or a
user-written security exit routine. With signon security, individuals who want to
use IMS must be defined to RACF or its equivalent before they are allowed access.

When a person signs on to IMS, RACF or security exits verify that the person is
authorized to use IMS before access to IMS-controlled resources is allowed. This
signon security is provided by the /SIGN ON command. You can also limit the
transaction codes and commands that individuals are allowed to enter. You do this
by associating an individual's user identification (USERID) with the transaction
codes and commands.

LU 6.2 transactions contain the USERID.

Related reading: For more information on security, see IMS Version 10:
Communications and Connections Guide.

© Copyright IBM Corp. 1974, 2010 163


Limiting access for specific terminals: terminal security
Use terminal security to limit the entry of a transaction code to a particular
terminal or group of terminals in the system. How you do this depends on how
many programs you want to protect.

To protect a particular program, you can either authorize a transaction code to be
entered from a list of logical terminals, or you can associate each logical terminal
with a list of the transaction codes that a user can enter from that logical terminal.
For example, you could protect the paycheck application program by defining the
transaction code associated with it as valid only when entered from the terminals
in the payroll department. If you wanted to restrict access to this application even
more, you could associate the paycheck transaction code with only one logical
terminal. To enter that transaction code, a user needs to be at a physical terminal
that is associated with that logical terminal.

Restriction: If you are using the shared-queues option, static control blocks
representing the resources needed for the security check need to be available in the
IMS system where the security check is being made. Otherwise, the security check
is bypassed.

Related reading: For more information on shared queues, see IMS Version 10:
IMSplex Administration Guide.

Limiting access to the program: password security


Another way you can protect the application program is to require a password
when a person enters the transaction code that is associated with the application
program you want to protect. If you use only password security, the person
entering a particular transaction code must also enter the password of the
transaction before IMS processes the transaction.

If you use password security with terminal security, you can restrict access to the
program even more. In the paycheck example, using password security and
terminal security means that you can restrict unauthorized individuals within the
payroll department from executing the program.

Restriction: Password security for transactions is only supported if the transactions
that are needed for the security check are defined in the IMS system where the
security check is being made. Otherwise, the security check is bypassed.

Allowing access to security data: authorization security


RACF has a data set that you can use to store user-unique information. The AUTH
call gives application programs access to the RACF data set security data, and a
way to control access to application-defined resources. Thus, application programs
can obtain the security information about a particular user.

How IMS security relates to DB2 for z/OS security


An important part of DB2 for z/OS security is the authorization ID. The
authorization ID that IMS uses for a program or a user at a terminal depends on
the kind of security that is used and the kind of program that is running. For
MPPs, IFPs, and transaction-oriented BMPs, the authorization ID depends on the
type of IMS security:
v If signon is required, IMS passes the USERID and group name that are
signed-on to DB2 for z/OS.



v If signon is not required, DB2 for z/OS uses the name of the originating logical
terminal as the authorization ID.

For batch-oriented BMPs, the authorization ID depends on the value specified
for the BMPUSID= keyword in the DFSDCxxx PROCLIB member:
v If BMPUSID=USERID is specified, the value from the USER= keyword on the
JOB statement is used.
v If USER= is not specified on the JOB statement, the program's PSB name is used.
v If BMPUSID=PSBNAME is specified, or if BMPUSID= is not specified at all, the
program's PSB name is used.
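For example, to have batch-oriented BMPs use the USER= value from the JOB statement as the authorization ID, the DFSDCxxx member would contain the following line:

```
BMPUSID=USERID
```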

Supplying security information


When you evaluate your application in terms of its security requirements, you
need to look at each program individually. When you have done this, you can
supply the following information to your security personnel.
v For programs that require signon security:
– List the individuals who should be able to access IMS.
v For programs that require terminal security:
– List the transaction codes that must be secured.
– List the terminals that should be allowed to enter each of these transaction
codes. If the terminals you are listing are already installed and being used,
identify the terminals by their logical terminal names. If not, identify them by
the department that will use them (for example, the accounting department).
v For programs that require password security:
– List the transaction codes that require passwords.
v For commands that require security:
– List the commands that require signon or password security.

Analyzing screen and message formats


When an application program communicates with a terminal, an editing procedure
translates messages from the way they are entered at the terminal to the way the
program expects to receive and process them. The decisions about how IMS will
edit your program's messages are based on how your data should be presented to
the person at the terminal and to the application program. You need to describe
how you want data from the program to appear on the terminal screen, and how
you want data from the terminal to appear in the application program's I/O area.
(The I/O area contains the segments being processed by the application program.)

To supply information that will be helpful in these decisions, you should be
familiar with how IMS edits messages. IMS has two editing procedures:
v Message Format Service (MFS) uses control blocks that define what a message
should look like to the person at the terminal and to the application program.
v Basic edit is available to all IMS application programs. Basic edit removes
control characters from input messages and inserts the control characters you
specify in output messages to the terminal.

Related reading: For information on defining IMS editing procedures and on other
design considerations for IMS networks, see IMS Version 10: Communications and
Connections Guide.

Chapter 11. Gathering requirements for message processing options 165


An overview of MFS
MFS uses four kinds of control blocks to format messages between an application
program and a terminal. The information you gather about how you want the data
formatted when it is passed between the application program and the terminal is
contained in these control blocks.

The two control blocks that describe input messages to IMS are:
v The device input format (DIF) describes to IMS what the input message is to
look like when it is entered at the terminal.
v The message input descriptor (MID) tells IMS how the application program
expects to receive the input message in its I/O area.

By using the DIF and the MID, IMS can translate the input message from the way
that it is entered at the terminal to the way it should appear in the program's I/O
area.

The two control blocks that describe output messages to IMS are:
v The message output descriptor (MOD) tells IMS what the output message is to
look like in the program's I/O area.
v The device output format (DOF) tells IMS how the message should appear on
the terminal.

To define the MFS control blocks for an application program, you need to know
how you want the data to appear at the terminal and in the application program's
I/O area for both input and output.
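The way these control blocks cooperate on input can be illustrated with a small model. This is an illustrative sketch only, not MFS syntax: the field names and lengths are invented, and real MFS definitions carry much more information (attributes, literals, device characteristics).

```python
# Toy model of MFS input editing: a DIF-like definition captures fields as
# they are entered at the terminal, and a MID-like layout gives the order
# and length in which the program expects them in its I/O area. Names and
# lengths here are invented for illustration; this is not MFS syntax.

def edit_input(device_fields, mid_layout):
    """Build the program's I/O area from fields entered at the terminal."""
    io_area = ""
    for name, length in mid_layout:          # order the program expects
        value = device_fields.get(name, "")  # value as entered on screen
        io_area += value.ljust(length)[:length]  # pad or truncate to fit
    return io_area

# Fields as entered on the screen (described to IMS by the DIF):
entered = {"FLIGHT": "123", "NAME": "SMITH"}
# I/O area layout the program expects (described to IMS by the MID):
layout = [("FLIGHT", 5), ("NAME", 10)]

print(repr(edit_input(entered, layout)))  # '123  SMITH     '
```

The MOD and DOF play the mirror-image roles for output: the MOD describes the message as the program builds it, and the DOF describes how it appears on the device.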

Related reading: For more information about how you define this information to
MFS, see IMS Version 10: Application Programming Guide.

An overview of basic edit


Basic edit removes the control characters from an input message before the
application program receives it, and inserts the control characters you specify
when the application program sends a message back to the terminal. To format
output messages at a terminal using basic edit, you need to supply the necessary
control characters for the terminal you are using.

If your application will use basic edit, you should describe how you want the data
to be presented at the terminal, and what it is to look like in the program's I/O
area.
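The two halves of basic edit can be sketched as follows. This is a toy model, not the actual IMS editing routine; the control bytes shown are arbitrary examples, and the control characters a real terminal needs depend on the device.

```python
# Toy model of basic edit: control characters are removed from an input
# message before the program receives it, and the control characters you
# specify are inserted into the output message sent to the terminal.
# The control bytes used here are arbitrary examples.

def basic_edit_input(raw):
    """Remove control characters from an input message."""
    return "".join(ch for ch in raw if ch.isprintable())

def basic_edit_output(text, control_chars):
    """Insert the control characters you specify for the terminal."""
    return text + control_chars

print(basic_edit_input("NEXT\x15FLIGHT"))   # NEXTFLIGHT
out = basic_edit_output("ROW 1", "\x15")    # trailing control byte added
```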

Editing considerations in your application


Before you describe the editing requirements of your application, be sure that you
are aware of your standards concerning screen design. Make sure that the
requirements that you describe comply with those standards.

Provide the following information about your program's editing requirements:


v How you want the screen to be presented to the person at the terminal for the
person to enter the input data. For example, if an airline agent wants to reserve
seats on a particular flight, the screen that asks for this information might look
like this:



FLIGHT#:
NAME:
NO. IN PARTY:
v What the data should look like when the person at the terminal enters the input
message.
v What the input message should look like in the program's I/O area.
v What the data should look like when the program builds the output message in
its I/O area.
v How the output message should be formatted at the terminal.
v The length and type of data that your program and the terminal will be
exchanging.

The type of data you are processing is only one consideration when you analyze
how you want the data presented at the terminal. In addition, you should weigh
the needs of the person at the terminal (the human factors aspects in your
application) against the effect of the screen design on the efficiency of the
application program (the performance factors in the application program).
Unfortunately, sometimes a trade-off between human factors and performance
factors exists. A screen design that is easily understood and used by the person at
the terminal may not be the design that gives the application program its best
performance. Your first concern should be to follow your established screen
standards.

A terminal screen that has been designed with human factors in mind is one that
puts the person at the terminal first; it is one that makes it as easy as possible for
that person to interact with IMS. Some of the things you can do to make it easy for
the person at the terminal to understand and respond to your application program
are:
v Display a small amount of data at one time.
v Use a format that is clear and uncluttered.
v Provide clear and simple instructions.
v Display one idea at a time.
v Require short responses from the person at the terminal.
v Provide some means for help and ease of correction for the person at the
terminal.

At the same time, you do not want the way in which a screen is designed to have
a negative effect on the application program's response time, or on the system's
performance. When you design a screen with performance first in mind, you want
to reduce the processing that IMS must do with each message. To do this, the
person at the terminal should be able to send a lot of data to the application
program in one screen so that IMS does not have to process additional messages.
And the program should not require two screens to give the person at the terminal
information that it could give on one screen.

When describing how the program should receive the data from the terminal, you
need to consider the program logic and the type of data you are working with.



Gathering requirements for conversational processing
When you use conversational processing, the person at the terminal enters some
information, and an application program processes the information and responds
to the terminal. The person at the terminal then enters more information for an
application program to process. Each of these interactions between the person at
the terminal and the program is called a step in the conversation. Only MPPs can
be conversational programs; Fast Path programs and BMPs cannot be
conversational.

Definition: Conversational processing means that the person at the terminal can
communicate with the application program.

What happens in a conversation


| Definition: A conversation is a dialog between a user at a terminal and IMS
| through a scratchpad area (SPA) and one or more application programs.

| During a conversation, the user at the terminal enters a request, receives the
| information from IMS, and enters another request. Although it is not apparent to
| the user, a conversation can be processed by several application programs or by
| one application program.

| To continue a conversation, the program must have the necessary information to
| continue processing. IMS stores data from one step of the conversation to the next
| in a SPA. When the same program or a different program continues the
| conversation, IMS gives the program the SPA for the conversation associated with
| that terminal.

In the preceding airline example, the first program might save the flight number
and the names of the people traveling, and then pass control to another application
program to reserve seats for those people on that flight. The first program saves
this information in the SPA. If the second application program did not have the
flight number and names of the people traveling, it would not be able to do its
processing.

Designing a conversation
The first part of designing a conversation is to design the flow of the conversation.
If the requests from the person at the terminal are to be processed by only one
application program, you need only to design that program. If the conversation
should be processed by several application programs, you need to decide which
steps of the conversation each program is to process, and what each program is to
do when it has finished processing its step of the conversation.

When a person at a terminal enters a transaction code that has been defined as
conversational, IMS schedules the conversational program (for example, Program
A) associated with that transaction code. When Program A issues its first call to the
message queue, IMS returns the SPA that is defined for that transaction code to
Program A's I/O area. The person at the terminal must enter the transaction code
(and password, if one exists) only on the first input screen; the transaction code
need not be entered during each step of the conversation. IMS treats data in
subsequent screens as a continuation of the conversation started on the first screen.

After the program has retrieved the SPA, Program A can retrieve the input
message from the terminal. After it has processed the message, Program A can
either continue the conversation, or end it.



To continue the conversation, Program A can do any of the following:
v Reply to the terminal that sent the message.
v Reply to the terminal and pass the conversation to another conversational
program, for example Program B. This is called a deferred program switch.
Definition: A deferred program switch means that Program A responds to the
terminal and then passes control to another conversational program, Program B.
After passing control to Program B, Program A is no longer part of the
conversation. The next input message that the person at the terminal enters goes
to Program B, although the person at the terminal is unaware that this message
is being sent to a second program.
Restriction: A deferred program switch is disallowed if the application is
involved in an inbound protected conversation. The application will receive an
X6 status code if it attempts to perform a deferred program switch in this
environment.
v Pass control of the conversation to another conversational program without first
responding to the originating terminal. This is called an immediate program switch.
Definition: An immediate program switch lets you pass control directly to
another conversational program without having to respond to the originating
terminal. When you do this, the program that you pass the conversation to must
respond to the person at the terminal. To continue the conversation, Program B
then has the same choices as Program A did: It can respond to the originating
terminal and keep control, or it can pass control in a deferred or immediate
program switch.
Restriction: An immediate program switch is disallowed if the application is
involved in an inbound protected conversation. The application will be abended
with a U711 if it attempts to perform an immediate program switch in this
environment.

To end the conversation, Program A can do either of the following:


v Move a blank to the first byte of the transaction code area of the SPA and then
return the SPA to IMS.
v Respond to the terminal and pass control to a nonconversational program. This
is also called a deferred program switch, but Program A ends the conversation
before passing control to another application program. The second application
program can be an MPP or a transaction-oriented BMP that processes
transactions from the conversational program.
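The choices above can be summarized in a small state sketch. This is an illustrative model, not IMS code: the program names and SPA contents are invented, and the only grounded detail is that moving a blank to the first byte of the SPA's transaction code area ends the conversation.

```python
# Toy model of one conversation step. "deferred" replies to the terminal
# before passing control; "immediate" passes control without replying, so
# the receiving program must respond. Ending the conversation moves a
# blank to the first byte of the SPA's transaction code area.

def step(spa, owner, action, next_program=None):
    """Return (spa, program_owning_next_input, conversation_active)."""
    if action == "reply":                     # respond and keep control
        return spa, owner, True
    if action in ("deferred", "immediate"):   # pass to another program
        return spa, next_program, True
    if action == "end":
        return " " + spa[1:], None, False     # blank first byte of SPA
    raise ValueError(action)

spa = "TRANA   FLIGHT=123 NAME=SMITH"         # invented SPA contents
spa, owner, active = step(spa, "PROGA", "deferred", next_program="PROGB")
print(owner, active)                          # PROGB True
spa, owner, active = step(spa, owner, "end")
print(repr(spa[0]), active)                   # ' ' False
```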

Important points about the SPA


When program A passes control of a conversation to program B, program B needs
to have the data that program A saved in the SPA in order to continue the
conversation. IMS gives the SPA for the transaction to program B when program B
issues its first message call.

The SPA is kept with the message. When the truncated data option is on, the size
of the retained SPA is the largest SPA of any transaction in the conversation.

Example: If the conversation starts with TRANA (SPA=100), and the program
switches to a TRANB (SPA=50), the input message for TRANB will contain a SPA
segment of 100 bytes. IMS adjusts the size of the SPA so that TRANB receives only
the first 50 bytes.

However, the IMS support that adjusts the size of the SPA does not exist in either
IMS Version 5 or earlier systems. If TRANB is to execute on a remote MSC system
without this support, it will be passed a SPA of 100 bytes when it is only expecting
50 bytes. There are two ways to prevent this larger sized SPA from being sent to an
IMS Version 5 or earlier system:
1. You could define TRANB on the local IMS system with the RTRUNC parameter
on its TRANSACT macro; this forces the SPA to a size of 50 bytes when it is
inserted by TRANA.
2. If you never use truncated data and do not want to change the TRANSACT macros for
remote transactions to specify RTRUNC, a specification is available to set the
system-wide default for the truncated data option. The specification is
TRUNC=Y|N in the DFSDCxxx PROCLIB member. You could set the system
default to not save truncated data, and the SPA would be automatically
truncated to a size of 50 bytes when it is inserted by TRANA.
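The size adjustment in this example can be sketched as follows. The transaction names and sizes follow the TRANA/TRANB example above; the functions are an illustrative model, not IMS code.

```python
# Sketch of the SPA sizing in the TRANA/TRANB example above. The retained
# SPA keeps the largest size in the conversation, but the target
# transaction sees only its own defined SPA length; with TRUNC=N the SPA
# is instead cut down at the switch.

def spa_seen_by(retained_spa, target_spa_size):
    """The target transaction receives only the first n bytes."""
    return retained_spa[:target_spa_size]

def retained_after_switch(spa, target_spa_size, save_truncated=True):
    """With TRUNC=N (save_truncated=False) the retained SPA is trimmed."""
    return spa if save_truncated else spa[:target_spa_size]

spa_trana = "X" * 100                               # TRANA: SPA=100
print(len(spa_seen_by(spa_trana, 50)))              # 50  (TRANB: SPA=50)
print(len(retained_after_switch(spa_trana, 50, save_truncated=True)))   # 100
print(len(retained_after_switch(spa_trana, 50, save_truncated=False)))  # 50
```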

Related reading: For more information on how to structure a conversational
program, see IMS Version 10: Application Programming Guide.

Recovery considerations in conversations


Because a conversation involves several steps and can involve several application
programs, consider the following items:
v One way you can make recovery easier is to design the conversation so that all
the database updates are done in the last step of the conversation. This way, if
the conversation terminates abnormally, IMS can back out all the updates
because they were all made during the same step of the conversation. Updating
the database during the last step of the conversation is also a good idea, because
the input from each step of the conversation is available.
v Although a conversation can terminate abnormally during any step of the
conversation, IMS backs out only the database updates and output messages
resulting during the last step of the conversation. IMS does not back out
database updates or cancel output messages for previous steps, even though
some of that processing might be inaccurate as a result of the abnormal
termination.
v Certain IMS system service calls can be helpful if the program determines that
some of its processing was invalid. These calls include ROLB, SETS, SETU, and
ROLS. The Roll Back call (ROLB) backs out all of the changes that the program has
made to the database. ROLB also cancels the output messages that the program
has created (except those sent with an express PCB) since the program's last
commit point.
The SETS, or SETU, and ROLS (with a token) calls work together to allow the
application program to set intermediate backout points within the call
processing of the program. The application program can set up to nine
intermediate backout points. Your program needs to use the SETS or SETU call to
specify a token for each point. A subsequent ROLS call, using the same token, can
back out all database changes and discard all nonexpress messages processed
since that SETS or SETU call.
Definition: A token is a 4-byte identifier.
v The program can use an express PCB to send a message to the person at the
terminal and to the master terminal operator. When the application program
inserts messages using an express PCB, IMS waits until it has the complete
message, rather than for the occurrence of a commit point, to transmit the
message to its destination. (In this context, “insert” refers to a situation in which
the application program sends the message and it is received by IMS; “transmit”
refers to a situation in which IMS begins sending the message to its destination.)
Therefore, when IMS has the complete message, it will be transmitted even if the
program abnormally terminates. Messages sent with an express PCB are sent to
their final destinations even if the program terminates abnormally or issues a
ROLB call. For more information about the express PCB, refer to “To other
programs and terminals.”
v To verify the accuracy of the previous processing, and to correct the processing
that is determined to be inaccurate, you can use the Conversational Abnormal
termination routine, DFSCONE0.
Related reading: For more information on DFSCONE0, see IMS Version 10: Exit
Routine Reference.
v You can write an MPP to examine the SPA, send a message notifying the person
at the terminal of the abnormal termination, make any necessary database calls,
and use a user-written or system-provided exit routine to schedule it.
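The SETS/SETU and ROLS mechanics can be sketched with a simple model. This is illustrative only: real calls go through DL/I, SETU differs from SETS in its handling of unsupported PCBs, and the token here is just a 4-character string.

```python
# Toy model of intermediate backout points: SETS records a token and the
# current position in the unit of work; ROLS with the same token backs out
# database changes and discards nonexpress messages made since that point.

class UnitOfWork:
    def __init__(self):
        self.changes = []        # database updates since last commit point
        self.points = {}         # token -> position at SETS time

    def sets(self, token):
        if len(self.points) >= 9:                  # at most nine points
            raise RuntimeError("too many intermediate backout points")
        self.points[token] = len(self.changes)

    def rols(self, token):
        self.changes = self.changes[: self.points[token]]

    def rolb(self):
        self.changes = []        # back out everything since last commit

uow = UnitOfWork()
uow.changes.append("update order")
uow.sets("PT01")                 # 4-byte token
uow.changes.append("update inventory")
uow.rols("PT01")                 # discard work done since the SETS call
print(uow.changes)               # ['update order']
```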

Identifying output message destinations


An application program can send messages to another application program or to
IMS terminals. To send output messages, the program issues a call and references
the I/O PCB or an alternate PCB. The I/O PCB and alternate PCBs represent
logical terminals and other application programs with which the application
program communicates.

Definition: An alternate PCB is a data communication program communication
block (DCPCB) that you define to describe output message destinations other than
the terminal that originated the input message.

The originating terminal


To send a message to the logical terminal that sent the input message, the program
uses an I/O PCB. IMS puts the name of the logical terminal that sent the message
in the I/O PCB when the program receives the message. As a result, the program
need not do anything to the I/O PCB before sending the message. If a program
receives a message from a batch-oriented BMP or CPI Communications driven
program, no logical terminal name is available to put into the I/O PCB. In these
cases, the logical terminal name field contains blanks.

To other programs and terminals


When you want to send an output message to a terminal other than, or in addition
to, the terminal that sent the input message, you use an alternate PCB. You can set
the alternate PCB for a specific logical terminal when the program's PSB is
generated, or you can define the alternate PCB as being modifiable. A program can
change the destination of a modifiable alternate PCB while the program is running,
so you can send output messages to several alternate destinations.

The application program might need to respond to the originating terminal before
the person at the originating terminal can send any more messages. This might
occur when a terminal is in response mode or in conversational mode:
v Response mode can apply to a communication line, a terminal, or a transaction.
When response mode is in effect, IMS does not accept any input from the
communication line or terminal until the program has sent a response to the
previous input message. The originating terminal is unusable (for example, the
keyboard locks) until the program has processed the transaction and sent the
reply back to the terminal.
If a response-mode transaction is processed, including Fast Path transactions,
and the application does not insert a response back to the terminal through
either the I/O PCB or alternate I/O PCB, but inserts a message to an alternate
PCB (program-to-program switch), the second or subsequent application
program must respond to the originating terminal and satisfy the response. IMS
will not take the terminal out of response mode.
If an application program terminates normally and does not issue an ISRT call to
the I/O PCB, alternate I/O PCB, or alternate PCB, IMS sends system message
DFS2082I to the originating terminal to satisfy the response for all
response-mode transactions, including Fast Path transactions.
You can define communication lines and terminals as operating in response
mode, not operating in response mode, or operating in response mode only if
processing a transaction that has been defined as response mode. You specify
response mode for communication lines and terminals on the TYPE and
TERMINAL macros, respectively, at IMS system definition. You can define any
transaction as a response-mode transaction; you do this on the TRANSACT
macro at IMS system definition. Response mode is in effect if:
– The communication line has been defined as being in response mode.
– The terminal has been defined as being in response mode.
– The transaction code has been defined as response mode.
v Conversational mode applies to a transaction. When a program is processing a
conversational transaction, the program must respond to the originating terminal
after each input message it receives from the terminal.

In these processing modes, the program must respond to the originating terminal.
But sometimes the originating terminal is a physical terminal that is made up of
two components—for example, a printer and a display. If the physical terminal is
made up of two components, each component has a different logical terminal
name. To send an output message to the printer part of the terminal, the program
must use a different logical terminal name than the one associated with the input
message; it must send the output message to an alternate destination. A special
kind of alternate PCB is available to programs in these situations; it is called an
alternate response PCB.

Definition: An alternate response PCB lets you send messages when exclusive,
response, or conversational mode is in effect. See the next section for more
information.

Alternate response PCB


The destination of an alternate response PCB must be a logical terminal—you
cannot use an alternate response PCB to represent another application program.
When you use an alternate response PCB during response mode or conversational
mode, the logical terminal represented by the alternate response PCB must
represent the same physical terminal as the originating logical terminal.

In these processing modes, after receiving the message, the application program
must respond by issuing an ISRT call to one of the following:
v The I/O PCB.
v An alternate response PCB.
v An alternate PCB whose destination is another application program, that is, a
program-to-program switch.
v An alternate PCB whose destination is an ISC link. This is allowed only for
front-end switch messages.
Related reading: For more information on front-end switch messages, see IMS
Version 10: Exit Routine Reference.



If one of these criteria is not met, message DFS2082I is sent to the terminal.
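The criteria above can be expressed as a small check. The target names are invented for illustration; the real decision is made by IMS, not by application code.

```python
# Sketch of the response check described above. If none of the criteria is
# met, IMS sends message DFS2082I to the terminal instead.

VALID_TARGETS = {"IO_PCB", "ALT_RESPONSE_PCB", "ALT_PCB_TO_PROGRAM"}

def satisfies_response(isrt_target, front_end_switch=False):
    """Does this ISRT destination satisfy the outstanding response?"""
    if isrt_target in VALID_TARGETS:
        return True
    if isrt_target == "ALT_PCB_TO_ISC_LINK":    # front-end switch only
        return front_end_switch
    return False

print(satisfies_response("ALT_RESPONSE_PCB"))        # True
print(satisfies_response("ALT_PCB_TO_ISC_LINK"))     # False
```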

Express PCB
Consider specifying an alternate PCB as an express PCB. The express designation
relates to whether a message that the application program inserted is actually
transmitted to the destination if the program abnormally terminates or issues a
ROLL, ROLB, or ROLS call. For all PCBs, when a program abnormally terminates or
issues a ROLL, ROLB, or ROLS call, messages that were inserted but not made
available for transmission are cancelled while messages that were made available
for transmission are never cancelled.

Definition: An express PCB is an alternate response PCB that allows your program
to transmit the message to the destination terminal earlier than when you use a
nonexpress PCB.

For a nonexpress PCB, the message is not made available for transmission to its
destination until the program reaches a commit point. The commit point occurs
when the program terminates, issues a CHKP call, or requests the next input
message when the transaction has been defined with MODE=SNGL.

For an express PCB, when IMS has the complete message, it makes the message
available for transmission to the destination. In addition to occurring at a commit
point, it also occurs when the application program issues a PURG call using that
PCB or when it requests the next input message.
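The timing difference can be sketched with a toy model of what would still be transmitted if the program failed at a given moment. This is illustrative only; the message contents and flags are invented.

```python
# Toy model of when inserted messages become available for transmission.
# Nonexpress PCB: nothing is available until a commit point. Express PCB:
# each complete message is available as soon as IMS has all of it.

def transmittable(inserted, reached_commit_point, express):
    """Messages IMS would still transmit if the program failed right now."""
    if express:
        return [text for text, complete in inserted if complete]
    return [text for text, _ in inserted] if reached_commit_point else []

queue = [("reply to agent", True), ("report in progress", False)]
print(transmittable(queue, reached_commit_point=False, express=True))
# ['reply to agent']
print(transmittable(queue, reached_commit_point=False, express=False))
# []
```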

You should provide the answers to the following questions to the data
communications administrator to help in meeting your application's message
processing requirements:
v Will the program be required to respond to the terminal before the terminal can
enter another message?
v Will the program be responding only to the terminal that sends input messages?
v If the program needs to send messages to other terminals or programs as well, is
there only one alternate destination?
v What are the other terminals to which the program must send output messages?
v Should the program be able to send an output message before it terminates
abnormally?



Chapter 12. Testing an IMS application program
This section describes what is involved in testing an IMS application program (as a
unit) and provides suggestions on how to do it. The purpose of this test, called a
program unit test, is to ensure that the program correctly handles its input data,
processing, and output data.

The amount and type of testing you do depends on the individual program you
are testing. Though no strict rules for testing are available, the guidelines offered in
this section might be helpful.

Subsections:
v “What you need to test an IMS program”
v “Testing DL/I call sequences (DFSDDLT0) before testing your IMS program”
v “Using BTS II to test your IMS program” on page 176
v “Tracing DL/I calls with image capture for your IMS program” on page 176
v “Requests for monitoring and debugging your IMS program” on page 179
v “What to do when your IMS program terminates abnormally” on page 193

What you need to test an IMS program


Before you start testing your program, be aware of your established test
procedures. To start testing, you need the following three items:
v Test JCL.
v A test database. Never test a program using a production database because the
program, if faulty, might damage valid data.
v Test input data. The input data that you use need not be current, but it should
be valid. You cannot be sure that your output data is valid unless you use valid
input data.

The purpose of testing the program is to make sure that the program can correctly
handle all the situations that it might encounter. To thoroughly test the program,
try to test as many of the paths that the program can take as possible.

Recommendations:
v Test each path in the program by using input data that forces the program to
execute each of its branches.
v Be sure that your program tests its error routines. Again, use input data that will
force the program to test as many error conditions as possible.
v Test the editing routines your program uses. Give the program as many different
data combinations as possible to make sure it correctly edits its input data.

Testing DL/I call sequences (DFSDDLT0) before testing your IMS


program
The DL/I test program, DFSDDLT0, is an IMS application program that executes
the DL/I calls you specify against any database.

© Copyright IBM Corp. 1974, 2010 175


Restriction: DFSDDLT0 does not work if you are using a coordinator controller
(CCTL).

An advantage of using DFSDDLT0 is that you can test the DL/I call sequence you
will use prior to coding your program. Testing the DL/I call sequence before you
test the program makes debugging easier, because by the time you test the
program, you know that the DL/I calls are correct. When you test the program,
and it does not execute correctly, you know that the DL/I calls are not part of the
problem if you have already tested them using DFSDDLT0.

For each DL/I call that you want to test, you give DFSDDLT0 the call and any
SSAs that you are using with the call. DFSDDLT0 then executes and gives you the
results of the call. After each call, DFSDDLT0 shows you the contents of the DB
PCB mask and the I/O area. This means that for each call, DFSDDLT0 checks the
access path you have defined for the segment, and the effect of the call. DFSDDLT0
is helpful in debugging because it can display IMS application control blocks.

To indicate to DFSDDLT0 the call you want executed, you use four types of control
statements:
Status statements establish print options for DFSDDLT0's output and select the
DB PCB to use for the calls you specify.
Comment statements let you choose whether you want to supply comments.
Call statements indicate to DFSDDLT0 the call you want to execute, any SSAs
you want used with the call, and how many times you want the call executed.
Compare statements tell DFSDDLT0 that you want it to compare its results
after executing the call with the results you supply.

In addition to testing call sequences to see if they work, you can also use
DFSDDLT0 to check the performance of call sequences.

Related Reading: For more details about using DFSDDLT0, and how to check call
sequence performance, see IMS Version 10: Application Programming Guide.

Using BTS II to test your IMS program


IMS Batch Terminal Simulator for z/OS is a valuable tool for testing programs
because you can use it to test call sequences. The documentation BTS II produces is
helpful in debugging. You can also test online application programs without
actually running them online.

Restriction: BTS II does not work if you are using a CCTL or running under
DBCTL.

Related Reading: For information about how to use BTS II, refer to BTS Program
Reference/Operations Manual.

Tracing DL/I calls with image capture for your IMS program
The DL/I image capture program (DFSDLTR0) is a trace program that can trace
and record DL/I calls issued by all types of IMS application programs.

Restriction: The image capture program does not trace calls to Fast Path databases.

You can run the image capture program in a DB/DC or a batch environment to:
Test your program
If the image capture program detects an error in a call it traces, it reproduces as
much of the call as possible, although it cannot document where the error
occurred, and cannot always reproduce the full SSA.
Produce input for DFSDDLT0
You can use the output produced by the image capture program as input to
DFSDDLT0. The image capture program produces status statements, comment
statements, call statements, and compare statements for DFSDDLT0.
Debug your program
When your program terminates abnormally, you can rerun the program using
the image capture program, which can then reproduce and document the
conditions that led to the program failure. You can use the information in the
report produced by the image capture program to find and fix the problem.

Subsections:
v “Using image capture with DFSDDLT0”
v “Restrictions on using image capture output” on page 178
v “Running image capture online” on page 178
v “Running image capture as a batch job” on page 178
v “Retrieving image capture data from the log data set” on page 179

Using image capture with DFSDDLT0


The image capture program produces the following control statements that you can
use as input to DFSDDLT0:
Status statements
When you invoke the image capture program, it produces the status statement.
The status statement it produces:
– Sets print options so that DFSDDLT0 prints all call trace comments, all DL/I
calls, and the results of all comparisons.
– Determines the new relative PCB number each time a PCB change occurs
while the application program is executing.
Comments statement
The image capture program also produces a comments statement when you
invoke it. The comments statements give:
– The time and date IMS started the trace
– The name of the PSB being traced
The image capture program also produces a comments statement preceding any
call in which IMS finds an error.
Call statements
The image capture program produces a call statement for each DL/I call the
application program issues. It also generates a CHKP call when it starts the trace
and after each commit point or CHKP request.
Compare statements
The image capture program produces data and PCB comparison statements if
you specify COMP on the TRACE command (if you run the image capture
program online), or on the DLITRACE control statement (if you run the image
capture program as a batch job).



Restrictions on using image capture output
The status statement of the image capture call is based on relative PCB position.
When the PCB parameter LIST=NO has been specified, the status statement may
need to be changed to select the PCB as follows:
v If all PCBs have the parameter LIST=YES, the status statement does not need to
be changed.
v If all PCBs have the parameter LIST=NO, the status statement needs to be
changed from the relative PCB number to the correct PCB name.
v If some PCBs have the parameter LIST=YES and some have the parameter
LIST=NO, the status statement needs to be changed as follows:
– The PCB relative position is based on all PCBs as if LIST=YES.
– For PCBs that have a PCB name, the status statement can be changed to use
the PCB name based on a relative PCB number.
– For PCBs that have LIST=YES and no PCB name, change the relative PCB
number to refer to the relative PCB number in the user list by looking at the
PCB list using LIST=YES and LIST=NO.

Running image capture online


When you run the image capture program online, the trace output goes to the IMS
log data set. To run the image capture program online, you issue the IMS TRACE
command from the IMS master terminal.

If you trace a BMP or an MPP and you want to use the trace results with
DFSDDLT0, the BMP or MPP must have exclusive write access to the databases it
processes. If the application program does not have exclusive access, the results of
DFSDDLT0 may differ from the results of the application program. When you trace
a BMP that accesses GSAM databases, you must include an //IMSERR DD
statement to get a formatted dump of the GSAM control blocks.

The following diagram shows the TRACE command format:

/TRACE SET ON|OFF PSB psbname NOCOMP|COMP

SET ON and NOCOMP are the defaults.

SET ON|OFF
Turns the trace on or off.
PSB psbname
Specifies the name of the PSB you want to trace. You can trace more than one
PSB at the same time by issuing a separate TRACE command for each PSB.
COMP|NOCOMP
Specifies whether you want the image capture program to produce data and
PCB compare statements to be used as input to DFSDDLT0.

Running image capture as a batch job


To run the image capture program as a batch job, you use the DLITRACE control
statement in the DFSVSAMP DD data set. In the DLITRACE control statement, you
specify:
v Whether you want to trace all of the DL/I calls the program issues or trace only
a certain group of calls.
v Whether you want the trace output to go to:
A sequential data set that you specify
The IMS log data set
Both sequential and IMS log data sets

Notes on using image capture:


v If the program being traced issues CHKP and XRST calls, the checkpoint and
restart information may not be directly reproducible when you use the trace
output with DFSDDLT0.
v When you run DFSDDLT0 in an IMS DL/I or DBB batch region with trace
output, the results are the same as the application program's results, but only if
the database has not been altered.

For information on the format of the DLITRACE control statement in the
DFSVSAMP DD data set, see the topic “Defining DL/I call image trace” in the IMS
Version 10: System Definition Reference.

Retrieving image capture data from the log data set


If the trace output is sent to the IMS log data set, you can retrieve it by using
utility DFSERA10 and a DL/I call trace exit routine, DFSERA50. DFSERA50
deblocks, formats, and numbers the image capture program records that are to be
retrieved. To use DFSERA50, you must insert a DD statement defining a sequential
output data set in the DFSERA10 input stream. The default ddname for this DD
statement is TRCPUNCH. The statement must specify BLKSIZE=80.

Examples: You can use the following examples of DFSERA10 input control
statements in the SYSIN data set to retrieve the image capture program data from
the log data set:
v Print all image capture program records:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,FLDTYP=X
v Print selected image capture program records by PSB name:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT OFFSET=25,FLDTYP=C,FLDLEN=8,
VALUE=psbname,COND=E
v Format image capture program records (in a format that can be used as input
to DFSDDLT0):
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT EXITR=DFSERA50,OFFSET=25,FLDTYP=C
VALUE=psbname,FLDLEN=8,DDNAME=OUTDDN,COND=E

Point to remember: The DDNAME= parameter names the DD statement to be used
by DFSERA50. The data set that is defined on the OUTDDN DD statement is used
instead of the default TRCPUNCH DD statement. For this example, the DD is:
//OUTDDN DD ...,DCB=(BLKSIZE=80),...

Requests for monitoring and debugging your IMS program


You can use the following two requests to help you in debugging your program:
v The Statistics (STAT) call retrieves database statistics.
v The Log (LOG) call makes it possible for the application program to write a
record on the system log.

The enhanced OSAM and VSAM STAT calls provide additional information for
monitoring performance and fine tuning of the system for specific needs.

When the enhanced STAT call is issued, the following information is returned:
v OSAM statistics for each defined subpool
v VSAM statistics that also include hiperspace statistics
v OSAM and VSAM count fields that have been expanded to 10 digits

Subsections:
v “Retrieving database statistics: the STAT call”
v “Writing Information to the system log: the LOG request” on page 193

Retrieving database statistics: the STAT call


Product-sensitive programming interface

This section contains product-sensitive programming interface information.


End of Product-sensitive programming interface

The STAT call is helpful in debugging a program because it retrieves IMS database
statistics. It is also helpful in monitoring and fine tuning for performance. The STAT
call retrieves OSAM database buffer pool statistics and VSAM database buffer
subpool statistics.

Related Reading: For information on coding the STAT call, see the appropriate
application programming information.

When you issue the STAT call, you indicate:


v An I/O area into which the statistics are to be returned.
v A statistics function, which is the name of a 9-byte area whose contents describe
the type and format of the statistics you want returned. The contents of the area
are defined as follows:
– The first 4 bytes define the type of statistics desired (OSAM or VSAM).
– The 5th byte defines the format to be returned (formatted, unformatted, or
summary).
– The remaining 4 bytes are defined as follows:
- The normal or enhanced STAT call contains 4 bytes of blanks.
- The extended STAT call contains the 4-byte parameter ' E1 ' (a 1-byte blank,
followed by a 2-byte character string, and then another 1-byte blank).
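As an illustration of this 9-byte layout, the following Python sketch assembles a stat-function value. The stat_function helper is ours, for illustration only; it is not part of IMS. The function values it produces (such as DBASF) are described in the topics that follow.

```python
def stat_function(stat_type, fmt, extended=False):
    """Assemble the 9-byte stat-function value for a STAT call.

    stat_type -- 4-byte statistics type, for example "DBAS" or "VBAS"
    fmt       -- 1-byte format code: "F" formatted, "U" unformatted,
                 "S" summary
    extended  -- if True, append the 4-byte ' E1 ' extended parameter;
                 otherwise pad with 4 blanks (normal or enhanced call)
    """
    if len(stat_type) != 4 or len(fmt) != 1:
        raise ValueError("stat_type must be 4 bytes and fmt 1 byte")
    return stat_type + fmt + (" E1 " if extended else "    ")

# "DBASF" requests the full, formatted OSAM buffer pool statistics
print(stat_function("DBAS", "F"))                 # "DBASF    "
# DBESO with the 4-byte extended parameter ' E1 '
print(stat_function("DBES", "O", extended=True))  # "DBESO E1 "
```

The result is always 9 bytes: the 4-byte type, the 1-byte format, and either 4 blanks or the ' E1 ' extended parameter.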

Format of OSAM buffer pool statistics


For OSAM buffer pool statistics, the following values are possible for the
stat-function parameter; each value determines the format of the data that is
returned to the application program. If no OSAM buffer pool is present, a GE
status code is returned to the program.

DBASF: This function value provides the full OSAM database buffer pool
statistics in a formatted form. The application program I/O area must be at least
360 bytes. Three 120-byte records (formatted for printing) are provided as two
heading lines and one line of statistics. The following diagram shows the data
format.

BLOCK   FOUND   READS  BUFF    OSAM    BLOCKS  NEW    CHAIN
REQ     IN POOL ISSUED ALTS    WRITES  WRITTEN BLOCKS WRITES
nnnnnnn nnnnnnn nnnnn  nnnnnnn nnnnnnn nnnnnnn nnnnn  nnnnn

WRITTEN LOGICAL PURGE   RELEASE
AS NEW  CYL     REQ     REQ     ERRORS
        FORMAT
nnnnnnn nnnnnnn nnnnnnn nnnnnnn nn/nn

BLOCK REQ
Number of block requests received.
FOUND IN POOL
Number of times the block requested was found in the buffer pool.
READS ISSUED
Number of OSAM reads issued.
BUFF ALTS
Number of buffers altered in the pool.
OSAM WRITES
Number of OSAM writes issued.
BLOCKS WRITTEN
Number of blocks written from the pool.
NEW BLOCKS
Number of new blocks created in the pool.
CHAIN WRITES
Number of chained OSAM writes issued.
WRITTEN AS NEW
Number of blocks created.
LOGICAL CYL FORMAT
Number of format logical cylinder requests issued.
PURGE REQ
Number of purge user requests.
RELEASE REQ
Number of release ownership requests.
ERRORS
Number of write error buffers currently in the pool or the largest number
of errors in the pool during this execution.
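For tuning purposes, the first two fields of this report are commonly combined into a buffer pool hit ratio. A minimal sketch (the helper name is ours, not part of IMS):

```python
def osam_hit_ratio(block_req, found_in_pool):
    """Fraction of block requests satisfied from the buffer pool,
    computed from the BLOCK REQ and FOUND IN POOL fields of a DBASF
    report; a consistently low ratio suggests the pool is too small."""
    return found_in_pool / block_req if block_req else 0.0

print(osam_hit_ratio(10000, 9200))   # 0.92
```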

DBASU: This function value provides the full OSAM database buffer pool
statistics in an unformatted form. The application program I/O area must be at
least 72 bytes. Eighteen fullwords of binary data are provided:
Word Contents
1 A count of the number of words that follow.
2-18 The statistic values in the same sequence as presented by the DBASF
function value.
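Because the unformatted form is fixed-layout binary data, it can be decoded mechanically once the I/O area has been captured. A minimal Python sketch, assuming the area is available as a bytes object of big-endian fullwords (the z/OS convention); the helper name is ours:

```python
import struct

def decode_dbasu(io_area):
    """Verify the count word of a 72-byte DBASU area and return the 17
    statistic values that follow, in the same order as the DBASF report."""
    words = struct.unpack(">18I", io_area[:72])  # 18 big-endian fullwords
    count, stats = words[0], words[1:]
    assert count == len(stats)                   # word 1 counts words 2-18
    return stats

# Synthetic area for illustration: the count word followed by 17 values
area = struct.pack(">18I", 17, *range(17))
print(decode_dbasu(area)[:3])    # (0, 1, 2)
```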

DBASS: This function value provides a summary of the OSAM database buffer
pool statistics in a formatted form. The application program I/O area must be at
least 180 bytes. Three 60-byte records (formatted for printing) are provided. The
following diagram shows the data format.

DATA BASE BUFFER POOL: SIZE nnnnnnn
REQ1 nnnnn REQ2 nnnnn READ nnnnn WRITES nnnnn LCYL nnnnn
PURG nnnnn OWNRR nnnnn ERRORS nn/nn

SIZE Buffer pool size.


REQ1 Number of block requests.
REQ2 Number of block requests satisfied in the pool plus new blocks created.
READ Number of read requests issued.
WRITES
Number of OSAM writes issued.
LCYL Number of format logical cylinder requests.
PURG Number of purge user requests.
OWNRR
Number of release ownership requests.
ERRORS
Number of permanent errors now in the pool or the largest number of
permanent errors during this execution.

Format of VSAM buffer subpool statistics


Because there might be several buffer subpools for VSAM databases, the STAT call
is iterative when requesting these statistics. If more than one VSAM local shared
resource pool is defined, statistics are retrieved for all VSAM local shared resource
pools in the order in which they are defined. For each local shared resource pool,
statistics are retrieved for each subpool according to buffer size.

The first time the call is issued, the statistics for the subpool with the smallest
buffer size are provided. For each succeeding call (without intervening use of the
PCB), the statistics for the subpool with the next-larger buffer size are provided.

If index subpools exist within the local shared resource pool, the index subpool
statistics always follow statistics of the data subpools. Index subpool statistics are
also retrieved in ascending order based on the buffer size.

The final call for the series returns a GA status code in the PCB. The statistics
returned are totals for all subpools in all local shared resource pools. If no VSAM
buffer subpools are present, a GE status code is returned to the program.
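The retrieval sequence described above amounts to a loop on the status code returned in the PCB. In this sketch, issue_stat is a hypothetical stand-in for the DL/I STAT call that returns a (status, record) pair; the status handling follows the GA/GE behavior described above:

```python
def collect_subpool_stats(issue_stat):
    """Iterate the STAT call: a blank status returns one subpool's
    statistics, 'GA' returns the totals record and ends the series,
    and 'GE' means no VSAM buffer subpools are present."""
    subpools = []
    while True:
        status, record = issue_stat()
        if status == "GE":
            return subpools, None          # no subpools defined
        if status == "GA":
            return subpools, record        # totals for all subpools
        subpools.append(record)            # next-larger buffer size

# Simulated sequence: two data subpools, then the totals record
replies = iter([("  ", "2K data"), ("  ", "4K data"), ("GA", "totals")])
print(collect_subpool_stats(lambda: next(replies)))
# (['2K data', '4K data'], 'totals')
```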

VBASF: This function value provides the full VSAM database subpool statistics in
a formatted form. The application program I/O area must be at least 360 bytes.
Three 120-byte records (formatted for printing) are provided as two heading lines
and one line of statistics. Each successive call returns the statistics for the next data
subpool. If present, statistics for index subpools follow the statistics for data
subpools.

The following diagram shows the data format.


BUFFER HANDLER STATISTICS
BSIZ NBUF RET RBA RET KEY ISRT ES ISRT KS BFR ALT BGWRT SYN PTS
nnnK nnn nnnnnnn nnnnnnn nnnnnnn nnnnnnn nnnnnnn nnnnnnn nnnnnnn

VSAM STATISTICS POOLID: xxxx
GETS SCHBFR FOUND READS USR WTS NUR WTS ERRORS
nnnnnnn nnnnnnn nnnnnnn nnnnnnn nnnnnnn nnnnnnn nn/nn
POOLID
ID of the local shared resource pool.
BSIZ Size of the buffers in this VSAM subpool. In the final call, this field is set
to ALL.
NBUF Number of buffers in this subpool. In the final call, this is the number of
buffers in all subpools.
RET RBA
Number of retrieve-by-RBA calls received by the buffer handler.
RET KEY
Number of retrieve-by-key calls received by the buffer handler.
ISRT ES
Number of logical records inserted into ESDSs.
ISRT KS
Number of logical records inserted into KSDSs.
BFR ALT
Number of logical records altered in this subpool. Delete calls that result in
erasing records from a KSDS are not counted.
BGWRT
Number of times the background-write function was executed by the
buffer handler.
SYN PTS
Number of Synchronization calls received by the buffer handler.
GETS Number of VSAM GET calls issued by the buffer handler.
SCHBFR
Number of VSAM SCHBFR calls issued by the buffer handler.
FOUND
Number of times VSAM found the control interval already in the subpool.
READS
Number of times VSAM read a control interval from external storage.
USR WTS
Number of VSAM writes initiated by IMS.
NUR WTS
Number of VSAM writes initiated to make space in the subpool.
ERRORS
Number of write error buffers currently in the subpool or the largest
number of write errors in the subpool during this execution.

VBASU: This function value provides the full VSAM database subpool statistics
in an unformatted form. The application program I/O area must be at least 72
bytes. Eighteen fullwords of binary data are provided for each subpool:
Word Contents
1 A count of the number of words that follow.
2-18 The statistic values in the same sequence as presented by the VBASF
function value, except for POOLID, which is not included in this
unformatted form.

VBASS: This function value provides a summary of the VSAM database subpool
statistics in a formatted form. The application program I/O area must be at least
180 bytes. Three 60-byte records (formatted for printing) are provided.

The following diagram shows the data format.

DATA BASE BUFFER POOL: BSIZE nnnnnnn POOLID xxxx TYPE x
RRBA nnnnn RKEY nnnnn BFALT nnnnn NREC nnnnn SYN PTS nnnnn
NMBUFS nnn VRDS nnnnn FOUND nnnnn VWTS nnnnn ERRORS nn/nn

BSIZE Size of the buffers in this VSAM subpool.


POOLID
ID of the local shared resource pool.
TYPE Indicates a data (D) subpool or an index (I) subpool.
RRBA Number of retrieve-by-RBA requests.
RKEY Number of retrieve-by-key requests.
BFALT
Number of logical records altered.
NREC Number of new VSAM logical records created.
SYN PTS
Number of sync point requests.
NMBUFS
Number of buffers in this VSAM subpool.
VRDS Number of VSAM control interval reads.
FOUND
Number of times VSAM found the requested control interval already in the
subpool.
VWTS
Number of VSAM control interval writes.
ERRORS
Number of permanent write errors now in the subpool or the largest
number of errors in this execution.

Format of enhanced/extended OSAM buffer subpool statistics


The enhanced OSAM buffer pool statistics provide additional information
generated for each defined subpool. Because there might be several buffer subpools
for OSAM databases, the enhanced STAT call repeatedly requests these statistics.
The first time the call is issued, the statistics for the subpool with the smallest
buffer size is provided. For each succeeding call (without intervening use of the
PCB), the statistics for the subpool with the next-larger buffer size is provided.

The final call for the series returns a GA status code in the PCB. The statistics
returned are the totals for all subpools. If no OSAM buffer subpools are present, a
GE status code is returned.

Extended OSAM buffer pool statistics can be retrieved by including the 4-byte
parameter 'E1' following the enhanced call function. The extended STAT call
returns all of the statistics returned with the enhanced call, plus the statistics on
the coupling facility buffer invalidates, OSAM caching, and sequential buffering
IMMED/SYNC read counts.

Restriction: The extended format parameter is supported by the DBESO, DBESU,
and DBESF functions only.

DBESF: This function value provides the full OSAM subpool statistics in a
formatted form. The application program I/O area must be at least 600 characters.
For OSAM subpools, five 120-byte records (formatted for printing) are provided.
Three of the records are heading lines and two of the records are lines of subpool
statistics.

Example: The following shows the enhanced stat call format:

B U F F E R  H A N D L E R  O S A M  S T A T I S T I C S  FIXOPT=X/X POOLID: xxxx
BSIZ NBUFS LOCATE-REQ NEW-BLOCKS ALTER-REQ PURGE-REQ FND-IN-POOL BUFRS-SRCH READ-REQS BUFSTL-WRT
PURGE-WRTS WT-BUSY-ID WT-BUSY-WR WT-BUSY-RD WT-RLSEOWN WT-NO-BFRS ERRORS
nn1K nnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn
nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnn/nnnnnnn

Example: The following shows the extended stat call format:

B U F F E R  H A N D L E R  O S A M  S T A T I S T I C S  STG CLS= FIXOPT=N/N POOLID:
BSIZ NBUFS LOCATE-REQ NEW-BLOCKS ALTER-REQ PURGE-REQ FND-IN-POOL BUFRS-SRCH READ-REQS BUFSTL-WRT
PURGE-WRTS WT-BUSY-ID WT-BUSY-WR WT-BUSY-RD WT-RLSEOWN WT-NO-BFRS ERRORS
nn1K nnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn
nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnn/nnnnnnn
CF-READS EXPCTD-NF CFWRT-PRI CFWRT-CHG STGCLS-FULL XI-CNT VECTR-XI SB-SEQRD SB-ANTICIP
nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn

FIXOPT
Fixed options for this subpool. Y or N indicates whether the data buffer
prefix and data buffers are fixed.
POOLID
ID of the local shared resource pool.
BSIZ Size of the buffers in this subpool. Set to ALL for total line. For the
summary totals (BSIZ=ALL), the FIXOPT and POOLID fields are replaced
by an OSM= field. This field is the total size of the OSAM subpool.
NBUFS
Number of buffers in this subpool. This is the total number of buffers in
the pool for the ALL line.
LOCATE-REQ
Number of LOCATE-type calls.
NEW-BLOCKS
Number of requests to create new blocks.
ALTER-REQ
Number of buffer alter calls. This count includes NEW BLOCK and
BYTALT calls.
PURGE-REQ
Number of PURGE calls.
FND-IN-POOL
Number of LOCATE-type calls for this subpool where data is already in
the OSAM pool.
BUFRS-SRCH
Number of buffers searched by all LOCATE-type calls.
READ-REQS
Number of READ I/O requests.
BUFSTL-WRT
Number of single block writes initiated by buffer steal routine.
PURGE-WRTS
Number of blocks for this subpool written by purge.
WT-BUSY-ID
Number of LOCATE calls that waited due to busy ID.
WT-BUSY-WR
Number of LOCATE calls that waited due to buffer busy writing.
WT-BUSY-RD
Number of LOCATE calls that waited due to buffer busy reading.
WT-RLSEOWN
Number of buffer steal or purge requests that waited for ownership to be
released.
WT-NO-BFRS
Number of buffer steal requests that waited because no buffers are
available to be stolen.
ERRORS
Total number of I/O errors for this subpool or the number of buffers
locked in pool due to write errors.
CF-READS
Number of blocks read from CF.
EXPCTD-NF
Number of blocks expected but not read.
CFWRT-PRI
Number of blocks written to CF (prime).
CFWRT-CHG
Number of blocks written to CF (changed).
STGCLS-FULL
Number of blocks not written (STG CLS full).
XI-CNT
Number of XI buffer invalidate calls.
VECTR-XI
Number of buffers found invalidated by XI on VECTOR call.
SB-SEQRD
Number of immediate (SYNC) sequential reads (SB stat).
SB-ANTICIP
Number of anticipatory reads (SB stat).

DBESU: This function value provides full OSAM statistics in an unformatted
form. The application program I/O area must be at least 84 bytes. Twenty-one
fullwords of binary data are provided for each subpool:
Word Contents
1 A count of the number of words that follow.
2-19 The statistics provided in the same sequence as presented by the DBESF
function value.
20 The POOLID provided at subpool definition time.
21 The second byte contains the following fix options for this subpool:
v X'04' = DATA BUFFER PREFIX fixed
v X'02' = DATA BUFFERS fixed
For the summary totals (word 2=ALL), word 21 contains the total size of
the OSAM pool.
22-30 Extended stat data in same sequence as on DBESF call.
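Decoding the fix-option flags in word 21 takes one shift and two masks. A sketch, assuming the fullword has been read as a big-endian integer (byte 1 most significant, as on z/OS); the helper name is ours:

```python
def decode_fix_options(word21):
    """Extract the DBESU fix-option flags from word 21.  The second
    byte of the fullword holds them: X'04' = data buffer prefix fixed,
    X'02' = data buffers fixed."""
    flags = (word21 >> 16) & 0xFF      # isolate the second byte
    return {"prefix_fixed": bool(flags & 0x04),
            "buffers_fixed": bool(flags & 0x02)}

print(decode_fix_options(0x00040000))   # prefix fixed only
# {'prefix_fixed': True, 'buffers_fixed': False}
```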

DBESS: This function value provides a summary of the OSAM database buffer
pool statistics in a formatted form. The application program I/O area must be at
least 360 bytes. Six 60-byte records (formatted for printing) are provided. This STAT
call is a restructured DBASF STAT call that allows for 10-digit count fields. In
addition, the subpool header blocks give a total of the number of OSAM buffers in
the pool.

The following shows the data format:

DATA BASE BUFFER POOL: NSUBPL nnnnnn NBUFS nnnnnnnn
BLKREQ nnnnnnnnnn INPOOL nnnnnnnnnn READS nnnnnnnnnn
BUFALT nnnnnnnnnn WRITES nnnnnnnnnn BLKWRT nnnnnnnnnn
NEWBLK nnnnnnnnnn CHNWRT nnnnnnnnnn WRTNEW nnnnnnnnnn
LCYLFM nnnnnnnnnn PURGRQ nnnnnnnnnn RLSERQ nnnnnnnnnn
FRCWRT nnnnnnnnnn ERRORS nnnnnnnn/nnnnnnnn

NSUBPL
Number of subpools defined for the OSAM buffer pool.
NBUFS
Total number of buffers defined in the OSAM buffer pool.
BLKREQ
Number of block requests received.
INPOOL
Number of times the block requested is found in the buffer pool.
READS
Number of OSAM reads issued.
BUFALT
Number of buffers altered in the pool.
WRITES
Number of OSAM writes issued.
BLKWRT
Number of blocks written from the pool.
NEWBLK
Number of blocks created in the pool.
CHNWRT
Number of chained OSAM writes issued.
WRTNEW
Number of blocks created.
LCYLFM
Number of format logical cylinder requests issued.
PURGRQ
Number of purge user requests.
RLSERQ
Number of release ownership requests.
FRCWRT
Number of forced write calls.
ERRORS
Number of write error buffers currently in the pool or the largest number
of errors in the pool during this execution.

DBESO: This function value provides the full OSAM database subpool statistics
in a formatted form for online statistics that are returned as a result of a /DIS POOL
command. This call can also be a user-application STAT call. When issued as an
application DL/I STAT call, the program I/O area must be at least 360 bytes. Six
60-byte records (formatted for printing) are provided.

Example: The following shows the enhanced stat call format:


OSAM DB BUFFER POOL:ID xxxx BSIZE nnnnnK NBUFnnnnnnn FX=X/X
LCTREQ nnnnnnnnnn NEWBLK nnnnnnnnnn ALTREQ nnnnnnnnnn
PURGRQ nnnnnnnnnn FNDIPL nnnnnnnnnn BFSRCH nnnnnnnnnn
RDREQ nnnnnnnnnn BFSTLW nnnnnnnnnn PURGWR nnnnnnnnnn
WBSYID nnnnnnnnnn WBSYWR nnnnnnnnnn WBSYRD nnnnnnnnnn
WRLSEO nnnnnnnnnn WNOBFR nnnnnnnnnn ERRORS nnnnn/nnnnn

Example: The following shows the extended stat call format:


OSAM DB BUFFER POOL:ID xxxx BSIZE nnnnnK NBUFnnnnnnn FX=X/X
LCTREQ nnnnnnnnnn NEWBLK nnnnnnnnnn ALTREQ nnnnnnnnnn
PURGRQ nnnnnnnnnn FNDIPL nnnnnnnnnn BFSRCH nnnnnnnnnn
RDREQ nnnnnnnnnn BFSTLW nnnnnnnnnn PURGWR nnnnnnnnnn
WBSYID nnnnnnnnnn WBSYWR nnnnnnnnnn WBSYRD nnnnnnnnnn
WRLSEO nnnnnnnnnn WNOBFR nnnnnnnnnn ERRORS nnnnn/nnnnn
CFREAD nnnnnnnnnn CFEXPC nnnnnnnnnn CFWRPR nnnnn/nnnnn
CFWRCH nnnnnnnnnn STGCLF nnnnnnnnnn XIINV nnnnn/nnnnn
XICLCT nnnnnnnnnn SBSEQR nnnnnnnnnn SBANTR nnnnn/nnnnn
POOLID
ID of the local shared resource pool.
BSIZE Size of the buffers in this subpool. Set to ALL for summary total line. For
the summary totals (BSIZE=ALL), the FX= field is replaced by the OSAM=
field. This field is the total size of the OSAM buffer pool. The POOLID is
not shown.
NBUF Number of buffers in this subpool. Total number of buffers in the pool for
the ALL line.
FX= Fixed options for this subpool. Y or N indicates whether the data buffer
prefix and data buffers are fixed.
LCTREQ
Number of LOCATE-type calls.
NEWBLK
Number of requests to create new blocks.
ALTREQ
Number of buffer alter calls. This count includes NEW BLOCK and
BYTALT calls.
PURGRQ
Number of PURGE calls.
FNDIPL
Number of LOCATE-type calls for this subpool where data is already in
the OSAM pool.
BFSRCH
Number of buffers searched by all LOCATE-type calls.
RDREQ
Number of READ I/O requests.
BFSTLW
Number of single-block writes initiated by buffer-steal routine.
PURGWR
Number of buffers written by purge.
WBSYID
Number of LOCATE calls that waited due to busy ID.
WBSYWR
Number of LOCATE calls that waited due to buffer busy writing.
WBSYRD
Number of LOCATE calls that waited due to buffer busy reading.
WRLSEO
Number of buffer steal or purge requests that waited for ownership to be
released.
WNOBFR
Number of buffer steal requests that waited because no buffers are
available to be stolen.
ERRORS
Total number of I/O errors for this subpool or the number of buffers
locked in pool due to write errors.
CFREAD
Number of blocks read from CF.
CFEXPC
Number of blocks expected but not read.
CFWRPR
Number of blocks written to CF (prime).
CFWRCH
Number of blocks written to CF (changed).
STGCLF
Number of blocks not written (STG CLS full).
XIINV
Number of XI buffer invalidate calls.
XICLCT
Number of buffers found invalidated by XI on VECTOR call.
SBSEQR
Number of immediate (SYNC) sequential reads (SB stat).
SBANTR
Number of anticipatory reads (SB stat).

Format of enhanced VSAM buffer subpool statistics


The enhanced VSAM buffer subpool statistics provide information on the total size
of VSAM subpools in virtual storage and in hiperspace. All count fields are 10
digits.

Because there might be several buffer subpools for VSAM databases, the enhanced
STAT call repeatedly requests these statistics. If more than one VSAM local shared
resource pool is defined, statistics are retrieved for all VSAM local shared resource
pools in the order in which they are defined. For each local shared resource pool,
statistics are retrieved for each subpool according to buffer size.

The first time the call is issued, the statistics for the subpool with the smallest
buffer size are provided. For each succeeding call (without intervening use of the
PCB), the statistics for the subpool with the next-larger buffer size are provided.

If index subpools exist within the local shared resource pool, the index subpool
statistics always follow the data subpools statistics. Index subpool statistics are also
retrieved in ascending order based on the buffer size.

The final call for the series returns a GA status code in the PCB. The statistics
returned are totals for all subpools in all local shared resource pools. If no VSAM
buffer subpools are present, a GE status code is returned to the program.

VBESF: This function value provides the full VSAM database subpool statistics in
a formatted form. The application program I/O area must be at least 600 bytes. For
each shared resource pool ID, the first call returns five 120-byte records (formatted
for printing). Three of the records are heading lines and two of the records are
lines of subpool statistics.

The following shows the data format:

B U F F E R  H A N D L E R  S T A T I S T I C S / V S A M  S T A T I S T I C S  FIXOPT=X/X/X POOLID: xxxx
BSIZ NBUFFRS HS-NBUF RETURN-RBA RETURN-KEY ESDS-INSRT KSDS-INSRT BUFFRS-ALT BKGRND-WRT SYNC-POINT ERRORS
VSAM-GETS SCHED-BUFR VSAM-FOUND VSAM-READS USER-WRITS VSAM-WRITS HSRDS-SUCC HSWRT-SUCC HSR/W-FAIL
nn1K nnnnnn nnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnn/nnnnnn
nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnnnnnnn nnnnn/nnnnn

FIXOPT
Fixed options for this subpool. Y or N indicates whether the data buffer
prefix, the index buffers, and the data buffers are fixed.
POOLID
ID of the local shared resource pool.
BSIZ Size of the buffers in this subpool. Set to ALL for total line. For the
summary totals (BSIZ=ALL), the FIXOPT and POOLID fields are replaced
by a VS= field and a HS= field. The VS= field is the total size of the VSAM
subpool in virtual storage. The HS= field is the total size of the VSAM
subpool in hiperspace.
NBUFFRS
Number of buffers in this subpool. Total number of buffers in the VSAM
pool that appears in the ALL line.
HS-NBUF
Number of hiperspace buffers defined for this subpool.
RETURN-RBA
Number of retrieve-by-RBA calls received by the buffer handler.
RETURN-KEY
Number of retrieve-by-key calls received by the buffer handler.
ESDS-INSRT
Number of logical records inserted into ESDSs.
KSDS-INSRT
Number of logical records inserted into KSDSs.
BUFFRS-ALT
Number of logical records altered in this subpool. Delete calls that result in
erasing records from a KSDS are not counted.
BKGRND-WRT
Number of times the background write function was executed by the
buffer handler.
SYNC-POINT
Number of Synchronization calls received by the buffer handler.
ERRORS
Number of write error buffers currently in the subpool or the largest
number of write errors in the subpool during this execution.
VSAM-GETS
Number of VSAM Get calls issued by the buffer handler.
SCHED-BUFR
Number of VSAM Scheduled-Buffer calls issued by the buffer handler.
VSAM-FOUND
Number of times VSAM found the control interval in the buffer pool.
VSAM-READS
Number of times VSAM read a control interval from external storage.
USER-WRITS
Number of VSAM writes initiated by IMS.
VSAM-WRITS
Number of VSAM writes initiated to make space in the subpool.
HSRDS-SUCC
Number of successful VSAM reads from hiperspace buffers.
HSWRT-SUCC
Number of successful VSAM writes to hiperspace buffers.
HSR/W-FAIL
Number of failed VSAM reads from hiperspace buffers/number of failed
VSAM writes to hiperspace buffers. This indicates the number of times a
VSAM READ/WRITE request from or to hiperspace resulted in DASD
I/O.

VBESU: This function value provides full VSAM statistics in an unformatted
form. The application program I/O area must be at least 104 bytes. Twenty-five
fullwords of binary data are provided for each subpool.
Word Contents
1 A count of the number of words that follow.
2-23 The statistics provided in the same sequence as presented by the VBESF
function value.
24 The POOLID provided at the time the subpool is defined.
25 The first byte contains the subpool type, and the third byte contains the
following fixed options for this subpool:
v X'08' = INDEX BUFFERS fixed
v X'04' = DATA BUFFER PREFIX fixed
v X'02' = DATA BUFFERS fixed
For the summary totals (word 2=ALL), words 25 and 26 contain the
virtual and hiperspace pool sizes.

VBESS: This function value provides a summary of the VSAM database subpool
statistics in a formatted form. The application program I/O area must be at least
360 bytes. For each shared resource pool ID, the first call provides six 60-byte
records (formatted for printing).

The following shows the data format:

VSAM DB BUFFER POOL:ID xxxx BSIZE nnnnnnK TYPE x FX=X/X/X
RRBA nnnnnnnnnn RKEY nnnnnnnnnn BFALT nnnnnnnnnn
NREC nnnnnnnnnn SYNC PT nnnnnnnnnn NBUFS nnnnnnnnnn
VRDS nnnnnnnnnn FOUND nnnnnnnnnn VWTS nnnnnnnnnn
HSR-S nnnnnnnnnn HSW-S nnnnnnnnnn HS NBUFS nnnnnnnn
HS-R/W-FAIL nnnnn/nnnnn ERRORS nnnnnn/nnnnnn

POOLID
ID of the local shared resource pool.
BSIZE Size of the buffers in this VSAM subpool.
TYPE Indicates a data (D) subpool or an index (I) subpool.
FX Fixed options for this subpool. Y or N indicates whether the data buffer
prefix, the index buffers, and the data buffers are fixed.
RRBA
Number of retrieve-by-RBA calls received by the buffer handler.
RKEY Number of retrieve-by-key calls received by the buffer handler.
BFALT
Number of logical records altered.
NREC Number of new VSAM logical records created.
SYNC PT
Number of sync point requests.
NBUFS
Number of buffers in this VSAM subpool.
VRDS Number of VSAM control interval reads.
FOUND
Number of times VSAM found the requested control interval already in the
subpool.
VWTS
Number of VSAM control interval writes.
HSR-S
Number of successful VSAM reads from hiperspace buffers.
HSW-S
Number of successful VSAM writes to hiperspace buffers.
HS NBUFS
Number of VSAM hiperspace buffers defined for this subpool.
HS-R/W-FAIL
Number of failed VSAM reads from hiperspace buffers and number of
failed VSAM writes to hiperspace buffers. This indicates the number of
times a VSAM READ/WRITE request to or from hiperspace resulted in
DASD I/O.
ERRORS
Number of permanent write errors now in the subpool or the largest
number of errors in this execution.

Writing Information to the system log: the LOG request


An application program can write a record to the system log by issuing the LOG
call. When you issue the LOG request, you specify the I/O area that contains the
record you want written to the system log. You can write any information to the
log that you want, and you can use different log codes to distinguish between
different types of information.

Related Reading: For information about coding the LOG request, see the
appropriate application programming reference information.
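As an illustration, a call-level COBOL batch program might issue the LOG call as in the following sketch. The data names, the record length, and the log code X'A0' are assumptions for this example; check the LOG call reference for the valid log-code range and the exact I/O area layout.

```
       WORKING-STORAGE SECTION.
       77  LOG-FUNC        PIC X(4)        VALUE 'LOG '.
       01  LOG-AREA.
           05  LOG-LL      PIC S9(3) COMP  VALUE +21.
           05  LOG-ZZ      PIC S9(3) COMP  VALUE +0.
           05  LOG-CODE    PIC X           VALUE X'A0'.
           05  LOG-TEXT    PIC X(16)       VALUE 'PAYROLL ERR 0012'.
      * ...
      * In the PROCEDURE DIVISION, IO-PCB is the I/O PCB mask
      * received at program entry:
           CALL 'CBLTDLI' USING LOG-FUNC, IO-PCB, LOG-AREA.
```

In this sketch, LOG-LL gives the total record length (including the length field itself), and LOG-CODE carries the log code you use to distinguish your record types.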

What to do when your IMS program terminates abnormally


When your program terminates abnormally, you can take the following actions to
simplify the task of finding and fixing the problem:
v Record as much information as possible about the circumstances under which
the program terminated abnormally.
v Check for certain initialization and execution errors.

Recommended actions after an abnormal termination of an IMS program
Many places have guidelines on what you should do if your program terminates
abnormally. The suggestions given here are common guidelines:
v Document the error situation to help in investigating and correcting it. The
following information can be helpful:
– The program's PSB name
– The transaction code that the program was processing (online programs only)

Chapter 12. Testing an IMS application program 193


– The text of the input message being processed (online programs only)
– The call function
– The name of the originating logical terminal (online programs only)
– The contents of the PCB that was referenced in the call that was executing
– The contents of the I/O area when the problem occurred
– If a database call was executing, the SSAs, if any, that the call used
– The date and time of day
v When your program encounters an error, it can pass all the required error
information to a standard error routine. You should not use STAE or ESTAE
routines in your program; IMS uses STAE or ESTAE routines to notify the
control region of any abnormal termination of the application program. If you
call your own STAE or ESTAE routines, IMS may not get control if an abnormal
termination occurs. For additional information about STAE or ESTAE routines,
see “Use of STAE or ESTAE and SPIE in IMS programs” on page 118.
v Online programs might want to send a message to the originating logical
terminal to inform the person at the terminal that an error has occurred. Unless
you are using a CCTL, your program can get the logical terminal name from the
I/O PCB, place it in an express PCB, and issue one or more ISRT calls to send
the message.
v An online program might also want to send a message to the master terminal
operator giving information about the program's termination. To do this, the
program places the logical terminal name of the master terminal in an express
PCB and issues one or more ISRT calls. (This is not applicable if you are using a
CCTL.)
v You might also want to send a message to a printer so that you will have a
hard-copy record of the error.
v You can send a message to the system log by issuing a LOG request.
v Some places run a BMP at the end of the day to list all the errors that have
occurred during the day. If your shop does this, you can send a message using
an express PCB that has its destination set for that BMP. (This is not applicable if
you are using a CCTL.)

Diagnosing an abnormal termination of an IMS program


If your program does not run correctly when you are testing it or when it is
executing, you need to isolate the problem. The problem might be anything from a
programming error (for example, an error in the way you coded one of your
requests) to a system problem. This section gives some guidelines about the steps
that you, as the application programmer, can take when your program fails to run,
terminates abnormally, or gives incorrect results.

IMS program initialization errors


Before your program receives control, IMS must have correctly loaded and
initialized the PSB and DBDs used by your application program. Often, when the
problem is in this area, you need a system programmer or DBA (or your
equivalent specialist) to fix the problem. One thing you can do is to find out if
there have been any recent changes to the DBDs, PSB, and the control blocks that
they generate.

IMS program execution errors


If you do not have any initialization errors, check:
1. The output from the compiler. Make sure that all error messages have been
resolved.



2. The output from the binder:
v Are all external references resolved?
v Have all necessary modules been included?
v Was the language interface module correctly included?
v Is the correct entry point specified?
3. Your JCL:
v Is the information that described the files that contain the databases correct?
If not, check with your DBA.
v Have you included the DL/I parameter statement in the correct format?
v Have you included the region size parameter in the EXEC statement? Does it
specify a region or partition large enough for the storage required for IMS
and your program?
v Have you declared the fields in the PCB masks correctly?
v If your program is an assembler language program, have you saved and
restored registers correctly? Did you save the list of PCB addresses at entry?
Does register 1 point to a parameter list of fullwords before issuing any DL/I
calls?
v For COBOL for z/OS and PL/I for MVS and VM, are the literals you are
using for arguments in DL/I calls producing the results you expect? For
example, in PL/I for MVS and VM, is the parameter count being generated
as a halfword instead of a fullword, and is the function code producing the
required 4-byte field?
v Use the PCB as much as possible to determine what in your program is
producing incorrect results.



Chapter 13. Testing a CICS application program
This section tells you what is involved in testing a CICS application program as a
unit and gives you some suggestions on how to do testing. This stage of testing is
called program unit test. The purpose of program unit test is to test each application
program as a single unit to ensure that the program correctly handles its input
data, processing, and output data.

The amount and type of testing you do depends on the individual program.
Though strict rules for testing are not available, the guidelines provided in this
section might be helpful.

Subsections:
v “What you need to test a CICS program”
v “Testing your CICS program” on page 198
v “Requests for monitoring and debugging your CICS program” on page 202
v “What to do when your CICS program terminates abnormally” on page 202

What you need to test a CICS program


When you are ready to test your program, be aware of your established test
procedures before you start. To start testing, you need the following three items:
v Test JCL.
v A test database. When you are testing a program, do not execute it against a
production database because the program, if faulty, might damage valid data.
v Test input data. The input data that you use need not be current, but it should
be valid data. You cannot be sure that your output data is valid unless you use
valid input data.

The purpose of testing the program is to make sure that the program can correctly
handle all the situations that it might encounter.

To thoroughly test the program, try to test as many of the paths that the program
can take as possible. For example:
v Test each path in the program by using input data that forces the program to
execute each of its branches.
v Be sure that your program tests its error routines. Again, use input data that will
force the program to test as many error conditions as possible.
v Test the editing routines your program uses. Give the program as many different
data combinations as possible to make sure it correctly edits its input data.

© Copyright IBM Corp. 1974, 2010 197


Testing your CICS program
You can use different tools to test a program, depending on the type of program.
Table 32 summarizes the tools that are available for online DBCTL, batch, and BMP
programs.
Table 32. Tools you can use for testing your program

Tool                                    Online (DBCTL)   Batch   BMP
Execution Diagnostic Facility (EDF)¹    Yes              No      No
CICS dump control                       Yes              No      No
CICS trace control                      Yes              Yes     No
DFSDDLT0                                No               Yes²    Yes²
DL/I image capture program              Yes              Yes     Yes

Notes:
1. For online, command-level programs only.
2. For call-level programs only. (For a command-level batch program, you can use
the DL/I image capture program first, to produce calls for DFSDDLT0.)

Subsections:
v “Using the Execution Diagnostic Facility (command-level only)”
v “Using CICS dump control”
v “Using CICS trace control” on page 199
v “Tracing DL/I calls with image capture” on page 199

Using the Execution Diagnostic Facility (command-level only)


You can use the Execution Diagnostic Facility (EDF) to test command-level
programs online. EDF can display EXEC CICS and EXEC DLI commands in online
programs; it cannot intercept DL/I calls. (To test a call-level online program, you
can use the CICS dump control facility or the CICS trace facility, described in the
following sections.)

With EDF you can:


v Display and modify working storage; you can change values in the DIB.
v Display and modify a command before it is executed. You can modify the value
of any argument, and then execute the command.
v Modify the return codes after the execution of the command. After the command
has been executed, but before control is returned to the application program, the
command is intercepted to show the response and any argument values set by
CICS.

You can run EDF on the same terminal as the program you are testing.

Related Reading: For more information about using EDF, see “Execution
(Command-Level) Diagnostic Facility” in CICS Application Programming Reference.

Using CICS dump control


You can use the CICS dump control facility to dump virtual storage areas, CICS
tables, and task-related storage areas.



For more information about using the CICS dump control facility, see the CICS
application programming reference manual that applies to your version of CICS.

Using CICS trace control


You can use the trace control facility to help debug and monitor your online
programs in the DBCTL environment. You can use trace control requests to record
entries in a trace table. The trace table can be located either in virtual storage or on
auxiliary storage. If it is in virtual storage, you can gain access to it by
investigating a dump; if it is on auxiliary storage, you can print the trace table. For
more information about the control statements you can use to produce trace
entries, see the information about trace control in the application programming
reference manual that applies to your version of CICS.

Tracing DL/I calls with image capture


DL/I image capture program (DFSDLTR0) is a trace program that can trace and
record DL/I calls issued by batch, BMP, and online (DBCTL environment)
programs. You can also use the image capture program with command-level
programs, and you can produce calls for use as input to DFSDDLT0. You can use
the image capture program to:
Test your program
If the image capture program detects an error in a call it traces, it reproduces as
much of the call as possible, although it cannot document where the error
occurred, and cannot always reproduce the full SSA.
Produce input for DFSDDLT0 (DL/I test program)
You can use the output produced by the image capture program as input to
DFSDDLT0. The image capture program produces status statements, comment
statements, call statements, and compare statements for DFSDDLT0. For
example, you can use the image capture program with a command-level
program, to produce calls for DFSDDLT0.
Debug your program
When your program terminates abnormally, you can rerun the program using
the image capture program. The image capture program can then reproduce
and document the conditions that led to the program failure. You can use the
information in the report produced by the image capture program to find and
fix the problem.

Subsections:
v “Using image capture with DFSDDLT0”
v “Running image capture online” on page 200
v “Running image capture as a batch job” on page 200
v “Example of DLITRACE” on page 201
v “Special JCL requirements” on page 201
v “Notes on using image capture” on page 201
v “Retrieving image capture data from the log data set” on page 201

Using image capture with DFSDDLT0


The image capture program produces the following control statements that you can
use as input to DFSDDLT0:
Status statements
When you invoke the image capture program, it produces the status statement.
The status statement it produces:



– Sets print options so that DFSDDLT0 prints all call trace comments, all DL/I
calls, and the results of all comparisons.
– Determines the new relative PCB number each time a PCB change occurs
while the application program is executing.
Comments statement
The image capture program also produces a comments statement when you
invoke it. The comments statements give:
– The time and date IMS started the trace
– The name of the PSB being traced
The image capture program also produces a comments statement preceding any
call in which IMS finds an error.
Call statements
The image capture program produces a call statement for each DL/I call or
EXEC DLI command the application program issues. It also generates a CHKP
call when it starts the trace and after each commit point or CHKP request.
Compare statements
If you specify COMP on the DLITRACE control statement, the image capture
program produces data and PCB comparison statements.

Running image capture online


When you run the image capture program online, the trace output goes to the IMS
log data set. To run the image capture program online, you issue the IMS TRACE
command from the z/OS console.

If you trace a BMP and you want to use the trace results with DFSDDLT0, the
BMP must have exclusive write access to the databases it processes. If the
application program does not have exclusive access, the results of DFSDDLT0 may
differ from the results of the application program.

The TRACE command has the following format:

/TRACE SET ON|OFF PSB psbname NOCOMP|COMP

SET ON|OFF
Turns the trace on or off.
PSB psbname
Specifies the name of the PSB you want to trace. You can trace more than one
PSB at the same time, by issuing a separate TRACE command for each PSB.
COMP|NOCOMP
Specifies whether you want the image capture program to produce data and
PCB compare statements to be used with DFSDDLT0.

Running image capture as a batch job


To run the image capture program as a batch job, you use the DLITRACE control
statement in the DFSVSAMP DD data set. In the DLITRACE control statement, you
specify:
v Whether you want to trace all of the DL/I calls the program issues or trace only
a certain group of calls.
v Whether you want the trace output to go to:



– A sequential data set that you specify
– The IMS log data set
– Both the sequential data set and the IMS log data set


For information on the format of the DLITRACE control statement in the
DFSVSAMP DD data set, see the topic “Defining DL/I call image trace” in the IMS
Version 10: System Definition Reference.

Example of DLITRACE
This example shows a DLITRACE control statement that:
v Traces the first 14 DL/I calls or commands that the program issues
v Sends the output to the IMS log data set
v Produces data and PCB comparison statements for DFSDDLT0
//DFSVSAMP DD *
DLITRACE LOG=YES,STOP=14,COMP
/*

Special JCL requirements


The following are special JCL requirements:
//IEFRDER DD
If you want log data set output, this DD statement is required to define the
IMS log data set.
//DFSTROUT DD|anyname
If you want sequential data set output, this DD statement is required to define
that data set. If you want to specify an alternate DDNAME (anyname), it must
be specified using the DDNAME parameter on the DLITRACE control
statement.
The DCB parameters on the JCL statement are not required. The data set
characteristics are:
v RECFM=F
v LRECL=80
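Putting these pieces together, the JCL for a batch trace run might include statements like the following sketch; the data set names are placeholders, and the remaining DD parameters are elided here as in the other examples in this section:

```
//IEFRDER  DD DSN=IMSTEST.TRCLOG,DISP=(NEW,CATLG),...
//DFSTROUT DD DSN=IMSTEST.TRCSEQ,DISP=(NEW,CATLG),...
//DFSVSAMP DD *
DLITRACE LOG=YES,STOP=14,COMP
/*
```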

Notes on using image capture


v If the program being traced issues CHKP and XRST calls, the checkpoint and
restart information may not be directly reproducible when you use the trace
output with DFSDDLT0.
v When you run DFSDDLT0 in an IMS DL/I or DBB batch region with trace
output, the results are the same as the application program's results provided
the database has not been altered.

Retrieving image capture data from the log data set


If the trace output is sent to the IMS log data set, you can retrieve it by using
utility DFSERA10 and a DL/I call trace exit routine, DFSERA50. DFSERA50
deblocks, formats, and numbers the image capture program records to be retrieved.



To use DFSERA50, you must insert a DD statement defining a sequential output
data set in the DFSERA10 input stream. The default ddname for this DD statement
is TRCPUNCH. The card must specify BLKSIZE=80.

Examples: You can use the following examples of DFSERA10 input control
statements in the SYSIN data set to retrieve the image capture program data from
the log data set:
v Print all image capture program records:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,FLDTYP=X
v Print selected image capture program records by PSB name:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT OFFSET=25,FLDTYP=C,FLDLEN=8,
VALUE=psbname, COND=E
v Format image capture program records (in a format that can be used as input
to DFSDDLT0):
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT EXITR=DFSERA50,OFFSET=25,FLDTYP=C
VALUE=psbname,FLDLEN=8,DDNAME=OUTDDN,COND=E

The DDNAME= parameter is used to name the DD statement used by DFSERA50.


The data set defined on the OUTDDN DD statement is used instead of the default
TRCPUNCH DD statement. For this example, the DD appears as:
//OUTDDN DD ...,DCB=(BLKSIZE=80),...

Requests for monitoring and debugging your CICS program


You can use the following two requests to help you in debugging your program:
v The statistics (STAT) request retrieves database statistics. STAT can be issued from
both call- and command-level programs. See “Retrieving database statistics: the
STAT call” on page 180 for a description of the STAT request.
v The log (LOG) request makes it possible for the application program to write a
record on the system log. You can issue LOG as a command or call in a batch
program; in this case, the record is written to the IMS log. You can issue LOG as a
call or command in an online program in the DBCTL environment; in this case,
the record is written to the DBCTL log. See “Writing Information to the system
log: the LOG request” on page 193 for a description of the LOG request.
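In a command-level program, the LOG request might look like the following sketch; the record area name and length are assumptions for this example, and the exact record layout is described in the LOG request reference:

```
     EXEC DLI LOG FROM(ERR-REC) LENGTH(21) END-EXEC.
```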

What to do when your CICS program terminates abnormally


Whenever your program terminates abnormally, you can take some actions to
simplify the task of finding and fixing the problem. First, you can record as much
information as possible about the circumstances under which the program
terminated abnormally; and second, you can check for certain initialization and
execution errors.

Recommended actions after an abnormal termination of CICS


Many places have guidelines on what you should do if your program terminates
abnormally. The suggestions given here are some common guidelines:
v Document the error situation to help in investigating and correcting it. Some of
the information that can be helpful is:
– The program's PSB name



– The transaction code that the program was processing (online programs only)
– The text of the input screen being processed (online programs only)
– The call function
– The terminal ID (online programs only)
– The contents of the PCB or the DIB
– The contents of the I/O area when the problem occurred
– If a database request was executing, the SSAs or SEGMENT and WHERE
options, if any, the request used
– The date and time of day
v When your program encounters an error, it can pass all the required error
information to a standard error routine.
v An online program might also want to send a message to the master terminal
destination (CSMT) and application terminal operator, giving information about
the program's termination.
v You can send a message to the system log by issuing a LOG request.

Diagnosing an abnormal termination of CICS


If your program does not run correctly when you are testing it or when it is
executing, you need to isolate the problem. The problem might be anything from a
programming error (for example, an error in the way you coded one of your
requests) to a system problem. This section gives some guidelines about the steps
that you, as the application programmer, can take when your program fails to run,
terminates abnormally, or gives incorrect results.

CICS initialization errors


Before your program receives control, IMS must have correctly loaded and
initialized the PSB and DBDs used by your application program. Often, when the
problem is in this area, you need a system programmer or DBA (or your
equivalent specialist) to fix the problem. One thing you can do is to find out if
there have been any recent changes to the DBDs, PSB, and the control blocks that
they generate.

CICS execution errors


If you do not have any initialization errors, check the following in your program:
1. The output from the compiler. Make sure that all error messages have been
resolved.
2. The output from the binder:
v Are all external references resolved?
v Have all necessary modules been included?
v Was the language interface module correctly included?
v Is the correct entry point specified (for batch programs only)?
3. Your JCL:
v Is the information that described the files that contain the databases correct?
If not, check with your DBA.
v Have you included the DL/I parameter statement in the correct format (for
batch programs only)?
v Have you included the region size parameter in the EXEC statement? Does it
specify a region or partition large enough for the storage required for IMS
and your program (for batch programs only)?
4. Your call-level program:



v Have you declared the fields in the PCB masks correctly?
v If your program is an assembler language program, have you saved and
restored registers correctly? Did you save the list of PCB addresses at entry?
Does register 1 point to a parameter list of fullwords before issuing any
DL/I calls?
v For COBOL for z/OS and PL/I for MVS and VM, are the literals you are
using for arguments in DL/I calls producing the results you expect? For
example, in PL/I for MVS and VM, is the parameter count being generated
as a halfword instead of a fullword, and is the function code producing the
required 4-byte field?
v Use the PCB as much as possible to determine what in your program is
producing incorrect results.
5. Your command-level program:
v Did you use the FROM option with your ISRT or REPL command? If not, data
will not be transferred to the database.
v Check translator messages for errors.



Chapter 14. Testing an ODBA application program
This section tells you what is involved in testing an ODBA application program as
a unit and gives you some suggestions on how to do testing. This stage of testing
is called program unit test. The purpose of program unit test is to test each
application program as a single unit to ensure that the program correctly handles
its input data, processing, and output data. The amount and type of testing you do
depends on the individual program. Though strict rules for testing are not
available, the guidelines provided in this section might be helpful.

Subsections:
v “Tracing DL/I calls with image capture to test your ODBA program” on page
206
v “Using image capture with DFSDDLT0 to test your ODBA program” on page
206
v “Running image capture online” on page 207
v “Retrieving image capture data from the log data set” on page 207
v “Requests for monitoring and debugging your ODBA program” on page 208
v “What to do when your ODBA program terminates abnormally” on page 208

Be aware of your established test procedures before you start to test your program.
To begin testing, you need the following items:
v A test JCL statement
v A test database
Always begin testing programs against test-only databases. Do not test programs
against production databases. If the program is faulty it might damage or delete
critical data.
v Test input data
The input data that you use need not be current, but it should be valid data. You
cannot be sure that your output data is valid unless you use valid input data.

The purpose of testing the program is to make sure that the program can correctly
handle all the situations that it might encounter. To thoroughly test the program,
try to test as many of the paths that the program can take as possible. For
example:

v Test each path in the program by using input data that forces the program to
execute each of its branches.
v Be sure that your program tests its error routines. Again, use input data that
will force the program to test as many error conditions as possible.
v Test the editing routines your program uses. Give the program as many
different data combinations as possible to make sure it correctly edits its
input data.

Table 33 lists the tools you can use to test online (IMS DB), batch, and BMP
programs.

Table 33. Tools you can use for testing your program

Tool                         Online (IMS DB)   Batch   BMP
DFSDDLT0                     No                Yes¹    Yes
DL/I image capture program   Yes               Yes     Yes

Note: 1. For call-level programs only. (For a command-level batch program, you
can use the DL/I image capture program first, to produce calls for DFSDDLT0.)

Tracing DL/I calls with image capture to test your ODBA program
The DL/I image capture program (DFSDLTR0) is a trace program that can trace
and record DL/I calls issued by batch, BMP, and online (IMS DB environment)
programs. You can produce calls for use as input to DFSDDLT0. You can use the
image capture program to:
v Test your program
If the image capture program detects an error in a call it traces, it reproduces as
much of the call as possible, although it cannot document where the error
occurred, and cannot always reproduce the full SSA.
v Produce input for DFSDDLT0 (DL/I test program)
You can use the output produced by the image capture program as input to
DFSDDLT0. The image capture program produces status statements, comment
statements, call statements, and compare statements for DFSDDLT0. For
example, you can use the image capture program with an ODBA application to
produce calls for DFSDDLT0.
v Debug your program
When your program terminates abnormally, you can rerun the program using
the image capture program. The image capture program can then reproduce and
document the conditions that led to the program failure. You can use the
information in the report produced by the image capture program to find and
fix the problem.

Using image capture with DFSDDLT0 to test your ODBA program


The image capture program produces the following control statements that you can
use as input to DFSDDLT0:
v Status statements
When you invoke the image capture program, it produces the status statement.
The status statement it produces:
– Sets print options so that DFSDDLT0 prints all call trace comments, all DL/I
calls, and the results of all comparisons
– Determines the new relative PCB number each time a PCB change occurs
while the application program is running
v Comments statement
The image capture program also produces a comments statement when you run
it. The comments statements give:
The time and date IMS started the trace
The name of the PSB being traced
The image capture program also produces a comments statement preceding any
call in which IMS finds an error.
v Call statements
The image capture program produces a call statement for each DL/I call.
v Compare statements



If you specify COMP on the DLITRACE control statement, the image capture
program produces data and PCB comparison statements.

Running image capture online


When you run the image capture program online, the trace output goes to the IMS
log data set. To run the image capture program online, you issue the IMS TRACE
command from the z/OS console.

If you trace a BMP and you want to use the trace results with DFSDDLT0, the
BMP must have exclusive write access to the databases it processes. If the
application program does not have exclusive access, the results of DFSDDLT0 may
differ from the results of the application program.

The TRACE command has the following format:

/TRACE SET ON|OFF PSB psbname NOCOMP|COMP

SET ON|OFF
    Turns the trace on or off.
PSB psbname
    Specifies the name of the PSB you want to trace. You can trace more than
    one PSB at the same time by issuing a separate TRACE command for each
    PSB.
COMP|NOCOMP
    Specifies whether you want the image capture program to produce data
    and PCB compare statements to be used with DFSDDLT0.

Retrieving image capture data from the log data set


If the trace output is sent to the IMS log data set, you can retrieve it by using
utility DFSERA10 and a DL/I call trace exit routine, DFSERA50. DFSERA50
deblocks, formats, and numbers the image capture program records to be retrieved.
To use DFSERA50, you must insert a DD statement defining a sequential output
data set in the DFSERA10 input stream. The default ddname for this DD statement
is TRCPUNCH. The card must specify BLKSIZE=80.

Examples: You can use the following examples of DFSERA10 input control
statements in the SYSIN data set to retrieve the image capture program data from
the log data set:
v Print all image capture program records:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,FLDTYP=X
v Print selected image capture program records by PSB name:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT OFFSET=25,FLDTYP=C,FLDLEN=8,
VALUE=psbname, COND=E
v Format image capture program records (in a format that can be used as input to
DFSDDLT0):



Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT EXITR=DFSERA50,OFFSET=25,FLDTYP=C
VALUE=psbname,FLDLEN=8,DDNAME=OUTDDN,COND=E
The DDNAME= parameter is used to name the DD statement used by
DFSERA50. The data set defined on the OUTDDN DD statement is used instead
of the default TRCPUNCH DD statement. For this example, the DD appears as:
//OUTDDN DD ...,DCB=(BLKSIZE=80),...

Requests for monitoring and debugging your ODBA program


You can use the following two requests to help you in debugging your program:
v The statistics (STAT) request retrieves database statistics. STAT can be issued
from both call- and command-level programs. See “Retrieving database statistics:
the STAT call” on page 180 for a description of the STAT request.
v The log (LOG) request makes it possible for the application program to write a
record on the system log. You can issue LOG as a command or call in a batch
program; in this case, the record is written to the IMS log. You can issue LOG as
a call or command in an online program in the IMS DB environment; in this
case, the record is written to the IMS DB log. See “Writing Information to the
system log: the LOG request” on page 193 for a description of the LOG request.
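
For the LOG request, the I/O area begins with the usual LLZZ prefix (a halfword
length that includes the prefix itself, followed by a halfword of binary zeros)
and a 1-byte log record code in the user range X'A0' through X'FF', then the
record text. The byte layout can be sketched as follows; Python is used only to
show the layout, and the code and text values are examples:

```python
import struct

def build_log_io_area(log_code: int, text: bytes) -> bytes:
    """Lay out a LOG-call I/O area: LL, ZZ, log record code, record text."""
    if not 0xA0 <= log_code <= 0xFF:
        raise ValueError("user log record codes are X'A0' through X'FF'")
    body = bytes([log_code]) + text
    ll = 4 + len(body)                    # LL includes the 4-byte LLZZ prefix
    return struct.pack(">HH", ll, 0) + body

area = build_log_io_area(0xA0, b"CHECKPOINT REACHED")
```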

What to do when your ODBA program terminates abnormally


Whenever your program terminates abnormally, you can take some actions to
simplify the task of finding and fixing the problem.

ODBA itself does not issue return or reason codes. Most non-terminating errors
for ODBA application programs are instead communicated through AIB return and
reason codes.

First, you can record as much information as possible about the circumstances
under which the program terminated abnormally; and second, you can check for
certain initialization and execution errors.

Recommended actions after an abnormal termination of an ODBA program
Many shops have guidelines on what you should do if your program terminates
abnormally. The suggestions given here are some common guidelines:
| v Document the error situation to help in investigating and correcting it. Some of
| the information that can be helpful is:
| – The program's PSB name
| – The call function
| – The terminal ID (online programs only)
| – The contents of the AIB or the PCB
| – The contents of the I/O area when the problem occurred
| – If a database request was executing, the SSAs or SEGMENT and WHERE
| options, if any, the request used
| – The date and time of day
| v When your program encounters an error, it can pass all the required error
| information to a standard error routine.
| v You can send a message to the system log by issuing a LOG request.

Diagnosing an abnormal termination of an ODBA program
If your program does not run correctly when you are testing it or when it is
running, you need to isolate the problem. The problem might be anything from a
programming error (for example, an error in the way you coded one of your
requests) to a system problem. This section gives some guidelines about the steps
that you, as the application programmer, can take when your program fails to run,
terminates abnormally, or gives incorrect results.

ODBA initialization errors


Before your program receives control, IMS must have correctly loaded and
initialized the PSB and DBDs used by your application program. Often, when the
problem is in this area, you need a system programmer or DBA (or your
equivalent specialist) to fix the problem. One thing you can do is to find out if
there have been any recent changes to the DBDs, PSB, and the control blocks that
they generate.

ODBA running errors


If you do not have any initialization errors, check the following in your program:
1. The output from the compiler. Make sure that all error messages have been
resolved.
2. The output from the binder:
v Are all external references resolved?
v Have all necessary modules been included?
v Was the language interface module correctly included?
3. Your JCL. Is the information that described the files that contain the databases
correct? If not, check with your DBA.

Chapter 15. Documenting an application program
This section provides guidelines for program documentation. The purposes for
documenting an application program are described.

Subsections:
v “Documentation for other programmers”
v “Documentation for users” on page 212

Many places establish standards for program documentation; make sure you are
aware of your established standards.

Documentation for other programmers


Documenting a program is not something you do at the end of the project; your
documentation will be much more complete, and more useful to others if you
record information about the program as you structure and code it. Include any
information that might be useful to someone else who must work with your
program.

The reason you record this information is so that people who maintain your
program know why you chose certain commands, options, call structures, and
command codes. For example, if the DBA were considering reorganizing the
database in some way, information about why your program accesses the data the
way it does would be helpful.

A good place to record information about your program is in a data dictionary.
You can use the DB/DC Data Dictionary, or its successor, IBM DataAtlas for OS/2
(a part of the IBM VisualGen Team Suite), for this purpose.

Related Reading: For information on how to use these products to document a
data processing environment (the application system, the programs, the programs'
modules, and the IMS system), see:
v OS/VS DB/DC Data Dictionary Applications Guide
v Introducing VisualGen or
v VisualGen: Running Applications on MVS

Information that can be useful to other programmers includes:
v Flowcharts and pseudocode for the program
v Comments about the program from code inspections
v A written description of the program flow
v Information about why you chose the call sequence you did, such as:
– Did you test the call sequence using DFSDDLT0?
– In cases where more than one combination of calls would have had the same
results, why did you choose the sequence you did?
– What was the other sequence? Did you test it using DFSDDLT0?
v Any problems you encountered in structuring or coding the program
v Any problems you had when you tested the program
v Warnings about what should not be changed in the program

All this information relates to structuring and coding the program. In addition, you
should include the information described in “Documentation for users” with the
documentation for programmers.

Again, the amount of information you include and the form in which you
document it depend upon you and your application. These documentation
guidelines are provided as suggestions.

Documentation for users


All the information listed in the “Documentation for other programmers” on page
211 relates to the design of the program. In addition to this, you should record
information about how you use the program. The amount of information that users
need and how much of it you should supply depends upon whom the users of the
program are and what type of program it is.

At a minimum, include the following information for those who use your program:
v What one needs in order to use the program, for example:
– For online programs, is there a password?
– For batch programs, what is the required JCL?
v The input that one needs to supply to the program, for example:
– For an MPP, what is the MOD name that must be entered to initially format
the screen?
– For a CICS online program, what is the CICS transaction code that must be
entered? What terminal input is expected?
– For a batch program, is the input in the form of a tape, or a disk data set? Is
the input originally output from a previous job?
v The content and form of the program's output, for example:
– If it is a report, show the format or include a sample listing.
– For an online application program, show what the screen will look like.
v For online programs, if decisions must be made, explain what is involved in
each decision. Present the choices and the defaults.

If the people that will be using your program are unfamiliar with terminals, they
will need a user's guide also. This guide should give explicit instructions on how
to use the terminal and what a user can expect from the program. The guide
should contain discussions of what should be done if the task or program abends,
whether the program should be restarted, or if the database requires recovery.
Although you may not be responsible for providing this kind of information, you
should provide any information that is unique to your application to whoever is
responsible for this kind of information.

Chapter 16. Managing the IMS Spool API overall design
The IMS Spool API (application programming interface) is an expansion of the IMS
application program interface that allows applications to interface directly to JES
and create print data sets on the job entry subsystem (JES) spool. These print data
sets can then be made available to print managers and spool servers to serve the
needs of the application.

This section describes the design of the IMS Spool API and how an application
program uses it.

Subsections:
v “IMS Spool API design”
v “Sending data to the JES spool data sets” on page 214
v “IMS Spool API performance considerations” on page 214
v “IMS Spool API application coding considerations” on page 215

Related Reading: For more information about the IMS Spool API, see:
v IMS Version 10: Application Programming Guide
v IMS Version 10: System Administration Guide

IMS Spool API design


The IMS Spool API design provides the application program with the ability to
create print data sets on the JES spool using the standard DL/I call interface. The
functions provided are:
Definition of the data set output characteristics
Allocation of the data set
Insertion of lines of print into the data set
Closing and deallocation of the data set
Backout of uncommitted data within the limits of the JES interface
Assistance in controlling an in-doubt print data set

The IMS Spool API support uses existing DL/I calls to provide data set allocation
information and to place data into the print data set. These calls are:
v The CHNG call. This call is expanded so that print data set characteristics can be
specified for the print data set that will be allocated. The process uses the
alternate PCB as the interface block associated with the print data set.
v The ISRT call. This call is expanded to perform dynamic allocation of the print
data set on the first insert, and to write data to the data set. The data set is
considered in-doubt until the unit of work (UOW) terminates. If possible, the
sync point process deletes all in-doubt data sets for abending units of work and
closes and deallocates data sets for normally terminating units of work.
v The SETO (Set Options) call. This call is introduced by this support. Use
this call to create dynamic output text units to be used with a subsequent CHNG
call. If the same output descriptor is used for many print data sets, the overhead
can be reduced by using the SETO call to prebuild the text units necessary for the
dynamic output process.

Related Reading: The use of the SETO call is covered in more detail in IMS Version
10: Application Programming Guide.

Sending data to the JES spool data sets


Application programs can send data to the JES spool data sets using the same
method that is used to send output to an alternate terminal. Use the DL/I call to
change the output destination to a JES spool data set. Use the DL/I ISRT or PURG
call to insert a message.

The options list parameter on the CHNG and SETO calls contains the data set printer
processing options. These options direct the output to the appropriate IMS Spool
API data set. These options are validated for the DL/I call by the MVS Scheduler
JCL Facility (SJF). If the options are invalid, error codes are returned to the
application. To receive the error information, the application program specifies a
feedback area in the CHNG or SETO DL/I call parameter list. If the feedback area is
present, information about the options list error is returned directly to the
application.

IMS Spool API performance considerations


The IMS Spool API interface uses z/OS services within an IMS application while
minimizing the performance impact of the z/OS services on the other IMS
transactions and services. For this reason, the IMS Spool API support places the
print data directly on the JES spool at insert time instead of using the IMS message
queue for intermediate storage. The processing of IMS Spool API requests is
performed under the TCB of the dependent region to ensure maximum usage of
N-way processors. This design reduces the error recovery and JES job orientation
problems.

JES initiator considerations


Because the dependent regions are normally long-running jobs, some of the
initiator or job specifications might need to be changed if the dependent region is
using the IMS Spool API. You might need to limit the amount of JES spool space
used by the dependent region to contain the dynamic allocation and deallocation
messages. For example, you can use the JOB statement MSGLEVEL to eliminate
the dynamic allocation messages from the job log for the dependent region. You
might be able to eliminate these messages for dependent regions executing as
z/OS started tasks.

Another initiator consideration is the use of the JES job journal for the dependent
region. If the job step has a journal associated with it, the information for z/OS
checkpoint restart is recorded in the journal. Because IMS dependent regions
cannot use z/OS checkpoint restart, specify JOURNAL=NO for the JES2 initiator
procedure and the JES3 class associated with the dependent region's execution
class. You can also specify JOURNAL= on the JES3 //*MAIN statement for
dependent regions executing as jobs.

Application managed text units


The application can manage the dynamic descriptor text units instead of IMS. If
the application manages the text units, overhead for parsing and text unit build
can be reduced. Use the SETO call to have IMS build dynamic descriptor text units.
After they are built, these text units can be used with subsequent CHNG calls to
define the print characteristics for a data set.

To reduce overhead by managing the text units, the text units should be used with
several change calls. An example of this is a wait-for-input (WFI) transaction. The
same data set attributes can be used for all print data sets. For the first message
processed, the application uses the SETO call to build the text units for dynamic
descriptors and a subsequent CHNG call with the TXTU= parameter referencing the
prebuilt text units. For all subsequent messages, only a CHNG call using the prebuilt
text units is necessary.

Be aware of the following: No testing has been done to determine the amount of
overhead that might be saved using prebuilt text units.
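
The reuse pattern just described (build the text units once with SETO, then
reference them on every later CHNG call) is essentially build-once caching. The
following is a rough analogy only; the names below are invented and are not IMS
calls:

```python
class TextUnitCache:
    """Build-once, reuse-many pattern for prebuilt descriptor text units."""

    def __init__(self, build):
        self._build = build        # stands in for the SETO call
        self._cache = {}

    def get(self, descriptor: str):
        # First use builds the text units; later uses reuse them (CHNG TXTU=).
        if descriptor not in self._cache:
            self._cache[descriptor] = self._build(descriptor)
        return self._cache[descriptor]

builds = []
cache = TextUnitCache(lambda d: builds.append(d) or ("TU", d))
first = cache.get("PRTDESC")   # built on first request
again = cache.get("PRTDESC")   # reused, no second build
```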

BSAM I/O area


The I/O area for spool messages can be very large. It is not uncommon for the
area to be 32 KB in length. To reduce the overhead incurred with moving large
buffers, IMS attempts to write to the spool data set from the application's I/O area.
BSAM does not support I/O areas in 31-bit storage for SYSOUT files. If IMS finds
that the application's I/O area is in 31-bit storage:
v A work area is obtained from 24-bit storage.
v The application's I/O area is moved to the work area.
v The spool data set is written from the work area.

If the application's I/O area can easily be placed in 24-bit storage, the need to
move the I/O area can be avoided and possible performance improvements
achieved.

Be aware of the following: No testing has been done to determine the amount of
performance improvement possible.

Since a record can be written by BSAM directly from the application's I/O area, the
area must be in the format expected by BSAM. The format must contain:
v Variable length records
v A Block Descriptor Word (BDW)
v A Record Descriptor Word (RDW)

Related Reading: For more information on the formats of the BDW and RDW, see
MVS/XA Data Administration Guide. The format of the I/O area is described in
more familiar IMS terms in IMS Version 10: Application Programming Guide.
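
Those three requirements describe the standard variable-blocked layout: a 4-byte
Block Descriptor Word (a halfword block length that includes the BDW itself,
then a zero halfword) followed by a 4-byte Record Descriptor Word per record (a
halfword record length that includes the RDW, then a zero halfword). A sketch of
building a one-record I/O area in that format, with Python used only to show the
byte layout:

```python
import struct

def build_vb_io_area(data: bytes) -> bytes:
    """Lay out one variable-length record: BDW, then RDW, then the data."""
    rdw = struct.pack(">HH", 4 + len(data), 0)   # record length includes RDW
    block = rdw + data
    bdw = struct.pack(">HH", 4 + len(block), 0)  # block length includes BDW
    return bdw + block

area = build_vb_io_area(b"HELLO SPOOL")
```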

IMS Spool API application coding considerations


Your application can send data to a JES Spool or Print server using a print data set.
This section describes this process and includes options for message integrity and
recovering data when failures occur.

Print data formats


The IMS Spool API attempts to provide a transparent interface for the application
to insert data to the JES spool. The data can be in line, page, IPDS, AFPDS, or any
format that can be handled by a JES Spool or Print server that processes the print
data set. The IMS Spool API does not translate or otherwise modify the data
inserted to the JES spool.

Message integrity options
The IMS Spool API provides support for message integrity. This is necessary
because IMS cannot properly control the disposition of a print data set when:
v IMS abnormal termination does not execute because of a hardware or software
problem.
v A dynamic deallocation error exists for a print data set.
v Logic errors are in the IMS code.

In these conditions, IMS might not be able to stop the JES subsystem from printing
partial print data sets. Also, the JES subsystems do not support a two-phase sync
point.

Print disposition
The most common applications using Advanced Function Printing (AFP) are TSO
users and batch jobs. If any of these applications are creating print data sets when
a failure occurs, the partial print data sets will probably print and be handled in a
manual fashion. Many IMS applications creating print data sets can manage partial
print data sets in the same manner. For those applications that need more control
over the automatic printing by JES of partial print data sets, the IMS Spool API
provides the following integrity options. However, these options alone might not
guarantee the proper disposition of partial print data sets. These options are the b
variable following the IAFP keyword used with the CHNG call.
b=0
Indicates no data set protection
This is probably the most common option. When this option is selected, IMS
does not do any special handling during allocation or deallocation of the print
data set. If this option is selected, and any condition occurs that prevents IMS
from properly disposing the print data set, the partial data set probably prints
and must be controlled manually.
b=1
Indicates SYSOUT HOLD protection
This option ensures that a partial print data set is not released for printing
without a JES operator taking direct action. When the data set is allocated, the
allocation request indicates to JES that this print data set be placed in SYSOUT
HOLD status. The SYSOUT HOLD status is maintained for this data set if IMS
cannot deallocate the data set for any reason. Because the print data set is in
HOLD status, a JES operator must identify the partial data set and issue the
JES commands to delete or print this data set.
If the print data set cannot be deleted or printed:
v Message DFS0012I is issued when a print data set cannot be deallocated.
v Message DFS0014I is issued during IMS emergency restart when an in-doubt
print data set is found. The message provides information to help the JES
operator find the proper print data set and effect the proper print
disposition.
Some of the information includes:
– JOBNAME
– DSNAME
– DDNAME
– A recommendation on what IMS believes to be the proper disposition for
the data set (for example, printing or deleting).

By using the Spool Display and Search Facility (SDSF), you can display the
held data sets, identify the in-doubt print data set by DDNAME and
DSNAME, and issue the proper JES command to either delete or release the
print data set.
b=2
Indicates a non-selectable destination
This option prevents the automatic printing of partial print data sets. The IMS
Spool API function requests a remote destination of IMSTEMP for the data set
when the data set is allocated. The JES system must have a remote destination
of IMSTEMP defined so that JES does not attempt to print any data sets that
are sent to the destination.
If b=2, the name of the remote destination for the print data set must be
specified in the destination name field of the call parameter list when the CHNG
call is issued. When IMS deallocates the data set at sync point, and the data set
prints, IMS requests that the data set be transferred to the requested final
remote destination.
If the remote destination is not defined to the JES system, a dynamic allocation
failure occurs. Because this remote destination is defined as non-selectable, and
if IMS is unable to deallocate the print data set and control its proper
disposition, the print data set remains associated with remote destination
IMSTEMP when deallocated by z/OS.
When a deallocation error occurs, message DFS0012I is issued to provide
details of the deallocation error and help identify the print data set that
requires operator action. When partial print data sets are left on this special
remote destination, the JES operator can display all the print data sets
associated with this JES destination to locate the data set that requires action.
The b=2 option simplifies the operator's task of locating partial print data sets.

Message options
The third option on the IAFP keyword controls informational messages issued by
the IMS Spool API support. These messages inform the JES operator of in-doubt
data sets that need action.
c=0
Indicates that no DFS0012I or DFS0014I messages are issued for the print data
set. You can specify c=0 only if b=0 is specified.
c=m
Indicates that DFS0012I and DFS0014I messages are issued if necessary. You
can specify c=m if b=1 or b=2; c=m is the default.

Option c does not affect issuing message DFS0013E.

IMS emergency restart: When IMS emergency restart is performed, DFS0014I
messages might be issued if IMS finds that the proper disposition of a print data
set is in doubt as a result of the restart. This message is issued only if the
message option (c=m) was specified for the print data set. When a
DFS0014I message is received, a JES operator might need to find and properly
dispose of the print data set. The DFS0014I message provides a recommended
disposition (that is, deletion or printing).

Destination name (LTERM) usage


The standard CHNG call parameter list contains a destination name field. For
traditional message calls, this field contains the LTERM or transaction code that

becomes the destination of messages sent using this alternate PCB. When ISRT calls
are issued against the PCB, the data is sent to the LTERM or transaction.

However, the destination name field has no meaning to the IMS Spool API
function unless b=2 is specified following the IAFP keyword.

When b=2 is specified:
v The name must be a valid remote destination supported by the JES system that
receives the print data sets.
v If the name is not a valid remote destination, an error occurs during dynamic
deallocation.

If any option other than 2 is selected, the name is not used by IMS.

The LTERM name appears in error messages and log records. Use a name that
identifies the routine creating the print data set. This information can aid in
debugging application program errors.



Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan

The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.

This information could include technical inaccuracies or typographical errors.


Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003
U.S.A.

Such information may be available, subject to appropriate terms and conditions,
including in some cases, payment of a fee.

The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.

Any performance data contained herein was determined in a controlled
environment. Therefore, the results obtained in other operating environments may
vary significantly. Some measurements may have been made on development-level
systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurement may have been
estimated through extrapolation. Actual results may vary. Users of this document
should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.

This information is for planning purposes only. The information herein is subject to
change before the products described become available.

This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which
illustrate programming techniques on various operating platforms. You may copy,
modify, and distribute these sample programs in any form without payment to
IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating
platform for which the sample programs are written. These examples have not

been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or
imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to
IBM for the purposes of developing, using, marketing, or distributing application
programs conforming to IBM's application programming interfaces.

Each copy or any portion of these sample programs or any derivative work, must
include a copyright notice as follows:

© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.

Programming interface information


This information documents Product-sensitive Programming Interface and
Associated Guidance Information provided by IMS, as well as Diagnosis,
Modification or Tuning Information provided by IMS. This information is intended
to help you plan for application programs that access IMS databases or messages.

Product-sensitive Programming Interfaces allow the customer installation to
perform tasks such as diagnosing, modifying, monitoring, repairing, tailoring, or
tuning of this software product. Use of such interfaces creates dependencies on the
detailed design or implementation of the IBM software product. Product-sensitive
Programming Interfaces should be used only for these specialized purposes.
Because of their dependencies on detailed design and implementation, it is to be
expected that programs written to such interfaces may need to be changed in order
to run with new product releases or versions, or as a result of service.
Product-sensitive Programming Interface and Associated Guidance Information is
identified where it occurs, either by an introductory statement to a section or topic,
or by a Product-sensitive programming interface label. IBM requires that the
preceding statement, and any statement in this information that refers to the
preceding statement, be included in any whole or partial copy made of the
information described by such a statement.

Diagnosis, Modification or Tuning information is provided to help you diagnose,
modify, or tune IMS. Do not use this Diagnosis, Modification or Tuning
information as a programming interface.

Diagnosis, Modification or Tuning Information is identified where it occurs, either
by an introductory statement to a section or topic, or by the following marking:
Diagnosis, Modification or Tuning Information.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web in the topic “Copyright
and trademark information” at http://www.ibm.com/legal/copytrade.shtml.

The following terms are trademarks or registered trademarks of other companies,
and have been used at least once in this information:
v Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered
trademarks or trademarks of Adobe Systems Incorporated in the United States,
and/or other countries.
v Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
v Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc., in
the United States, other countries, or both.
v Linux is a registered trademark of Linus Torvalds in the United States, other
countries, or both.
v UNIX is a registered trademark of The Open Group in the United States and
other countries.

Other company, product, or service names may be trademarks or service marks of


others.



Bibliography

This bibliography lists all of the publications in the IMS Version 10 library,
supplementary publications, publication collections, and accessibility titles
cited in the IMS Version 10 library.

IMS Version 10 library

Note: Because the IBM strategy is to deliver product information in Eclipse
information centers, IMS Version 10 is the last version of the IMS library that
will be available in BookManager format.

Title                                                             Acronym  Order number
IMS Version 10: Application Programming Planning Guide            APP      SC18-9697
IMS Version 10: Application Programming Guide                     APG      SC18-9698
IMS Version 10: Application Programming API Reference             APR      SC18-9699
IMS Version 10: Command Reference, Volume 1                       CR1      SC18-9700
IMS Version 10: Command Reference, Volume 2                       CR2      SC18-9701
IMS Version 10: Command Reference, Volume 3                       CR3      SC18-9702
IMS Version 10: Communications and Connections Guide              CCG      SC18-9703
IMS Version 10: Database Administration Guide                     DAG      SC18-9704
IMS Version 10: Database Utilities Reference                      DUR      SC18-9705
IMS Version 10: Diagnosis Guide                                   DG       GC18-9706
IMS Version 10: Diagnosis Reference                               DR       GC18-9707
IMS Version 10: Exit Routine Reference                            ERR      SC18-9708
IMS Version 10: IMSplex Administration Guide                      IAG      SC18-9709
IMS Version 10: Installation Guide                                IG       GC18-9710
IMS Version 10: Licensed Programming Specifications               LPS      GC18-9782
IMS Version 10: Master Index and Glossary                         MIG      SC18-9711
IMS: Messages and Codes Reference, Volume 1: DFS Messages         MC1      GC18-9712
IMS: Messages and Codes Reference, Volume 2: Non-DFS Messages     MC2      GC18-9713
IMS: Messages and Codes Reference, Volume 3: IMS Abend Codes      MC3      GC18-9714
IMS: Messages and Codes Reference, Volume 4: IMS Component Codes  MC4      GC18-9715
IMS Version 10: Operations and Automation Guide                   OAG      SC18-9716
IMS Version 10: Release Planning Guide                            RPG      GC18-9717
IMS Version 10: System Administration Guide                       SAG      SC18-9718
IMS Version 10: System Definition Guide                           SDG      GC18-9998
IMS Version 10: System Definition Reference                       SDR      GC18-9966
IMS Version 10: System Programming API Reference                  SPR      SC18-9967
IMS Version 10: System Utilities Reference                        SUR      SC18-9968

Supplementary publications

Documentation for the following supplementary publications is available in the
Information Management Software for z/OS Solutions Information Center at
http://publib.boulder.ibm.com/infocenter/imzic.

Title                                               Order number
IMS TM Resource Adapter User's Guide and Reference  SC19-1211
IMS SOAP Gateway User's Guide and Reference         SC19-1290
IMS Version 10: Fact Sheet                          GC19-1064
IRLM Messages and Codes                             GC19-2666
IMS and SOA Executive Overview                      GC19-2516

Documentation for the following IMS SOA Integration Suite functions and tools
that are supported by IMS Version 10 is also available in the Information
Management Software for z/OS Solutions Information Center at
http://publib.boulder.ibm.com/infocenter/imzic:
v IMS DLIModel utility
v IMS Enterprise Suite
v IMS MFS Web Solutions
v IMS SOAP Gateway
v IMS TM Resource Adapter
v IMS Web 2.0 Solution

© Copyright IBM Corp. 1974, 2010 223


Publication collections

Title                                      Format  Order number
IMS Version 10 Product Kit                 CD      SK5T-7327
z/OS Software Products Collection          CD      SK3T-4270
z/OS and Software Products DVD Collection  DVD     SK3T-4271

Accessibility titles cited in the IMS Version 10 library

Title                                      Order number
z/OS V1R1.0 TSO Primer                     SA22-7787
z/OS V1R5.0 TSO/E User’s Guide             SA22-7794
z/OS V1R5.0 ISPF User’s Guide, Volume 1    SC34-4822


Index
A application design
analyzing
abend codes processing requirements 121
pseudo- 117 the data a program must access 123
U0069 119 user requirements 43, 45
U0711 91 data dictionary, using 50
U0777 111 DataAtlas 50
U1008 115 DB/DC Data Dictionary 50
U119 90 debugging 218
U2478 111 designing a local view 50
U2479 111 documenting 44, 212
U3301 115 IMS Spool API interface 213
U3303 111 overview 43
U711 169 application program 69
access methods DBDs (database descriptions), about 4
DEDB 144 documentation 212
description 139 environments 1, 15
GSAM 146 hierarchy examples 8, 11
HDAM 141 PSBs (program specification blocks), about 4
HIDAM 142 test 175, 197
HISAM 145 TSO 112
HSAM 145 application programming interface (API) 73
MSDB 143 APSB (allocate program specification block) 73
PHDAM 139, 141 area, I/O 40
PHIDAM 139, 142 association, data 37
SHISAM 146 asynchronous conversation, description for LU 6.2
SHSAM 146 transactions 66
access of AUTH call 164
IMS databases through z/OS 146 authorization
segments through different paths 152 ID, DB2 for z/OS 164
accessibility security 164
features xv availability of data 38, 116, 135
keyboard shortcuts xv
Advanced Function Printing (AFP) 216
AFPDS and IMS Spool API 215
aggregates, data 50 B
AIB interface 40 back-out database changes 134
AIBTDLI interface 119 backout, dynamic 104
allocation, dynamic 119 bank account database example 11
alternate PCBs 171 basic checkpoint 113, 133, 135
alternate response PCBs 171 basic conversation, APPC 66
analysis of basic edit, overview of 166
processing requirements 99 Batch Backout utility 104
required application data 45 batch environment 101
user requirements 43 batch message processing program.
anchor point, root 141 See BMP (batch message processing) program
API (application programming interface) for LU 6.2 devices batch programs
explicit API 73 converting to BMPs 107, 127
implicit API 73 databases that can be accessed 100, 123
APPC DB batch processing 103
application program types for LU 6.2 devices 64 description 126
basic conversation 66 differences from online 103, 126
description 63 I/O PCB, requesting during PSBGEN 132
LU 6.2 partner program design issuing checkpoints 116, 132
DFSAPPC message switch 96 overview 1, 15
flow diagrams 74, 87 recovery 104, 131
integrity after conversation completion 94 recovery of database 134
mapped conversation 66 structure 1, 15
application data Batch Terminal Simulator II (BTS II) 176
analyzing required 45 batch-oriented BMPs. 108
identifying 45 BILLING segment 11
BKO execution parameter 104



block descriptor word (BDW), IMS Spool API 215 code
BMP (batch message processing) program 108 course 46
batch-oriented 108 transaction 105
checkpoints in 115, 132 codes abend 111
converting batch programs to BMPs 127 codes, status
databases that can be accessed 106, 128 checking 3
description of 106, 127 columns
limiting number of locks with LOCKMAX= fields, compared to 23
parameter 115 relational representation, in 24
recovery 107, 128 commands, EXEC DLI 39
databases that can be accessed 101, 123 COMMENTS statement 176
transaction-oriented commit database changes 21
checkpoints in 114 commit points 103, 110, 133
databases that can be accessed 108 COMPARE statement 176
recovery 109 comparison of symbolic CHKP and basic CHKP 113
BTS II (Batch Terminal Simulator II) 176 concurrent access to full-function databases 103
buffer pool, STAT call and OSAM 180 considerations in screen design 167
buffer subpool, statistics for debugging CONTINUE-WITH-TERMINATION indicator 119
enhanced STAT call and continuing a conversation 169
OSAM 184 control, passing processing 39
VSAM 182, 190 conventions, naming 43
conversation attributes
asynchronous 66
C MSC synchronous and asynchronous 66
synchronous 65
C/MVS 119
conversation state, rules for APPC verbs 67
call results for status codes, exceptional 3
conversational mode
CALL statement (DL/I test program) 176
description 172
call-level programs, CICS 5
LU 6.2 transactions 65
call-level programs, scheduling a PSB 129
conversational processing
calls, DL/I 39
abnormal termination of, precautions for 170
CCTL (coordinator controller)
deferred program switch 169
restrictions
designing a conversation 168
when you encounter a problem 194
DFSCONE0 170
with BTS II (Batch Terminal Simulator II) 176
gathering requirements 168
with DL/I test program 176
how to continue the conversation 169
checkpoint 133
how to end the conversation 169
basic 113, 133
immediate program switch 169
calls, when to use 114
overview 168
frequency, specifying 116, 133
passing the conversation to another program 169
IDs 133
recovery considerations 170
in batch programs 116, 133
SPA 169
in batch-oriented BMPs 115, 134
use with alternate response PCBs 172
in MPPs 114
using a deferred program switch to end the
in transaction-oriented BMPs 114
conversation 169
issuing 103
what happens in a conversation 168
printing log records 134
conversations, preventing abnormal termination 170
restart 115, 135
converting an existing application 45
summary of 113
coordinator controller.
symbolic 113, 133
See CCTL (coordinator controller)
checkpoint (CHKP)
coordinator, sync-point 69
command
course code 46
issuing 20
CPI Communications driven application program for LU 6.2
CHKP (checkpoint) 113
devices 64
CHKP (Checkpoint)
creation of
command
a new hierarchy 152
issuing 20
reports 50
CHKPT=EOV 114
currency of data 36, 37
CHNG system service call 213
current roster 46
CICS 39
Transaction Server 21
CICS dump control 198
CICS programs D
structure 5 data
classes schedule, example 57 a program’s view 37
CMPAT=YES PSB specification 103 aggregate 50
COBOL 119 association 37
documentation 48



data (continued) database types (continued)
elements, homonym 48 DEDB 102, 144
elements, isolating repeating 51 description 102
elements, naming 47 Fast Path 101
hierarchical relationships 8, 37 full-function 101
integrity, how DL/I protects 131 GSAM 102, 146
keys 54 HDAM 141, 142
recording its availability 49 HISAM 145
relationships, analyzing 50 HSAM 145
structuring 50 MSDB 102, 143
unique identifier 48 PHDAM 139, 141
data availability PHIDAM 139, 142
considerations 116, 135 relational 102
levels 38 root-segment-only 102
recording 49 SHISAM 146
data currency 36, 37 SHSAM 146
data definition 44 DB batch processing 103
data dictionary DB PCB (database program communication block) 37
DataAtlas 50, 211 DB/DC
DB/DC Data Dictionary 50, 211 Data Dictionary 50, 211
documentation for other programmers 211 environment 101
in application design 50 DB2 for z/OS
data element databases 102, 124
description 45 security 164
homonym 48 DB2 for z/OS access
isolating repeating 51 JBP region, from a
listing 46 programming model 32
naming 47 JMP region, from a
synonym 47 programming model 30
data elements, grouping into hierarchies 50 DBA 10
data entity 45 DBASF, formatted OSAM buffer pool statistics 180
data entry database 102 DBASS, formatted summary of OSAM buffer pool
data mask 40 statistics 182
data redundancy 35 DBASU, unformatted OSAM buffer pool statistics 181
data sensitivity 38 DBCTL environment 101, 123
data sensitivity, defined 157 DBD (database description) 37
data storage methods DBDs (database descriptions)
combined file 36 description 4
in a database 36 DBESF, formatted OSAM subpool statistics 185
separate files 35 DBESO, formatted OSAM pool online statistics 188
data structure 38 DBESS, formatted summary of OSAM pool statistics 187
data structure conflicts, resolving 147 DBESU, unformatted OSAM subpool statistics 187
DataAtlas 50, 211 DCCTL environment 101
database deadlock, program 104
access to 123 debug a program, how to 194
administrator 10 debug a program, How to 202
changes, backing out 134 DEDB (data entry database) 144
changes, committing 21 DEDB (data entry database) and the PROCOPT operand 160
descriptor (DBD) 37 deferred program switch 169
example, medical hierarchy 8 definition
hierarchy 8, 37 data 44
options 139 dependent segment 9
record, processing 39 root segment 9
recovery 132 dependent segment 9
unavailability 116, 135 dependents
database and data communications security 44 direct 102
database descriptions (DBDs) 4 sequential 102
DATABASE macro 161 description, segment 9
database record 9 design of
database recovery an application 43
backing out database changes 134 conversation 168
checkpoints, description 132 local view 50
restarting your program, description 135 designing
database statistics, retrieving 180 terminal screen 167
database types determination of mappings 56
areas 102 device input format (DIF), control block 166
DB2 for z/OS 102, 124 device output format (DOF), control block 166

DFSAPPC message switch 96 DL/I test program (DFSDDLT0) (continued)
DFSCONE0 (Conversational Abnormal Termination exit compare statements 176
routine) 170 control statements 176
DFSDDLT0 (DL/I test program) 175 description 176
DFSDLTR0 (DL/I image capture). status statements 176
See DL/I image capture (DFSDLTR0) programs testing DL/I call sequences 175, 199
DFSERA10 utility 201 DL/I, getting started with CICS 5
DFSERA50 exit routine 201 DLITRACE control statement 200
DFSMDA macro 119 documentation for users 212
DIB (DLI interface block) 40 documentation of
dictionary, data 50 data 48
DIF (device input format), control block 166 the application design process 44
differences between CICS and command-level batch or BMP DOF (device output format), control block 166
programs 19 dump control, CICS 198
direct access methods duplicate values, isolating 53
characteristics 140 dynamic allocation 119, 138
HDAM 141 dynamic backout 104
HIDAM 142 dynamic MSDBs (main storage databases) 11
PHDAM 139, 141
PHIDAM 139, 142
types of 140
direct dependents 102
E
EBCDIC 133
Distributed Sync Point 68
EDF (Execution Diagnostic Facility) 198
DL/I
editing
databases, read and update 19
considerations in your application 166
DL/I access methods
messages
considerations in choosing 139
considerations in message and screen design 166
DEDB 144
overview 165
direct access 140
elements
GSAM 146
data, description 45
HDAM 141
data, naming 47
HIDAM 142
emergency restart 217
HISAM 145
EMH (expedited message handler) 106
HSAM 145
end a conversation, how to 169
MSDB 143
enhanced STAT call formats for statistics
PHDAM 139, 141
OSAM buffer subpool 184
PHIDAM 139, 142
VSAM buffer subpool 190
sequential access 144
entity, data 45
SHISAM 146
environments
SHSAM 146
DB/DC 101
DL/I call trace 176
DBCTL 101
DL/I calls 39
DCCTL 101
codes 18
options in 101, 123
error routines 18
program and database types 100
exceptional conditions 18
ERASE parameter 160
message calls
error
list of 17
execution 194, 203
system service calls
initialization 194, 203
list of 17
error routines 3
usage 17
explanation 3
DL/I calls, general information
I/O errors 4
getting started with 1, 15
I/O errors in your program 18
DL/I calls, testing DL/I call sequences 175, 199
programming errors 4, 18
DL/I database
system errors 4, 18
access to 123
types of errors 4, 18
description 124
ESTAE routines 118
DL/I image capture (DFSDLTR0) programs 199
example
DL/I Open Database Access (ODBA) interface 7
current roster 46
DL/I options
field level sensitivity 147
field level sensitivity 147
instructor schedules 60
logical relationships 152
instructor skills report 59
secondary indexing 148
local view 57
DL/I program structure 1, 15
logical relationships 152
DL/I test program (DFSDDLT0)
schedule of classes 57
call statements 176
examples
checking program performance 176
bank account database 11
comments statements 176
medical database 8



exceptional conditions 18 group data elements
EXEC DLI commands 39 into hierarchies 50
Execution Diagnostic Facility (EDF) 198 with their keys 54
execution errors 194, 203 GSAM (Generalized Sequential Access Method)
existing application, converting an 45 accessing GSAM databases 123
explicit API for LU 6.2 devices 73 database type 102
express PCBs 173 description 146
Extended Restart 113, 135

H
F HALDB (High Availability Large Database) 149
Fast Path HALDB partitions
databases 102 data availability 3
DEDB (data entry database) 144 error settings 3
DEDB and the PROCOPT operand 160 handling 3
IFPs 105 restrictions for loading logical child segments 3
MSDB (main storage database) 102, 143 scheduling 3
field level sensitivity status codes 3
as a security mechanism 158 HDAM (Hierarchical Direct Access Method) 141
defining 39 HIDAM (Hierarchical Indexed Direct Access Method) 142
description 147 hierarchical database
example 147 example 24
specifying 148 relational database, compared to 23
uses 148 hierarchical database example, medical 8, 9, 37
fields Hierarchical Direct Access Method (HDAM) 141
columns, compared to 23 Hierarchical Indexed Direct Access Method (HIDAM) 142
in SQL queries 26 Hierarchical Indexed Sequential Access Method (HISAM) 145
File Select and Formatting Print Program (DFSERA10) 114 Hierarchical Sequential Access Method (HSAM) 145
fixed, MSDBs (main storage databases) 11 hierarchy
flow diagrams, LU 6.2 bank account database 11
CPI-C driven commit scenario 89 description 8, 37
DFSAPPC, synchronous SL=none 82 grouping data elements 50
DL/I program backout scenario 90, 91 medical database 8
DL/I program commit scenario 88 hierarchy examples 8, 11
DL/I program ROLB scenario 91 High Availability Large Database (HALDB) 149
local CPI communications driven program, SL=none 83 HALDB partitions
local IMS Command data availability 3
asynchronous SL=confirm 81 error settings 3
local IMS command, SL=none 80 handling 3
local IMS conversational transaction, SL=none 79 restrictions for loading logical child segments 3
local IMS transaction scheduling 3
asynchronous SL=confirm 78 status codes 3
asynchronous SL=none 77 HISAM (Hierarchical Indexed Sequential Access Method) 145
synchronous SL=confirm 76 homonym, data element 48
synchronous SL=none 75 HOUSHOLD segment 11
multiple transactions in same commit 93 HSAM (Hierarchical Sequential Access Method) 145
remote MSC conversation
asynchronous SL=confirm 86
asynchronous SL=none 85
synchronous SL=confirm 87
I
I/O area 40
synchronous SL=none 84
DL/I 20
frequency, checkpoint 116
I/O PCB
full-function databases
in different environments 125
and the PROCOPT operand 160
requesting during PSBGEN 132
how accessed, CICS 124
identification of
how accessed, IMS 102
recovery requirements 115
identifying
application data 45
G online security requirements 163
gather requirements output message destinations 171
for conversational processing 168 security requirements 157
gathering requirements IDs, checkpoint 133
for database options 139 IFP (IMS Fast Path) program
for message processing options 163 databases that can be accessed 101
Generalized Sequential Access Method (GSAM) 146 differences from an MPP 106
GO processing option 115 recovery 106

Index 229
IFP (IMS Fast Path) program (continued) JMP (Java message processing) regions
restrictions 106 DB2 for z/OS access
ILLNESS segment 10 programming model 30
image capture program description 29
CICS application program 199 programming models 29
IMS application program 176 JMP applications
immediate program switch 169 programming models 29
implicit API for LU 6.2 devices 73 JOURNAL parameter 214
IMS Fast Path (IFP) programs, description of 105
IMS hierarchical database interface for Java
using 27
IMS Spool API application design 213
K
key sensitivity 158
INIT system service call 118
keyboard shortcuts xv
initialization errors 194, 203
keys, data 54
INQY system service call 118
instructor
schedules 60
skills report 59 L
integrity limit access with signon security 163
how DL/I protects data 131 link to another online program 129
read without 161 LIST parameter 178
interface block, DL/I 20 listing data elements 46
interface, AIB 40 local view
Introduction to Resource Recovery 68 designing 50
invalid processing and ROLB/SETS/ROLLS calls 170 examples 57
IPDS and IMS Spool API 215 locking protocol 160
ISC (Intersystem Communication) 106 LOCKMAX= parameter, BMP programs 115
isolation of LOG call
duplicate values 53 description 193
repeating data elements 51 use in monitoring 202
ISRT system service call 213 log records
issue checkpoints 103 type 18 134
X’18’ 114
LOG system service call 208
J log, system 104
logical child segments
Java Batch Processing (JBP)
HALDB (High Availability Large Database), restrictions 3
applications 109
logical relationships
databases that can be accessed 101
defining 155
Java batch processing (JBP) regions
description 152
DB2 for z/OS access
example 152
programming model 32
LTERM, local and remote 96
description 31
LU 6.2 devices, signon security 163
programming models 31
LU 6.2 partner program design
Java Message Processing (JMP)
DFSAPPC message switch 96
applications 109
flow diagrams 74
databases that can be accessed 101
integrity after conversation completion 94
Java message processing (JMP) regions
scenarios 87
DB2 for z/OS access
programming model 30
description 29
programming models 29 M
JBP (Java Batch Processing) macros
applications 109 DATABASE 161
databases that can be accessed 101 DFSMDA 119
JBP (Java batch processing) regions TRANSACT 109
DB2 for z/OS access main storage database (MSDB) 143
programming model 32 main storage database (MSDBs)
description 31 types
programming models 31 nonrelated 12
JDBC main storage databases (MSDBs)
explanation 27 dynamic 11
JES Spool/Print server 215 types
JMP (Java Message Processing) related 11
applications 109 many-to-many mapping 56
databases that can be accessed 101 mapped conversation, APPC 67
mappings, determining 56
mask, data 40



medical database example 8 output messages, identifying destinations for 171
description 9
segments 8, 9
message
input descriptor (MID), control block 166
P
parameters
output 171
BKO 104
output descriptor (MOD), control block 166
ERASE 160
processing options 163
JOURNAL 214
message calls
LIST 178
list of 17
LOCKMAX 115
methods of data storage
MODE 112
combined file 36
PROCOPT 159
database 36
RTRUNC 170
separate files 35
TRANSACT 112
MFS (Message Format Service)
TXTU 215
control blocks 166
WFI 109
overview 166
Partitioned Hierarchical Direct Access Method
MID (message input descriptor), control block 166
(PHDAM) 139, 141
MOD (message output descriptor), control block 166
Partitioned Hierarchical Indexed Direct Access Method
mode
(PHIDAM) 139, 142
multiple 114
Partitioned Secondary Index (PSINDEX) 149
processing 111
parts of DL/I program 1, 15
response 171
pass control of processing 39
single 114
pass control to other applications 129
MODE parameter 112
password security 164, 165
MPP (message processing program)
PATIENT segment 9
databases that can be accessed 101, 104
PAYMENT segment 11
description 104
PCB (program communication block)
executing 105
call 129
MSDB (main storage database) 102, 143
description 38
MSDBs (main storage databases)
express 173
types
masks
nonrelated 12
description 2, 16
related 11
performance
multiple mode 111, 114
impact 214
MVS SJF (Scheduler JCL Facility) 214
maximizing online 130
PHDAM (Partitioned Hierarchical Direct Access
Method) 139, 141
N PHIDAM (Partitioned Hierarchical Indexed Direct Access
names of data elements 47 Method) 139, 142
naming conventions 43 physical structure of a database 37
NDM (Non-Discardable Messages) routine 111 PL/I language 119
network-qualified LU name 96 position, reestablishing with checkpoint calls 115, 132
NOSTAE and NOSPIE 119 primarily sequential processing 145
print checkpoint log records, how to 134
problem determination 194, 203
O process database records 39
process of requests 39
ODBA
process of requirements, analyzing 99, 121
application programs
processing mode 111
testing 205
processing options
ODBA (Open Database Access) interface, DLI/I
A (all) 159
getting started with 7
D (delete) 159
one-to-many mapping 56
defined 157
online processing
E (exclusive) 160
databases that can be accessed 123
G (get)
description 125
description and concurrent record access 159
linking and passing control to other applications 129
general description 159
performance, maximizing 130
GO (read only)
online programs 104
description 160
online security
invalid pointers and T and N options 160
password security 164
N option 161
supplying information about your application 165
risks of GOx options 161
terminal 164
T option 161
Open Database Access (ODBA) interface, DL/I
I (insert) 159
getting started with 7
K (key) 159
OSAM buffer pool, retrieving statistics 180
R (replace) 159

PROCOPT parameter 159 recovery (continued)
PROCOPT=GO 114 I/O PCB, requesting during PSBGEN 132
program identifying requirements 115
batch structure 1, 15 in a batch-oriented BMP 107, 128
entry 20 in batch programs 104
program communication block (PCB) 38 recovery of databases 134
program deadlock 104 Recovery process
program sensitivity 117 distributed 71
program specification block (PSB) 38 local 70
program specification blocks (PSBs) Recovery, Resource 68
description 4 redundant data 35
program switch reestablish position in database 115
deferred 169 relational database
immediate 169 hierarchical database, compared to 23
program test 175 relational databases 102
program types, environments and database types 100 relationships
program waits 115 between data elements 50
programming models data, hierarchical 8, 37
JBP applications defining logical 155
symbolic checkpoint and restart 31 mapping data 56
with rollback 32 relationships between data aggregates 56
without rollback 31 releasing
JMP applications 29 resources 21
DB2 for z/OS data access 30 remote DL/I 123
IMS data access 30 repetitive data elements, isolating 51
with rollback 30 reply to the terminal in a conversation 169
without rollback 29 report of instructor schedules 60
programs reports, creating 50
DL/I image capture 199 requests, processing 39
DL/I test 175 required application data, analyzing 45
online 104 requirements, analyzing processing 99
TM batch 104 resolving data structure conflicts 147
protected resources 68 resource managers 69
protocol, locking 160 Resource Recovery
PSB (program specification block) application program 69
APSB (allocate program specification block) 73 Introduction to 68
CMPAT=YES 103 protected resources 68
description 38 recoverable resources 68
scheduling in a call-level program 129 resource managers 69
PSBs (program specification blocks) sync-point manager 69
description 4 Resource Recovery Services/Multiple Virtual Storage
pseudo-abend 117 (RRS/MVS)
PSINDEX (Partitioned Secondary Index) 149 introduction to 68
PURG system service call 214 resources
protected 68
recoverable 68
Q security 43
resources, releasing 21
QC status code 109
response mode, description 171
quantitative relationship between data aggregates 56
restart your program
code for, description 135
with basic CHKP 115
R with symbolic CHKP 115
read access, specify with PROCOPT operand 159 restart, emergency 217
read without integrity 161 Restart, Extended 113, 135
read-only access, specify with PROCOPT operand 160 retrieval call, status code 18
reason code, checking 18 retrieval calls
record status codes, exceptional 3
database processing 39 retrieval of IMS database statistics 180
database, description of 9 RETRY option 119
record descriptor word (RDW), IMS Spool API 215 return code, checking 18
recording risks to security, combined files 36
data availability 49 ROLB system service call 104, 134
information about your program 211 ROLL system service call 134
recoverable resources 68 ROLS system service call 104, 118, 137
recovery root anchor point 141
considerations in conversations 170 root segment, definition 9



roster, current 46 SETU system service call 137
routine, error 18 shared queues option 164
routines SHISAM (Simple Hierarchical Indexed Sequential Access
DFSERA50 201 Method) 146
ESTAE 118 SHSAM (Simple Hierarchical Sequential Access Method) 146
STAE 118 signon security 163, 165
routines, error 3 simple HISAM (SHISAM) 146
rows simple HSAM (SHSAM) 146
relational representation, in 26 single mode 106, 111, 114
segment instances, compared to 23 skills report, instructor 59
RRS/MVS (Resource Recovery Services/Multiple Virtual SPA (scratchpad area) 169
Storage) 73 specification of
RTRUNC parameter 170 field level sensitivity 148
frequency, checkpoint 116
SPIE routine 119
S SPM (sync-point manager) 67
Spool Display and Search Facility (SDSF) 217
schedule a PSB, in a call-level program, how to 129
SQL (Structured Query Language) 102
schedule, classes example 57
example query 26
scheduling HALDB (High Availability Large Database) 3
STAE routines 118
screen design considerations 167
STAT call
SDSF (Spool Display and Search Facility) 217
formats for statistics
secondary indexing
OSAM buffer pool, STAT call 180
description 148
OSAM buffer subpool, enhanced STAT call 184
examples of uses 149
VSAM buffer subpool, enhanced STAT call 190
Partitioned Secondary Index (PSINDEX) 149
VSAM buffer subpool, STAT call 182
specifying 150
system service 208
security
use in debugging 179, 202
and the PROCOPT= operand 159
statistics, database 180
database 157, 159
status code, QC 109
field level sensitivity 158
status codes
identifying online requirements 163
blank 3, 18
key sensitivity 158
checking 3, 18
of databases and data communications 44
error routine 18
of resources 43
error routines 3
password security 164
exception conditions 18
risks of combined files 36
exceptional call results 3
segment sensitivity 158
HALDB (high availability large databases) partitions 3
signon 163
retrieval call 18
supplying information about your application 165
retrieval calls 3
terminal 164
STATUS statement 176, 199
segment
storage of data
description 9
in a combined file 36
preventing access to by other programs 131
in a database 36
sensitivity 158
in separate files 35
segments
structure
in medical database example 8
data 38
in SQL queries 26
physical, of a database 37
medical database example 9
structure of data, methods 50
tables, compared to 23
Structured Query Language (SQL) 102
SELECT keyword
summary of symbolic CHKP and basic CHKP 113
example query 26
supply security information, how to 165
sensitivity
symbolic checkpoint
data 38
description 113, 133
field level 39, 158
IDs, specifying 133
general description 157
issuing 135
key 158
restart 135
program 117
restart with 115
segment 158
sync_level values 67
sequential access methods
sync-point manager (SPM) 67, 69
characteristics of 144
synchronous conversation, description for LU 6.2
HISAM 145
transactions 65
HSAM 145
synonym, data element 47
types of 144
sysplex data-sharing 108
sequential dependents 102
system log
sequential processing only 145
on tape 104
SETO system service call 213
storage 104
SETS system service call 104, 118, 137

system service calls
CHNG 213
V
I/O PCB, requesting during PSBGEN 132 values, isolating duplicate 53
INIT 118 VBASF, formatted VSAM subpool statistics 182
INQY 118 VBASS, formatted summary of VSAM subpool statistics 184
ISRT 213 VBASU, unformatted VSAM subpool statistics 183
list of 17 VBESF, formatted VSAM subpool statistics 190
LOG 193, 208 VBESS, formatted summary of VSAM subpool statistics 192
PURG 214 VBESU, unformatted VSAM subpool statistics 192
ROLB 104, 134 view of data, a program's 37
ROLL 134 view, local 57
ROLS 104, 118, 137 VisualGen 50
SETO 213 VSAM buffer subpool, retrieving
SETS 104, 118, 137 enhanced subpool statistics 190
SETU 137 statistics 182, 190
STAT 180, 208
system service requests, functions provided 126
W
wait-for-input (WFI)
T transactions 106, 109
tables waits, program 115
relational representation, in 24 WFI parameter 109
segments, compared to 23 writing information to the system log 193
take checkpoints, how to 132
terminal screen, designing 167
terminal security 164, 165 X
termination of a PSB, restrictions 129 X’18’ log record 114
termination, abnormal 111 XRST (Extended Restart) 113
test of application programs
using BTS II 176
using DFSDDLT0 199
using DL/I test program 175
Z
what you need 175, 197 z/OS files
test of DL/I call sequences 175, 199 access to 102, 123
test, unit 175, 197 description 124
testing status codes 3 z/OS Scheduler JCL Facility (SJF) 214
TM batch program 104
token, definition of 170
trace control facility 199
TRANSACT macro 112
transaction code 104, 105
transaction response mode 106
transaction-oriented BMPs.
See BMP (batch message processing) program
TREATMNT segment 10
TSO application programs 112
two-phase commit process
UOR 70
two-phase commit protocol 69
TXTU parameter 215
type 18 log record 134

U
unavailability of data 116, 135
unique identifier, data 48
unit of work 110
unit test 175, 197
UOR (unit of recovery) 70
update access, specify with PROCOPT operand 160
user requirements, analyzing 43
utilities
Batch Backout 104
DFSERA10 134, 207
File Select and Formatting Print program 114





Program Number: 5635-A01

Printed in USA

SC18-9697-02
Spine information:

IMS Version 10 Application Programming Planning Guide



