IMS
Version 10
Application Programming Planning Guide
SC18-9697-02
Note
Before using this information and the product it supports, read the information in “Notices” on page 219.
This edition applies to IMS Version 10 (program number 5635-A01) and to all subsequent releases and modifications
until otherwise indicated in new editions. This edition replaces SC18-9697-01.
© Copyright IBM Corporation 1974, 2010.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Chapter 3. How CICS EXEC DLI application programs work with IMS . . . . . . . . . 19
Getting started with EXEC DLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
What happens in a conversation . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Designing a conversation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Important points about the SPA . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Recovery considerations in conversations . . . . . . . . . . . . . . . . . . . . . . . . 170
Identifying output message destinations . . . . . . . . . . . . . . . . . . . . . . . . . 171
The originating terminal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
To other programs and terminals . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Chapter 16. Managing the IMS Spool API overall design . . . . . . . . . . . . . . 213
IMS Spool API design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Sending data to the JES spool data sets . . . . . . . . . . . . . . . . . . . . . . . . . . 214
IMS Spool API performance considerations . . . . . . . . . . . . . . . . . . . . . . . . 214
JES initiator considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
Application managed text units . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
BSAM I/O area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
IMS Spool API application coding considerations . . . . . . . . . . . . . . . . . . . . . . 215
Print data formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Programming interface information . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
IMS Version 10 library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Supplementary publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Publication collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Accessibility titles cited in the IMS Version 10 library . . . . . . . . . . . . . . . . . . . . . 224
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Figures
| 1. Organization of the IMS Version 10 library in the information center. . . . . . . . . . . . . . xviii
2. DL/I program elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
3. Normal relationship between programs, PSBs, PCBs, DBDs, and databases . . . . . . . . . . . . . 4
4. Relationship between programs and multiple PCBs (concurrent processing) . . . . . . . . . . . . 5
| 5. Structure of a call-level CICS program . . . . . . . . . . . . . . . . . . . . . . . . . 6
| 6. Medical hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
7. DL/I program elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
8. Structure of a command-level batch or BMP program . . . . . . . . . . . . . . . . . . . 20
| 9. Segments of the Dealership sample database . . . . . . . . . . . . . . . . . . . . . . 24
| 10. Relational representation of the Dealership sample database . . . . . . . . . . . . . . . . . 25
| 11. Segment occurrences in the Dealership sample database . . . . . . . . . . . . . . . . . . 26
| 12. Relational representation of segment occurrences in the Dealership database . . . . . . . . . . . . 26
| 13. JMP or JBP applications that use the Java class libraries for IMS . . . . . . . . . . . . . . . . 28
14. Accounting program's view of the database . . . . . . . . . . . . . . . . . . . . . . . 38
15. Patient illness program's view of the database . . . . . . . . . . . . . . . . . . . . . . 39
16. Current roster for technical education example . . . . . . . . . . . . . . . . . . . . . . 46
17. Current roster after step 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
18. Current roster after step 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
19. Current roster after step 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
20. Schedule of courses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
21. Course schedule after step 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
22. Instructor skills report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
23. Instructor skills after step 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
24. Instructor schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
25. Instructor schedules step 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
26. Instructor schedules step 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
27. Participants in resource recovery . . . . . . . . . . . . . . . . . . . . . . . . . . 69
28. Two-phase commit process with one resource manager . . . . . . . . . . . . . . . . . . . 70
29. Distributed resource recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
30. Flow of a local IMS synchronous transaction when Sync_level=None . . . . . . . . . . . . . . 75
31. Flow of a local IMS synchronous transaction when Sync_level=Confirm . . . . . . . . . . . . . 76
32. Flow of a local IMS asynchronous transaction when Sync_level=None . . . . . . . . . . . . . . 77
33. Flow of a local IMS asynchronous transaction when Sync_level=Confirm . . . . . . . . . . . . . 78
34. Flow of a local IMS conversational transaction when Sync_level=None. . . . . . . . . . . . . . 79
35. Flow of a local IMS command when Sync_level=None . . . . . . . . . . . . . . . . . . . 80
36. Flow of a local IMS asynchronous command when Sync_level=Confirm . . . . . . . . . . . . . 81
37. Flow of a message switch when Sync_level=None . . . . . . . . . . . . . . . . . . . . 82
38. Flow of a local CPI communications driven program when Sync_level=None . . . . . . . . . . . 83
39. Flow of a remote IMS synchronous transaction when Sync_level=None . . . . . . . . . . . . . 84
40. Flow of a remote IMS asynchronous transaction when Sync_level=None . . . . . . . . . . . . . 85
41. Flow of a remote IMS asynchronous transaction when Sync_level=Confirm . . . . . . . . . . . . 86
42. Flow of a remote IMS synchronous transaction when Sync_level=Confirm . . . . . . . . . . . . 87
43. Standard DL/I program commit scenario when Sync_Level=Syncpt. . . . . . . . . . . . . . . 88
44. CPI-C driven commit scenario when Sync_Level=Syncpt . . . . . . . . . . . . . . . . . . 89
45. Standard DL/I program U119 backout scenario when Sync_Level=Syncpt. . . . . . . . . . . . . 90
46. Standard DL/I program U0711 backout scenario when Sync_Level=Syncpt . . . . . . . . . . . . 91
47. Standard DL/I program ROLB scenario when Sync_Level=Syncpt . . . . . . . . . . . . . . . 92
48. Multiple transactions in same commit when Sync_Level=Syncpt . . . . . . . . . . . . . . . . 93
49. Documenting user task descriptions: current roster example . . . . . . . . . . . . . . . . . 100
50. Single mode and multiple mode . . . . . . . . . . . . . . . . . . . . . . . . . . 112
51. Current roster task description . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
52. Patient hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
53. Indexing a root segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
54. Indexing a dependent segment. . . . . . . . . . . . . . . . . . . . . . . . . . . 151
55. Patient and inventory hierarchies . . . . . . . . . . . . . . . . . . . . . . . . . . 154
Prerequisite knowledge
Before using this information, you should have knowledge of either IMS Database
Manager (DB) or IMS Transaction Manager (TM), including the access methods
used by IMS. You should also understand basic IMS concepts, your installation’s
IMS system, and have general knowledge of the tasks involved in project planning.
The following information from IBM® Press can help you gain an understanding of
basic IMS concepts: An Introduction to IMS by Dean H. Meltz, Rick Long, Mark
Harrington, Robert Hain, and Geoff Nicholls (ISBN # 0-13-185671-5). Go to the IMS
Web site at www.ibm.com/ims for details.
IBM offers a wide variety of classroom and self-study courses to help you learn
IMS. For a complete list of courses available, go to the IMS home page on the Web
at www.ibm.com/ims and link to the Training and Certification page.
If you are a CICS® user, you should understand a similar level of information for
CICS. The IMS concepts explained in this manual are limited to those concepts
pertinent to designing application programs. You should also know how to use
COBOL, PL/I, Assembler language, Pascal, or C language.
Accessibility features
The following list includes the major accessibility features in z/OS products,
including IMS Version 10. These features support:
v Keyboard-only operation.
v Interfaces that are commonly used by screen readers and screen magnifiers.
v Customization of display attributes such as color, contrast, and font size.
Keyboard navigation
You can access IMS Version 10 ISPF panel functions by using a keyboard or
keyboard shortcut keys.
For information about navigating the IMS Version 10 ISPF panels using TSO/E or
ISPF, refer to the z/OS TSO/E Primer, the z/OS TSO/E User’s Guide, and the z/OS
ISPF User’s Guide. These guides describe how to navigate each interface, including
the use of keyboard shortcuts or function keys (PF keys). Each guide includes the
default settings for the PF keys and explains how to modify their functions.
| The IMS library was reorganized and rearchitected to achieve the following
| goals:
| v Group similar information together in a more intuitive organization. For
| example, in the IMS Version 10 library, all messages and codes are in the
| messages and codes books, rather than distributed across multiple books, and all
| appear in the information center under Troubleshooting for IMS. As another
| example, all exit routines are now in one book, the IMS Version 10: Exit Routine
| Reference, and appear in the information center under IMS reference
| information ->Exit routines, rather than being distributed across six books as
| they were in the IMS Version 9 library.
| v Rewrite information to better support user tasks. Table 2 on page xix describes
| the high-level user tasks and the IMS Version 10 books that support those tasks.
| v Separate information into three basic types of topics: task, concept, and reference.
| v Utilize the DITA (Darwin Information Typing Architecture) open source
| tagging language.
| There are known limitations with BookManager output. If you encounter problems
| in the BookManager information with Web addresses, syntax diagrams, wide
| examples, or tables, refer to the information in the information center or in a PDF
| book.
| The following figure illustrates the organization of the IMS Version 10 library,
| including how that information is organized within the information center, which
| is available at http://publib.boulder.ibm.com/infocenter/imzic.
|
|
| Figure 1. Organization of the IMS Version 10 library in the information center
|
| The following table describes high-level user tasks and the IMS Version 10 books
| that support those tasks. The IMS library also includes the IMS Version 10: Master
| Index and Glossary, which provides a central index for all of the IMS Version 10
| information, as well as a glossary of IMS terms. The combined IMS Version 10:
| Master Index and Glossary is available only in PDF format. The master index for
| IMS Version 10 and the IMS glossary are included in the information center.
Subsections:
v “IMS environments”
v “DL/I and your application program” on page 3
v “DL/I codes” on page 3
v “Database descriptions (DBDs) and program specification blocks (PSBs)” on
page 4
v “DL/I for CICS” on page 5
v “DL/I using the ODBA interface” on page 7
v “Database hierarchy examples” on page 8
Related Reading:
v If your installation uses the IMS Transaction Manager (IMS TM), see IMS Version
10: Communications and Connections Guide for information on transaction
management functions.
v Information on DL/I EXEC commands is in the IMS Version 10: Application
Programming Guide.
IMS environments
Your application program can execute in different IMS environments. The three
online environments are DB/DC, DBCTL, and DCCTL. The two batch
environments are DB batch and TM batch.
Related reading: For information on these environments, see IMS Version 10:
System Administration Guide.
The information in this section applies to all application programs that run in IMS.
The main elements in an IMS application program are:
v Program entry
v Program communication block (PCB) or application interface block (AIB)
definition
v I/O (input/output) area definition
v DL/I calls
v Program termination
Figure 2 on page 2 shows how these elements relate to each other. The numbers on
the right in Figure 2 on page 2 refer to the notes that follow.
Recommendation: If your program does not use the return code in this way, set
the return code to 0 as a programming convention. Your program can use the
return code for this same purpose in Batch Message Processing (BMP) regions.
Message Processing Programs (MPPs) cannot pass return codes.
DL/I codes
This section contains information about the different DL/I codes that you will
encounter when working with IMS Database Manager application programs.
| The status codes your application program should test for are those that indicate
| exceptional but valid conditions. Your application program should check for status
| codes that indicate that the call was successful, such as blanks. If IMS returns a
| status code that you did not expect, your program should branch to an error
| routine. For information about the status codes for the DL/I calls, see IMS:
| Messages and Codes Reference, Volume 4: IMS Component Codes.
In a typical program, status codes that you should test for apply to the get calls.
Some status codes indicate exceptional conditions for other calls, and you should
provide routines other than error routines for these situations. For example, AH
means that a required segment search argument (SSA) is missing, and AT means
that the user I/O area is too long.
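The checking pattern described here can be sketched in a few lines. The following Java sketch is illustrative only: the class and method names are invented, and the table covers just the codes discussed in this section (blanks for a successful call, AH and AT as exceptional but valid conditions, and anything unexpected routed to the error routine).

```java
// Illustrative sketch of DL/I status-code checking; not a real IMS API.
// Blanks mean the call succeeded; AH (missing SSA) and AT (I/O area too
// long) are exceptional but valid conditions that deserve their own
// routines; any other unexpected code should branch to the error routine.
public class StatusCheck {
    public enum Action { PROCEED, HANDLE_CONDITION, ERROR_ROUTINE }

    public static Action classify(String statusCode) {
        if (statusCode.trim().isEmpty()) {
            return Action.PROCEED;            // blanks: successful call
        }
        switch (statusCode) {
            case "AH":                        // required SSA is missing
            case "AT":                        // user I/O area is too long
                return Action.HANDLE_CONDITION;
            default:
                return Action.ERROR_ROUTINE;  // unexpected status code
        }
    }
}
```

A real program would replace the enum branches with the installation's own condition and error routines.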
Also note that logical child segments cannot be loaded into a HALDB PHDAM or
PHIDAM database. Logical child segments must be inserted later in an update run.
Any attempt to load a logical child segment in either a PHDAM or PHIDAM
database results in status code LF.
Error routines
If your program detects an error after checking for blanks and exceptional
conditions in the status code, it should branch to an error routine and print as
much information as possible about the error before terminating.
Two kinds of errors can occur in your program: programming errors and system or
I/O errors. Programming errors are usually your responsibility to find and fix.
These errors are caused by things like an invalid parameter, an invalid call, or an
I/O area that is too long. System or I/O errors are usually resolved by the system
programmer or the equivalent specialist at your installation.
Because every application program should have an error routine, and because each
installation has its own ways of finding and debugging program errors, you
probably have your own standard error routines.
A DBD describes the content and hierarchic structure of the physical or logical
database. DBDs also supply information to IMS to help in locating segments.
A PSB specifies the database segments an application program can access and the
functions it can perform on the data, such as read only, update, or delete. Because
an application program can access multiple databases, PSBs are composed of one
or more program control blocks (PCBs). The PSB describes the way a database is
viewed by your application program.
Figure 3 shows the normal relationship between application programs, PSBs, PCBs,
DBDs, and databases.
Figure 3. Normal relationship between programs, PSBs, PCBs, DBDs, and databases
Figure 4 on page 5 shows concurrent processing, which uses multiple PCBs for the
same database.
| Figure 5 on page 6 shows the structure of a call-level CICS program. See Figure 5
| on page 6 notes for a description of each program element depicted in the figure.
|
| Figure 5. Structure of a call-level CICS program
|
Notes to Figure 5:
1. I/O area. IMS passes segments to and from the program in the program's I/O
area.
2. PCB. IMS describes the results of each DL/I call in the database PCB mask.
| 3. One of the following:
| v Application interface block (AIB). If you choose to use the AIB, the AIB
| provides the program with addresses of the PCBs and return codes from
| the CICS-DL/I interface.
| v User interface block (UIB). If you choose not to use the AIB, the UIB
| provides the program with addresses of the PCBs and return codes from the
| CICS-DL/I interface.
| The horizontal line between number 3 (UIB) and number 4 (Program entry) in
| Figure 5 represents the end of the declarations section and the start of the
| executable code section of the program.
| Programs that access IMS DL/I through the ODBA interface follow a common
| logic flow in both single Resource Manager scenarios and multiple Resource
| Manager scenarios. The following steps describe the common logic flow for
| both scenarios:
| 1. I/O area. IMS passes segments to and from the application program in its I/O
| area.
| 2. PCB. IMS describes the results of each DL/I call in the database PCB mask.
| 3. Application interface block (AIB). The AIB provides the program with
| addresses of the PCBs and return codes from the ODBA to DL/I interface.
| 4. Program entry. Obtain and initialize the AIB.
| 5. Initialize the ODBA interface.
| 6. Schedule the PSB. This step identifies the PSB that your program will use and
| also provides a place for IMS to keep internal tokens.
| 7. Issue DL/I calls. Issue DL/I calls to read and update the database. The
| following calls are available:
| v Retrieve
| v Replace
| v Delete
| v Insert
| The logic flow for how the programmer commits changes for single Resource
| Manager scenarios follows. The programmer:
| 1. Commits database changes. No DL/I calls, including system service calls such
| as LOG or STAT, can be made between the commit and the termination of the
| deallocate PSB (DPSB) call.
| 2. Terminates the PSB.
| 3. Optional: Terminates the ODBA interface.
| 4. Returns to the environment that initialized the application program.
| The logic flow for how the programmer commits changes for multiple Resource
| Manager scenarios follows. The programmer:
| 1. Terminates the PSB.
| 2. Optional: Terminates the ODBA interface.
| 3. Commits changes.
| 4. Returns to the environment that initialized the application program.
| The programmer can make multiple allocate PSB (APSB) requests before
| terminating the ODBA interface. The ODBA interface needs to be initialized
| only once in the address space, and the programmer can then repeat the
| schedule/commit/end-schedule process as many times as needed.
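The ordering rules above (initialize the interface once, schedule the PSB, issue DL/I calls, commit, then terminate the PSB, with no DL/I calls permitted between the commit and the DPSB call) can be modeled as a small state machine. This Java sketch illustrates the single Resource Manager sequence only; the method names are hypothetical stand-ins, not the real ODBA interface.

```java
// Hypothetical model of the single-Resource-Manager ODBA call ordering.
// The key rule: once commit() runs, no DL/I call is allowed until the
// PSB is deallocated; after that, another APSB may be scheduled without
// re-initializing the ODBA interface.
public class OdbaFlow {
    private enum State { NEW, INITIALIZED, SCHEDULED, COMMITTED }
    private State state = State.NEW;

    public void initOdba()     { require(State.NEW); state = State.INITIALIZED; }
    public void schedulePsb()  { require(State.INITIALIZED); state = State.SCHEDULED; }

    public void dliCall(String function) {
        // Covers system service calls such as LOG or STAT as well.
        if (state != State.SCHEDULED)
            throw new IllegalStateException(
                "DL/I call " + function + " not allowed in state " + state);
    }

    public void commit()       { require(State.SCHEDULED); state = State.COMMITTED; }
    public void terminatePsb() { require(State.COMMITTED); state = State.INITIALIZED; }

    private void require(State expected) {
        if (state != expected)
            throw new IllegalStateException("expected " + expected + " but was " + state);
    }
}
```

Because terminatePsb() returns the model to the initialized state, the schedule/commit/terminate cycle can repeat without another initOdba(), matching the text above.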
The examples in this information use the medical hierarchy shown in Figure 6 on
page 9 and the bank hierarchies shown in Table 9 on page 12, Table 10 on page 12,
and Table 11 on page 13. The hierarchies used in the medical hierarchy example are
used with full-function databases and Fast Path data entry databases (DEDBs). The
bank hierarchies are an example of an application program used with main storage
databases (MSDBs). To understand these examples, familiarize yourself with the
hierarchies and segments that each hierarchy contains.
|
| Figure 6. Medical hierarchy
|
| Each piece of data represented in Figure 6 is called a segment in the hierarchy. Each
| segment contains one or more fields of information. The PATIENT segment, for
| example, contains all the information that relates strictly to the patient: the
| patient's identification number, name, and address.
| Definitions: A segment is the smallest unit of data that an application program can
| retrieve from the database. A field is the smallest unit of a segment.
| The PATIENT segment in the medical database is the root segment. The segments
| below the root segment are the dependents, or children, of the root. For example,
| ILLNESS, BILLING, and HOUSHOLD are all children of PATIENT. ILLNESS,
| BILLING, and HOUSHOLD are called direct dependents of PATIENT; TREATMNT
| and PAYMENT are also dependents of PATIENT, but they are not direct
| dependents, because they are at a lower level in the hierarchy.
| A database record is a single root segment (root segment occurrence) and all of its
| dependents. In the medical example, a database record is all of the information
| about one patient.
| Each database record has only one root segment occurrence, but it might have
| several occurrences at lower levels. For example, the database record for a patient
| contains only one occurrence of the PATIENT segment type, but it might contain
| several ILLNESS and TREATMNT segment occurrences for that patient.
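As a rough model of these definitions, the following Java sketch represents a database record as a tree of segment occurrences, using the segment names from the medical example. It is an aid to the terminology only, not a representation of how IMS stores data.

```java
// Minimal model of hierarchy terms: a database record is one root
// segment occurrence (PATIENT, in the medical example) plus all of
// its dependent segment occurrences.
import java.util.ArrayList;
import java.util.List;

public class SegmentOccurrence {
    final String type;
    final List<SegmentOccurrence> children = new ArrayList<>();

    public SegmentOccurrence(String type) { this.type = type; }

    // Add a dependent occurrence and return it so deeper levels can be built.
    public SegmentOccurrence add(String childType) {
        SegmentOccurrence child = new SegmentOccurrence(childType);
        children.add(child);
        return child;
    }

    // Count occurrences of one segment type anywhere in this database record.
    public int count(String wanted) {
        int n = type.equals(wanted) ? 1 : 0;
        for (SegmentOccurrence c : children) n += c.count(wanted);
        return n;
    }
}
```

Building one record with a single PATIENT root, two ILLNESS children, and several TREATMNT occurrences beneath them mirrors the example in the text: one root occurrence, several occurrences at lower levels.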
| The tables that follow show the layouts of each segment in the hierarchy.
| The segment’s field names are in the first row of each table. The number below
| each field name is the length in bytes that has been defined for that field.
| v PATIENT Segment
| Table 3 on page 10 shows the PATIENT segment.
| It has three fields:
| – The patient’s number (PATNO)
| – The patient’s name (NAME)
| – The patient's address (ADDR)
| PATIENT has a unique key field: PATNO. PATIENT segments are stored in
| ascending order based on the patient number. The lowest patient number in the
| database is 00001 and the highest is 10500.
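Because a segment is passed to the program as a fixed-layout I/O area, its fields can be picked out by offset. The Java sketch below assumes the field order given above (PATNO, NAME, ADDR) and a 5-byte PATNO, consistent with the 00001 to 10500 range; the 10-byte NAME and 30-byte ADDR lengths are assumptions for the example, not the lengths defined in the actual DBD.

```java
// Illustrative fixed-offset parse of a PATIENT segment I/O area.
// Field order (PATNO, NAME, ADDR) comes from the text; the NAME and
// ADDR lengths are assumed for this sketch only.
public class PatientSegment {
    static final int PATNO_LEN = 5;   // 00001..10500 fits in 5 bytes
    static final int NAME_LEN  = 10;  // assumed length
    static final int ADDR_LEN  = 30;  // assumed length

    final String patno, name, addr;

    public PatientSegment(String ioArea) {
        if (ioArea.length() != PATNO_LEN + NAME_LEN + ADDR_LEN)
            throw new IllegalArgumentException("unexpected segment length");
        patno = ioArea.substring(0, PATNO_LEN);
        name  = ioArea.substring(PATNO_LEN, PATNO_LEN + NAME_LEN).trim();
        addr  = ioArea.substring(PATNO_LEN + NAME_LEN).trim();
    }
}
```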
The two types of MSDBs are related and nonrelated. In related MSDBs, each segment
is “owned” by one logical terminal. The "owned" segment can only be updated by
the terminal that owns it. In nonrelated MSDBs, the segments are not owned by
logical terminals. “Related MSDBs” and “Nonrelated MSDBs” on page 12 illustrate
the differences between these types of databases.
Related MSDBs
Related MSDBs can be fixed or dynamic. In a fixed related MSDB, you can store
summary data about a particular teller at a bank. For example, you can have an
identification code for the teller's terminal. Then you can keep a count of that
teller's transactions and balance for the day. This type of application requires a
segment with three fields:
TELLERID A two-character code that identifies the teller
TRANCNT The number of transactions the teller processed that day
TELLBAL The teller's balance for the day
Table 9 shows what the segment for this type of application program looks like.
Table 9. Teller segment in a fixed related MSDB
TELLERID TRANCNT TELLBAL
In a dynamic related MSDB, you can store data summarizing the activity of all
bank tellers at a single branch. For example, this segment contains:
BRANCHNO The identification number for the branch
TOTAL The bank branch's current balance
TRANCNT The number of transactions for the branch on that day
DEPBAL The deposit balance, giving the total dollar amount of deposits for
the branch
WTHBAL The withdrawal balance, giving the dollar amount of the
withdrawals for the branch
Table 10 shows what the branch summary segment looks like in a dynamic related
MSDB.
Table 10. Branch summary segment in a dynamic related MSDB
BRANCHNO TOTAL TRANCNT DEPBAL WTHBAL
Nonrelated MSDBs
A nonrelated MSDB is used to store data that is updated by several terminals
during the same time period. For example, you might store data about an
Related Reading:
v If your installation uses IMS Database Manager, see IMS Version 10:
Communications and Connections Guide for information on writing applications
that access IMS databases.
v Information on DL/I EXEC commands is in the IMS Version 10: Application
Programming Guide.
Subsections:
v “Application program environments”
v “DL/I elements”
v “DL/I calls” on page 17
DL/I elements
The information in this section applies to all application programs that run in IMS.
The main elements in an IMS application program consist of the following:
v Program entry
v Program Communication Block (PCB) or Application Interface Block (AIB)
definition
v I/O area definition
v DL/I calls
v Program termination
Figure 7 on page 16 shows how these elements relate to each other. The numbers
on the right in Figure 7 on page 16 refer to the notes that follow.
Notes to Figure 7:
1. Program entry. IMS passes control to the application program with a list of
associated PCBs.
2. PCB or AIB. IMS describes the results of each DL/I call using the AIBTDLI
interface in the application interface block (AIB) and, when applicable, the
program communication block (PCB). To find the results of a DL/I call, your
program must use the PCB that is referenced in the call. To find the results of
the call using the AIBTDLI interface, your program must use the AIB.
Your application program can use the PCB address that is returned in the AIB
to find the results of the call. To use the PCB, the program defines a mask of
the PCB and can then reference the PCB after each call to determine the success
or failure of the call. An application program cannot change the fields in a PCB;
it can only check the PCB to determine what happened when the call was
completed.
3. Input/output (I/O) area. IMS passes segments to and from the program in the
program's I/O area.
4. DL/I calls. The program issues DL/I calls to perform the requested function.
5. Program Termination. The program returns control to IMS DB when it has
finished processing. In a batch program, your program can set the return code
and pass it to the next step in the job.
Recommendation: If your program does not use the return code in this way, it
is a good idea to set it to 0 as a programming convention. Your program can
use the return code for this same purpose in BMPs. (MPPs cannot pass return
codes.)
You can issue calls to perform transaction management functions (message calls)
and to obtain IMS TM system services (system service calls):
Related reading: The DL/I calls are discussed in detail in IMS Version 10:
Application Programming Guide.
| The status codes your application program should test for are those that indicate
| exceptional but valid conditions. Your application program should check for status
| codes that indicate that the call was successful, such as blanks. If IMS returns a
| status code that you did not expect, your program should branch to an error
| routine. For information about the status codes for the DL/I calls, see IMS:
| Messages and Codes Reference, Volume 4: IMS Component Codes.
In a typical program, you should test for status codes that apply only to Get calls.
Some status codes indicate exceptional conditions for other calls. When your
program is retrieving messages, there are situations that you should expect and for
which you should provide routines other than error routines. For example, QC
means that no additional input messages are available for your program in the
message queue, and QD means that no additional segments are available for this
message.
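The two status codes work together as an outer message loop and an inner segment loop. The following Java sketch simulates that pattern against an in-memory queue; the QC and QD status-code meanings follow the text, but the queue and all names are invented for illustration.

```java
// Simulation of the message-retrieval pattern: the outer loop ends when
// a get-unique-style call returns QC (no more input messages); the inner
// loop ends when a get-next-style call returns QD (no more segments in
// the current message). Each simulated message has at least one segment.
import java.util.List;

public class MessageLoop {
    public static int process(List<List<String>> queue) {
        int segmentsRead = 0;
        int msg = 0;
        while (true) {
            String guStatus = (msg < queue.size()) ? "  " : "QC";
            if (guStatus.equals("QC")) break;     // no more input messages
            List<String> segments = queue.get(msg++);
            int seg = 1;                          // GU delivered the first segment
            segmentsRead++;
            while (true) {
                String gnStatus = (seg < segments.size()) ? "  " : "QD";
                if (gnStatus.equals("QD")) break; // no more segments available
                seg++;
                segmentsRead++;
            }
        }
        return segmentsRead;
    }
}
```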
Error routines
If, after checking for blanks and exceptional conditions in the status code, you find
that there has been an error, your program should branch to an error routine and
print as much information as possible about the error before terminating. Print the
status code as well. Determining which call was being executed when the error
occurred, the parameter of the IMS call, and the contents of the PCB will be
helpful in understanding the error.
Two kinds of errors can occur. First, programming errors are usually your
responsibility; they are the ones you can find and fix. These errors are caused by
things like an invalid parameter, an invalid call, or an I/O area that is too long.
The other kind of error is something you cannot usually fix; this is a system or I/O
error. When your program has this kind of error, the system programmer or the
equivalent specialist at your installation should be able to help.
Because every application program should have an error routine available to it,
and because each installation has its own ways of finding and debugging program
errors, installations usually provide their own standard error routines.
Your EXEC DLI application uses EXEC DLI commands to read and update DL/I
databases. These applications can execute as pure batch, as a BMP program
running with DBCTL or DB/DC, or as an online CICS program using DBCTL.
Your EXEC DLI program can also issue system service commands when using
DBCTL.
| Subsections:
| v “Getting started with EXEC DLI”
Notes to Figure 8:
1. I/O areas. DL/I passes segments to and from the program in the I/O areas.
You may use a separate I/O area for each segment.
2. Key feedback area. DL/I passes, on request, the concatenated key of the
lowest-level segment retrieved to the key feedback area.
3. DL/I Interface Block (DIB). DL/I and CICS place the results of each
command in the DIB. The DIB contains most of the same information returned
in the DB PCB for programs using the call-level interface.
Note: The horizontal line between 3 and 4 represents the end of the
declarations section and the start of the executable code section of the
program.
4. Program entry. Control is passed to your program during program entry.
5. Issue EXEC DLI commands. Commands read and update information in the
database.
6. Check the status code. To find out the results of each command you issue,
you should check the status code in the DIB after issuing an EXEC DLI
command for database processing and after issuing a checkpoint command.
7. Issue checkpoint. Issue checkpoints as needed to establish places from which
to restart. Issuing a checkpoint commits database changes and releases
resources.
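Note 7 describes the typical batch pattern of taking a checkpoint at regular intervals so the program can restart from the last committed point. The Java sketch below models only the pacing logic; the 100-update interval is an arbitrary example, and a real program would issue an actual checkpoint command at that point.

```java
// Illustrative checkpoint pacing: after every CHKP_INTERVAL updates,
// take a checkpoint, which commits database changes and releases
// resources. The interval is an arbitrary example value.
public class CheckpointLoop {
    static final int CHKP_INTERVAL = 100;

    public static int run(int totalUpdates) {
        int checkpoints = 0;
        for (int done = 1; done <= totalUpdates; done++) {
            // ... read and update the database here ...
            if (done % CHKP_INTERVAL == 0) {
                checkpoints++;  // a real program issues the checkpoint command here
            }
        }
        return checkpoints;
    }
}
```

Choosing the interval is a design trade-off: frequent checkpoints shorten restart and release resources sooner, at the cost of more commit overhead.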
Requirement: CICS Transaction Server for z/OS runs with this version of IMS.
Unless a distinction needs to be made, all supported versions are referred to as CICS.
For a complete list of supported software, see the IMS Version 10: Release Planning
Guide.
|
| For additional information about the Java class libraries for IMS, see the topic
| “Hardware and software requirements” of the IMS Version 10: Release Planning
| Guide.
| Subsections:
| v “How Java application programs work with IMS databases”
| v “How Java application programs work with IMS transactions” on page 27
|
| How Java application programs work with IMS databases
| You can write Java application programs that access IMS database resources using
| either the IMS hierarchical database interface for Java or the JDBC interface.
| For additional information about programming Java applications to work with IMS
| databases, see the IMS Version 10: Application Programming Guide.
| Subsections:
| v “Comparison of hierarchical and relational databases”
| v “Overview of the IMS hierarchical database interface for Java” on page 27
| v “JDBC access to IMS” on page 27
| The name of an IMS segment becomes the table name in an SQL query, and the
| name of a field becomes the column name in the SQL query.
| This section compares the Dealership sample database, which is shipped with the
| Java API for IMS DB, to a relational representation of the database.
|
| Figure 9. Segments of the Dealership sample database
|
| The Dealer segment identifies a dealer that sells cars. The segment contains a
| dealer name in the field DLRNAME, and a unique dealer number in the field
| DLRNO.
| Dealers carry car types, each of which has a corresponding Model segment. A
| Model segment contains a type code in the field MODTYPE.
| Each car that is ordered for the dealership has an Order segment. A Stock segment
| is created for each car that is available for sale in the dealer’s inventory. When the
| car is sold, a Sales segment is created.
| The following shows a relational representation of the IMS database record shown
| in Figure 9.
| Important: This figure is provided to help you understand how to use JDBC calls
| in a hierarchical environment. The Java API for IMS DB does not change the
| structure of IMS data in any way.
|
|
| Figure 10. Relational representation of the Dealership sample database
|
| If a segment does not have a unique key, which is similar to a primary key in
| relational databases, view the corresponding relational table as having a generated
| primary key added to its column (field) list. An example of a generated primary
| key is in the Model table (segment) of Figure 10. Similar to referential integrity in
| relational databases, you cannot insert, for example, an Order (child) segment to
| the database without it being a child of a specific Model (parent) segment.
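The parent-child relationship described above can be sketched in plain Java. This is not the Java API for IMS DB; the class and the row layout are illustrative only, following the segment and field names of Figure 9.

```java
import java.util.*;

// Illustrative flattening of a Dealer -> Model hierarchy into relational-style
// rows. Model segments have no unique key of their own, so a generated
// primary key is added, as in the Model table (segment) of Figure 10.
public class DealershipFlatten {
    public static List<String[]> modelRows(String dlrno, List<String> modTypes) {
        List<String[]> rows = new ArrayList<>();
        int generatedKey = 0;
        for (String modType : modTypes) {
            // Each row carries the parent's key (DLRNO), mirroring how a child
            // segment is meaningless without its parent in the hierarchy.
            rows.add(new String[] { String.valueOf(++generatedKey), dlrno, modType });
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String[]> rows = modelRows("1234", Arrays.asList("SEDAN", "COUPE"));
        for (String[] r : rows)
            System.out.println("ModelKey=" + r[0] + " DLRNO=" + r[1] + " MODTYPE=" + r[2]);
    }
}
```

A child row with a missing or wrong parent key would be as meaningless as an Order segment inserted without a specific Model parent.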
| Also note that the field (column) names have been renamed. You can rename
| segments and fields to more meaningful names by using the DLIModel utility.
|
| Figure 11. Segment occurrences in the Dealership sample database
|
| The Dealer segment occurrences have dependent Model segment occurrences. The
| following figure shows the relational representation of the dependent model
| segment occurrences.
|
|
|
| Figure 12. Relational representation of segment occurrences in the Dealership database
|
| The following example shows the SELECT statement of an SQL call. Model is a
| segment name that is used as a table name in the query:
| SELECT * FROM Model
| In both of the preceding examples, Model and ModelTypeCode are alias names
| that you assign by using the DLIModel utility. These names likely will not be the
| same as the names that are defined in the DBD.
| See the IMS Version 10: Application Programming Guide for the database description
| (DBD) of the Dealership sample database.
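The renaming idea behind the DLIModel utility can be sketched as a simple lookup table. The alias names below are assumptions for illustration, not the utility's actual output.

```java
import java.util.*;

// Sketch of the DLIModel renaming idea: short IMS field names are mapped to
// meaningful SQL column aliases. Both the IMS names and the aliases here are
// illustrative, taken from the Dealership discussion above.
public class AliasMap {
    static final Map<String, String> FIELD_ALIASES = new HashMap<>();
    static {
        FIELD_ALIASES.put("DLRNO", "DealerNumber");
        FIELD_ALIASES.put("DLRNAME", "DealerName");
        FIELD_ALIASES.put("MODTYPE", "ModelTypeCode");
    }

    // An SQL query is written against the alias, not the IMS field name.
    public static String columnFor(String imsFieldName) {
        return FIELD_ALIASES.getOrDefault(imsFieldName, imsFieldName);
    }

    public static void main(String[] args) {
        System.out.println("SELECT " + columnFor("MODTYPE") + " FROM Model");
        // prints: SELECT ModelTypeCode FROM Model
    }
}
```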
| You can use either the IMS hierarchical database interface for Java or the JDBC
| interface to access IMS data. However, the IMS hierarchical database interface for
| Java offers more controlled access than the higher-level JDBC interface package
| provides.
| Related Reading: For detailed information about the classes in the IMS hierarchical
| database interface for Java, see the Java API specification for IMS under
| “Application programming APIs” in the Information Management Software for
| z/OS Solutions Information Center at http://publib.boulder.ibm.com/infocenter/
| imzic.
| For information about the subset of SQL keywords and SQL keyword usage, see
| the topic “SQL keywords and extensions for the JDBC driver for IMS” in the IMS
| Version 10: Application Programming API Reference.
| This information uses the Dealership sample applications that are shipped with the
| Java API for IMS DB to describe how to use the IMS JDBC driver to access an IMS
| database.
|
| How Java application programs work with IMS transactions
| You can write Java application programs that process IMS transactions using Java
| message processing (JMP) regions and Java batch processing (JBP) regions. These
| two IMS dependent regions provide a Java Virtual Machine (JVM) environment for
| Java applications.
| JMP regions and JBP regions operate like any other IMS dependent regions. A JMP
| region is analogous to an MPP region, and a JBP region is analogous to a
| non-message-driven BMP region. However, the fundamental difference between JMP
| and JBP regions and MPP and BMP regions is that JMP regions and JBP regions
| run Java application programs in a JVM.
| All IMS dependent regions are designed to support program switching, which
| means that a program can call another program, regardless of what region the
| program is stored in. For example, a program in an MPP region can call a program
| in a JMP region. Likewise, a program in a JMP region can call a program in an
| MPP region.
| Important: JMP and JBP regions are not necessary if your application runs in
| WebSphere Application Server, DB2 for z/OS, or CICS. JMP or JBP
| regions are needed only if your application will run in an IMS
| dependent region.
| The following figure shows a Java application that is running in a JMP or a JBP
| region. Calls from the JDBC interface or the IMS hierarchical database interface for
| Java are passed to the Java class libraries for IMS, which convert the calls to DL/I
| calls.
|
|
|
| Figure 13. JMP or JBP applications that use the Java class libraries for IMS
|
| JMP regions and JBP regions can run applications that are written in Java,
| object-oriented COBOL, or a combination of the two.
| JMP regions and JBP applications can access DB2 for z/OS databases in addition to
| IMS databases.
| For additional information about JMP and JBP regions, see the IMS Version 10:
| Application Programming Guide.
| Subsections:
| v “Java message processing (JMP) regions” on page 29
| v “Java batch processing (JBP) regions” on page 31
| JMP applications are flexible in how they process transactions and where they send
| the output. JMP applications send any output messages back to the message
| queues and process the next message with the same transaction code. The program
| runs until there are no more messages with the same transaction code. JMP
| applications share the following characteristics:
| v They are small.
| v They can produce output that is needed immediately.
| v They can access IMS or DB2 for z/OS data in a DB/DC environment and DB2
| for z/OS data in a DCCTL environment.
| JMP applications are started when IMS receives a message with a transaction code
| for the JMP application and schedules the message. JMP applications end when
| there are no more messages with that transaction code to process.
| Basic JMP application: A transaction begins when the application gets an input
| message. To get an input message, the application calls the getUniqueMessage
| method. After a message is processed, IMS commits and ends the transaction on
| behalf of the application. Subsequent getUniqueMessage calls can then be made.
| JMP application with rollback: A JMP application can roll back database
| processing and output messages any number of times during a transaction. A
| rollback call backs out all database processing and output messages to the most
| recent commit. The transaction must end with a commit call when the program
| issues a rollback call, even if no further database or message processing occurs
| after the rollback call.
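The commit and rollback rules above can be simulated in plain Java. This journal class is illustrative only; it is not the IMS transaction API.

```java
import java.util.*;

// Simulation of the rollback rule described above: a rollback backs out all
// work since the most recent commit, and the transaction must still end with
// a commit even if no further processing occurs after the rollback.
public class TxnJournal {
    private final List<String> committed = new ArrayList<>();
    private final List<String> pending = new ArrayList<>();

    public void update(String change) { pending.add(change); }

    public void commit() {          // make pending work permanent
        committed.addAll(pending);
        pending.clear();
    }

    public void rollback() {        // back out to the most recent commit
        pending.clear();
    }

    public List<String> committedWork() { return committed; }

    public static void main(String[] args) {
        TxnJournal txn = new TxnJournal();
        txn.update("insert Order");
        txn.commit();               // "insert Order" is now permanent
        txn.update("insert Sales");
        txn.rollback();             // "insert Sales" is backed out
        txn.commit();               // transaction still ends with a commit
        System.out.println(txn.committedWork());  // prints: [insert Order]
    }
}
```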
| JMP application that accesses IMS or DB2 for z/OS data: When a JMP
| application accesses only IMS data, it needs to open a database connection only
| once to process multiple transactions, as shown in “Basic JMP application” on page
| 29. However, a JMP application that accesses DB2 for z/OS data must open and
| close a database connection for each message that is processed.
| The following skeleton code is valid for DB2 for z/OS database access, IMS
| database access, or both DB2 for z/OS and IMS database access.
| public static void main(String args[]) {
|
|   while (MessageQueue.getUniqueMessage(...)) {  //Get input message, which
|                                                 //starts the transaction
|
|     conn = DriverManager.getConnection(...);    //Establish DB connection
|
|     results = statement.executeQuery(...);      //Perform DB processing
|     ...
|     MessageQueue.insertMessage(...);            //Send output messages
|     ...
|     conn.close();                               //Close DB connection
|     ...
|     IMSTransaction.getTransaction().commit();   //Commit and end transaction
|   }
| }
| Related Reading: For more information about accessing DB2 for z/OS data from a
| JMP application, see IMS Version 10: Application Programming Guide.
| JBP application with rollback: Similarly to JMP applications, JBP applications can
| also roll back database processing and output messages. A final commit call is
| required before the application can end, even if no further database processing
| occurs or output messages are sent after the last rollback call.
| JBP application that accesses DB2 for z/OS or IMS data: Like a JBP application
| that accesses IMS data, a JBP application that accesses DB2 for z/OS data connects
| to a database, performs database processing, periodically commits, and disconnects
| from the database at the end of the application.
| Related Reading: For more information about accessing DB2 for z/OS data from a
| JBP application, see IMS Version 10: Application Programming Guide.
| The following skeleton code is valid for DB2 for z/OS database access, IMS
| database access, or both DB2 for z/OS and IMS database access.
| public void doBegin() ... {                   //Application logic runs in the
|                                               //doBegin method
|   conn = DriverManager.getConnection(...);    //Establish DB connection
|   repeat {
|     repeat {
|       results = statement.executeQuery(...);  //Perform DB processing
|       ...
|       MessageQueue.insertMessage(...);        //Send output messages
|       ...
|     }
|     IMSTransaction.getTransaction().commit(); //Periodic commits divide work
|   }
|
|   conn.close();                               //Close DB connection
|
|   IMSTransaction.getTransaction().commit();   //Commit the DB
|                                               //connection close
|   return;
| }
| JBP application to access GSAM data: A JBP application that accesses GSAM
| data is able to connect to a database, perform database processing, periodically
| commit, and disconnect from the database at the end of the application. GSAM
| data is frequently referred to as z/OS data sets or, more commonly, as flat files.
| This kind of data is non-hierarchical in structure.
| The following skeleton code is for a JBP application that accesses GSAM data.
| GSAMConnection connection = GSAMConnection.createInstance(...); //Establish DB
| //connection
| repeat {
| GSAMRecord record = connection.getNext(...); //Perform DB processing
| }
| connection.close(); //Close DB connection
|
| IMSTransaction.getTransaction().commit(); //Commit the DB connection close
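As a rough analogy, the open-read-close flow of the GSAM skeleton resembles sequential reading of a flat file with plain Java I/O. GSAMConnection and GSAMRecord are IMS classes; the code below only mimics the flow with standard-library classes.

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

// Analogy for the GSAM pattern above: "connect" once, read records
// sequentially until end of data, then close. A BufferedReader over a flat
// file stands in for GSAMConnection.getNext(...) over a GSAM data set.
public class FlatFileScan {
    public static List<String> readAll(Path dataSet) throws IOException {
        List<String> records = new ArrayList<>();
        try (BufferedReader in = Files.newBufferedReader(dataSet)) { // "connect"
            String record;
            while ((record = in.readLine()) != null) {               // getNext(...)
                records.add(record);
            }
        }                                                            // close
        return records;
    }

    // Self-contained demo: write three records to a temporary file, read them
    // back sequentially, and clean up.
    public static List<String> demo() {
        try {
            Path p = Files.createTempFile("gsam", ".dat");
            Files.write(p, Arrays.asList("REC1", "REC2", "REC3"));
            List<String> records = readAll(p);
            Files.delete(p);
            return records;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());  // prints: [REC1, REC2, REC3]
    }
}
```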
Subsections:
v “Storing and processing information in a database”
v “Tasks for developing an application” on page 40
Example: Suppose that a medical clinic keeps separate files for each of its
departments, such as the clinic department, the accounting department, and the
ophthalmology department:
v The clinic department keeps data about each patient who visits the clinic, such
as:
Identification number
Name
Address
Illnesses
Date of each illness
Date patient came to clinic for treatment
Treatment given for each illness
Doctor that prescribed treatment
Charge for treatment
v The accounting department also keeps information about each patient. The
information that the accounting department might keep for each patient is:
Identification number
Name
Address
Charge for treatment
Amount of payments
If each of these departments keeps separate files, each department uses only the
data that it needs, but much of the data is redundant. For example, every
department in the clinic uses at least the patient's number, name, and address.
Updating the data is also a problem, because if a department changes a piece of
data, the same data must be updated in each separate file. Therefore, it is difficult
to keep the data in each department's files current. Current data might exist in one
file while defunct data remains in another file.
Using a combined file solves the updating problem, because all the data is in one
place, but it creates a new problem: the programs that process this data must
access the entire file record to get to the part that they need. For example, to
process only the patient's number, charges, and payments, an accounting program
must access all of the other fields also. In addition, changing the format of any of
the fields within the patient's record affects all the application programs, not just
the programs that use that field.
Using combined files can also involve security risks, because all of the programs
have access to all of the fields in a record.
In addition, storing data in a database has two advantages that neither of the other
ways has:
v If you change the format of part of a database record, the change does not affect
the programs that do not use the changed information.
v Programs are not affected by how the data is stored.
Because the program is independent of the physical data, a database can store all
the data only once and yet make it possible for each program to use only the data
that it needs. In a database, what the data looks like when it is stored is different
from what it looks like to an application program.
| Database hierarchies
| The examples in this information use the medical hierarchy shown in “Database
| hierarchy examples” on page 8.
For example, the DBD for the medical database hierarchy shown in Figure 6 on
page 9 describes the physical structure of the hierarchy and each of the six
segment types in the hierarchy: PATIENT, ILLNESS, TREATMNT, BILLING,
PAYMENT, and HOUSHOLD.
Related Reading: For more information on generating DBDs, see IMS Version 10:
Database Utilities Reference.
The data structures that are available to the program contain only segments that
the program is sensitive to. The PCB also defines how the application program is
allowed to process the segments in the data structure: whether the program can
only read the segments, or whether it can also update them.
To obtain the highest level of data availability, your PCBs should request the
fewest number of sensitive segments and the least capability needed to complete
the task.
All the DB PCBs for a single application program are contained in a program
specification block (PSB). A program might use only one DB PCB (if it processes only
one data structure) or it might use several DB PCBs, one for each data structure.
Related Reading: For more information on generating PSBs, see IMS Version 10:
Database Utilities Reference.
A program that updates the database with information on patients' illnesses and
treatments, in contrast, would need to process the PATIENT, ILLNESS, and
TREATMNT segments. You could define the data structure shown in Figure 15 on
page 39 for this program.
Sometimes a program needs to process all of the segments in the database. When
this is true, the program's view of the database as defined in the DB PCB is the
same as the database hierarchy that is defined in the DBD.
Related Reading: For more information, see IMS Version 10: Database Administration
Guide.
An application program can read and update a database. When you update a
database, you can replace, delete, or add segments. In IMS, you indicate in the
DL/I call the segment you want to process, and whether you want to read or
update it. In CICS, you can indicate what you want using either a DL/I call or an
EXEC DLI command.
Developing specifications
Developing specifications involves defining what your application will do, and
how it will be done. The task of developing specifications is not described in this
information because it depends entirely on the specific application and your
standards.
Subsections:
v “An overview of application design”
v “Identifying application data” on page 45
v “Designing a local view” on page 50
The purpose of this overview is to give you a frame of reference so that you can
understand where the techniques and guidelines explained in this section fit into
the process. The order in which you perform the tasks described here, and the
importance you give to each one, depend on your site. Also, the individuals
involved in each task, and their titles, might differ depending on the site. The tasks
are as follows:
v Establish your standards
Throughout the design process, be aware of your established standards. Some of
the areas that standards are usually established for are:
– Naming conventions (for example, for databases and terminals)
– Formats for screens and messages
– Control of and access to the database
– Programming conventions (for common routines and macros)
Setting up standards in these areas is usually an ongoing task that is the
responsibility of database and system administrators.
v Follow your security standards
Security protects your resources from unauthorized access and use. As with
defining standards, designing an adequate security system is often an ongoing
task. As an application is modified or expanded, often the security must be
changed in some way also. Security is an important consideration in the initial
stages of application design.
Establishing security standards and requirements is usually the responsibility of
system administration. These standards are based on the requirements of your
applications.
Some security concerns are:
– Access to and use of the databases
– Access to terminals
When analyzing the required application data, you can categorize the data as
either an entity or a data element.
When you store this data in an IMS database, groups of data elements are potential
segments in the hierarchy. Each data element is a potential field in that segment.
Subsections:
v “Listing data elements” on page 46
v “Naming data elements” on page 47
v “Documenting application data” on page 48
Suppose that one of the education company's requirements is for each Ed Center to
print weekly current rosters for all classes at the Ed Center. The current roster is to
give information about the class and the students enrolled in the class.
Headquarters wants the current rosters to be in the format shown in Figure 16.
[Figure 16 (excerpt): roster heading CHICAGO 01/04/04; class status summary
CONFIRMED = 30, WAIT-LISTED = 1, CANCELED = 2]
To list the data elements for a particular business process, look at the required
output. The current roster shown in Figure 16 is the roster for the class, “Transistor
Theory” to be given in the Chicago Ed Center, starting on January 14, 2004, for ten
days. Each course has a course code associated with it—in this case, 41837. The
code for a particular course is always the same. For example, if Transistor Theory
is also offered in New York, the course code is still 41837. The roster also gives the
names of the instructors who are teaching the course. Although the example only
shows one instructor, a course might require more than one instructor.
For each student, the roster keeps the following information: a sequence number
for each student, the student's name, the student's company (CUST), the company's
location, the student's status in the class, and the student's absences and grade. All
the above information on the course and the students is input information.
The current date (the date that the roster is printed) is displayed in the upper right
corner (01/04/04). The current date is an example of output-only data; it is
generated by the operating system and is not stored in the database.
The bottom-left corner gives a summary of the class status. This data is not
included in the input data. These values are determined by the program during
processing.
After you have listed the data elements, choose the major entity that these
elements describe. In this case, the major entity is class. Although a lot of
information exists about each student and some information exists about the
course in general, together all this information relates to a specific class. If the
information about each student (for example, status, absence, and grade) is not
related to a particular class, the information is meaningless. This holds true for the
data elements at the top of the list as well: The Ed Center, the date the class starts,
and the instructor mean nothing unless you know what class they describe.
Before you begin naming data elements, be aware of the naming standards that
you are subject to. When you name data elements, use the most descriptive names
possible. Remember that, because other applications probably use at least some of
the same data, the names should mean the same thing to everyone. Try not to limit
the name's meaning only to your application.
Recommendation: Use global names rather than local names. A global name is a
name whose meaning is clear outside of any particular application. A local name is
a name that, to be understood, must be seen in the context of a particular
application.
One of the problems with using local names is that you can develop synonyms,
two names for the same data element.
Example: In the current roster example, suppose the student's company was
referred to simply as “company” instead of “customer”. But suppose the
accounting department for the education company used the same piece of data
and called it “customer”. “Company” and “customer” would then be synonyms
for the same data element.
When you choose data element names, use qualifiers so that each name can mean
only one thing.
Example: Suppose Headquarters, for each course that is taught, assigns a number
to the course as it is developed and calls this number the “sequence number”. The
Ed Centers, as they receive student enrollments for a particular class, assign a
number to each student as a means of identification within the class. The Ed
Centers call this number the “sequence number”. Thus Headquarters and the Ed
Centers are using the same name for two separate data elements. This is called a
homonym. You can solve the homonym problem by qualifying the names. The
number that Headquarters assigns to each course can be called “course code”
(CRSCODE), and the number that the Ed Centers assign to their students can be
called “student sequence number” (STUSEQ#).
Choose data element names that identify the element and describe it precisely.
Make your data element names:
Unique The name is clearly distinguishable from other
names.
Self-explanatory The name is easily understood and recognized.
Concise The name is descriptive in a few words.
Universal The name means the same thing to everyone.
You should also record control information about the data. Such information
should address the following questions:
v What action should the program take when the data it attempts to access is not
available?
v If the format of a particular data element changes, which business processes
does that affect? For example, if an education database has as one of its data
elements a five-digit code for each course, and the code is changed to six digits,
which business processes does this affect?
v Where is the data now? Know the sources of the data elements required by the
application.
v Which business processes make changes to a particular data element?
v Are there security requirements about the data in your application? For example,
you would not want information such as employees' salaries available to
everyone.
v Which department owns and controls the data?
One way to gather and record this information is to use a form similar to the one
shown in Table 13. The amount and type of data that you record depends on the
standards that you are subject to. For example, Table 13 lists the ID number, data
element name, length, character format, allowed values, null values, default value,
and the number of occurrences.
Table 13. Example of data elements information form

ID #  Data element name  Length    Char. format  Allowed values    Null values  Default value
5     Course Code        5 bytes   Hexadecimal   0010090000        00000        N/A
25    Status             4 bytes   Alphanumeric  CONF, WAIT, CANC  blanks       WAIT
36    Student Name       20 bytes  Alphanumeric  Alpha only        blanks       N/A

Number of occurrences:
v Course Code: there are 200 courses in the curriculum. An average of 10 are new
or revised per year, and an average of 5 are dropped per year.
v Status: 1 per student.
v Student Name: there are 3 to 100 students per class, with an average of 40 per
class.
Definitions: A data aggregate is a group of data elements. When you have grouped
data elements by the entity they describe, you can determine the relationships
between the data aggregates. These relationships are called mappings. Based on the
mappings, you can design a conceptual data structure for the business process. You
should document this process as well.
Data structuring can be done in many different ways. The method explained in
this section is one example.
Subsections:
v “Grouping data elements into hierarchies”
v “Determining mappings” on page 56
Data elements have values and names. In the student data elements example, the
values are a particular student's sequence number, the student's name, company,
company location, the student's status in the class, the student's absences, and
grade. The names of the data aggregate are not unique; they describe all the
occurrences of the aggregate.
As you group data elements into data aggregates and data structures, look at the
data elements that make up each group and choose one or more data elements that
uniquely identify that group. This is the data aggregate's controlling key, which is
the data element or group of data elements in the aggregate that uniquely
identifies the aggregate. Sometimes you must use more than one data element for
the key in order to uniquely identify the aggregate.
By following the three steps explained in this section, you can develop a
conceptual data structure for a business process's data. However, you are not
developing the logical data structure for the program that performs the business
process. The three steps are:
1. Separate repeating data elements in a single occurrence of the data aggregate.
2. Separate duplicate values in multiple occurrences of the data aggregate.
3. Group each data element with its controlling keys.
The data elements defined as multiple are the elements that repeat. Separate the
repeating data elements by shifting them to a lower level. Keep data elements with
their controlling keys.
The data elements that repeat for a single class are: STUSEQ#, STUNAME, CUST,
LOCTN, STATUS, ABSENCE, and GRADE. INSTRS is also a repeating data
element, because some classes require two instructors, although this class requires
only one.
When you separate repeating data elements into groups, you have the structure
shown in Figure 17 on page 52.
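Step 1 can be sketched in plain Java: the repeating per-student elements are shifted into a lower-level group, and each group keeps the class's controlling keys. The record layout below is illustrative; only the field names come from the roster example.

```java
import java.util.*;

// Sketch of step 1: repeating data elements are separated by shifting them to
// a lower level. Each Student aggregate inherits the class's controlling keys
// (EDCNTR, DATE, CRSCODE) and adds its own key element, STUSEQ#.
public class Step1Shift {
    public static List<Map<String, String>> shiftStudents(
            Map<String, String> classKeys, List<String> stuNames) {
        List<Map<String, String>> studentAggregates = new ArrayList<>();
        int stuSeq = 0;
        for (String name : stuNames) {
            Map<String, String> student = new LinkedHashMap<>(classKeys); // keep controlling keys
            student.put("STUSEQ#", String.valueOf(++stuSeq));             // controlling key element
            student.put("STUNAME", name);
            studentAggregates.add(student);
        }
        return studentAggregates;
    }

    public static void main(String[] args) {
        Map<String, String> classKeys = new LinkedHashMap<>();
        classKeys.put("EDCNTR", "CHICAGO");
        classKeys.put("DATE", "01/14/04");
        classKeys.put("CRSCODE", "41837");
        System.out.println(shiftStudents(classKeys, Arrays.asList("ADAMS, J.", "BAKER, R.")));
    }
}
```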
Figure 17 shows these aggregates with the keys indicated with leading asterisks (*).
The keys for the data aggregates are shown in Table 15.
Table 15. Data aggregates and keys for current roster after step 1
Data aggregate Keys
Course aggregate EDCNTR, DATE, CRSCODE
Student aggregate EDCNTR, DATE, CRSCODE, STUSEQ#
Instructor aggregate EDCNTR, DATE, CRSCODE, INSTRS
The asterisks in Figure 17 identify the key data elements. For the Class aggregate,
it takes multiple data elements to identify the course, so you need multiple data
elements to make up the key. The data elements that comprise the Class aggregate
are:
v Controlling key element, STUSEQ#
v STUNAME
v CUST
v LOCTN
v STATUS
v ABSENCE
v GRADE
Along with these keys inherited from the root segment, Course aggregate:
v EDCNTR
v DATE
v CRSCODE
After you have shifted repeating data elements, make sure that each element is in
the same group as its controlling key. INSTRS is separated from the group of data
elements describing a student because the information about instructors is
unrelated to the information about the students. The student sequence number
does not control who the instructor is.
In the example shown in Figure 17 on page 52, the Student aggregate and
Instructor aggregate are both dependents of the Course aggregate. A dependent
aggregate's key includes the concatenated keys of all the aggregates above the
dependent aggregate. This is because a dependent's controlling key does not mean
anything if you don't know the keys of the higher aggregates. For example, if you
knew that a student's sequence number was 4, you would be able to find out all
the information about the student associated with that number. This number
would be meaningless, however, if it were not associated with a particular course.
But, because the key for the Student aggregate is made up of Ed Center, date, and
course code, you can deduce which class the student is in.
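The concatenated-key idea above can be sketched in plain Java; the separator character is an assumption for illustration.

```java
// Sketch of a dependent aggregate's concatenated key: the Student aggregate's
// full key is the Course aggregate's key (EDCNTR, DATE, CRSCODE) plus the
// Student aggregate's own controlling key element (STUSEQ#).
public class ConcatenatedKey {
    public static String studentKey(String edCntr, String date, String crsCode, int stuSeq) {
        return String.join("|", edCntr, date, crsCode, String.valueOf(stuSeq));
    }

    public static void main(String[] args) {
        // STUSEQ# 4 alone is ambiguous; with the inherited keys it names one
        // student in one specific class:
        System.out.println(studentKey("CHICAGO", "01/14/04", "41837", 4));
        // prints: CHICAGO|01/14/04|41837|4
    }
}
```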
In this step, compare the two occurrences and shift the fields with duplicate values
(TRANS THEORY and so on) to a higher level. If you need to, choose a controlling
key for aggregates that do not yet have keys.
In Table 16 on page 53, CRSNAME, CRSCODE, and LENGTH are the fields that
have duplicate values. Much of this process is intuitive. Student status and grade,
although they can have duplicate values, should not be separated because they are
not meaningful values by themselves. These values would not be used to identify a
particular student. This becomes clear when you remember to keep data elements
with their controlling keys. When you separate duplicate values, you have the
structure shown in Figure 18.
Step 3. Grouping data elements with their controlling keys: This step is often a
check on the first two steps. (Sometimes the first two steps have already done
what this step instructs you to do.)
At this stage, make sure that each data element is in the group that contains its
controlling key. The data element should depend on the full key. If a data element
does not depend on the full key, move it to the group whose key it does depend on.
In this example, CUST and LOCTN do not depend on the STUSEQ#. They are
related to the student, but they do not depend on the student. They identify the
company and company address of the student.
CUST and LOCTN are not dependent on the course, the Ed Center, or the date,
either. They are separate from all of these things. Because a student is only
associated with one CUST and LOCTN, but a CUST and LOCTN can have many
students attending classes, the CUST and LOCTN aggregate should be above the
student aggregate.
Figure 19 shows what the structure looks like when you separate CUST and
LOCTN, with the keys indicated by leading asterisks (*).
The keys for the data aggregates are shown in Table 17.
Table 17. Data aggregates and keys for current roster after step 3
Data aggregate Keys
Course aggregate CRSCODE
Class aggregate CRSCODE, EDCNTR, DATE
Customer aggregate CUST, LOCTN
Determining mappings
When you have arranged the data aggregates into a conceptual data structure, you
can examine the relationships between the data aggregates. A mapping between
two data aggregates is the quantitative relationship between the two. The reason
you record mappings is that they reflect relationships between segments in the
data structure that you have developed. If you store this information in an IMS
database, the DBA can construct a database hierarchy that satisfies all the local
views, based on the mappings. In determining mappings, it is easier to refer to the
data aggregates by their keys, rather than by their collected data elements.
The two possible relationships between any two data aggregates are:
v One-to-many
For each segment A, one or more occurrences of segment B exist. For example,
each class maps to one or more students.
Mapping notation shows this in the following way:
COURSE SCHEDULE
DATE LOCATION
APRIL 14 BOSTON
APRIL 21 CHICAGO
.
.
.
NOVEMBER 18 LOS ANGELES
1. Gather the data. Table 18 on page 58 lists the data elements and two
occurrences of the data aggregate.
2. Analyze the data relationships. First, group the data elements into a conceptual
data structure.
a. Separate repeating data elements in one occurrence of the data aggregate by
shifting them to a lower level, as shown in Figure 21.
1. Gather the data. Table 20 lists the data elements and two occurrences of the
data aggregate.
Table 20. Instructor skills data elements
Data elements Occurrence 1 Occurrence 2
INSTR REYNOLDS, P.W. MORRIS, S. R.
CRSCODE multiple multiple
CRSNAME multiple multiple
2. Analyze the data relationships. First, group the data elements into a conceptual
data structure.
a. Separate repeating data elements in one occurrence of the data aggregate by
shifting them to a lower level, as shown in Figure 23.
b. Separate any duplicate values in the two occurrences of the data aggregate.
No duplicate values exist in this data aggregate.
c. Group data elements with their keys.
INSTRUCTOR SCHEDULES
1. Gather the data. Table 21 lists the data elements and two occurrences of the
data aggregate.
Table 21. Instructor schedules data elements
Data elements Occurrence 1 Occurrence 2
INSTR BENSON, R. J. MORRIS, S. R.
CRSNAME multiple multiple
CRSCODE multiple multiple
EDCNTR multiple multiple
DATE(START) multiple multiple
2. Analyze the data relationships. First, group the data elements into a conceptual
data structure.
a. Separate repeating data elements in one occurrence of the data aggregate by
shifting data elements to a lower level as shown in Figure 25.
Subsections:
v “Overview of APPC and LU 6.2”
v “Application program types”
v “Application objectives” on page 65
v “Choosing conversation attributes” on page 65
v “Conversation type” on page 66
v “Conversation state” on page 67
v “Synchronization level” on page 67
v “Distributed sync point” on page 68
v “Application programming interface for LU type 6.2” on page 72
v “LU 6.2 partner program design” on page 73
A modified standard DL/I application program receives its messages using DL/I
GU calls to the I/O PCB and issues output responses using DL/I ISRT calls. CPI
Communications calls can also be used to allocate new conversations and to send
and receive data for them.
Related Reading: For a list of the CPI Communications calls, see Common
Programming Interface Communications Reference.
Use a modified standard DL/I application program when you want to use an
existing standard DL/I application program to establish a conversation with
another LU 6.2 device or the same network destination. The standard DL/I
application program is optionally modified and uses new functions, new
application and transaction definitions, and modified DL/I calls to initiate LU 6.2
application programs. Program calls and parameters are available to use the
IMS-provided implicit API and the CPI Communications explicit API.
Application objectives
Each application type has a different purpose, and its ease of use varies depending
on whether the program is a standard DL/I, modified standard DL/I, or CPI
Communications driven application program. Table 22 on page 65 lists the purpose
and ease of use for each application type (standard DL/I, modified standard DL/I,
and PI-C driven). This information must be balanced against IMS resource use.
Table 22. Using application programs in APPC

                            Ease of use
Purpose of       Standard DL/I   Modified standard   PI-C driven
application      program         DL/I program        program
Inquiry          Easy            Neutral             Very Difficult
Data Entry       Easy            Easy                Difficult
Bulk Transfer    Easy            Easy                Neutral
Cooperative      Difficult       Difficult           Desirable
Distributed      Difficult       Neutral             Desirable
High Integrity   Neutral         Neutral             Desirable
Client Server    Easy            Neutral             Very Difficult
Synchronous conversation
A conversation is synchronous if the partner waits for the response on the same
conversation used to send the input data.
For examples of transaction flow, see “LU 6.2 flow diagrams” on page 74.
Asynchronous conversation
A conversation is asynchronous if the partner program normally deallocates a
conversation after sending the input data. Output is sent to the TP name of
DFSASYNC.
Example:
MC_ALLOCATE TPN(OTHERTXN)
MC_SEND_DATA 'THIS MUST BE A MESSAGE SWITCH, IMS COMMAND'
MC_SEND_DATA 'OR A NON-RESP NON-CONV TRANSACTION'
MC_DEALLOCATE
For examples of transaction flow, see “LU 6.2 flow diagrams” on page 74.
Conversation type
The APPC conversation type defines how data is passed to and retrieved from the
APPC verbs. It is similar in concept to file blocking and affects both ends of the
conversation.
Related Reading: For more information on basic and mapped conversations, see
v Systems Network Architecture: LU 6.2 Reference: Peer Protocols and
v Systems Network Architecture: Transaction Programmer's Reference Manual for LU
Type 6.2
Conversation state
CPI Communications uses conversation state to determine what the next set of
actions will be. Examples of conversation states are:
RESET The initial state before communications begin.
SEND The program can send or optionally receive.
RECEIVE The program must receive or abort.
CONFIRM The program must respond to a partner.
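The transitions between these states can be sketched as a small table-driven check. The transitions below are a simplified, illustrative subset of the CPI Communications state rules, not the full specification:

```python
# Simplified sketch of CPI Communications conversation states.
# Only a small, illustrative subset of the real transition rules
# is modeled here.
ALLOWED = {
    "RESET":   {"ALLOCATE": "SEND"},       # communications begin
    "SEND":    {"SEND_DATA": "SEND",       # program may keep sending
                "RECEIVE": "RECEIVE"},     # or switch to receiving
    "RECEIVE": {"RECEIVE": "RECEIVE",
                "DEALLOCATE": "RESET"},
    "CONFIRM": {"CONFIRMED": "RECEIVE"},   # must respond to the partner
}

def next_state(state, verb):
    """Return the next conversation state, or raise if the verb
    is not valid in the current state."""
    try:
        return ALLOWED[state][verb]
    except KeyError:
        raise ValueError(f"{verb} not valid in state {state}")
```

A program in RESET state that issues ALLOCATE moves to SEND state; issuing a send-type verb while in RECEIVE state is rejected, which is exactly the kind of check CPI Communications performs.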
Synchronization level
The APPC synchronization level defines the protocol that is used when changing
conversation states. APPC and IMS support the following sync_level values:
NONE Specifies that the programs do not issue calls or recognize returned
parameters relating to synchronization.
CONFIRM Specifies that the programs can perform confirmation processing
on the conversation.
SYNCPT Specifies that the programs participate in coordinated commit
processing on resources that are updated during the conversation
under the RRS/MVS recovery platform. A conversation with this
level is also called a protected conversation.
Application programmers can now develop APPC application programs (local and
remote) and remote OTMA application programs that use RRS/MVS as the
sync-point manager, rather than IMS. This enhancement enables resources across
multiple platforms to be updated and recovered in a coordinated manner.
The final participant in this resource recovery protocol is the application program,
the program accessing and updating protected resources. The application program
decides whether the data is to be committed or aborted and relates this decision to
the sync-point manager. The sync-point manager then coordinates the actions in
support of this decision among the resource managers.
After the sync-point manager has gathered all the votes, phase two begins. If all
votes are to commit the changes, then the phase two action is commit. Otherwise,
phase two becomes a backout. System failures, communication failures, resource
manager failures, or application failures are not barriers to the completion of the
two-phase commit process.
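The voting protocol just described can be sketched in a few lines of Python. The resource-manager interface here is hypothetical and merely stands in for what RRS/MVS coordinates among real resource managers:

```python
class ToyRM:
    """Hypothetical resource manager used only for illustration."""
    def __init__(self, vote):
        self.vote, self.outcome = vote, None
    def prepare(self):
        return self.vote        # phase 1: vote "commit" or "backout"
    def commit(self):
        self.outcome = "commit"
    def backout(self):
        self.outcome = "backout"

def two_phase_commit(resource_managers):
    """Coordinator sketch: phase 1 gathers votes; phase 2 commits
    only if every participant voted to commit, otherwise backs out."""
    votes = [rm.prepare() for rm in resource_managers]   # phase 1
    if all(v == "commit" for v in votes):
        for rm in resource_managers:                     # phase 2: commit
            rm.commit()
        return "committed"
    for rm in resource_managers:                         # phase 2: backout
        rm.backout()
    return "backed out"
```

A single "backout" vote in phase 1 turns phase 2 into a backout for every participant, which is the behavior the text describes.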
Notes:
1. The application and IMS make a connection.
2. IMS expresses protected interest in the work started by the application. This
tells RRS/MVS that IMS will participate in the 2-phase commit process.
3. The application makes a read request to an IMS resource.
4. Control is returned to the application following its read request.
5. The application updates a protected resource.
6. Control is returned to the application following its update request.
7. The application requests that the update be made permanent by way of the
SRRCMIT call.
8. RRS/MVS calls IMS to do the prepare (phase 1) process.
9. IMS returns to RRS/MVS with its vote to commit.
10. RRS/MVS calls IMS to do the commit (phase 2) process.
11. IMS informs RRS/MVS that it has completed phase 2.
12. Control is returned to the application following its commit request.
Restriction:
v Extended Recovery Facility (XRF)
Running protected conversations in an IMS-XRF environment does not
guarantee that the alternate system can resume and resolve any unfinished work
started by the active system. This process is not guaranteed because a failed
resource manager must re-register with its original RRS system if the RRS is still
available when the resource manager restarts. Only if the RRS on the active
system is not available can an XRF alternate register with another RRS in the
sysplex and obtain the incomplete unit of recovery data of the failing active.
Recommendation: Because IMS retains indoubt units of recovery indefinitely
until they are resolved, switch back to the original active system as soon as
possible to pick up the unit-of-recovery information and to resolve and
complete all the work of the resource managers involved. If this is not possible,
the indoubt units of recovery can be resolved using commands.
v Remote Site Recovery (RSR)
Active systems tracked by a remote system in an RSR environment can
participate in protected conversations, although any indoubt units of recovery
that remain after a takeover to the remote site must be resolved using
commands. This is because the remote site is probably not part of the active
sysplex, so the new IMS cannot acquire unfinished unit-of-recovery information
from RRS. IMS provides commands to interrogate protected conversation work
and to resolve the unfinished units of recovery if necessary.
v Batch and Non-Message-Driven BMPs in a DBCTL Environment
Distributed Sync Point does not support the IMS batch environment. In a
DBCTL environment, there are no inbound protected conversations possible.
However, a BMP in a DBCTL environment can allocate an outbound protected
conversation, which will be supported by Distributed Sync Point and RRS/MVS.
Implicit API
The implicit API accesses an APPC conversation indirectly. This API uses the
standard DL/I calls (GU, ISRT, PURG) to send and receive data. It allows application
programs that are not specific to LU 6.2 protocols to use LU 6.2 devices. The API
uses new and changed DL/I calls (CHNG, INQY, SETO) to utilize LU 6.2. Using the
existing IMS application programming base, you can write specific applications for
LU 6.2 using this API and not using the CPI Communications calls. Although the
implicit API uses only some of the LU 6.2 capabilities, it can be a useful
simplification for many applications. The implicit API also provides function
outside of LU 6.2, like message queueing and automatic asynchronous message
delivery.
IMS generates all CPI Communications calls under the implicit API. The
application interaction is strictly with the IMS message queue.
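Under the implicit API, the program's whole interaction is the familiar message-queue loop. The sketch below imitates that GU/ISRT pattern with an ordinary in-memory queue; the real DL/I calls need an IMS region, so they appear only as comments:

```python
from collections import deque

def process_queue(input_queue, handler):
    """Sketch of the implicit-API pattern: retrieve each input
    message (as a GU to the I/O PCB would), process it, and
    queue the reply (as an ISRT would).  Returns the replies."""
    replies = []
    while input_queue:                   # GU gives a QC status when empty
        message = input_queue.popleft()  # GU: get next input message
        replies.append(handler(message)) # ISRT: insert the response
    return replies
```

The application never sees LU 6.2 flows at all; in the real system IMS generates the CPI Communications calls from this queue interaction.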
The remote LU 6.2 system must be able to handle the LU 6.2 flows. APPC/MVS
generates these flows from the CPI Communications calls issued by the IMS
application program using the implicit API. An IMS application program can use
the explicit API to issue the CPI Communications directly. This is useful with
remote LU 6.2 systems that have incomplete LU 6.2 implementations, or that are
incompatible with the IMS implicit API support. See the LU 6.2 data flow
examples under “LU 6.2 partner program design.”
Explicit API
The explicit API (the CPI Communications API) can be used by any IMS
application program to access an APPC conversation directly. IMS resources are
available to the CPI Communications driven application program only if the
application issues the APSB (Allocate PSB) call. The CPI Communications driven
application program must use the CPI-RR SRRCMIT and SRRBACK verbs to initiate an
IMS sync point or backout, or if SYNCLVL=SYNCPT is specified, to communicate
the sync point decision to the RRS/MVS sync point manager.
Related Reading: For a description of the SRRCMIT and SRRBACK verbs, see SAA CPI
Resource Recovery Reference.
Differences in how control data is buffered and encapsulated with user data can
cause variations in the flows. The control data consists of the three fields
returned by the Receive APPC verb: Status_received, Data_received, and
Request_to_send_received. Any variations based on these differences do not
affect the function or use of the flows.
Figure 35 on page 80 shows the flow of a local IMS command when Sync_level is
None.
Figure 38 on page 83 shows the flow of a CPI-C driven program when Sync_level
is None.
The scenarios shown in Figure 43 on page 88, Figure 44 on page 89, Figure 45 on
page 90, Figure 46 on page 91, and Figure 47 on page 92 provide examples of the
two-phase process for the supported application program types. The LU 6.2 verbs
are used to illustrate supported functions and interfaces between the components.
Only parameters pertinent to the examples are included. This does not imply that
other parameters are not supported.
Notes:
1. Sync_Level=Syncpt triggers a protected resource update.
2. This application program inserts output for the remote application to the
IMS message queue.
3. The GU initiates the transfer of the output.
4. The remote application sends a Confirmed after receiving data (output).
5. IMS issues ATRCMIT (equivalent to SRRCMIT) to start the two-phase
process.
Notes:
1. Sync_Level=Syncpt triggers a protected resource update.
2. The programs send and receive data.
3. The remote application decides to commit the updates.
4. The CPI-C program issues SRRCMIT to commit the changes.
5. The commit return code is returned to the remote application.
Notes:
1. Sync_Level=Syncpt triggers a protected-resource update.
2. This application program inserts output for the remote application to the
IMS message queue.
3. The GU initiates the transfer of the output.
4. The remote application decides to back out any updates.
5. IMS abends the application with a U119 to back out the application.
6. The backout return code is returned to the remote application.
Notes:
1. Sync_Level=Syncpt triggers a protected-resource update.
2. This application program inserts output for the remote application to the
IMS message queue.
3. The GU initiates the transfer of the output.
4. The remote application sends a Confirmed after receiving data (output).
5. IMS issues ATBRCVW on behalf of the DL/I application to wait for a
commit or backout.
6. The remote application decides to back out any updates.
7. IMS abends the application with U0711 to back out the application.
8. The backout return code is returned to the remote application.
Notes:
1. Sync_Level=Syncpt triggers a protected-resource update.
2. This application program inserts output for the remote application to the
IMS message queue.
3. The DL/I program issues a ROLB. ABENDU0711 with Return Code X'20'
is issued.
Notes:
1. An allocate with Sync_Level=Syncpt triggers a protected resource update
with Conversation 1.
2. The first transaction provides the output for Conversation 1.
3. An allocate with Sync_Level=Syncpt triggers a protected resource update
with Conversation 2.
4. The second transaction provides the output for Conversation 2.
Integrity tables
Table 23 shows the results, from the viewpoint of the IMS partner system, of
normal conversation completion, abnormal conversation completion due to a
session failure, and abnormal conversation completion due to non-session failures.
These results apply to asynchronous and synchronous conversations and both
input and output. This table also shows the outcome of the message, and the
action that the partner system takes when it detects the failure. An example of an
action, under “LU 6.2 Session Failure,” is a programmable work station (PWS)
resend.
Table 23. Message integrity of conversations
Conversation attributes   Normal   LU 6.2 session failure (note 1)   Other failure (note 2)
Synchronous Input: Reliable Input: PWS resend Input: Reliable
Sync_level=NONE Output: Reliable Output: PWS resend Output: Reliable
Synchronous Input: Reliable Input: PWS resend Input: Reliable
Sync_level=CONFIRM Output: Reliable Output: Reliable Output: Reliable
Synchronous Input: Reliable Input: PWS resend Input: Reliable
Sync_level=SYNCPT Output: Reliable Output: Reliable Output: Reliable
Asynchronous Input: Ambiguous Input: Undetectable Input: Undetectable
Sync_level=NONE Output: Reliable Output: Reliable Output: Reliable
Asynchronous Input: Reliable Input: PWS resend Input: Reliable
Sync_level=CONFIRM Output: Reliable Output: Reliable Output: Reliable
Asynchronous Input: Reliable Input: PWS resend Input: Reliable
Sync_level=SYNCPT Output: Reliable Output: Reliable Output: Reliable
Notes:
1. A session failure is a network-connectivity breakage.
2. A non-session failure is any other kind of failure, such as invalid security
authorization.
3. IMS resends asynchronous output if CONFIRM is lost; therefore, the PWS must tolerate
duplicate output.
A Sync_level value of NONE does not apply to asynchronous output, because IMS
always uses Sync_level=CONFIRM for such output.
Table 24. Results of processing when integrity is compromised

Conversation       State of window (note 1)   Probability of   Possible action while   Probability of action
attributes         before accepting           window state     sending response        while sending response
                   transaction
Synchronous        ALLOCATE to                Medium           Can lose or send        Medium
Sync_level=NONE    PREPARE_TO_RECEIVE                          duplicate output.
                   return
Table 25 indicates how IMS recovers APPC transactions across IMS warm starts,
XRF takeovers, APPC session failures, and MSC link failures.
Table 25. Recovering APPC messages
Message type                    IMS warm start   XRF takeover   APPC (LU 6.2)   MSC link
                                (NRE or ERE)                    session fail    failure
Local Recoverable Tran.,
Non Resp., Non Conversation
- APPC Sync. Conv. Mode Discarded (2) Discarded (4) Discarded (6) N/A (9)
- APPC Async. Conv. Mode Recovered Recovered Recovered (1) N/A (9)
Local Recoverable Tran.,
Conv. or Resp. mode
- APPC Sync. Conv. Mode Discarded (2) Discarded (4) Discarded (6) N/A (9)
- APPC Async. Conv. Mode N/A (8) N/A (8) N/A (8) N/A (8,9)
Local Non Recoverable Tran.,
- APPC Sync. Conv. Mode Discarded (2) Discarded (6) N/A (9)
- APPC Async. Conv. Mode Discarded (2) Discarded (4) Recovered (1) N/A (9)
Remote Recoverable Tran.,
Non Resp., Non Conv.
- APPC Sync. Conv. Mode Discarded (2,5) Discarded (3,5) Recovered (1) Recovered (7)
- APPC Async. Conv. Mode Recovered Recovered Recovered (1) Recovered (7)
Remote Recoverable Tran.,
Conv. or Resp. mode
- APPC Sync. Conv. Mode Discarded (2,5) Discarded (3,5) Recovered (1) Recovered (7)
- APPC Async. Conv. Mode N/A (8) N/A (8) N/A (8) N/A (8)
Notes:
1. This recovery scenario assumes the message was enqueued before failure; otherwise, the message is discarded.
2. The message is discarded during IMS warm-start processing.
3. The message is discarded when the MSC link is restarted and when the message is taken off the queue (for
sending across the link).
4. The message is discarded when the message region is started and when the message is taken off the queue (for
processing by the application program).
5. For all remote MSC APPC transactions, if the message has already been sent across the MSC link to the remote
system when the failure occurs in the local IMS, the message is processed. After the message is processed by the
remote application program and a response message is sent back to the local system, it is enqueued to the
DFSASYNC TP name of the LU 6.2 device or program that submitted the original transaction.
6. At sync point, the User Message Control Error exit routine (DFSCMUX0) can prevent the transaction from being
aborted and the output message can be rerouted (recovered).
For more information about this exit routine, see IMS Version 10: Exit Routine Reference.
7. The standard MSC Link recovery protocol recovers all messages that are queued or are in the process of being
sent across the MSC link when the link fails.
8. IMS conversational-mode and response-mode transactions cannot be submitted from APPC asynchronous
conversation sessions. APPC synchronous conversation-mode must be used.
9. MSC link failures do not affect local transactions.
Messages sent with the LTERM= option are directed to IMS-managed local or
remote LTERMs. Messages sent without the LTERM= option are sent to the
appropriate LU 6.2 application or IMS application program.
Because the LTERM can be an LU 6.2 descriptor name, the message is sent to the
LU 6.2 application program as if an LU 6.2 device had been explicitly selected.
Related Reading: For more information about DFSAPPC, see IMS Version 10:
Communications and Connections Guide.
Subsections:
v “Defining IMS application requirements”
v “Accessing databases with your IMS application program” on page 100
v “Accessing data: the types of programs you can write for your IMS application”
on page 102
v “IMS programming integrity and recovery considerations” on page 110
v “Dynamic allocation for IMS databases” on page 119
Answers to questions like these can help you decide on the number of application
programs that the processing will require, and on the types of programs that
you need.
Note: JMP and JBP programs cannot access DB2 for z/OS databases.
Related Reading: For information on processing DB2 for z/OS databases, see
DB2 for z/OS and OS/390 Application Programming and SQL Guide.
v z/OS Files
BMPs (in both the DB/DC and DBCTL environment) are the only type of online
application program that can access z/OS files for their input or output. Batch
programs can also access z/OS files.
v GSAM Databases (Generalized Sequential Access Method)
Generalized Sequential Access Method (GSAM) is an access method that makes
it possible for BMPs and batch programs to access a sequential z/OS data set as
a simple database. A GSAM database can be accessed by z/OS or by IMS.
Accessing data: the types of programs you can write for your IMS
application
You must decide what type of program to use: batch programs, message
processing programs (MPPs), IMS Fast Path (IFP) applications, batch message
processing (BMP) applications, Java Message Processing (JMP) applications, or Java
Batch Processing (JBP) applications. As Table 26 on page 101 shows, the types of
programs you can use depend on whether you are running in the batch, DB/DC,
or DBCTL environment.
DB batch processing
These topics describe DB batch processing and can help you decide if this batch
program is appropriate for your application.
To issue checkpoints (or other system service calls), you must specify an I/O PCB
for your program. To obtain an I/O PCB, use the compatibility option by
specifying CMPAT=YES in the PSBGEN statement in your program's PSB.
Related Reading: For more information on obtaining an I/O PCB, see IMS Version
10: Application Programming Guide.
System log on DASD: If the system log is stored on DASD, using the BKO
execution parameter you can specify that IMS is to dynamically back out the
changes that the program has made to the database since its last commit point.
Related Reading: For information on using the BKO execution parameter, see IMS
Version 10: System Definition Reference.
IMS performs dynamic backout for a batch program when an IMS-detected failure
occurs, for example, when a deadlock is detected. Logging to DASD makes it
possible for batch programs to issue the SETS, ROLB, and ROLS system service calls.
These calls cause IMS to dynamically back out changes that the program has made.
Related Reading: For information on the SETS, ROLB, and ROLS calls, see the
information about recovering databases and maintaining database integrity in
either of the following books:
v IMS Version 10: Application Programming Guide
v IMS Version 10: Database Administration Guide
System log on tape: If a batch application program terminates abnormally and the
batch system log is stored on tape, you must use the IMS Batch Backout utility to
back out the program's changes to the database.
TM batch processing
A TM batch program acts like a DB batch program with the following differences:
v It cannot access full-function databases, but it can access DB2 for z/OS
databases, GSAM databases, and z/OS files.
v To issue checkpoints for recovery, you need not specify CMPAT=YES in your
program's PSB. (The CMPAT parameter is ignored in TM batch.) The I/O PCB is
always the first PCB in the list.
v You cannot dynamically back out a database because IMS does not own the
databases.
Using an MPP
The primary purpose of an MPP is to process requests from users at terminals and
from other application programs. Ideally, MPPs are very small, and the processing
they perform is tailored to respond to requests quickly. They process messages as
their input, and send messages as responses.
MPPs are executed through transaction codes. When you define an MPP, you
associate it with one or more transaction codes. Each transaction code represents a
transaction the MPP is to process. To process a transaction, a user at a terminal
enters a code for that transaction. IMS then schedules the MPP associated with that
code, and the MPP processes the transaction. The MPP might need to access the
database to do this. Generally, an MPP goes through these five steps to process a
transaction:
1. Retrieve a message from IMS.
2. Process the message and access the database as necessary.
3. Respond to the message.
4. Repeat the process until no messages are forthcoming.
5. Terminate.
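The association between transaction codes and the MPPs scheduled for them can be pictured as a dispatch table. The transaction codes and handler functions below are invented purely for illustration:

```python
# Hypothetical transaction codes mapped to handler functions,
# mirroring how IMS schedules the MPP defined for a code.
def inquiry(msg):
    return f"balance for {msg}"

def deposit(msg):
    return f"deposited {msg}"

DISPATCH = {"INQY1": inquiry, "DEP01": deposit}

def schedule(tran_code, message):
    """Route one input message to the program defined for its code."""
    handler = DISPATCH.get(tran_code)
    if handler is None:
        return "unknown transaction code"
    return handler(message)
```

Entering the code INQY1 at a terminal would cause IMS to schedule the inquiry program for that message; an undefined code is rejected.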
Defining priorities and processing limits gives system administration some control
over load balancing and processing.
Using an IFP
You should use an IFP if you need quick processing and can accept the
characteristics and constraints associated with IFPs.
Restrictions:
v An IMS program cannot send messages to an IFP transaction unless it is in
another IMS system that is connected using Intersystem Communication (ISC).
v MPPs cannot pass conversations to an IFP transaction.
Recovering an IFP
IFPs must be defined as single mode. This means that a commit point occurs each
time the program retrieves a message. Because of this, you do not need to issue
checkpoint calls.
Because BMPs can degrade response times, your response-time requirements
should be the main consideration in deciding the extent to which you use
batch message processing.
If a batch-oriented BMP fails, IMS and DB2 for z/OS back out the database
updates the program has made since the last commit point. You then restart the
program with JCL. If the BMP processes z/OS files, you must provide your own
method of taking checkpoints and restarting.
If you have a BMP perform an update for an MPP, design the BMP so that, if the
BMP terminates abnormally, you can reenter the last message as input for the BMP
when you restart it. For example, suppose an MPP gathers database updates for
three BMPs to process, and one of the BMPs terminates abnormally. You would
need to reenter the message that the terminating BMP was processing to one of the
other BMPs for reprocessing.
BMPs can process transactions defined as wait-for-input (WFI). This means that
IMS allows the BMP to remain in virtual storage after it has processed the
available input messages. IMS returns a QC status code, indicating that the
program should terminate when one of the following occurs:
v The program reaches its time limit.
v The master terminal operator enters a command to stop processing.
v IMS is terminated with a checkpoint shutdown.
You specify WFI for a transaction on the WFI parameter of the TRANSACT macro
during IMS system definition.
Like MPPs, BMPs can send output messages to several destinations, including
other application programs. See “Identifying output message destinations” on page
171 for more information.
JMP applications can access IMS data or DB2 for z/OS data using JDBC. JMP
applications run in JMP regions, which have Java virtual machines (JVMs). For
more information about JMPs, see the IMS Version 10: Application Programming
Guide.
To access data concurrently while protecting data integrity, IMS and DB2 for z/OS
prevent other application programs from accessing segments that your program
deletes, replaces, or inserts, until your program reaches a commit point. A commit
point is the place in the program's processing at which it completes a unit of work.
When a unit of work is completed, IMS and DB2 for z/OS commit the changes
that your program made to the database. Those changes are now permanent and
the changed data is now available to other application programs.
A commit point indicates to IMS that a program has finished a unit of work, and
that the processing it has done is accurate. At that time:
v IMS releases segments it has locked for the program since the last commit point.
Those segments are then available to other application programs.
v IMS and DB2 for z/OS make the program's changes to the database permanent.
v The current position in all databases except GSAM is reset to the start of the
database.
Table 27 lists the modes in which the programs can run. Because processing mode
is not applicable to batch programs and batch-oriented BMPs, they are not listed in
the table. The program type is listed, and the table indicates which mode is
supported.
Table 27. Processing modes

Program type   Single mode only   Multiple mode only   Either mode
MPP                                                    X
You specify single or multiple mode on the MODE parameter of the TRANSACT
macro.
Related Reading: For information on the TRANSACT macro, see IMS Version 10:
System Definition Reference.
DB2 for z/OS does some processing with multiple- and single-mode programs that
IMS does not. When a multiple-mode program issues a call to retrieve a new
message, DB2 for z/OS performs an authorization check. If the authorization check
is successful, DB2 for z/OS closes any SQL cursors that are open. This affects the
design of your program.
The DB2 for z/OS SQL COMMIT statement causes DB2 for z/OS to make permanent
changes to the database. However, this statement is valid only in TSO application
programs. If an IMS application program issues this statement, it receives a
negative SQL return code.
Programs that issue symbolic checkpoint calls can specify as many as seven data
areas in the program to be checkpointed. When IMS restarts the program, the
Restart call restores these areas to the condition they were in when the program
issued the symbolic checkpoint call. Because symbolic checkpoint calls do not
support z/OS files, if your program accesses z/OS files, you must supply your
own method of establishing checkpoints.
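The save-and-restore behavior of symbolic checkpoint (up to seven data areas, restored later by the restart call) can be sketched as follows. The in-memory dictionary merely stands in for the IMS system log, and the call names are borrowed for illustration only:

```python
_checkpoints = {}  # stands in for the IMS system log

def chkp(checkpoint_id, *areas):
    """Symbolic-checkpoint sketch: save up to seven data areas
    under a checkpoint ID, copying them as they are now."""
    if len(areas) > 7:
        raise ValueError("symbolic checkpoint allows at most seven areas")
    _checkpoints[checkpoint_id] = [list(a) for a in areas]

def xrst(checkpoint_id):
    """Restart sketch: restore the areas exactly as they were
    when the checkpoint was taken."""
    return _checkpoints[checkpoint_id]
```

Changes made to a data area after the checkpoint do not appear after restart, which is the property the text describes: the areas come back in the condition they were in when the checkpoint call was issued.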
You can use symbolic checkpoint for either Normal Start or Extended Restart
(XRST).
Related Reading: For more information on checkpoint calls, see IMS Version 10:
Application Programming Guide.
The restart call, which you must use with symbolic checkpoint calls, provides a
way of restarting a program after an abnormal termination. It restores the
program's data areas to the way they were when the program issued the symbolic
checkpoint call. It also restarts the program from the last checkpoint the program
established before terminating abnormally.
All programs can use basic checkpoint calls. Because you cannot use the restart call
with the basic checkpoint call, you must provide program restart. Basic checkpoint
calls do not support either z/OS or GSAM files. IMS programs cannot use z/OS
checkpoint and restart. If you access z/OS files, you must supply your own
method of establishing checkpoints and restarting.
If you might need to back out the entire batch program, the program should issue
the checkpoint call at the beginning of the program. IMS backs out the program to
the checkpoint you specify, or to the most recent checkpoint, if you do not specify
a checkpoint. If the database is updated after the beginning of the program and
before the first checkpoint, IMS is not able to back out these database updates.
For a batch program to issue checkpoint calls, it must specify the compatibility
option in its PSB (CMPAT=YES). This generates an I/O PCB for the program,
which IMS uses as an I/O PCB in the checkpoint call.
Another important reason for issuing checkpoint calls in batch programs is that,
although they may currently run in an IMS batch region, they might later need to
access online databases. This would require converting them to BMPs. Issuing
checkpoint calls in a BMP is important for reasons other than recovery—for
example, to release database resources for other programs. So, you should initially
include checkpoints in all batch programs that you write. Although the checkpoint
support might not be needed then, it is easier to incorporate checkpoint calls
initially than to try to fit them in later.
To free database resources for other programs, batch programs that run in a
data-sharing environment should issue checkpoint calls more frequently than those
that do not run in a data-sharing environment.
The conditions that make the database available only for read and not for update
are:
The two situations where the program might encounter unavailable data are:
v The program makes a call requiring access to a database that was unavailable at
the time the program was scheduled.
v The database was available when the program was scheduled, but limited
amounts of data are unavailable. The current call has attempted to access the
unavailable data.
Regardless of the condition causing the data to be unavailable, the program has
two possible approaches when dealing with unavailable data. The program can be
insensitive or sensitive to data unavailability.
v When the program is insensitive, IMS takes appropriate action when the
program attempts to access unavailable data.
v When the program is sensitive, IMS informs the program that the data it is
attempting to access is not available.
IMS does not schedule batch programs if the data that the program can access is
unavailable. If the batch program is using block-level data sharing, it might
encounter unavailable data if the sharing system fails and the batch system
attempts to access data that was updated but not committed by the failed system.
The following conditions alone do not cause a batch program to fail during
initialization:
v A PCB refers to a HALDB.
v The use of DBRC is suppressed.
However, without DBRC, a database call using a PCB for a HALDB is not allowed.
If the program is sensitive to unavailable data, such a call results in the status code
BA; otherwise, such a call results in message DFS3303I, followed by ABENDU3303.
The INIT call informs IMS that the program is sensitive to unavailable data and
can accept the status codes that are issued when the program attempts to access
such data. The INIT call can also be used to determine the data availability for
each PCB.
The INQY call is operable in both batch and online IMS environments. IMS
application programs can use the INQY call to request information regarding output
destination, session status, the current execution environment, the availability of
databases, and the PCB address based on the PCBNAME. The INQY call is only
supported by way of the AIB interface (AIBTDLI or CEETDLI using the AIB rather
than the PCB address).
The SETS, SETU, and ROLS calls enable the application to define multiple points at
which to preserve the state of full-function (except HSAM) databases and message
activity. The application can then return to these points at a later time. By issuing a
SETS or SETU call before initiating a set of DL/I calls to perform a function, the
program can later issue the ROLS call if it cannot complete a function due to data
unavailability.
The ROLS call allows the program to roll back its IMS full-function database activity
to the state that it was in prior to a SETS or SETU call being issued. If the PSB
contains an MSDB or a DEDB, the SETS and ROLS (with token) calls are invalid. Use
the SETU call instead of the SETS call if the PSB contains a DEDB, MSDB, or GSAM
PCB.
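For illustration only, the save-point bookkeeping that SETS and ROLS provide can be sketched in Python. The class, token names, and update strings below are invented for the sketch; it models only the token-to-position mapping and does not issue real DL/I calls.

```python
class SavePoints:
    """Toy model of the SETS/ROLS save-point idea: SETS marks a point,
    and ROLS with the same token backs out everything done after it."""
    def __init__(self):
        self.work = []      # database and message activity not yet committed
        self.tokens = {}    # token -> length of self.work when SETS was issued

    def sets(self, token):
        # Mark a point to which the program can later return.
        self.tokens[token] = len(self.work)

    def apply(self, update):
        self.work.append(update)

    def rols(self, token):
        # Back out all activity performed after the SETS for this token.
        self.work = self.work[: self.tokens[token]]

sp = SavePoints()
sp.sets("T1")
sp.apply("insert A")
sp.apply("replace B")
sp.rols("T1")   # the data turned out to be unavailable: undo both updates
```

In a real program, the ROLS would be issued when a call after the SETS or SETU could not complete because of unavailable data, as the text describes.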
Related Reading: For more information on using the SETS and SETU calls with the
ROLS call, see IMS Version 10: Application Programming Guide.
The ROLS call can also be used to undo all update activity (database and messages)
since the last commit point and to place the current input message on the suspend
queue for later processing. This action is initiated by issuing the ROLS call without
a token or I/O area.
Restriction: With DB2 for z/OS, you cannot use ROLS (with a token) or SETS.
In the batch region, STAE or ESTAE routines ensure that database logging and
various resource cleanup functions are complete. If the batch region is not notified
of the application program termination, resources might not be properly released.
Generally, do not use the STAE or ESTAE facility in your application program.
However, if you believe that the STAE or ESTAE facility is required, you must
observe the following basic rules:
v When the environment supports STAE or ESTAE processing, the application
program STAE or ESTAE routines always get control before the IMS STAE or
ESTAE routines. Therefore, you must ensure that the IMS STAE or ESTAE exit
routines receive control by observing the following procedures in your
application program:
– Establish the STAE or ESTAE routine only once and always before the first
DL/I call.
– When using the STAE or ESTAE facility, the application program should not
alter the IMS abend code.
– Do not use the RETRY option when exiting from the STAE or ESTAE routine.
Instead, return a CONTINUE-WITH-TERMINATION indicator at the end of
the STAE or ESTAE processing. If your application program specifies the
RETRY option, be aware that IMS STAE or ESTAE exit routines will not get
control to perform cleanup. Therefore, system and database integrity might be
compromised.
– For PL/I for MVS and VM use of STAE and SPIE, see the description of IMS
considerations in Enterprise PL/I for z/OS and OS/390 Programming Guide.
– For PL/I for MVS and VM, COBOL for z/OS, and C/C++ for MVS/ESA, if
you are using the AIBTDLI interface in a non-Language Environment enabled
system, you must specify NOSTAE and NOSPIE. However, in Language
Environment® for MVS and VM Version 1.2 or later enabled environment, the
NOSTAE and NOSPIE restriction is removed.
v The application program STAE or ESTAE exit routine must not issue DL/I calls
(DB or TM) because the original abend might have been caused by a problem
between the application and IMS. A problem between the application and IMS
could result in recursive entry to STAE or ESTAE with potential loss of database
integrity, or in problems taking a checkpoint. This also could result in a hang
condition or an ABENDU0069 during termination.
If you use dynamic allocation, do not include JCL DD statements for any database
data sets that have been defined for dynamic allocation. Check with the DBA or
comparable specialist to determine which databases have been defined for dynamic
allocation.
Subsections:
v “Defining CICS application requirements”
v “Accessing databases with your CICS application program” on page 123
v “Writing a CICS program to access IMS databases” on page 124
v “Using data sharing for your CICS program” on page 128
v “Scheduling and terminating a PSB (CICS online programs only)” on page 129
v “Linking and passing control to other programs (CICS online programs only)”
on page 129
v “How CICS distributed transactions access IMS” on page 130
v “Maximizing the performance of your CICS system” on page 130
v “Programming integrity and database recovery considerations for your CICS
program” on page 131
v “Data availability considerations for your CICS program” on page 135
v “Use of STAE or ESTAE and SPIE in IMS batch programs” on page 137
v “Dynamic allocation for IMS databases” on page 138
Answers to questions like these can help you decide on the number of application
programs that the processing will require, and on the types of programs that
perform the processing most efficiently. Although there are no firm rules about
how many programs can most efficiently do the required processing, here are
some suggestions:
v As you look at each programming task, examine the data and processing that
each task involves. If a task requires different types of processing and has
different time limitations (for example, weekly as opposed to monthly), that task
may be more efficiently performed by several programs.
v As you define each program, keep it as simple as possible, for maintenance and
recovery reasons. The simpler a program is, the less it does, and the easier it is
to maintain and to restart after a program or system failure. The same is true of
data availability: the less data a program accesses, and the more limited that
access is, the more likely the data is to be available.
Similarly, if the data that the application requires is physically in one place, it
might be more efficient to have one program do more of the processing than
usual. These are considerations that depend on the processing and the data of
each application.
v Documenting each of the user tasks is helpful during the design process, and in
the future when others will work with your application. Be sure you are aware
of the standards in this area. The kind of information that is typically kept is
when the task is to be executed, a functional description, and requirements for
maintenance, security, and recovery.
Example: For the Current Roster process described under “Listing data
elements” on page 46, you might record the information shown in Figure 51.
How frequently the program is run is determined by the number of classes (20)
for which the Ed Center will print current rosters each week.
Also, consider the type of database your program must access. As shown in
Table 29, the type of program you can write and database that can be accessed
depends on the operating environment. Table 29 also includes usage notes.
Table 29. Program and database options in the CICS environments

Environment¹  Type of program you can write  Type of database that can be accessed
DB batch      DB batch                       DB2 for z/OS²
                                             DL/I Full-function
                                             GSAM
                                             z/OS files
DBCTL         BMP                            DB2 for z/OS
                                             DEDBs
                                             Full-function
                                             GSAM
                                             z/OS files
              CICS online                    DB2 for z/OS²
                                             DEDBs
                                             Full-function
                                             z/OS files (access through CICS
                                             file control or transient data
                                             services)
Notes:
1. A CICS remote DL/I environment, also referred to as function shipping, also
exists. In this environment, a CICS system supports applications that issue DL/I
calls, but the CICS system does not service the requests itself. Instead, it
"function ships" the DL/I calls to another CICS system that is using DBCTL. For
more information on remote DL/I, see CICS IMS Database Control Guide.
2. IMS does not participate in the call process.
Online programs that access IMS databases are executed in the same way as other
CICS programs.
The structure of an online program, and the way it receives status information,
depend on whether it is a call- or command-level program. However, both
command- and call-level online programs:
v Schedule a PSB (for CICS online programs). A PSB is automatically scheduled
for batch or BMP programs.
v Issue either commands or calls to access the database. Online programs cannot
mix commands and calls in one logical unit of work (LUW).
v Optionally, terminate a PSB for CICS online programs.
v Issue an EXEC CICS RETURN statement when they have finished their processing.
This statement returns control to the linking program. When the highest-level
program issues the RETURN statement, CICS regains control and terminates the
PSB if it has not yet been terminated.
An online program in the DBCTL environment can use many IMS system service
requests.
Related Reading:
v For more information on writing these types of programs, see
– IMS Version 10: Application Programming Guide or
– IMS Version 10: Application Programming API Reference
v For more details about programming techniques and restrictions, see CICS
Application Programming Reference.
v For a summary of the calls and commands an online program can issue, see
– IMS Version 10: Application Programming Guide or
– IMS Version 10: Application Programming API Reference
DL/I database or system service requests must refer to one of the program
communication blocks (PCBs) from the list of PCBs passed to your program by
IMS. The PCB that must be used for making system service requests is called the
I/O PCB. When present, it is the first PCB in the list of PCBs.
Before you run your program, use the IMS ACBGEN utility to convert the program
specification blocks (PSBs) and database descriptions (DBDs) to the internal control
block format. PSBs describe the application program's characteristics and use of
data and terminals. DBDs describe a database's physical and logical characteristics.
Because an online program shares a database with other online programs, it may
affect the performance of your online system. For more information on what you
can do to minimize the effect your program has on performance, see “Maximizing
the performance of your CICS system” on page 130.
Unlike online programs, batch programs do not schedule or terminate PSBs; IMS
does this automatically.
Batch programs can issue system service requests (such as checkpoint, restart, and
rollback) to perform functions such as dynamically backing out database changes
made by your program.
Related Reading: For a summary of the commands and calls that you can use in a
batch program, see:
v IMS Version 10: Application Programming Guide
When performing a PSBGEN, you must define the language of the program that
will schedule the PSB. For your program to be able to successfully issue certain
system service requests, such as a checkpoint or a rollback request, an I/O PCB
must be available for your program. To obtain an I/O PCB, specify CMPAT=YES in
the PSBGEN statement. Make all batch programs sensitive to the I/O PCB so that
checkpoints are easily introduced. Design all batch programs with checkpoint and
restart in mind. Although the checkpoint support may not be needed initially, it is
easier to incorporate checkpoints initially than to try to fit them in later. With
checkpoints, it will be easier to convert batch programs to BMP programs or to
batch programs that use data sharing.
Related Reading: For more information about obtaining an I/O PCB, see
“Requesting an I/O PCB in batch programs” on page 132. For information on how
to perform a PSBGEN, see IMS Version 10: System Utilities Reference.
Because BMPs can degrade response times, carefully consider the response time
requirements as you decide on the extent to which you will use batch message
processing. You should examine the trade-offs in using BMPs and use them
accordingly.
Unlike most batch programs, a BMP can share resources with CICS online
programs using DBCTL. In addition to committing database changes and
providing places from which to restart (as for a batch program), checkpoint calls
release resources locked for the program. For more information on issuing
checkpoint calls, see “Checkpoints in batch-oriented BMPs” on page 115.
If a batch-oriented BMP fails, IMS backs out the database updates the program has
made since the last commit point. You must restart the program with JCL. If the
BMP processes z/OS files, you must provide your own method of taking
checkpoints and restarting.
Related Reading: For more information on sharing a database with an IMS system,
see IMS Version 10: System Administration Guide.
In a CICS online program, you use a PCB call or SCHD command (for
command-level programs) to obtain the PSB for your program. Because CICS
releases the PSB your program uses when the transaction ends, your program need
not explicitly terminate the PSB. Only use a terminate request if you want to:
v Use a different PSB
v Commit all the database updates and establish a logical unit of work for backing
out updates
v Free IMS resources for use by other CICS tasks
A terminate request causes a CICS sync point, and a CICS sync point terminates
the PSB. For more information about CICS recovery concepts, see the appropriate
CICS publication.
Terminating a PSB or issuing a sync point affects the linking program. For
example, a terminate request or sync point that is issued in the program that was
linked causes the release of CICS resources enqueued in the linking program.
A BMP program, in particular, can affect the performance of the CICS online
transactions. This is because BMP programs usually make a larger number of
database updates than CICS online transactions, and a BMP program is more likely
to hold segments that CICS online programs need. Limit the number of segments
held by a BMP program, so CICS online programs need not wait to acquire them.
One way to limit the number of segments held by a BMP or batch program that
participates in IMS data sharing is to issue checkpoint requests in your program to
commit database changes and release segments held by the program. When
deciding how often to issue checkpoint requests, you can use one or more of the
following techniques:
v Divide the program into small logical units of work, and issue a checkpoint call
at the end of each unit.
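For illustration only, the technique above can be sketched in Python. The `chkp` parameter stands in for the CHKP request, and the checkpoint ID format is an invented example, not an IMS convention:

```python
def process_batch(units_of_work, chkp):
    """Sketch: divide the program into small logical units of work and
    issue a checkpoint request at the end of each one."""
    for seq, unit in enumerate(units_of_work):
        for item in unit:
            pass                  # database calls for this unit of work
        chkp(f"CHK{seq:05d}")     # commit changes and release held segments

issued = []
process_batch([["db call", "db call"], ["db call"]], issued.append)
```

Each checkpoint commits the unit's updates, so segments held by the BMP become available to CICS online programs sooner.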
How IMS protects data integrity for your program (CICS online
programs)
IMS protects the integrity of the database for programs that share data by:
v Preventing other application programs with update capability from accessing
any segments in the database record your program is processing, until your
program finishes with that record and moves to a new database record in the
same database.
v Preventing other application programs from accessing segments that your
program deletes, replaces, or inserts, until your program reaches a sync point.
When your program reaches a sync point, the changes your program has made
to the database become permanent, and the changed data becomes available to
other application programs.
Exception: If PROCOPT=GO has been defined during PSBGEN for your
program, your program can access segments that have been updated but not
committed by another program.
v Backing out database updates made by an application program that terminates
abnormally.
You may also want to protect the data your program accesses by retaining
segments for the sole use of your program until your program reaches a sync
point—even if you do not update the segments. (Ordinarily, if you do not update
the segments, IMS releases them when your program moves to a new database
record.) You can use the Q command code to reserve segments for the exclusive
use of your program. You should use this option only when necessary because it
makes data unavailable to other programs and can have an impact on
performance.
To perform these tasks, you use system service calls, described in more detail in
the appropriate application programming information for your environment.
When a batch or BMP program issues a checkpoint request, IMS writes a record
containing a checkpoint ID to the IMS/ESA® system log.
When your application program reaches a point during its execution where you
want to make sure that all changes made to that point have been physically
entered in the database, issue a checkpoint request. If some condition causes your
program to fail before its execution is complete, the database must be restored to
its original state. The changes made to the database must be backed out so that the
database is not left in a partially updated condition for access by other application
programs.
If your program runs a long time, you can reduce the number of changes that
must be backed out by taking checkpoints in your program. Then, if your program
terminates abnormally, only the database updates that occurred after the
checkpoint must be backed out. You can also restart the program from the point at
which you issued the checkpoint request, instead of having to restart it from the
beginning.
Issue a checkpoint call just before issuing a Get Unique call, which reestablishes
your position in the database record after the checkpoint is taken.
Batch and BMP programs can issue basic checkpoint calls using the CHKP call.
When you use basic checkpoint calls, you must provide the code for restarting the
program after an abnormal termination.
Batch and BMP programs can also issue symbolic checkpoint calls. You can issue a
symbolic checkpoint call by using the CHKP call. Like the basic checkpoint call, the
symbolic checkpoint call commits changes to the database and establishes places
from which the program can be restarted. In addition, the symbolic checkpoint call:
v Works with the Extended Restart call to simplify program restart and recovery.
v Lets you specify as many as seven data areas in the program to be checkpointed.
When you restart the program, the restart call restores these areas to the way
they were when the program terminated abnormally.
Specifying a checkpoint ID: Each checkpoint call your program issues must have
an identification, or ID. Checkpoint IDs must be 8 bytes in length and should
contain printable EBCDIC characters.
When you want to restart your program, you can supply the ID of the checkpoint
from which you want the program to be started. This ID is important because
when your program is restarted, IMS then searches for checkpoint information
with an ID matching the one you have supplied. The first matching ID that IMS
encounters becomes the restart point for your program. This means that checkpoint
IDs must be unique both within each application program and among application
programs. If checkpoint IDs are not unique, you cannot be sure that IMS will
restart your program from the checkpoint you specified.
One way to make sure that checkpoint IDs are unique within and among programs
is to construct IDs in the following order:
v Three bytes of information that uniquely identifies your program.
v Five bytes of information that serves as the ID within the program, for example,
a value that is increased by 1 for each checkpoint command or call, or a portion
of the system time obtained at program start by issuing the TIME macro.
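For illustration only, this ID scheme can be sketched in Python. The program prefix "PAY" and the decimal sequence counter below are invented examples of the 3-byte and 5-byte portions, not IMS requirements:

```python
import itertools

def checkpoint_ids(program_prefix, start=0):
    """Generate 8-byte checkpoint IDs as suggested above: a 3-character
    program prefix plus a 5-digit sequence number increased by 1 for
    each checkpoint."""
    if len(program_prefix) != 3:
        raise ValueError("prefix must identify the program in 3 bytes")
    for n in itertools.count(start):
        yield f"{program_prefix}{n % 100000:05d}"

ids = checkpoint_ids("PAY")
first = next(ids)    # "PAY00000"
second = next(ids)   # "PAY00001"
```

Because the prefix is unique to the program and the counter is unique within it, the resulting IDs are unique both within and among application programs, which is what restart depends on.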
If you might back out of the entire program, issue the checkpoint request at the
very beginning of the program. IMS backs out the database updates to the
checkpoint you specify. If the database is updated after the beginning of the
program and before the first checkpoint, IMS is not able to back out these database
updates.
It is a good idea to design all batch programs with checkpoint and restart in mind.
Although the checkpoint support may not be needed initially, it is easier to
incorporate checkpoint calls initially than to try to fit them in later. If the
checkpoint calls are incorporated, it is easier to convert batch programs to BMP
programs or to batch programs that use data sharing.
Printing checkpoint log records: You can print checkpoint log records by using
the IMS File Select and Formatting Print Program (DFSERA10). With this utility,
you can select and print log records based on their type, the data they contain, or
their sequential positions in the data set. Checkpoint records are type 18 log
records. IMS Version 10: System Utilities Reference describes this program.
Related Reading: For more information, see IMS Version 10: Database Utilities
Reference.
For BMP programs: If your program terminates abnormally, the changes the
program has made since the last commit point are backed out. If a system failure
occurs, or if the CICS control region or DBCTL terminates abnormally, DBCTL
emergency restart backs out all changes made by the program since the last
commit point. You need not use the IMS Batch Backout utility because DBCTL
backs out the changes. If you need to back out all changes, you can use the ROLL
system service call to dynamically back out database changes.
If you use basic checkpoint calls (for batch and BMP programs), you must provide
the necessary code to restart the program from the latest checkpoint in the event
that it terminates abnormally.
One way to restart the program from the latest checkpoint is to store repositioning
data in an HDAM database. Your program writes a database record containing
repositioning information to the HDAM database and updates this record at
intervals. When the program terminates, the database record is deleted.

At the completion of the XRST call, the I/O area always contains the checkpoint ID
used by the restart. Normally, XRST returns the 8-byte symbolic checkpoint ID,
followed by 4 blanks. If the 8-byte ID consists of all blanks, XRST returns the
14-byte time-stamp ID. Also, check the status code in the PCB. The only successful
status code for an XRST call is a row of blanks.
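For illustration only, a restart routine's handling of the returned ID can be sketched in Python. The rule used here, treating the area as a 14-byte time-stamp ID when bytes 8-13 are non-blank, is a simplifying assumption for the sketch, and the sample IDs are invented:

```python
def parse_xrst_io_area(io_area):
    """Sketch: distinguish the two checkpoint ID forms that XRST can
    return, an 8-byte symbolic ID padded with 4 blanks, or a 14-byte
    time-stamp ID."""
    if io_area[8:14].strip():                 # bytes beyond 8 are non-blank
        return ("timestamp", io_area[:14])
    return ("symbolic", io_area[:8].rstrip())

kind, cp_id = parse_xrst_io_area("CHKP0042" + " " * 4)
```

The PCB status code should still be checked separately; only a row of blanks indicates a successful XRST.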
Unavailability of a database
The conditions that make an entire database unavailable for both read and update
are the following:
v A STOP command has been issued for the database.
v A DBRECOVERY (DBR) command has been issued for the database.
v DBRC authorization for the database has failed.
The conditions that make a database available for read but not for update are:
v A DBDUMP command has been issued for the database.
v The database access value is RD (read).
The program issues the INIT call or ACCEPT STATUS GROUP A command to inform
IMS that it is sensitive to unavailable data and can accept the status codes issued
when the program attempts to access such data. The INIT request can also be used
to determine data availability for each PCB in the PSB.
ROLS allows the program to roll back its IMS activity to the state prior to the SETS
or SETU call.
Restriction: SETS or SETU and ROLS only roll back the IMS updates. They do not roll
back the updates made using CICS file control or transient data.
Additionally, you can use the ROLS call or command to undo all database update
activity since the last checkpoint.
IMS uses STAE or ESTAE routines in the IMS batch regions to ensure that database
logging and various resource cleanup functions are completed. Two important
aspects of the STAE or ESTAE facility are that:
v IMS relies on its STAE or ESTAE facility to ensure database integrity and
resource control.
v The STAE or ESTAE facility is also available to the application program.
Because of these two factors, be sure you clearly understand the relationship
between the program and the STAE or ESTAE facility.
Generally, do not use the STAE or ESTAE facility in your batch application
program. However, if you believe that the STAE or ESTAE facility is required, you
must observe the following basic rules:
v When the environment supports STAE or ESTAE processing, the application
program STAE or ESTAE routines always get control before the IMS STAE or
ESTAE routines. Therefore, you must ensure that the IMS STAE or ESTAE exit
routines receive control by observing the following procedures in your
application program:
– Establish the STAE or ESTAE routine only once and always before the first
DL/I call.
Related Reading: For more information on the definitions for dynamic allocation,
see the DFSMDA macro in IMS Version 10: System Definition Reference.
Subsections:
v “Analyzing data access”
v “Understanding how data structure conflicts are resolved” on page 147
v “Providing data security” on page 157
v “Read without integrity” on page 161
Important: PHDAM and PHIDAM are the partitioned versions of the HDAM and
HIDAM database types, respectively. In these sections, the descriptions of the
HDAM and HIDAM database types therefore apply also to PHDAM and PHIDAM.
Some of the information that you can gather to help the DBA with this decision
answers questions like the following:
v To access a database record, a program must first access the root of the record.
How will each program access root segments?
– Directly
– Sequentially
Again, note the difference between updating a database record and updating a
segment within the database record.
Subsections:
v “Direct access”
v “Sequential access” on page 144
v “Accessing z/OS files through IMS: GSAM” on page 146
v “Accessing IMS data through z/OS: SHSAM and SHISAM” on page 146
Direct access
The advantage of direct access processing is that you can get good results for both
direct and sequential processing. Direct access means that by using a randomizing
routine or an index, IMS can find any database record that you want, regardless of
the sequence of database records in the database.
The direct access methods use pointers to maintain the hierarchic relationships
between segments of a database record. By following pointers, IMS can access a
path of segments without passing through all the segments in the preceding paths.
In addition, when you delete data from a direct-access database, the new space is
available almost immediately, giving you efficient space utilization.
A disadvantage of direct access is the larger IMS overhead caused by the pointers.
But if direct access fulfills your data access requirements, it is more efficient than
using a sequential access method.
Subsections:
v “Primarily direct processing: HDAM”
v “Direct and sequential processing: HIDAM” on page 142
v “Main storage database: MSDB” on page 143
v “Data entry database: DEDB” on page 144
HDAM is efficient for a database that is usually accessed directly but sometimes
sequentially. HDAM uses a randomizing routine to locate its root segments and
then chains dependent segments together according to the pointer options chosen.
The z/OS access methods that HDAM can use are Virtual Storage Access Method
(VSAM) and Overflow Storage Access Method (OSAM).
Although HDAM can place roots and dependents anywhere in the database, it is
better to choose HDAM options that keep roots and dependents close together.
To use HDAM for sequential access of database records by root key, you need to
use a secondary index or a randomizing routine that stores roots in physical key
sequence.
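For illustration only, the role of a randomizing routine can be sketched in Python. The CRC hash and the RAP count of 500 are invented for the sketch; they are not how any real IMS randomizer is required to work:

```python
import zlib

def randomize(root_key, num_raps=500):
    """Toy randomizing routine in the spirit of HDAM root location:
    hash the root key to a root anchor point (RAP) number, so any
    root can be found directly from its key."""
    return zlib.crc32(root_key.encode("ascii")) % num_raps

rap = randomize("EMP00017")   # the same key always maps to the same RAP
```

Because a hash like this destroys key order, sequential retrieval by root key would require either a secondary index or a randomizer that assigns locations in physical key sequence, as the text notes.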
HIDAM is the access method that is most efficient for an approximately equal
amount of direct and sequential processing. The z/OS access methods it can use
are VSAM and OSAM. The specific requirements that HIDAM satisfies are:
v Direct and sequential access of records by their root keys
v Direct access of paths of dependents
v Adding new database records and new segments because the new data goes into
the nearest available space
v Deleting database records and segments because the space created by a deletion
can be used by any new segment
HIDAM can satisfy most processing requirements that involve an even mixture of
direct and sequential processing. However, HIDAM is not very efficient with
sequential access of dependents.
HIDAM uses two databases. The primary database holds the data. An index
database contains entries for all of the root segments in order by their key fields.
For each key entry, the index database contains the address of that root segment in
the primary database.
When you access a root, you supply the key to the root. HIDAM looks the key up
in the index to find the address of the root and then goes to the primary database
to find the root.
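For illustration only, this two-step lookup can be sketched in Python. The keys, addresses, and segment values below are invented; real HIDAM index and primary databases are VSAM or OSAM data sets, not dictionaries:

```python
class HidamIndex:
    """Toy model of the two-database HIDAM lookup: an index database
    maps each root key to the root's address in the primary database."""
    def __init__(self):
        self.index = {}     # root key -> address in the primary database
        self.primary = {}   # address  -> root segment

    def insert(self, key, address, segment):
        self.index[key] = address
        self.primary[address] = segment

    def get_unique(self, key):
        addr = self.index[key]      # step 1: look the key up in the index
        return self.primary[addr]   # step 2: fetch the root from the primary

db = HidamIndex()
db.insert("K100", 0x4F2, "ROOT-K100")
root = db.get_unique("K100")
```

Dependent segments are then located from the root by pointers, not through the index.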
HIDAM chains dependent segments together so that when you access a dependent
segment, HIDAM uses the pointer in one segment to locate the next segment in the
hierarchy.
When you process database records directly, HIDAM locates the root through the
index and then locates the segments from the root. HIDAM locates dependents
through pointers.
If you plan to process database records sequentially, you can specify special
pointers in the DBD for the database so that IMS does not need to go to the index
to locate the next root segment. These pointers chain the roots together. If you do
not chain roots together, HIDAM always goes to the index to locate a root
segment. When you process database records sequentially, HIDAM accesses roots
in key sequence in the index. This only applies to sequential processing; if you
want to access a root segment directly, HIDAM uses the index, and not pointers in
other root segments, to find the root segment you have requested.
MSDB segments are stored as root segments only. Only one type of pointer, the
forward chain pointer, is used. This pointer connects the segment records in the
database.
DEDB characteristics: DEDBs are hierarchic databases that can have as many as
15 hierarchic levels, and as many as 127 segment types. They can contain both
direct and sequential dependent segments. Because the sequential dependent
segments are stored in chronological order as they are committed to the database,
they are useful in journaling applications.
DEDBs support a subset of functions and options that are available for a HIDAM
or HDAM database. For example, a DEDB does not support indexed access
(neither primary index nor secondary index), or logically related segments.
A DEDB can be partitioned into multiple areas, with each area containing a
different collection of database records. The data in a DEDB area is stored in a
VSAM data set. Root segments are stored in the root-addressable part of an area,
with direct dependents stored close to the roots for fast access. Direct dependents
that cannot be stored close to their roots are stored in the independent overflow
portion of the area. Sequential dependents are stored in the sequential dependent
portion at the end of the area so that they can be quickly inserted. Each area data
set can have up to seven copies, making the data easily available to application
programs.
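The partitioning of a DEDB into areas can be pictured as a randomizing routine that assigns each database record to one area. The following Python sketch is conceptual only; real DEDB randomizers are installation-supplied routines, and the hash used here is invented for illustration.

```python
# Conceptual sketch of DEDB area partitioning (invented names, not IMS
# code): a randomizing routine assigns each database record to one of
# several areas, each holding a different collection of records.
NUM_AREAS = 4

def randomize(root_key: str) -> int:
    """Toy stand-in for a DEDB randomizer: map a root key to an area."""
    return sum(root_key.encode()) % NUM_AREAS

areas = {n: [] for n in range(NUM_AREAS)}
for key in ["PAT0001", "PAT0002", "PAT0003", "PAT0004"]:
    areas[randomize(key)].append(key)

# Every record lands in exactly one area.
assert sum(len(v) for v in areas.values()) == 4
```

Because each area is independent, an unavailable area affects only the records assigned to it.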
Sequential access
When you use a sequential access method, the segments in the database are stored
in hierarchic sequence, one after another, with no pointers.
IMS full-function has two sequential access methods. Like the direct access
methods, one has an index and the other does not:
v HSAM processes root segments and dependent segments only sequentially.
v HISAM processes data sequentially but has an index so that you can access
records directly. HISAM is primarily for sequentially processing dependents, and
directly processing database records.
HSAM databases are very simple databases. The data is stored in hierarchic
sequence, one segment after the other, and no pointers or indexes are used.
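Hierarchic sequence is a top-to-bottom, front-to-back ordering of the segments in a record, which can be sketched as a preorder traversal of the segment tree. The structures below are hypothetical stand-ins, not IMS data formats; the segment names are taken from the patient hierarchy used elsewhere in this chapter.

```python
# Sketch (hypothetical structures, not IMS code): flattening one
# database record into hierarchic sequence -- the order in which HSAM
# stores segments, with no pointers between them.
def hierarchic_sequence(segment):
    """Flatten one database record into hierarchic sequence."""
    out = [segment["name"]]
    for child in segment.get("children", []):
        out.extend(hierarchic_sequence(child))
    return out

record = {
    "name": "PATIENT",
    "children": [
        {"name": "ILLNESS", "children": [{"name": "TREATMNT"}]},
        {"name": "BILLING", "children": [{"name": "PAYMENT"}]},
    ],
}
assert hierarchic_sequence(record) == [
    "PATIENT", "ILLNESS", "TREATMNT", "BILLING", "PAYMENT"
]
```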
Situations in which your processing has some of these characteristics, but in
which HISAM is not necessarily a good choice, occur when:
v You must access dependents directly.
v You have a high number of inserts and deletes.
HISAM does not immediately reuse space. When you insert a new segment,
HISAM databases shift data to make room for the new segment, and this leaves
unused space after deletions. HISAM space is reclaimed when you reorganize a
HISAM database.
You commonly use GSAM to send input to and receive output from batch-oriented
BMPs or batch programs. To process a GSAM database, an application program
issues calls similar to the ones it issues to process a full-function database. The
program can read data sequentially from a GSAM database, and it can send output
to a GSAM database.
GSAM is a sequential access method. You can only add records to an output
database sequentially.
These access methods can be particularly helpful when you are converting data
from z/OS files to an IMS database. SHISAM is indexed and SHSAM is not.
SHSAM and SHISAM databases can be accessed by z/OS access methods without
IMS, which is useful during transitions.
Subsections:
v “Using different fields: field-level sensitivity”
v “Resolving processing conflicts in a hierarchy: secondary indexing” on page 148
v “Creating a new hierarchy: logical relationships” on page 152
A program that printed mailing labels for employees' checks each week would not
need all the data in the segment. If the DBA decided to use field-level sensitivity
for that application, the program would receive only the fields it needed in its I/O
area. The I/O area would contain the EMPNAME and ADDRESS fields. Table 31
shows what the program's I/O area would contain.
Table 31. Employee segment with field-level sensitivity
EMPNAME ADDRESS
Another situation in which field-level sensitivity is very useful is when new uses
of the database involve adding new fields of data to an existing segment. In this
situation, you want to avoid re-coding programs that use the current segment. By
using field-level sensitivity, the old programs can see only the fields that were in
the original segment. The new program can see both the old and the new fields.
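The masking effect of field-level sensitivity can be sketched as follows. The helper function and field values are invented for illustration; the actual field selection is defined in the PSB, not in application code.

```python
# Sketch of field-level sensitivity as a mask (invented helper, not an
# IMS interface): the program's I/O area receives only the fields named
# in its sensitivity definition, in the order defined.
def apply_field_sensitivity(segment: dict, sensitive_fields: list) -> dict:
    """Return only the fields the program is defined as sensitive to."""
    return {f: segment[f] for f in sensitive_fields}

employee = {
    "EMPNAME": "J SMITH",
    "ADDRESS": "12 ELM ST",
    "SALARY": "CONFIDENTIAL",   # hidden from the mailing-label program
}
io_area = apply_field_sensitivity(employee, ["EMPNAME", "ADDRESS"])
assert io_area == {"EMPNAME": "J SMITH", "ADDRESS": "12 ELM ST"}
assert "SALARY" not in io_area
```

The mailing-label program sees only EMPNAME and ADDRESS; a field added to the segment later is equally invisible to it, so the old program need not be recoded.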
To understand these conflicts and how secondary indexing can resolve them,
consider the examples of two application programs that process the patient
hierarchy, shown in Figure 52 on page 149. Three segment types in this hierarchy
are:
v PATIENT contains three fields: the patient's identification number, name, and
address. The patient number field is the key field.
v ILLNESS contains two fields: the date of the illness and the name of the illness.
The date of the illness is the key field.
v TREATMNT contains four fields: the date the medication was given; the name of
the medication; the quantity of the medication that was given; and the name of
the doctor who prescribed the medication. The date that the medication was
given is the key field.
Example: Suppose you have an online application program that processes requests
about whether an individual has ever been to the clinic. If you are not sure
whether the person has ever been to the clinic, you will not be able to supply the
identification number for the person. But the key field of the PATIENT segment is
the patient's identification number.
Segment occurrences of a segment type (for example, the segments for each of the
patients) are stored in a database in order of their keys (in this case, by their
patient identification numbers). If you issue a request for a PATIENT segment and
identify the segment you want by the patient's name instead of the patient's
identification number, IMS must search through all of the PATIENT segments to
find the PATIENT segment you have requested. IMS does not know where a
particular PATIENT segment is just by having the patient's name.
Related reading: For more information on HALDB, see IMS Version 10: Database
Administration Guide.
In the previous example, the target segment and the source segment are the same
segment—the PATIENT segment in the patient hierarchy. When the source segment
and the target segment are different segments, secondary indexing solves the
processing conflict.
The PATIENT segment that IMS returns to the application program's I/O area
looks the same as it would if secondary indexing had not been used.
The key feedback area is different. When IMS retrieves a segment without using a
secondary index, IMS places the concatenated key of the retrieved segment in the
key feedback area. The concatenated key contains all the keys of the segment's
parents, in order of their positions in the hierarchy. The key of the root segment is
first, followed by the key of the segment on the second level in the hierarchy, then
the third, and so on—with the key of the retrieved segment last.
But when you retrieve a segment from an indexed database, the contents of the
key feedback area after the request are a little different. Instead of placing the key
of the root segment in the left-most bytes of the key feedback area, DL/I places the
key of the pointer segment there. Note that the term “key of the pointer segment,”
as used here, refers to the key as perceived by the application program—that is,
the key does not include subsequence fields.
When you use the secondary index to retrieve one of the segments in this
hierarchy, the key feedback area contains one of the following:
v If you retrieve segment A, the key feedback area contains the key of the pointer
segment from the secondary index.
v If you retrieve segment B, the key feedback area contains the key of the pointer
segment, concatenated with the key of segment B.
v If you retrieve segment C, the key of the pointer segment, the key of segment B,
and the key of segment C are concatenated in the key feedback area.
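The effect on the key feedback area can be sketched as a key concatenation in which the pointer segment's key takes the place of the root key. The string representation below is an invented simplification; it ignores field lengths and subsequence fields.

```python
# Sketch (invented representation, not IMS internals): the key feedback
# area is the concatenation of keys along the retrieved segment's path.
# With a secondary index, the pointer segment's key replaces the root
# key in the left-most bytes.
def key_feedback(path_keys, pointer_key=None):
    """Concatenate keys along the path; substitute the pointer
    segment's key for the root key when a secondary index is used."""
    keys = list(path_keys)
    if pointer_key is not None:
        keys[0] = pointer_key
    return "".join(keys)

# Normal retrieval of segment C under root A and parent B:
assert key_feedback(["KEYA", "KEYB", "KEYC"]) == "KEYAKEYBKEYC"
# The same retrieval through a secondary index whose pointer segment
# key is "XKEY":
assert key_feedback(["KEYA", "KEYB", "KEYC"], "XKEY") == "XKEYKEYBKEYC"
```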
Although this example creates a secondary index for the root segment, you can
index dependent segments as well. If you do this, you create an inverted structure:
the segment you index becomes the root segment, and its parent becomes a
dependent.
When you retrieve the segments in the secondary index data structure on the right,
IMS returns the following to the key feedback area:
Example: Suppose that the medical clinic wants to print a monthly report of the
patients who have visited the clinic during that month. If the application program
that processes this request does not use a secondary index, the program has to
retrieve each PATIENT segment, and then retrieve the ILLNESS segment for each
PATIENT segment. The program tests the date in the ILLNESS segment to
determine whether the patient has visited the clinic during the current month, and
prints the patient's name if the answer is yes. The program continues retrieving
PATIENT segments and ILLNESS segments until it has retrieved all the PATIENT
segments.
But with a secondary index, you can make the processing of the program simpler.
To do this, you index the PATIENT segment on the date field in the ILLNESS
segment. When you define the PATIENT segment in the DBD, you give IMS the
name of the field on which you are indexing the PATIENT segment, and the name
of the segment that contains the index field. The application program can then
request a PATIENT segment and qualify the request with the date in the ILLNESS
segment. The PATIENT segment that is returned to the application program looks
just as it would if you were not using a secondary index.
In this example, the PATIENT segment is the target segment; it is the segment that
you want to retrieve. The ILLNESS segment is the source segment; it contains the
information that you want to use to qualify your request for PATIENT segments.
The index segment in the secondary database is the pointer segment. It points to
the PATIENT segments.
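The relationship among source, target, and pointer segments can be sketched with Python dictionaries standing in for the index segments. This is a conceptual illustration only; the data values are invented and no IMS interface is shown.

```python
# Sketch (dictionaries standing in for index segments, not IMS code):
# a secondary index on the date field of the source segment (ILLNESS)
# lets a request for the target segment (PATIENT) be qualified by date
# instead of by patient number.
patients = {
    "P001": {"name": "A BAKER", "illness_dates": ["20100301"]},
    "P002": {"name": "C DAVIS", "illness_dates": ["20100315", "20100401"]},
}

# Pointer segments: index field value -> keys of target segments.
secondary_index = {}
for pkey, seg in patients.items():
    for date in seg["illness_dates"]:
        secondary_index.setdefault(date, []).append(pkey)

# Qualify the PATIENT request by ILLNESS date:
hits = [patients[k]["name"] for k in secondary_index.get("20100315", [])]
assert hits == ["C DAVIS"]
```

The monthly-report program can now retrieve only the patients seen in a given month, rather than scanning every PATIENT segment and its ILLNESS segments.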
Figure 55 on page 154 shows the hierarchies that Program A and Program B
require for their processing. Their processing requirements conflict: they both need
to have access to the information that is contained in the TREATMNT segment in
the patient database. This information is:
v The date that a particular medication was given
v The name of the medication
v The quantity of the medication given
v The doctor that prescribed the medication
Figure 55 on page 154 shows the hierarchies for Program A and Program B.
Program A needs the PATIENT segment, the ILLNESS segment, and the
TREATMNT segment. Program B needs the ITEM segment, the VENDOR segment,
the SHIPMENT segment, and the DISBURSE segment. The TREATMNT segment
and the DISBURSE segment contain the same information.
Instead of storing this information in both hierarchies, you can use a logical
relationship. A logical relationship solves the problem by storing a pointer from
where the segment is needed in one hierarchy to where the segment exists in the
other hierarchy. In this case, you can have a pointer in the DISBURSE segment to
the TREATMNT segment in the medical database. When IMS receives a request for
information in a DISBURSE segment in the purchasing database, IMS goes to the
TREATMNT segment in the medical database that is pointed to by the DISBURSE
segment. Figure 56 on page 155 shows the physical hierarchy that Program A
would process and the logical hierarchy that Program B would process. DISBURSE
is a pointer segment to the TREATMNT segment in Program A's hierarchy.
Figure 57 on page 156 shows the hierarchies for each of these application
programs.
Logical relationships can solve this problem by using pointers. Using pointers in
this example would mean that the ITEM segment in the purchasing database
would contain a pointer to the actual data stored in the ITEM segment in the
supplies database. The VENDOR segment, on the other hand, would actually be
stored in the purchasing database. The VENDOR segment in the supplies database
would point to the VENDOR segment that is stored in the purchasing database.
If you did not use logical relationships in this situation, you would:
v Keep the same data in both paths, which means that you would be keeping
redundant data.
v Have the same disadvantages as separate files of data:
– You would need to update multiple segments each time one piece of data
changed.
– You would need more storage.
Subsections:
v “Providing data availability”
v “Keeping a program from accessing the data: data sensitivity”
v “Preventing a program from updating data: processing options” on page 159
The SENSEG statement defines a segment type in the database to which the
application program is sensitive. A separate SENSEG statement must exist for each
segment type. The segments can physically exist in one database or they can be
derived from several physical databases. If an application program is sensitive to a
segment that is below the root segment, it must also be sensitive to all segments in
the path from the root segment to the sensitive segment.
Related Reading: For more information on using field-level sensitivity for data
security and using the SENSEG statement to limit the scope of the PCBs, see IMS
Version 10: Database Administration Guide.
You define each of these levels of sensitivity in the PSB for the application
program. Key sensitivity is defined in the processing option for the segment.
Processing options indicate to IMS exactly what a particular program may or may
not do to the data. You specify a processing option for each hierarchy that the
application program processes; you do this in the DB PCB that represents each
hierarchy. You can specify one processing option for all the segments in the
hierarchy, or you can specify different processing options for different segments
within the hierarchy.
Segment sensitivity
You define what segments an application program is sensitive to in the DB PCB for
the hierarchy that contains those segments.
Example: Suppose that the patient hierarchy shown in Figure 52 on page 149
belongs to the medical database shown in Figure 59. The patient hierarchy is like a
subset of the medical database.
PATIENT is the root segment and the parent of the three segments below it:
ILLNESS, BILLING, and HOUSHOLD. Below ILLNESS is TREATMNT. Below
BILLING is PAYMENT.
Field-level sensitivity
In addition to providing data independence for an application program, field-level
sensitivity can also act as a security mechanism for the data that the program uses.
If a program needs to access some of the fields in a segment, but one or two of the
fields that the program does not need to access are confidential, you can use
field-level sensitivity. If you define that segment for the application program as
containing only the fields that are not confidential, you prevent the program from
accessing the confidential fields. Field-level sensitivity acts as a mask for the fields
to which you want to restrict access.
Key sensitivity
To access a segment, an application program must be sensitive to all segments at a
higher level in the segment's path. In other words, in Figure 60 on page 159, a
program must be sensitive to segment B in order to access segment C.
Related Reading: For a thorough description of the processing options, see IMS
Version 10: System Utilities Reference.
Processing options provide data security because they limit what a program can do
to the hierarchy or to a particular segment. Specifying only the processing options
the program requires ensures that the program cannot update any data it is not
supposed to. For example, if a program does not need to delete segments from a
database, the D option need not be specified.
The following locking protocol allows IMS to make this determination. If the root
segment is updated, the root lock is held at update level until commit. If a
dependent segment is updated, it is locked at update level. When exiting the
database record, the root segment is demoted to read level. When a program enters
the database record and obtains the lock at either read or update level, the lock
manager provides feedback indicating whether or not another program has the
lock at read level. This determines if dependent segments will be locked when they
are accessed. For HISAM, the primary logical record is treated as the root, and the
overflow logical records are treated as dependent segments.
When using block-level or database-level data sharing for online and batch
programs, you can use additional processing options.
Related Reading:
v For a special case involving the HISAM delete byte with the parameter
ERASE=YES, see IMS Version 10: Database Administration Guide.
v For more information on database and block-level data sharing, see IMS Version
10: System Administration Guide.
E option
With the E option, your program has exclusive access to the hierarchy or to the
segment you use it with. The E option is used in conjunction with the options G, I,
D, R, and A. While the E program is running, other programs cannot access that
data, but may be able to access segments that are not in the E program's PCB. No
dynamic enqueue by program isolation is done, but dynamic logging of database
updates will be done.
GO option
When your program retrieves a segment with the GO option, IMS does not lock
the segment. While the read without integrity program reads the segment, it
remains available to other programs. This is because your program can only read
the data (termed read-only); it is not allowed to update the database. No dynamic
enqueue is done by program isolation for calls against this database. Serialization
between the program with PROCOPT=GO and any other update program does not
occur; updates to the same data can occur simultaneously.
If a segment has been deleted and another segment of the same type has been
inserted in the same location, the segment data and all subsequent data that is
returned to the application may be from a different database record.
T option
When you use the T option with GO and the segment you are retrieving contains
an invalid pointer, the response from an application program depends on whether
the program is accessing a full-function or Fast Path database.
For calls to full-function databases, the T option causes DL/I to automatically retry
the operation. You can retrieve the updated segment, but only if the updating
program has reached a commit point or has had its updates backed out since you
last tried to retrieve the segment. If the retry fails, a GG status code is returned to
your program.
For calls to Fast Path DEDBs, option T does not cause DL/I to retry the operation.
A GG status code is returned. The T option must be specified as PROCOPT=GOT
or GOTP.
For example, consider an index database (VSAM KSDS), which has an index
component and a data component. The index component contains only hierarchic
control information, relating to the data component CI where a given keyed record
is located. Think of this as the way that the index component CI maintains the
high key in each data component CI. Inserting a keyed record into a KSDS data
component CI that is already full causes a CI split. That is, some portion of the
records in the existing CI are moved to a new CI, and the index component is
adjusted to point to the new CI.
Example: Suppose the index CI shows the high key in the first data CI as KEY100,
and a split occurs. The split moves keys KEY051 through KEY100 to a new CI; the
index CI now shows the high key in the first data CI as KEY050, and another entry
shows the high key in the new CI as KEY100.
A program that is reading without integrity, and that has already read the “old”
index component CI into its buffer pool (high key KEY100), does not point to the
newly created data CI and does not attempt to access it. More specifically, keyed records
that exist in a KSDS at the time a read-without-integrity program starts might
never be seen. In this example, KEY051 through KEY100 are no longer in the first
data CI even though the “old” copy of the index CI in the buffer pool still
indicates that any existing keys up to KEY100 are in the first data CI.
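The stale-buffer exposure in this example can be sketched with toy structures. These are not VSAM internals; the index and CI layouts below are invented to show only the visibility effect.

```python
# Sketch of the stale-buffer exposure described above (toy structures,
# not VSAM internals): a reader without integrity keeps the old index
# CI in its buffer pool, so keys moved by a CI split become invisible.
# Before the split, one data CI holds all keys up to KEY100.
old_index = [("KEY100", "CI-1")]                 # high key -> data CI

# After the split: KEY051..KEY100 move to a new CI, and the index is
# adjusted -- but only in the current copy, not in the stale buffer.
data_cis = {"CI-1": ["KEY010"], "CI-2": ["KEY060"]}
new_index = [("KEY050", "CI-1"), ("KEY100", "CI-2")]

def find(key, index, cis):
    """Locate a key via an index copy: pick the first CI whose high
    key covers it, then look inside that CI only."""
    for high, ci in index:
        if key <= high:
            return key in cis[ci]
    return False

assert find("KEY060", new_index, data_cis)       # current index sees it
assert not find("KEY060", old_index, data_cis)   # stale buffer does not
```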
Hypothetical cases also exist where the deletion of a dependent segment and the
insertion of that same segment type under a different root, placed in the same
physical location as the deleted segment, can cause simple Get Next processing to
give the appearance of only one root in the database. For example, accessing the
segments under the first root in the database down to a level-06 segment (which
had been deleted from the first root and is now logically under the last root)
would then reflect data from the other root. The next and subsequent Get Next
calls retrieve segments from the other root.
Subsections:
v “Identifying online security requirements”
v “Analyzing screen and message formats” on page 165
v “Gathering requirements for conversational processing” on page 168
v “Identifying output message destinations” on page 171
The security mechanisms that IMS provides are signon, terminal, and password
security.
Related reading: For an explanation of how to establish these types of security, see
IMS Version 10: System Administration Guide.
When a person signs on to IMS, RACF or security exits verify that the person is
authorized to use IMS before access to IMS-controlled resources is allowed. This
signon security is provided by the /SIGN ON command. You can also limit the
transaction codes and commands that individuals are allowed to enter. You do this
by associating an individual's user identification (USERID) with the transaction
codes and commands.
Related reading: For more information on security, see IMS Version 10:
Communications and Connections Guide.
Restriction: If you are using the shared-queues option, static control blocks
representing the resources needed for the security check must be available in the
IMS system where the security check is made. Otherwise, the security check
is bypassed.
Related reading: For more information on shared queues, see IMS Version 10:
IMSplex Administration Guide.
If you use password security with terminal security, you can restrict access to the
program even more. In the paycheck example, using password security and
terminal security means that you can restrict unauthorized individuals within the
payroll department from executing the program.
Related reading: For information on defining IMS editing procedures and on other
design considerations for IMS networks, see IMS Version 10: Communications and
Connections Guide.
The two control blocks that describe input messages to IMS are:
v The device input format (DIF) describes to IMS what the input message is to
look like when it is entered at the terminal.
v The message input descriptor (MID) tells IMS how the application program
expects to receive the input message in its I/O area.
By using the DIF and the MID, IMS can translate the input message from the way
that it is entered at the terminal to the way it should appear in the program's I/O
area.
The two control blocks that describe output messages to IMS are:
v The message output descriptor (MOD) tells IMS what the output message is to
look like in the program's I/O area.
v The device output format (DOF) tells IMS how the message should appear on
the terminal.
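The translation role of these control blocks can be sketched as a mapping from device fields to the program's I/O area layout. The dictionary and field names below are invented for illustration; they are not real MFS definitions.

```python
# Conceptual sketch of MFS input translation (a dictionary standing in
# for a MID control block; not a real MFS definition): the MID arranges
# the entered device fields in the layout the program expects.
mid = ["LAST", "FIRST", "ACCT"]        # field order in the I/O area

def build_io_area(device_fields: dict, descriptor: list) -> str:
    """Arrange entered fields in the order the program expects,
    padding each to a fixed 10-byte field for this sketch."""
    return "".join(device_fields[name].ljust(10) for name in descriptor)

entered = {"FIRST": "PAT", "LAST": "SMITH", "ACCT": "00421"}
io_area = build_io_area(entered, mid)
assert io_area.startswith("SMITH")
assert len(io_area) == 30
```

Output works the same way in reverse: the MOD describes the program's I/O area, and the DOF describes how those fields appear on the device.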
To define the MFS control blocks for an application program, you need to know
how you want the data to appear at the terminal and in the application program's
I/O area for both input and output.
Related reading: For more information about how you define this information to
MFS, see IMS Version 10: Application Programming Guide.
If your application will use basic edit, you should describe how you want the data
to be presented at the terminal, and what it is to look like in the program's I/O
area.
The type of data you are processing is only one consideration when you analyze
how you want the data presented at the terminal. In addition, you should weigh
the needs of the person at the terminal (the human factors aspects in your
application) against the effect of the screen design on the efficiency of the
application program (the performance factors in the application program).
Unfortunately, sometimes a trade-off between human factors and performance
factors exists. A screen design that is easily understood and used by the person at
the terminal may not be the design that gives the application program its best
performance. Your first concern should be to follow whatever screen standards
your installation has established.
A terminal screen that has been designed with human factors in mind is one that
puts the person at the terminal first; it is one that makes it as easy as possible for
that person to interact with IMS. Some of the things you can do to make it easy for
the person at the terminal to understand and respond to your application program
are:
v Display a small amount of data at one time.
v Use a format that is clear and uncluttered.
v Provide clear and simple instructions.
v Display one idea at a time.
v Require short responses from the person at the terminal.
v Provide some means for help and ease of correction for the person at the
terminal.
At the same time, you do not want the way in which a screen is designed to have
a negative effect on the application program's response time, or on the system's
performance. When you design a screen with performance first in mind, you want
to reduce the processing that IMS must do with each message. To do this, the
person at the terminal should be able to send a lot of data to the application
program in one screen so that IMS does not have to process additional messages.
And the program should not require two screens to give the person at the terminal
information that it could give on one screen.
When describing how the program should receive the data from the terminal, you
need to consider the program logic and the type of data you are working with.
Definition: Conversational processing means that the person at the terminal can
communicate with the application program.
During a conversation, the user at the terminal enters a request, receives the
information from IMS, and enters another request. Although it is not apparent to
the user, a conversation can be processed by several application programs or by
one application program.
In the preceding airline example, the first program might save the flight number
and the names of the people traveling, and then pass control to another application
program to reserve seats for those people on that flight. The first program saves
this information in the SPA. If the second application program did not have the
flight number and names of the people traveling, it would not be able to do its
processing.
Designing a conversation
The first part of designing a conversation is to design the flow of the conversation.
If the requests from the person at the terminal are to be processed by only one
application program, you need only to design that program. If the conversation
should be processed by several application programs, you need to decide which
steps of the conversation each program is to process, and what each program is to
do when it has finished processing its step of the conversation.
When a person at a terminal enters a transaction code that has been defined as
conversational, IMS schedules the conversational program (for example, Program
A) associated with that transaction code. When Program A issues its first call to the
message queue, IMS returns the SPA that is defined for that transaction code to
Program A's I/O area. The person at the terminal must enter the transaction code
(and password, if one exists) only on the first input screen; the transaction code
need not be entered during each step of the conversation. IMS treats data in
subsequent screens as a continuation of the conversation started on the first screen.
After the program has retrieved the SPA, Program A can retrieve the input
message from the terminal. After it has processed the message, Program A can
either continue the conversation, or end it.
The SPA is kept with the message. When the truncated data option is on, the size
of the retained SPA is the largest SPA of any transaction in the conversation.
Example: If the conversation starts with TRANA (SPA=100), and the program
switches to a TRANB (SPA=50), the input message for TRANB will contain a SPA
segment of 100 bytes. IMS adjusts the size of the SPA so that TRANB receives only
the first 50 bytes.
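The adjustment in this example can be sketched as a simple truncation. The helper below is invented for illustration and is not an IMS interface.

```python
# Sketch of the SPA-size adjustment described above (invented helper,
# not an IMS interface): the retained SPA is the largest in the
# conversation, but each transaction sees only its own defined length.
def spa_for_tran(retained_spa: bytes, tran_spa_size: int) -> bytes:
    """Present only the first tran_spa_size bytes to the program."""
    return retained_spa[:tran_spa_size]

retained = bytes(100)                  # TRANA's 100-byte SPA is retained
assert len(spa_for_tran(retained, 50)) == 50   # TRANB sees 50 bytes
```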
However, the IMS support that adjusts the size of the SPA does not exist in IMS
Version 5 or earlier systems. If TRANB is to execute on a remote MSC system
The application program might need to respond to the originating terminal before
the person at the originating terminal can send any more messages. This might
occur when a terminal is in response mode or in conversational mode:
v Response mode can apply to a communication line, a terminal, or a transaction.
When response mode is in effect, IMS does not accept any input from the
communication line or terminal until the program has sent a response to the
previous input message. The originating terminal is unusable (for example, the
keyboard locks) until the program has processed the transaction and sent the
reply back to the terminal.
If a response-mode transaction is processed, including Fast Path transactions,
and the application does not insert a response back to the terminal through
either the I/O PCB or alternate I/O PCB, but inserts a message to an alternate
In these processing modes, the program must respond to the originating terminal.
But sometimes the originating terminal is a physical terminal that is made up of
two components—for example, a printer and a display. If the physical terminal is
made up of two components, each component has a different logical terminal
name. To send an output message to the printer part of the terminal, the program
must use a different logical terminal name than the one associated with the input
message; it must send the output message to an alternate destination. A special
kind of alternate PCB is available to programs in these situations; it is called an
alternate response PCB.
Definition: An alternate response PCB lets you send messages when exclusive,
response, or conversational mode is in effect. See the next section for more
information.
In these processing modes, after receiving the message, the application program
must respond by issuing an ISRT call to one of the following:
v The I/O PCB.
v An alternate response PCB.
v An alternate PCB whose destination is another application program, that is, a
program-to-program switch.
v An alternate PCB whose destination is an ISC link. This is allowed only for
front-end switch messages.
Related reading: For more information on front-end switch messages, see IMS
Version 10: Exit Routine Reference.
Express PCB
Consider specifying an alternate PCB as an express PCB. The express designation
relates to whether a message that the application program inserted is actually
transmitted to the destination if the program abnormally terminates or issues a
ROLL, ROLB, or ROLS call. For all PCBs, when a program abnormally terminates or
issues a ROLL, ROLB, or ROLS call, messages that were inserted but not made
available for transmission are cancelled while messages that were made available
for transmission are never cancelled.
Definition: An express PCB is an alternate response PCB that allows your program
to transmit the message to the destination terminal earlier than when you use a
nonexpress PCB.
For a nonexpress PCB, the message is not made available for transmission to its
destination until the program reaches a commit point. The commit point occurs
when the program terminates, issues a CHKP call, or, if the transaction has been
defined with MODE=SNGL, requests the next input message.
For an express PCB, when IMS has the complete message, it makes the message
available for transmission to the destination. In addition to occurring at a commit
point, it also occurs when the application program issues a PURG call using that
PCB or when it requests the next input message.
You should provide the answers to the following questions to the data
communications administrator to help in meeting your application's message
processing requirements:
v Will the program be required to respond to the terminal before the terminal can
enter another message?
v Will the program be responding only to the terminal that sends input messages?
v If the program needs to send messages to other terminals or programs as well, is
there only one alternate destination?
v What are the other terminals to which the program must send output messages?
v Should the program be able to send an output message before it terminates
abnormally?
The amount and type of testing you do depends on the individual program you
are testing. Though no strict rules for testing are available, the guidelines offered in
this section might be helpful.
Subsections:
v “What you need to test an IMS program”
v “Testing DL/I call sequences (DFSDDLT0) before testing your IMS program”
v “Using BTS II to test your IMS program” on page 176
v “Tracing DL/I calls with image capture for your IMS program” on page 176
v “Requests for monitoring and debugging your IMS program” on page 179
v “What to do when your IMS program terminates abnormally” on page 193
The purpose of testing the program is to make sure that the program can correctly
handle all the situations that it might encounter. To thoroughly test the program,
try to test as many of the paths that the program can take as possible.
Recommendations:
v Test each path in the program by using input data that forces the program to
execute each of its branches.
v Be sure that your program tests its error routines. Again, use input data that will
force the program to test as many error conditions as possible.
v Test the editing routines your program uses. Give the program as many different
data combinations as possible to make sure it correctly edits its input data.
An advantage of using DFSDDLT0 is that you can test the DL/I call sequence you
will use before coding your program. Testing the call sequence first makes
debugging easier: by the time you test the program itself, you know that the DL/I
calls are correct, so if the program does not execute correctly, the calls are not
part of the problem.
For each DL/I call that you want to test, you give DFSDDLT0 the call and any
SSAs that you are using with the call. DFSDDLT0 then executes and gives you the
results of the call. After each call, DFSDDLT0 shows you the contents of the DB
PCB mask and the I/O area. This means that for each call, DFSDDLT0 checks the
access path you have defined for the segment, and the effect of the call. DFSDDLT0
is helpful in debugging because it can display IMS application control blocks.
To indicate to DFSDDLT0 the call you want executed, you use four types of control
statements:
Status statements establish print options for DFSDDLT0's output and select the
DB PCB to use for the calls you specify.
Comment statements let you choose whether you want to supply comments.
Call statements indicate to DFSDDLT0 the call you want to execute, any SSAs
you want used with the call, and how many times you want the call executed.
Compare statements tell DFSDDLT0 that you want it to compare its results
after executing the call with the results you supply.
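A schematic example of a test deck follows (statement codes S, U, L, and E are shown, but column positions are simplified, the inline annotations are not part of the deck, and the segment and field names CUSTOMER and CUSTNO are hypothetical; see the DFSDDLT0 documentation for exact statement formats):

```
S11                                         status statement: print options, PCB selection
U  VERIFY THAT THE CUSTOMER EXISTS          comment statement
L        GU    CUSTOMER (CUSTNO  =1234567)  call statement with one SSA
E        STATUS                             compare statement: expected results
```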
In addition to testing call sequences to see if they work, you can also use
DFSDDLT0 to check the performance of call sequences.
Related Reading: For more details about using DFSDDLT0, and how to check call
sequence performance, see IMS Version 10: Application Programming Guide.
Restriction: BTS II does not work if you are using a CCTL or running under
DBCTL.
Related Reading: For information about how to use BTS II, refer to BTS Program
Reference/Operations Manual.
Tracing DL/I calls with image capture for your IMS program
The DL/I image capture program (DFSDLTR0) is a trace program that can trace
and record DL/I calls issued by all types of IMS application programs.
Restriction: The image capture program does not trace calls to Fast Path databases.
You can run the image capture program in a DB/DC or a batch environment to:
Test your program
176 Application Programming Planning Guide
If the image capture program detects an error in a call it traces, it reproduces as
much of the call as possible, although it cannot document where the error
occurred, and cannot always reproduce the full SSA.
Produce input for DFSDDLT0
You can use the output produced by the image capture program as input to
DFSDDLT0. The image capture program produces status statements, comment
statements, call statements, and compare statements for DFSDDLT0.
Debug your program
When your program terminates abnormally, you can rerun the program using
the image capture program, which can then reproduce and document the
conditions that led to the program failure. You can use the information in the
report produced by the image capture program to find and fix the problem.
Subsections:
v “Using image capture with DFSDDLT0”
v “Restrictions on using image capture output” on page 178
v “Running image capture online” on page 178
v “Running image capture as a batch job” on page 178
v “Retrieving image capture data from the log data set” on page 179
If you trace a BMP or an MPP and you want to use the trace results with
DFSDDLT0, the BMP or MPP must have exclusive write access to the databases it
processes. If the application program does not have exclusive access, the results of
DFSDDLT0 may differ from the results of the application program. When you trace
a BMP that accesses GSAM databases, you must include an //IMSERR DD
statement to get a formatted dump of the GSAM control blocks.
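For example (a sketch only; your installation's conventions determine the actual routing), the DD statement might simply send the formatted dump to SYSOUT:

```
//IMSERR   DD  SYSOUT=*
```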
/TRACE SET ON|OFF PSB psbname COMP|NOCOMP
SET ON|OFF
Turns the trace on or off.
PSB psbname
Specifies the name of the PSB you want to trace. You can trace more than one
PSB at the same time by issuing a separate TRACE command for each PSB.
COMP|NOCOMP
Specifies whether you want the image capture program to produce data and
PCB compare statements to be used as input to DFSDDLT0.
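For example, assuming a hypothetical PSB named ACCTPSB, the following command turns the trace on and requests compare statements:

```
/TRACE SET ON PSB ACCTPSB COMP
```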
Examples: You can use the following examples of DFSERA10 input control
statements in the SYSIN data set to retrieve the image capture program data from
the log data set:
v Print all image capture program records:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,FLDTYP=X
v Print selected image capture program records by PSB name:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT OFFSET=25,FLDTYP=C,FLDLEN=8,
VALUE=psbname,COND=E
v Format image capture program records (in a format that can be used as input
to DFSDDLT0):
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT EXITR=DFSERA50,OFFSET=25,FLDTYP=C,
VALUE=psbname,FLDLEN=8,DDNAME=OUTDDN,COND=E
The enhanced OSAM and VSAM STAT calls provide additional information for
monitoring performance and fine tuning of the system for specific needs.
When the enhanced STAT call is issued, the following information is returned:
v OSAM statistics for each defined subpool
v VSAM statistics that also include hiperspace statistics
v OSAM and VSAM count fields that have been expanded to 10 digits
Subsections:
v “Retrieving database statistics: the STAT call”
v “Writing information to the system log: the LOG request” on page 193
The STAT call is helpful in debugging a program because it retrieves IMS database
statistics. It is also helpful in monitoring and fine tuning for performance. The STAT
call retrieves OSAM database buffer pool statistics and VSAM database buffer
subpool statistics.
Related Reading: For information on coding the STAT call, see the appropriate
application programming information.
DBASF: This function value provides the full OSAM database buffer pool
statistics in a formatted form. The application program I/O area must be at least
BLOCK REQ
Number of block requests received.
FOUND IN POOL
Number of times the block requested was found in the buffer pool.
READS ISSUED
Number of OSAM reads issued.
BUFF ALTS
Number of buffers altered in the pool.
OSAM WRITES
Number of OSAM writes issued.
BLOCKS WRITTEN
Number of blocks written from the pool.
NEW BLOCKS
Number of new blocks created in the pool.
CHAIN WRITES
Number of chained OSAM writes issued.
WRITTEN AS NEW
Number of blocks created.
LOGICAL CYL FORMAT
Number of format logical cylinder requests issued.
PURGE REQ
Number of purge user requests.
RELEASE REQ
Number of release ownership requests.
ERRORS
Number of write error buffers currently in the pool or the largest number
of errors in the pool during this execution.
DBASU: This function value provides the full OSAM database buffer pool
statistics in an unformatted form. The application program I/O area must be at
least 72 bytes. Eighteen fullwords of binary data are provided:
Word Contents
1 A count of the number of words that follow.
2-18 The statistic values in the same sequence as presented by the DBASF
function value.
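As an illustration only (not IMS code), a program could view the 72-byte unformatted area as 18 big-endian fullwords, with word 1 carrying the count of statistics words that follow. In Python terms:

```python
import struct

# Illustrative only: parse a 72-byte DBASU-style I/O area laid out as 18
# big-endian fullwords; word 1 is the count of words that follow (17),
# words 2-18 are the statistic values in DBASF order.
def parse_dbasu(area: bytes):
    if len(area) < 72:
        raise ValueError("I/O area must be at least 72 bytes")
    words = struct.unpack(">18i", area[:72])
    count, values = words[0], words[1:]
    if count != len(values):
        raise ValueError("unexpected count word")
    return values

# Fabricated sample area for demonstration (not real statistics):
sample = struct.pack(">18i", 17, *range(17))
stats = parse_dbasu(sample)
print(len(stats))  # 17
```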
The first time the call is issued, the statistics for the subpool with the smallest
buffer size are provided. For each succeeding call (without intervening use of the
PCB), the statistics for the subpool with the next-larger buffer size are provided.
If index subpools exist within the local shared resource pool, the index subpool
statistics always follow statistics of the data subpools. Index subpool statistics are
also retrieved in ascending order based on the buffer size.
The final call for the series returns a GA status code in the PCB. The statistics
returned are totals for all subpools in all local shared resource pools. If no VSAM
buffer subpools are present, a GE status code is returned to the program.
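As a behavioral sketch only (not IMS code), the retrieval order and status codes described above can be modeled like this: successive calls walk the subpools in ascending buffer size, data subpools before index subpools, with a final totals call that returns GA, or a single GE result when no VSAM buffer subpools exist.

```python
# Illustrative model only: the order in which repeated STAT calls return
# VSAM subpool statistics, and the GA/GE status codes described above.
# A blank status stands in for a normal (successful) call.
def stat_sequence(data_sizes, index_sizes):
    """Return (status, label) pairs for successive STAT calls against one
    local shared resource pool with the given data/index buffer sizes."""
    if not data_sizes and not index_sizes:
        return [("GE", None)]               # no VSAM buffer subpools
    calls = [("  ", f"data-{s}") for s in sorted(data_sizes)]
    calls += [("  ", f"index-{s}") for s in sorted(index_sizes)]
    calls.append(("GA", "totals"))          # final call: totals, GA status
    return calls

seq = stat_sequence([4096, 2048], [1024])
print([label for _, label in seq])
# ['data-2048', 'data-4096', 'index-1024', 'totals']
```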
VBASF: This function value provides the full VSAM database subpool statistics in
a formatted form. The application program I/O area must be at least 360 bytes.
Three 120-byte records (formatted for printing) are provided as two heading lines
and one line of statistics. Each successive call returns the statistics for the next data
subpool. If present, statistics for index subpools follow the statistics for data
subpools.
VBASU: This function value provides the full VSAM database subpool statistics
in an unformatted form. The application program I/O area must be at least 72
bytes. Eighteen fullwords of binary data are provided for each subpool:
Word Contents
1 A count of the number of words that follow.
VBASS: This function value provides a summary of the VSAM database subpool
statistics in a formatted form. The application program I/O area must be at least
180 bytes. Three 60-byte records (formatted for printing) are provided.
The final call for the series returns a GA status code in the PCB. The statistics
returned are the totals for all subpools. If no OSAM buffer subpools are present, a
GE status code is returned.
DBESF: This function value provides the full OSAM subpool statistics in a
formatted form. The application program I/O area must be at least 600 characters.
For OSAM subpools, five 120-byte records (formatted for printing) are provided.
Three of the records are heading lines and two of the records are lines of subpool
statistics.
FIXOPT
Fixed options for this subpool. Y or N indicates whether the data buffer
prefix and data buffers are fixed.
POOLID
ID of the local shared resource pool.
BSIZ Size of the buffers in this subpool. Set to ALL for total line. For the
summary totals (BSIZ=ALL), the FIXOPT and POOLID fields are replaced
by an OSM= field. This field is the total size of the OSAM subpool.
NBUFS
Number of buffers in this subpool. This is the total number of buffers in
the pool for the ALL line.
LOCATE-REQ
Number of LOCATE-type calls.
NEW-BLOCKS
Number of requests to create new blocks.
ALTER-REQ
Number of buffer alter calls. This count includes NEW BLOCK and
BYTALT calls.
PURGE-REQ
Number of PURGE calls.
DBESS: This function value provides a summary of the OSAM database buffer
pool statistics in a formatted form. The application program I/O area must be at
least 360 bytes. Six 60-byte records (formatted for printing) are provided. This STAT
call is a restructured DBASF STAT call that allows for 10-digit count fields. In
addition, the subpool header blocks give a total of the number of OSAM buffers in
the pool.
NSUBPL
Number of subpools defined for the OSAM buffer pool.
NBUFS
Total number of buffers defined in the OSAM buffer pool.
BLKREQ
Number of block requests received.
INPOOL
Number of times the block requested is found in the buffer pool.
READS
Number of OSAM reads issued.
BUFALT
Number of buffers altered in the pool.
WRITES
Number of OSAM writes issued.
BLKWRT
Number of blocks written from the pool.
NEWBLK
Number of blocks created in the pool.
DBESO: This function value provides the full OSAM database subpool statistics
in a formatted form for online statistics that are returned as a result of a /DIS POOL
command. This call can also be a user-application STAT call. When issued as an
application DL/I STAT call, the program I/O area must be at least 360 bytes. Six
60-byte records (formatted for printing) are provided.
Because there might be several buffer subpools for VSAM databases, the enhanced
STAT call repeatedly requests these statistics. If more than one VSAM local shared
resource pool is defined, statistics are retrieved for all VSAM local shared resource
pools in the order in which they are defined. For each local shared resource pool,
statistics are retrieved for each subpool according to buffer size.
The first time the call is issued, the statistics for the subpool with the smallest
buffer size are provided. For each succeeding call (without intervening use of the
PCB), the statistics for the subpool with the next-larger buffer size are provided.
If index subpools exist within the local shared resource pool, the index subpool
statistics always follow the data subpool statistics. Index subpool statistics are also
retrieved in ascending order based on the buffer size.
The final call for the series returns a GA status code in the PCB. The statistics
returned are totals for all subpools in all local shared resource pools. If no VSAM
buffer subpools are present, a GE status code is returned to the program.
VBESF: This function value provides the full VSAM database subpool statistics in
a formatted form. The application program I/O area must be at least 600 bytes. For
each shared resource pool ID, the first call returns five 120-byte records (formatted
for printing). Three of the records are heading lines and two of the records are
lines of subpool statistics.
FIXOPT
Fixed options for this subpool. Y or N indicates whether the data buffer
prefix, the index buffers, and the data buffers are fixed.
POOLID
ID of the local shared resource pool.
BSIZ Size of the buffers in this subpool. Set to ALL for total line. For the
summary totals (BSIZ=ALL), the FIXOPT and POOLID fields are replaced
VBESS: This function value provides a summary of the VSAM database subpool
statistics in a formatted form. The application program I/O area must be at least
360 bytes. For each shared resource pool ID, the first call provides six 60-byte
records (formatted for printing).
POOLID
ID of the local shared resource pool.
BSIZE Size of the buffers in this VSAM subpool.
TYPE Indicates a data (D) subpool or an index (I) subpool.
FX Fixed options for this subpool. Y or N indicates whether the data buffer
prefix, the index buffers, and the data buffers are fixed.
RRBA
Number of retrieve-by-RBA calls received by the buffer handler.
RKEY Number of retrieve-by-key calls received by the buffer handler.
BFALT
Number of logical records altered.
NREC Number of new VSAM logical records created.
SYNC PT
Number of sync point requests.
Related Reading: For information about coding the LOG request, see the
appropriate application programming reference information.
The amount and type of testing you do depends on the individual program.
Though strict rules for testing are not available, the guidelines provided in this
section might be helpful.
Subsections:
v “What you need to test a CICS program”
v “Testing your CICS program” on page 198
v “Requests for monitoring and debugging your CICS program” on page 202
v “What to do when your CICS program terminates abnormally” on page 202
The purpose of testing the program is to make sure that the program can correctly
handle all the situations that it might encounter.
To thoroughly test the program, try to test as many of the paths that the program
can take as possible. For example:
v Test each path in the program by using input data that forces the program to
execute each of its branches.
v Be sure that your program tests its error routines. Again, use input data that will
force the program to test as many error conditions as possible.
v Test the editing routines your program uses. Give the program as many different
data combinations as possible to make sure it correctly edits its input data.
Subsections:
v “Using the Execution Diagnostic Facility (command-level only)”
v “Using CICS dump control”
v “Using CICS trace control” on page 199
v “Tracing DL/I calls with image capture” on page 199
You can run EDF on the same terminal as the program you are testing.
Related Reading: For more information about using EDF, see “Execution
(Command-Level) Diagnostic Facility” in CICS Application Programming Reference.
Subsections:
v “Using image capture with DFSDDLT0”
v “Running image capture online” on page 200
v “Running image capture as a batch job” on page 200
v “Example of DLITRACE” on page 201
v “Special JCL requirements” on page 201
v “Notes on using image capture” on page 201
v “Retrieving image capture data from the log data set” on page 201
If you trace a BMP and you want to use the trace results with DFSDDLT0, the
BMP must have exclusive write access to the databases it processes. If the
application program does not have exclusive access, the results of DFSDDLT0 may
differ from the results of the application program.
/TRACE SET ON|OFF PSB psbname COMP|NOCOMP
SET ON|OFF
Turns the trace on or off.
PSB psbname
Specifies the name of the PSB you want to trace. You can trace more than one
PSB at the same time by issuing a separate TRACE command for each PSB.
COMP|NOCOMP
Specifies whether you want the image capture program to produce data and
PCB compare statements to be used with DFSDDLT0.
Example of DLITRACE
This example shows a DLITRACE control statement that:
v Traces the first 14 DL/I calls or commands that the program issues
v Sends the output to the IMS log data set
v Produces data and PCB comparison statements for DFSDDLT0
//DFSVSAMP DD *
DLITRACE LOG=YES,STOP=14,COMP
/*
Examples: You can use the following examples of DFSERA10 input control
statements in the SYSIN data set to retrieve the image capture program data from
the log data set:
v Print all image capture program records:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,FLDTYP=X
v Print selected image capture program records by PSB name:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT OFFSET=25,FLDTYP=C,FLDLEN=8,
VALUE=psbname,COND=E
v Format image capture program records (in a format that can be used as input
to DFSDDLT0):
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT EXITR=DFSERA50,OFFSET=25,FLDTYP=C,
VALUE=psbname,FLDLEN=8,DDNAME=OUTDDN,COND=E
Subsections:
v “Tracing DL/I calls with image capture to test your ODBA program” on page
206
v “Using image capture with DFSDDLT0 to test your ODBA program” on page
206
v “Running image capture online” on page 207
v “Retrieving image capture data from the log data set” on page 207
v “Requests for monitoring and debugging your ODBA program” on page 208
v “What to do when your ODBA program terminates abnormally” on page 208
Be aware of your established test procedures before you start to test your program.
To begin testing, you need the following items:
v A test JCL statement
v A test database
Always begin testing programs against test-only databases. Do not test programs
against production databases. If the program is faulty it might damage or delete
critical data.
v Test input data
The input data that you use need not be current, but it should be valid data. You
cannot be sure that your output data is valid unless you use valid input data.
The purpose of testing the program is to make sure that the program can correctly
handle all the situations that it might encounter. To thoroughly test the program,
try to test as many of the paths that the program can take as possible. For
example:
v Test each path in the program by using input data that forces the program to
execute each of its branches.
v Be sure that your program tests its error routines. Again, use input data that will
force the program to test as many error conditions as possible.
v Test the editing routines your program uses. Give the program as many different
data combinations as possible to make sure it correctly edits its input data.
Table 33 lists the tools you can use to test online (IMS DB), batch, and BMP
programs.
Table 33. Tools you can use for testing your program
Tool                          Online (IMS DB)   Batch   BMP
DFSDDLT0                      No                Yes¹    Yes
DL/I image capture program    Yes               Yes     Yes
Tracing DL/I calls with image capture to test your ODBA program
The DL/I image capture program (DFSDLTR0) is a trace program that can trace
and record DL/I calls issued by batch, BMP, and online (IMS DB environment)
programs. You can produce calls for use as input to DFSDDLT0. You can use the
image capture program to:
v Test your program
If the image capture program detects an error in a call it traces, it reproduces as
much of the call as possible, although it cannot document where the error
occurred, and cannot always reproduce the full SSA.
v Produce input for DFSDDLT0 (DL/I test program)
You can use the output produced by the image capture program as input to
DFSDDLT0. The image capture program produces status statements, comment
statements, call statements, and compare statements for DFSDDLT0. For
example, you can use the image capture program with an ODBA application to
produce calls for DFSDDLT0.
v Debug your program
When your program terminates abnormally, you can rerun the program using
the image capture program. The image capture program can then reproduce and
document the conditions that led to the program failure. You can use the
information in the report produced by the image capture program to find and
fix the problem.
If you trace a BMP and you want to use the trace results with DFSDDLT0, the
BMP must have exclusive write access to the databases it processes. If the
application program does not have exclusive access, the results of DFSDDLT0 may
differ from the results of the application program.
/TRACE SET ON|OFF PSB psbname COMP|NOCOMP
SET ON|OFF
Turns the trace on or off.
PSB psbname
Specifies the name of the PSB you want to trace. You can trace more than
one PSB at the same time by issuing a separate TRACE command for each
PSB.
COMP|NOCOMP
Specifies whether you want the image capture program to produce data
and PCB compare statements to be used with DFSDDLT0.
Examples: You can use the following examples of DFSERA10 input control
statements in the SYSIN data set to retrieve the image capture program data from
the log data set:
v Print all image capture program records:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,FLDTYP=X
v Print selected image capture program records by PSB name:
Column 1 Column 10
OPTION PRINT OFFSET=5,VALUE=5F,COND=M
OPTION PRINT OFFSET=25,FLDTYP=C,FLDLEN=8,
VALUE=psbname,COND=E
v Format image capture program records (in a format that can be used as input to
DFSDDLT0):
ODBA does not issue any return or reason codes. Most non-terminating errors for
ODBA application programs are communicated in AIB return and reason codes.
First, you can record as much information as possible about the circumstances
under which the program terminated abnormally; and second, you can check for
certain initialization and execution errors.
Subsections:
v “Documentation for other programmers”
v “Documentation for users” on page 212
Many places establish standards for program documentation; make sure you are
aware of your established standards.
The reason you record this information is so that people who maintain your
program know why you chose certain commands, options, call structures, and
command codes. For example, if the DBA were considering reorganizing the
database in some way, information about why your program accesses the data the
way it does would be helpful.
Again, the amount of information you include and the form in which you
document it depend upon you and your application. These documentation
guidelines are provided as suggestions.
At a minimum, include the following information for those who use your program:
v What one needs in order to use the program, for example:
– For online programs, is there a password?
– For batch programs, what is the required JCL?
v The input that one needs to supply to the program, for example:
– For an MPP, what is the MOD name that must be entered to initially format
the screen?
– For a CICS online program, what is the CICS transaction code that must be
entered? What terminal input is expected?
– For a batch program, is the input in the form of a tape, or a disk data set? Is
the input originally output from a previous job?
v The content and form of the program's output, for example:
– If it is a report, show the format or include a sample listing.
– For an online application program, show what the screen will look like.
v For online programs, if decisions must be made, explain what is involved in
each decision. Present the choices and the defaults.
If the people who will be using your program are unfamiliar with terminals, they
will also need a user's guide. This guide should give explicit instructions on how
to use the terminal and what a user can expect from the program. The guide
should discuss what to do if the task or program abends, whether the program
should be restarted, and whether the database requires recovery.
Although you may not be responsible for providing this kind of information, you
should provide any information that is unique to your application to whomever is
responsible for this kind of information.
This section describes the design of the IMS Spool API and how an application
program uses it.
Subsections:
v “IMS Spool API design”
v “Sending data to the JES spool data sets” on page 214
v “IMS Spool API performance considerations” on page 214
v “IMS Spool API application coding considerations” on page 215
Related Reading: For more information about the IMS Spool API, see:
v IMS Version 10: Application Programming Guide
v IMS Version 10: System Administration Guide
The IMS Spool API support uses existing DL/I calls to provide data set allocation
information and to place data into the print data set. These calls are:
v The CHNG call. This call is expanded so that print data set characteristics can be
specified for the print data set that will be allocated. The process uses the
alternate PCB as the interface block associated with the print data set.
v The ISRT call. This call is expanded to perform dynamic allocation of the print
data set on the first insert, and to write data to the data set. The data set is
considered in-doubt until the unit of work (UOW) terminates. If possible, the
sync point process deletes all in-doubt data sets for abending units of work and
closes and deallocates data sets for normally terminating units of work.
v The SETO (Set Options) call. This call is introduced by this support. Use this
call to create dynamic output text units to be used with a subsequent CHNG call.
If the same output descriptor is used for many print data sets, you can reduce
the overhead by using the SETO call to prebuild the text units necessary for the
dynamic output process.
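As a behavioral sketch only (not IMS code), the life of a print data set under this design can be modeled as: CHNG establishes the print characteristics, the first ISRT dynamically allocates the data set and leaves it in-doubt, and sync point either deletes it (abending unit of work) or closes and deallocates it (normal termination).

```python
# Illustrative model only: lifecycle of an IMS Spool API print data set
# as described above (CHNG, first ISRT allocates, sync point resolves).
class PrintDataSet:
    def __init__(self):
        self.state = "unallocated"

    def chng(self, options):
        # Options list carries the print data set characteristics
        # (shown schematically; not a real options string).
        self.options = options

    def isrt(self, record):
        if self.state == "unallocated":
            self.state = "in-doubt"      # first insert: dynamic allocation
        # subsequent inserts write data to the print data set

    def sync_point(self, abended=False):
        if self.state == "in-doubt":
            # Abending UOW: in-doubt data sets are deleted if possible;
            # normal termination: closed and deallocated for printing.
            self.state = "deleted" if abended else "deallocated"

ds = PrintDataSet()
ds.chng("print-options")
ds.isrt("line 1")
ds.sync_point(abended=False)
print(ds.state)  # deallocated
```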
The options list parameter on the CHNG and SETO calls contains the data set printer
processing options. These options direct the output to the appropriate IMS Spool
API data set. These options are validated for the DL/I call by the MVS Scheduler
JCL Facility (SJF). If the options are invalid, error codes are returned to the
application. To receive the error information, the application program specifies a
feedback area in the CHNG or SETO DL/I call parameter list. If the feedback area is
present, information about the options list error is returned directly to the
application.
Another initiator consideration is the use of the JES job journal for the dependent
region. If the job step has a journal associated with it, the information for z/OS
checkpoint restart is recorded in the journal. Because IMS dependent regions
cannot use z/OS checkpoint restart, specify JOURNAL=NO for the JES2 initiator
procedure and the JES3 class associated with the dependent regions execution
class. You can also specify the JOURNAL= on the JES3 //*MAIN statement for
dependent regions executing as jobs.
Be aware that no testing has been done to determine the amount of overhead that
might be saved by using prebuilt text units.
If the application's I/O area can easily be placed in 24-bit storage, the need to
move the I/O area can be avoided and possible performance improvements
achieved.
Be aware that no testing has been done to determine the amount of performance
improvement possible.
Since a record can be written by BSAM directly from the application's I/O area, the
area must be in the format expected by BSAM. The format must contain:
v Variable length records
v A Block Descriptor Word (BDW)
v A Record Descriptor Word (RDW)
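As an illustration only (the authoritative layout is in the BSAM documentation cited below), a variable-length record carries a 4-byte RDW whose first halfword is the record length including the RDW itself, followed by two zero bytes; a block is prefixed by a BDW of the same shape covering the whole block. A Python sketch of building such an area:

```python
import struct

# Illustrative only: build one variable-length record (RDW) and wrap it
# in a block (BDW), as described above. Each descriptor word is a
# big-endian halfword length that includes the 4-byte descriptor itself,
# followed by a zero halfword.
def make_record(data: bytes) -> bytes:
    rdw = struct.pack(">HH", len(data) + 4, 0)
    return rdw + data

def make_block(records) -> bytes:
    body = b"".join(records)
    bdw = struct.pack(">HH", len(body) + 4, 0)
    return bdw + body

rec = make_record(b"HELLO")
blk = make_block([rec])
print(len(rec), len(blk))  # 9 13
```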
Related Reading: For more information on the formats of the BDW and RDW, see
MVS/XA Data Administration Guide. The format of the I/O area is described in
more familiar IMS terms in IMS Version 10: Application Programming Guide.
Message integrity options
The IMS Spool API provides support for message integrity. This is necessary
because IMS cannot properly control the disposition of a print data set when:
v IMS abnormal termination does not execute because of a hardware or software
problem.
v A dynamic deallocation error exists for a print data set.
v Logic errors are in the IMS code.
In these conditions, IMS might not be able to stop the JES subsystem from printing
partial print data sets. Also, the JES subsystems do not support a two-phase sync
point.
Print disposition
The most common applications using Advanced Function Printing (AFP) are TSO
users and batch jobs. If any of these applications are creating print data sets when
a failure occurs, the partial print data sets will probably print and be handled in a
manual fashion. Many IMS applications creating print data sets can manage partial
print data sets in the same manner. For those applications that need more control
over the automatic printing by JES of partial print data sets, the IMS Spool API
provides the following integrity options. However, these options alone might not
guarantee the proper disposition of partial print data sets. These options are the b
variable following the IAFP keyword used with the CHNG call.
b=0
Indicates no data set protection
This is probably the most common option. When this option is selected, IMS
does not do any special handling during allocation or deallocation of the print
data set. If this option is selected, and any condition occurs that prevents IMS
from properly disposing of the print data set, the partial data set probably prints
and must be controlled manually.
b=1
Indicates SYSOUT HOLD protection
This option ensures that a partial print data set is not released for printing
without a JES operator taking direct action. When the data set is allocated, the
allocation request indicates to JES that this print data set be placed in SYSOUT
HOLD status. The SYSOUT HOLD status is maintained for this data set if IMS
cannot deallocate the data set for any reason. Because the print data set is in
HOLD status, a JES operator must identify the partial data set and issue the
JES commands to delete or print this data set.
If the print data set cannot be deleted or printed:
- Message DFS0012I is issued when a print data set cannot be deallocated.
- Message DFS0014I is issued during IMS emergency restart when an in-doubt
  print data set is found. The message provides information to help the JES
  operator find the proper print data set and effect the proper print
  disposition. Some of this information includes:
  - JOBNAME
  - DSNAME
  - DDNAME
  - A recommendation on what IMS believes to be the proper disposition for
    the data set (for example, printing or deleting).
Message options
The third option on the IAFP keyword controls the informational messages issued
by the IMS Spool API support. These messages inform the JES operator of in-doubt
data sets that need action.
c=0
Indicates that no DFS0012I or DFS0014I messages are issued for the print data
set. You can specify c=0 only if b=0 is specified.
c=m
Indicates that DFS0012I and DFS0014I messages are issued if necessary. You
can specify c=m if b=1 or b=2 is specified; in those cases, c=m is the default.
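The interaction between the b (data set protection) and c (message) options can be summarized as a small validation routine. The following is an illustrative sketch of the rules stated above, not IMS code; the function name and string representation of the options are invented for this example.

```python
# Illustrative sketch (not part of the IMS API): the valid combinations
# of the IAFP b (protection) and c (message) options described above.
def validate_iafp(b: str, c: str) -> str:
    """Return the accepted message option for a given protection option.

    b -- data set protection: '0' (none), '1' (SYSOUT HOLD), or '2'
    c -- message option: '0' (suppress DFS0012I/DFS0014I) or 'm' (issue them)
    """
    if b not in ('0', '1', '2'):
        raise ValueError("b must be '0', '1', or '2'")
    if c == 'm':
        # c=m is allowed with any protection option and is the
        # default when b=1 or b=2 is specified.
        return 'm'
    if c == '0':
        # c=0 may be specified only together with b=0.
        if b != '0':
            raise ValueError("c=0 is valid only when b=0 is specified")
        return '0'
    raise ValueError("c must be '0' or 'm'")
```

For example, `validate_iafp('1', '0')` raises an error because suppressing the operator messages would leave a held partial data set with no notification to the JES operator.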
becomes the destination of messages sent using this alternate PCB. When ISRT calls
are issued against the PCB, the data is sent to the LTERM or transaction.
However, the destination name field has no meaning to the IMS Spool API
function unless b=2 is specified following the IAFP keyword.
If any option other than b=2 is selected, IMS does not use the name.
The LTERM name appears in error messages and log records. Use a name that
identifies the routine creating the print data set. This information can aid in
debugging application program errors.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte character set (DBCS) information,
contact the IBM Intellectual Property Department in your country or send
inquiries, in writing, to:
Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003
U.S.A.
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information is for planning purposes only. The information herein is subject to
change before the products described become available.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work, must
include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web in the topic “Copyright
and trademark information” at http://www.ibm.com/legal/copytrade.shtml.
DFSAPPC message switch 96
DFSCONE0 (Conversational Abnormal Termination exit routine) 170
DFSDDLT0 (DL/I test program) 175
DFSDLTR0 (DL/I image capture). See DL/I image capture (DFSDLTR0) programs
DFSERA10 utility 201
DFSERA50 exit routine 201
DFSMDA macro 119
DIB (DLI interface block) 40
dictionary, data 50
DIF (device input format), control block 166
differences between CICS and command-level batch or BMP programs 19
direct access methods
  characteristics 140
  HDAM 141
  HIDAM 142
  PHDAM 139, 141
  PHIDAM 139, 142
  types of 140
direct dependents 102
Distributed Sync Point 68
DL/I
  databases, read and update 19
DL/I access methods
  considerations in choosing 139
  DEDB 144
  direct access 140
  GSAM 146
  HDAM 141
  HIDAM 142
  HISAM 145
  HSAM 145
  MSDB 143
  PHDAM 139, 141
  PHIDAM 139, 142
  sequential access 144
  SHISAM 146
  SHSAM 146
DL/I call trace 176
DL/I calls 39
  codes 18
  error routines 18
  exceptional conditions 18
  message calls
    list of 17
  system service calls
    list of 17
  usage 17
DL/I calls, general information
  getting started with 1, 15
DL/I calls, testing DL/I call sequences 175, 199
DL/I database
  access to 123
  description 124
DL/I image capture (DFSDLTR0) programs 199
DL/I Open Database Access (ODBA) interface 7
DL/I options
  field level sensitivity 147
  logical relationships 152
  secondary indexing 148
DL/I program structure 1, 15
DL/I test program (DFSDDLT0)
  call statements 176
  checking program performance 176
  comments statements 176
  compare statements 176
  control statements 176
  description 176
  status statements 176
  testing DL/I call sequences 175, 199
DL/I, getting started with CICS 5
DLITRACE control statement 200
documentation for users 212
documentation of
  data 48
  the application design process 44
DOF (device output format), control block 166
dump control, CICS 198
duplicate values, isolating 53
dynamic allocation 119, 138
dynamic backout 104
dynamic MSDBs (main storage databases) 11

E
EBCDIC 133
EDF (Execution Diagnostic Facility) 198
editing
  considerations in your application 166
  messages
    considerations in message and screen design 166
  overview 165
elements
  data, description 45
  data, naming 47
emergency restart 217
EMH (expedited message handler) 106
end a conversation, how to 169
enhanced STAT call formats for statistics
  OSAM buffer subpool 184
  VSAM buffer subpool 190
entity, data 45
environments
  DB/DC 101
  DBCTL 101
  DCCTL 101
  options in 101, 123
  program and database types 100
ERASE parameter 160
error
  execution 194, 203
  initialization 194, 203
error routines 3
  explanation 3
  I/O errors 4
  I/O errors in your program 18
  programming errors 4, 18
  system errors 4, 18
  types of errors 4, 18
ESTAE routines 118
example
  current roster 46
  field level sensitivity 147
  instructor schedules 60
  instructor skills report 59
  local view 57
  logical relationships 152
  schedule of classes 57
examples
  bank account database 11
  medical database 8
F
Fast Path
  databases 102
  DEDB (data entry database) 144
  DEDB and the PROCOPT operand 160
  IFPs 105
  MSDB (main storage database) 102, 143
field level sensitivity
  as a security mechanism 158
  defining 39
  description 147
  example 147
  specifying 148
  uses 148
fields
  columns, compared to 23
  in SQL queries 26
File Select and Formatting Print Program (DFSERA10) 114
fixed, MSDBs (main storage databases) 11
flow diagrams, LU 6.2
  CPI-C driven commit scenario 89
  DFSAPPC, synchronous SL=none 82
  DL/I program backout scenario 90, 91
  DL/I program commit scenario 88
  DL/I program ROLB scenario 91
  local CPI communications driven program, SL=none 83
  local IMS command
    asynchronous SL=confirm 81
  local IMS command, SL=none 80
  local IMS conversational transaction, SL=none 79
  local IMS transaction
    asynchronous SL=confirm 78
    asynchronous SL=none 77
    synchronous SL=confirm 76
    synchronous SL=none 75
  multiple transactions in same commit 93
  remote MSC conversation
    asynchronous SL=confirm 86
    asynchronous SL=none 85
    synchronous SL=confirm 87
    synchronous SL=none 84
frequency, checkpoint 116
full-function databases
  and the PROCOPT operand 160
  how accessed, CICS 124
  how accessed, IMS 102

G
gather requirements
  for conversational processing 168
gathering requirements
  for database options 139
  for message processing options 163
Generalized Sequential Access Method (GSAM) 146
GO processing option 115

H
HALDB (High Availability Large Database) 149
HALDB partitions
  data availability 3
  error settings 3
  handling 3
  restrictions for loading logical child segments 3
  scheduling 3
  status codes 3
HDAM (Hierarchical Direct Access Method) 141
HIDAM (Hierarchical Indexed Direct Access Method) 142
hierarchical database
  example 24
  relational database, compared to 23
hierarchical database example, medical 8, 9, 37
Hierarchical Direct Access Method (HDAM) 141
Hierarchical Indexed Direct Access Method (HIDAM) 142
Hierarchical Indexed Sequential Access Method (HISAM) 145
Hierarchical Sequential Access Method (HSAM) 145
hierarchy
  bank account database 11
  description 8, 37
  grouping data elements 50
  medical database 8
hierarchy examples 8, 11
High Availability Large Database (HALDB) 149
  HALDB partitions
    data availability 3
    error settings 3
    handling 3
    restrictions for loading logical child segments 3
    scheduling 3
    status codes 3
HISAM (Hierarchical Indexed Sequential Access Method) 145
homonym, data element 48
HOUSHOLD segment 11
HSAM (Hierarchical Sequential Access Method) 145

I
I/O area 40
  DL/I 20
I/O PCB
  in different environments 125
  requesting during PSBGEN 132
identification of
  recovery requirements 115
identifying
  application data 45
  online security requirements 163
  output message destinations 171
  security requirements 157
IDs, checkpoint 133
IFP (IMS Fast Path) program
  databases that can be accessed 101
  differences from an MPP 106
  recovery 106
IFP (IMS Fast Path) program (continued)
  restrictions 106
ILLNESS segment 10
image capture program
  CICS application program 199
  IMS application program 176
immediate program switch 169
implicit API for LU 6.2 devices 73
IMS Fast Path (IFP) programs, description of 105
IMS hierarchical database interface for Java
  using 27
IMS Spool API application design 213
INIT system service call 118
initialization errors 194, 203
INQY system service call 118
instructor
  schedules 60
  skills report 59
integrity
  how DL/I protects data 131
  read without 161
interface block, DL/I 20
interface, AIB 40
Introduction to Resource Recovery 68
invalid processing and ROLB/SETS/ROLLS calls 170
IPDS and IMS Spool API 215
ISC (Intersystem Communication) 106
isolation of
  duplicate values 53
  repeating data elements 51
ISRT system service call 213
issue checkpoints 103

J
Java Batch Processing (JBP)
  applications 109
  databases that can be accessed 101
Java batch processing (JBP) regions
  DB2 for z/OS access
    programming model 32
  description 31
  programming models 31
Java Message Processing (JMP)
  applications 109
  databases that can be accessed 101
Java message processing (JMP) regions
  DB2 for z/OS access
    programming model 30
  description 29
  programming models 29
JBP (Java Batch Processing)
  applications 109
  databases that can be accessed 101
JBP (Java batch processing) regions
  DB2 for z/OS access
    programming model 32
  description 31
  programming models 31
JDBC
  explanation 27
JES Spool/Print server 215
JMP (Java Message Processing)
  applications 109
  databases that can be accessed 101
JMP (Java message processing) regions
  DB2 for z/OS access
    programming model 30
  description 29
  programming models 29
JMP applications
  programming models 29
JOURNAL parameter 214

K
key sensitivity 158
keyboard shortcuts xv
keys, data 54

L
limit access with signon security 163
link to another online program 129
LIST parameter 178
listing data elements 46
local view
  designing 50
  examples 57
locking protocol 160
LOCKMAX= parameter, BMP programs 115
LOG call
  description 193
  use in monitoring 202
log records
  type 18 134
  X’18’ 114
LOG system service call 208
log, system 104
logical child segments
  HALDB (High Availability Large Database), restrictions 3
logical relationships
  defining 155
  description 152
  example 152
LTERM, local and remote 96
LU 6.2 devices, signon security 163
LU 6.2 partner program design
  DFSAPPC message switch 96
  flow diagrams 74
  integrity after conversation completion 94
  scenarios 87

M
macros
  DATABASE 161
  DFSMDA 119
  TRANSACT 109
main storage database (MSDB) 143
main storage database (MSDBs)
  types
    nonrelated 12
main storage databases (MSDBs)
  dynamic 11
  types
    related 11
many-to-many mapping 56
mapped conversation, APPC 67
mappings, determining 56
mask, data 40
PROCOPT parameter 159
PROCOPT=GO 114
program
  batch structure 1, 15
  entry 20
program communication block (PCB) 38
program deadlock 104
program sensitivity 117
program specification block (PSB) 38
program specification blocks (PSBs)
  description 4
program switch
  deferred 169
  immediate 169
program test 175
program types, environments and database types 100
program waits 115
programming models
  JBP applications
    symbolic checkpoint and restart 31
    with rollback 32
    without rollback 31
  JMP applications 29
    DB2 for z/OS data access 30
    IMS data access 30
    with rollback 30
    without rollback 29
programs
  DL/I image capture 199
  DL/I test 175
  online 104
  TM batch 104
protected resources 68
protocol, locking 160
PSB (program specification block)
  APSB (allocate program specification block) 73
  CMPAT=YES 103
  description 38
  scheduling in a call-level program 129
PSBs (program specification blocks)
  description 4
pseudo-abend 117
PSINDEX (Partitioned Secondary Index) 149
PURG system service call 214

Q
QC status code 109
quantitative relationship between data aggregates 56

R
read access, specify with PROCOPT operand 159
read without integrity 161
read-only access, specify with PROCOPT operand 160
reason code, checking 18
record
  database processing 39
  database, description of 9
record descriptor word (RDW), IMS Spool API 215
recording
  data availability 49
  information about your program 211
recoverable resources 68
recovery
  considerations in conversations 170
  I/O PCB, requesting during PSBGEN 132
  identifying requirements 115
  in a batch-oriented BMP 107, 128
  in batch programs 104
recovery of databases 134
Recovery process
  distributed 71
  local 70
Recovery, Resource 68
redundant data 35
reestablish position in database 115
relational database
  hierarchical database, compared to 23
relational databases 102
relationships
  between data elements 50
  data, hierarchical 8, 37
  defining logical 155
  mapping data 56
relationships between data aggregates 56
releasing
  resources 21
remote DL/I 123
repetitive data elements, isolating 51
reply to the terminal in a conversation 169
report of instructor schedules 60
reports, creating 50
requests, processing 39
required application data, analyzing 45
requirements, analyzing processing 99
resolving data structure conflicts 147
resource managers 69
Resource Recovery
  application program 69
  Introduction to 68
  protected resources 68
  recoverable resources 68
  resource managers 69
  sync-point manager 69
Resource Recovery Services/Multiple Virtual Storage (RRS/MVS)
  introduction to 68
resources
  protected 68
  recoverable 68
  security 43
resources, releasing 21
response mode, description 171
restart your program
  code for, description 135
  with basic CHKP 115
  with symbolic CHKP 115
restart, emergency 217
Restart, Extended 113, 135
retrieval call, status code 18
retrieval calls
  status codes, exceptional 3
retrieval of IMS database statistics 180
RETRY option 119
return code, checking 18
risks to security, combined files 36
ROLB system service call 104, 134
ROLL system service call 134
ROLS system service call 104, 118, 137
root anchor point 141
root segment, definition 9
system service calls
  CHNG 213
  I/O PCB, requesting during PSBGEN 132
  INIT 118
  INQY 118
  ISRT 213
  list of 17
  LOG 193, 208
  PURG 214
  ROLB 104, 134
  ROLL 134
  ROLS 104, 118, 137
  SETO 213
  SETS 104, 118, 137
  SETU 137
  STAT 180, 208
system service requests, functions provided 126

T
tables
  relational representation, in 24
  segments, compared to 23
take checkpoints, how to 132
terminal screen, designing 167
terminal security 164, 165
termination of a PSB, restrictions 129
termination, abnormal 111
test of application programs
  using BTS II 176
  using DFSDDLT0 199
  using DL/I test program 175
  what you need 175, 197
test of DL/I call sequences 175, 199
test, unit 175, 197
testing status codes 3
TM batch program 104
token, definition of 170
trace control facility 199
TRANSACT macro 112
transaction code 104, 105
transaction response mode 106
transaction-oriented BMPs. See BMP (batch message processing) program
TREATMNT segment 10
TSO application programs 112
two-phase commit process
  UOR 70
two-phase commit protocol 69
TXTU parameter 215
type 18 log record 134

U
unavailability of data 116, 135
unique identifier, data 48
unit of work 110
unit test 175, 197
UOR (unit of recovery) 70
update access, specify with PROCOPT operand 160
user requirements, analyzing 43
utilities
  Batch Backout 104
  DFSERA10 134, 207
  File Select and Formatting Print program 114

V
values, isolating duplicate 53
VBASF, formatted VSAM subpool statistics 182
VBASS, formatted summary of VSAM subpool statistics 184
VBASU, unformatted VSAM subpool statistics 183
VBESF, formatted VSAM subpool statistics 190
VBESS, formatted summary of VSAM subpool statistics 192
VBESU, unformatted VSAM subpool statistics 192
view of data, a program's 37
view, local 57
VisualGen 50
VSAM buffer subpool, retrieving
  enhanced subpool statistics 190
  statistics 182, 190

W
wait-for-input (WFI)
  transactions 106, 109
waits, program 115
WFI parameter 109
writing information to the system log 193

X
X’18’ log record 114
XRST (Extended Restart) 113

Z
z/OS files
  access to 102, 123
  description 124
z/OS Scheduler JCL Facility (SJF) 214
Printed in USA
SC18-9697-02