IMS Version 9
Administration Guide: Database Manager
SC18-7806-00
Note: Before using this information and the product it supports, be sure to read the
general information under "Notices" on page 549.
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Chapter 4. Security . . . . . . . . . . . . . . . . . . . . . . . 31
Restricting the Scope of Data Access . . . . . . . . . . . . . . . . 31
Restricting Processing Authority . . . . . . . . . . . . . . . . . . . 31
Restricting Access by Non-IMS Programs . . . . . . . . . . . . . . . 33
Using the Dictionary to Help Establish Security . . . . . . . . . . . . . 34
Contents v
Extending DEDB Independent Overflow Online . . . . . . . . . . . . 458
Appendix B. Insert, Delete, and Replace Rules for Logical Relationships 465
Specifying Rules in the Physical DBD . . . . . . . . . . . . . . . . 465
Insert Rules . . . . . . . . . . . . . . . . . . . . . . . . . 466
Replace Rules . . . . . . . . . . . . . . . . . . . . . . . . 469
Using the DLET Call . . . . . . . . . . . . . . . . . . . . . . 475
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
Programming Interface Information . . . . . . . . . . . . . . . . . 551
Trademarks. . . . . . . . . . . . . . . . . . . . . . . . . . 552
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . 553
IMS Version 9 Library . . . . . . . . . . . . . . . . . . . . . . 553
Supplementary Publications . . . . . . . . . . . . . . . . . . . . 554
Publication Collections . . . . . . . . . . . . . . . . . . . . . 554
Accessibility Titles Cited in This Library . . . . . . . . . . . . . . . 554
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
Figures ix
165. Specifying the RMNAME keyword . . . . . . . . . . . . . . . . . . . . . . . . 244
166. Database Record for Logical Record Examples . . . . . . . . . . . . . . . . . . . 246
167. Short Logical Records . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
168. Long Logical Records . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
169. Database Record for Logical Records Example . . . . . . . . . . . . . . . . . . . 247
170. Logical Records Example with Two Read Operations . . . . . . . . . . . . . . . . . 247
171. Levels in a VSAM Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
172. First Example MSDB Record Held in Exclusive Mode . . . . . . . . . . . . . . . . . 277
173. Second Example MSDB Record Held in Exclusive Mode. . . . . . . . . . . . . . . . 277
174. The DBD Generation Process . . . . . . . . . . . . . . . . . . . . . . . . . 292
175. Structure of DBD Generation Input . . . . . . . . . . . . . . . . . . . . . . . . 292
176. Example of a Date Field within a Segment Defined as Three 2-Byte Fields and One 6-Byte Field . . . 293
177. Partition Default Information . . . . . . . . . . . . . . . . . . . . . . . . . . 297
| 178. Change Partition Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
179. Sample Command to Define an ILDS . . . . . . . . . . . . . . . . . . . . . . . 301
180. The PSB Generation Process. . . . . . . . . . . . . . . . . . . . . . . . . . 302
181. Structure of PSB Generation Input . . . . . . . . . . . . . . . . . . . . . . . . 302
182. Example of a SENSEG Relationship . . . . . . . . . . . . . . . . . . . . . . . 303
183. The ACB Generation Process. . . . . . . . . . . . . . . . . . . . . . . . . . 304
184. Segment Sizes and Average Segment Occurrences . . . . . . . . . . . . . . . . . 313
185. JCL allocating an OSAM data set . . . . . . . . . . . . . . . . . . . . . . . . 319
186. The Load Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
187. Loading a Database Using Existing Files . . . . . . . . . . . . . . . . . . . . . 323
188. Basic Initial Load Program Logic . . . . . . . . . . . . . . . . . . . . . . . . 325
189. Sample Load Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
190. Restartable Initial Load Program Logic . . . . . . . . . . . . . . . . . . . . . . 328
191. Sample Restartable Initial Load Program . . . . . . . . . . . . . . . . . . . . . 329
192. JCL used to initially load a database . . . . . . . . . . . . . . . . . . . . . . . 330
193. IMS Monitor Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
194. Fast Path Transaction Event Timings . . . . . . . . . . . . . . . . . . . . . . . 338
195. Steps in Reorganizing When Logical Relationships or Secondary Indexes Exist . . . . . . . 346
| 196. Steps for Reorganizing HALDB Partitions When Logical Relationships or Secondary Indexes Exist . . . 347
197. HISAM Reorganization Unload Utility (DFSURUL0) . . . . . . . . . . . . . . . . . . 347
198. HISAM Reorganization Reload Utility (DFSURRL0) . . . . . . . . . . . . . . . . . . 348
199. HD Reorganization Unload Utility (DFSURGU0) . . . . . . . . . . . . . . . . . . . 348
200. HD Reorganization Reload Utility (DFSURGL0) . . . . . . . . . . . . . . . . . . . 349
201. Database Prereorganization Utility (DFSURPR0) . . . . . . . . . . . . . . . . . . . 350
202. Database Scan Utility (DFSURGS0) . . . . . . . . . . . . . . . . . . . . . . . 351
203. Database Prefix Resolution Utility (DFSURG10) . . . . . . . . . . . . . . . . . . . 352
204. Database Prefix Update Utility (DFSURGP0) . . . . . . . . . . . . . . . . . . . . 353
205. HISAM Reorganization Unload and Reload Utilities Used for Create, Merge, or Replace Secondary Indexing Operations . . . 354
206. HISAM Reorganization Unload Utility Used for Extract Secondary Indexing Operations . . . . 355
207. Database Surveyor Utility (DFSPRSUR) . . . . . . . . . . . . . . . . . . . . . . 356
208. Partial Database Reorganization Utility (DFSPRCT1) . . . . . . . . . . . . . . . . . 357
| 209. Offline Reorganization of a HALDB database . . . . . . . . . . . . . . . . . . . . 360
| 210. Example: The HD Reorganization Unload Utility Control Statement to Unload One Partition 361
| 211. Example: The HD Reorganization Unload Utility Control Statement to Unload Multiple Partitions 361
| 212. Example: Sample JCL to Unload a HALDB Partition . . . . . . . . . . . . . . . . . 362
| 213. Example: IEC161I message during reload . . . . . . . . . . . . . . . . . . . . . 362
| 214. Example: JCL to Reload a HALDB Partition . . . . . . . . . . . . . . . . . . . . 363
| 215. Example RECON Listing: DB Record for a HALDB in Cursor-Active Status . . . . . . . . . 366
| 216. The Relationship between Input Data Sets and Output Data Sets during the Online Reorganization of a HALDB Partition . . . 367
273. Logical Parent, Virtual Pairing—Virtual Delete Rule Example: Calls and Status Codes . . . . . 484
274. Logical Parent, Physical Pairing—Virtual Delete Rule Example . . . . . . . . . . . . . 485
275. Logical Parent, Physical Pairing—Virtual Delete Rule Example: Before and After . . . . . . . 485
276. Logical Parent, Physical Pairing—Virtual Delete Rule Example: Calls and Status . . . . . . . 485
277. Physical Parent, Virtual Pairing—Bidirectional Virtual Example. . . . . . . . . . . . . . 486
278. Physical Parent, Virtual Pairing—Bidirectional Virtual Example: Before and After . . . . . . . 486
279. Deleting Last Logical Child Deletes Physical Parent . . . . . . . . . . . . . . . . . 486
280. Logical Child, Virtual Pairing—Physical Delete Rule Example . . . . . . . . . . . . . . 487
281. Logical Child, Virtual Pairing—Physical Delete Rule Example: Deleting the Logical Child 487
282. Logical Child, Virtual Pairing—Physical Delete Rule Example: Before and After . . . . . . . 488
283. Logical Child, Virtual Pairing—Logical Delete Rule Example . . . . . . . . . . . . . . 488
284. Logical Child, Virtual Pairing—Logical Delete Rule Example: Calls and Status . . . . . . . . 489
285. Logical Child, Virtual Pairing—Logical Delete Rule Example: Before and After . . . . . . . . 489
286. Logical Child, Physical Pairing—Physical or Logical Delete Rule Example . . . . . . . . . 490
287. Logical Child, Physical Pairing—Physical or Logical Delete Rule Example: Calls and Status 490
288. Logical Child, Physical Pairing—Physical or Logical Delete Rule Example: Before and After 491
289. Logical Child, Virtual Pairing—Virtual Delete Rule Example . . . . . . . . . . . . . . . 491
290. Logical Child, Virtual Pairing—Virtual Delete Rule Example: Calls and Status . . . . . . . . 492
291. Logical Child, Virtual Pairing—Virtual Delete Rule Example: Before and After . . . . . . . . 492
292. Logical Child, Physical Pairing—Virtual Delete Rule Example . . . . . . . . . . . . . . 493
293. Logical Child, Physical Pairing—Virtual Delete Rule Example: Calls and Status . . . . . . . 493
294. Logical Child, Physical Pairing—Virtual Delete Rule Example: Before and After . . . . . . . 494
295. (Part 1 of 5). Example of Deleted Segments Accessibility . . . . . . . . . . . . . . . 495
296. (Part 2 of 5). Example of Deleted Segments Accessibility . . . . . . . . . . . . . . . 496
297. (Part 3 of 5). Example of Deleted Segments Accessibility . . . . . . . . . . . . . . . 496
298. (Part 4 of 5). Example of Deleted Segments Accessibility: Database Calls . . . . . . . . . 497
299. (Part 5 of 5). Example of Deleted Segments Accessibility . . . . . . . . . . . . . . . 497
300. Example of Abnormal Termination . . . . . . . . . . . . . . . . . . . . . . . . 498
301. Example of Violation of the Physical Delete Rule . . . . . . . . . . . . . . . . . . 499
302. Example of Violation of the Physical Delete Rule: Database Calls . . . . . . . . . . . . 499
303. Example of Treating the Physical Delete Rule as Logical. . . . . . . . . . . . . . . . 500
304. Example of Treating the Physical Delete Rule as Logical: Database Calls . . . . . . . . . 500
305. Insert, Delete, and Replace Rules Summary . . . . . . . . . . . . . . . . . . . . 503
306. Partitioned Databases panel (DSPXPAA) . . . . . . . . . . . . . . . . . . . . . 512
307. Help Action Bar Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
308. Exit Confirmation Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
309. ISPF Member List Display (DSPXPAM) . . . . . . . . . . . . . . . . . . . . . . 514
310. File Action Bar Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
| 311. Partitioned Database Information (DSPXPOA) . . . . . . . . . . . . . . . . . . . 516
312. Partition Default Information (DSPXPCA) . . . . . . . . . . . . . . . . . . . . . 518
313. Automatic Definition Status . . . . . . . . . . . . . . . . . . . . . . . . . . 522
| 314. Change Partition (DSPXPPA) . . . . . . . . . . . . . . . . . . . . . . . . . . 524
| 315. Selection String Editor (DSPXPKE) . . . . . . . . . . . . . . . . . . . . . . . 526
316. Change Data Set Groups, Part 1 (DSPXPGA) . . . . . . . . . . . . . . . . . . . 527
317. Change Data Set Groups, Part 2 (DSPXPGB) . . . . . . . . . . . . . . . . . . . 528
318. Change a Data Set Group (DSPXPGC) . . . . . . . . . . . . . . . . . . . . . . 528
| 319. Database Partitions Panel, Sorted by Partition ID (DSPXPLA) . . . . . . . . . . . . . . 529
| 320. Database Partitions Panel, Sorted by Key (DSPXPLB) . . . . . . . . . . . . . . . . 530
321. Database Partitions Panel, Sorted by Name (DSPXPLC). . . . . . . . . . . . . . . . 531
322. File Action Bar Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
323. Edit Action Bar Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
324. Searching the Partition List . . . . . . . . . . . . . . . . . . . . . . . . . . 534
325. View Action Bar Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
326. Change Partition Panel (DSPXPPB) . . . . . . . . . . . . . . . . . . . . . . . 535
327. Change Data Set Groups, Part 1 (DSPXPGA) . . . . . . . . . . . . . . . . . . . 536
| 328. Partitioned Database Information (DSPXPOA) . . . . . . . . . . . . . . . . . . . 536
Tables
1. Licensed Program Full Names and Short Names . . . . . . . . . . . . . . . . . . . xvii
2. Types of IMS Databases and the z/OS Access Methods They Can Use . . . . . . . . . . . 11
3. Example of Naming Conventions . . . . . . . . . . . . . . . . . . . . . . . . . 22
| 4. Suffixes for DD names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5. Combined Mappings for Local Views . . . . . . . . . . . . . . . . . . . . . . . 50
6. Keys and Associated Data Elements . . . . . . . . . . . . . . . . . . . . . . . 51
7. Summary of Database Characteristics and Options for Database Types . . . . . . . . . . 59
8. Comparison of SHSAM, SHISAM, and GSAM Databases . . . . . . . . . . . . . . . . 77
| 9. Maximum Sizes for HDAM, HIDAM, PHDAM, and PHIDAM Databases . . . . . . . . . . . 79
10. CI Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
11. Root Segment Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
12. Sequential Dependent Segment Format . . . . . . . . . . . . . . . . . . . . . . 121
13. Direct Dependent Segment Format . . . . . . . . . . . . . . . . . . . . . . . . 121
14. MSDBINIT Record Format . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
| 15. Required CFRM List Structure Storage Sizes . . . . . . . . . . . . . . . . . . . . 150
16. Parts List for the Model 1 Bicycle Example . . . . . . . . . . . . . . . . . . . . . 167
17. Delete Rule Restrictions for Logically Related Databases Using Data Capture Exit Routines 220
18. Examples of Multiple Data Set Grouping . . . . . . . . . . . . . . . . . . . . . . 232
19. Levels of Enqueue of an MSDB Record . . . . . . . . . . . . . . . . . . . . . . 275
20. Example of MSDB Record Status: Shared (S) or Owned Exclusively (E) . . . . . . . . . . 275
21. File Names and Data Sets to Allocate. . . . . . . . . . . . . . . . . . . . . . . 295
| 22. Minimum and maximum number of data sets for HALDB partitions. . . . . . . . . . . . . 299
23. Required Fields and Pointers in a Segment’s Prefix . . . . . . . . . . . . . . . . . 312
24. Calculating the Average Database Record Size . . . . . . . . . . . . . . . . . . . 313
25. VSAM Control Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
26. Monitor Data for Fast Path Transactions . . . . . . . . . . . . . . . . . . . . . . 339
| 27. IMS Versions that Can Access HALDBs that Are Capable of Being Reorganized Online . . . . 370
| 28. Data Set Name Examples for HALDB Online Reorganization . . . . . . . . . . . . . . 373
| 29. Mapping Startup Tasks to Commands for HALDB Online Reorganization . . . . . . . . . . 373
| 30. Mapping Monitoring Tasks to Commands for HALDB Online Reorganization . . . . . . . . 374
| 31. Mapping Modifying and Tuning Tasks to Commands for HALDB Online Reorganization . . . . 374
32. Steps in Reorganizing a Database to Add a Logical Relationship . . . . . . . . . . . . . 441
33. Replace Rules for Logical View 1 . . . . . . . . . . . . . . . . . . . . . . . . 473
34. Replace Rules for Logical View 2 . . . . . . . . . . . . . . . . . . . . . . . . 474
| 35. Specifying Insert, Delete, and Replace Rules . . . . . . . . . . . . . . . . . . . . 503
36. Length and Format of an OSAM DEB . . . . . . . . . . . . . . . . . . . . . . . 507
This book describes how to design, implement, and maintain different types of IMS
databases and is divided into two parts:
v Part 1, “General Information on IMS Database Administration,” on page 1
describes important concepts to keep in mind throughout the database
administration process.
v Part 2, “Administering IMS Databases,” on page 35 describes the steps in the
database administration process.
With IMS Version 9, you can reorganize HALDB partitions online, either by using
the integrated HALDB Online Reorganization function or by using an external
product. In this information, the term HALDB Online Reorganization refers to the
integrated HALDB Online Reorganization function that is part of IMS Version 9,
unless otherwise indicated.
Prerequisite Knowledge
Before using this book, you should understand basic IMS concepts and your
installation’s IMS system. IMS can run in the following environments: DB Batch,
DCCTL, TM Batch, DB/DC, DBCTL. You should understand the environments that
apply to your installation. The IMS concepts explained here pertain only to
administering the IMS database. You should know how to use DL/I calls and
languages such as assembler, COBOL, PL/I, and C.
For definitions of terms used in this manual and references to related information in
other IMS manuals, see IMS Version 9: Master Index and Glossary.
v Required items appear on the horizontal line (the main path). Optional items
appear below the main path.
(syntax diagram: required_item on the main path, with optional_item below it)
If an optional item appears above the main path, that item has no effect on the
execution of the syntax element and is used only for readability.
(syntax diagram: optional_item shown above the main path of required_item)
v If you can choose from two or more items, they appear vertically, in a stack.
If you must choose one of the items, one item of the stack appears on the main
path.
(syntax diagram: required_item followed by a stack of required_choice1 and
required_choice2, with required_choice1 on the main path)
If choosing one of the items is optional, the entire stack appears below the main
path.
(syntax diagram: required_item with a stack of optional_choice1 and
optional_choice2 below the main path)
If one of the items is the default, it appears above the main path, and the
remaining choices are shown below.
(syntax diagram: default_choice above the main path, required_item on the main
path, and the optional_choice entries below)
v An arrow returning to the left, above the main line, indicates an item that can be
repeated.
(syntax diagram: required_item followed by repeatable_item, with a repeat arrow
returning to the left)
A repeat arrow above a stack indicates that you can repeat the items in the
stack.
v Sometimes a diagram must be split into fragments. The syntax fragment is
shown separately from the main syntax diagram, but the contents of the fragment
should be read as if they are on the main path of the diagram.
(syntax diagram: required_item followed by a reference to fragment-name; the
fragment-name definition shows required_item with optional_item below it)
The IMS Version 9 information is now available in the DB2 Information Management
Software Information Center for z/OS Solutions, which is available at
http://publib.boulder.ibm.com/infocenter/dzichelp. The DB2 Information Management
Software Information Center for z/OS Solutions provides a graphical user interface
for centralized access to the product information for IMS, IMS Tools, DB2 Universal
Database (UDB) for z/OS, DB2 Tools, and DB2 Query Management Facility
(QMF™).
The chapter titled "DLIModel Utility" has moved from IMS Version 9: IMS Java
Guide and Reference to IMS Version 9: Utilities Reference: System.
The DLIModel utility messages that were in IMS Version 9: IMS Java Guide and
Reference have moved to IMS Version 9: Messages and Codes, Volume 1.
Terminology Changes
IMS Version 9 introduces new terminology for IMS commands:
type-1 command
A command, generally preceded by a leading slash character, that can be
entered from any valid IMS command source. In IMS Version 8, these
commands were called classic commands.
type-2 command
A command that is entered only through the OM API. Type-2 commands
are more flexible than type-1 commands and can have a broader scope. In
IMS Version 8, these commands were called IMSplex commands or
enhanced commands.
Accessibility Enhancements
Accessibility features help a user who has a physical disability, such as restricted
mobility or limited vision, to use software products. The major accessibility features
in z/OS products, including IMS, enable users to:
v Use assistive technologies such as screen readers and screen magnifier
software
v Operate specific or equivalent features using only the keyboard
v Customize display attributes such as color, contrast, and font size
Accessible Information
Online information for IMS Version 9 is available in BookManager format, which is
an accessible format. All BookManager functions can be accessed by using a
keyboard or keyboard shortcut keys. BookManager also allows you to use screen
readers and other assistive technologies. The BookManager READ/MVS product is
included with the z/OS base product, and the BookManager Softcopy Reader (for
workstations) is available on the IMS Licensed Product Kit (CD), which you can
download from the Web at www.ibm.com.
In this Chapter:
v “Database Administration Overview”
v “Open Database Access (ODBA)” on page 4
v “Database Administration Tasks” on page 4
v “Concepts and Terminology” on page 6
v “Optional Functions” on page 17
v “How to Define Your Database to IMS” on page 18
v “How Application Programs View the Database” on page 18
This book presents the database administration tasks in the order in which you
normally perform the tasks. You perform some tasks in a specific sequence in the
database development process while other tasks are ongoing. It is important for you
to grasp not only what the tasks are (see “Database Administration Tasks” on page
4), but also how they interrelate.
This first part of the book provides important concepts and procedures for the entire
database administration process. The second part contains the chapters
corresponding to particular tasks of database administration.
DL/I
| Data Language/I (DL/I) is the IMS data manipulation language, which is a common
| high-level interface between a user application and IMS. DL/I calls are invoked from
| application programs written in languages such as PL/I, COBOL, VS Pascal, C, and
| Ada. It can also be invoked from assembler language application programs by
| subroutine calls. IMS lets the user define data structures, relate structures to the
| application, load structures, and reorganize structures.
Related Reading: For detailed information about how application programs use
DL/I, see IMS Version 9: Application Programming: Database Manager and IMS
Version 9: Application Programming: EXEC DLI Commands for CICS and IMS.
CICS
Customer Information Control System (CICS) and other transaction management
subsystems access IMS databases through the database resource adapter (DRA).
Whenever tasks differ for CICS users, a brief description of the differences is
included.
Related Reading: For a description of RRS and its uses, see the information on
RRS Distributed Sync Point in IMS Version 9: Administration Guide: Transaction
Manager.
From the perspective of IMS, the z/OS address space involved appears to be
another region called the z/OS application region.
Related Reading: For more information on using these database utilities, see
the IMS Version 9: Utilities Reference: System and the IMS Version 9: Utilities
Reference: Database and Transaction Manager.
Establishing security. You can keep unauthorized persons from accessing the
data in your database by using program communication blocks (PCBs). With
PCBs, you can control how much of the database a given user can see, and
what can be done with that data. In addition, you can take steps to keep
non-IMS programs from accessing your database.
Setting up standards and procedures. It is important to set standards and
procedures for application and database development. This is especially true in
an environment with multiple applications. If you have guidelines and standards,
you will save time in application development and avoid problems later on such
as inconsistent naming conventions or programming standards.
To understand this topic, you must know what a DL/I call is and how to code it. You
must understand function codes and Segment Search Arguments (SSAs) in DL/I
calls and know what is meant when a call is referred to as qualified or unqualified
(explained in IMS Version 9: Application Programming: Database Manager).
The segments within a database record exist in a hierarchy. A hierarchy is the order
in which segments are arranged, and that order is meaningful. The school database
stores data about courses that are taught, so the COURSE segment is at the top
of the hierarchy. The other types of data in segments in the database record would
be meaningless if there were no COURSE.
Root Segment
The COURSE segment is called the root segment. Only one root segment exists
within a database record. All other segments in the database record (such as:
INSTR, REPORT, STUDENT, GRADE, and PLACE) are called dependent
segments. The existence of dependent segments hinges on the existence of a root
segment. For example, without the root segment COURSE, there would be no
reason for having a PLACE segment stating in which room the course was held.
The third level of dependent segments, REPORT and GRADE, is subject to the
existence of second level segments INSTR and STUDENT. For example, without
the second level segment STUDENT, there would be no reason for having a
GRADE segment indicating the grade the student received in the course.
The database shown previously is actually the design of the database: it shows the
segment types for the database. Figure 2 on page 8 shows an actual database
record with the segment occurrences.
Chapter 1. Introduction to IMS Databases 7
Concepts and Terminology
The following topic discusses the hierarchy in more detail. Subsequent topics
describe the objects in a database, what they consist of and the rules governing
their existence and use. These objects are:
The database record
The segments in a database record
The fields within a segment
The Hierarchy
A database is composed of a series of database records, records contain
segments, and segments are arranged in a hierarchy in the database record.
The sequence goes from the top of the hierarchy to the bottom in the first (left
most) path or leg of the hierarchy. When the bottom of the database is reached, the
sequence is from left to right. When all segments have been stored in that path of
the hierarchy, the sequencing begins in the next path to the right, again proceeding
from top to bottom and then left to right. (In the second leg of the hierarchy there is
nothing to go to at the right.) The sequence in which segments are stored is loosely
called “top to bottom, left to right.”
Figure 3 shows sequencing of segment types for the school database shown in
Figure 1 on page 7. The sequence of segment types are stored in the following
order:
1. COURSE (top to bottom)
2. INSTR
3. REPORT
4. STUDENT (left to right)
5. GRADE (top to bottom)
6. PLACE (left to right)
Figure 4 on page 10 shows the segment occurrences for the school database
record as shown in Figure 2 on page 8. Because there are multiple occurrences of
segment types, segments are read "front to back" in addition to "top to bottom, left
to right." The segment occurrences for the school database are stored in the
following order:
1. Math (top to bottom)
2. James
3. ReportA
4. ReportB (front to back)
5. Baker (left to right)
6. Pass (top to bottom)
7. Coe (front to back)
8. Inc (top to bottom)
9. Room2 (left to right)
Note that the numbering sequence is still initially from top to bottom. At the bottom
of the hierarchy, however, observe that there are two occurrences of the REPORT
segment.
Because you are at the bottom of the hierarchy, both segment occurrences are
picked up before you move to the right in this path of the hierarchy. Both reports
relate to the instructor segment James; therefore it makes sense to keep them
stored together in the database. In the second path of the hierarchy, there are also
two segment occurrences in the student segment. You are not at the bottom of the
hierarchic path until you reach the grade segment Pass. Therefore, sequencing is
not “interrupted” by the two occurrences of the student segment Baker and Coe.
This makes sense because you are keeping student and grade Baker and Pass
together.
Note that the grade Inc under student Coe is not considered another occurrence
under Baker. Coe and Inc become a separate path in the hierarchy. Only when you
reach the bottom of a hierarchic path is the “top to bottom, left to right” sequencing
interrupted to pick up multiple segment occurrences. You can refer to sequencing in
the hierarchy as “top to bottom, front to back, left to right”, but “front to back” only
occurs at the bottom of the hierarchy. Multiple occurrences of a segment at any
other level are sequenced as separate paths in the hierarchy.
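The "top to bottom, front to back, left to right" sequencing amounts to a preorder walk of the record's occurrence tree. A minimal sketch in Python (an illustration, not an IMS interface; the occurrence names come from the school database record in Figure 4):

```python
# Each segment occurrence: (name, list of child occurrences).
# The school database record, as a nested structure.
record = ("Math", [
    ("James", [             # INSTR occurrence
        ("ReportA", []),    # REPORT occurrences, stored front to back
        ("ReportB", []),
    ]),
    ("Baker", [             # STUDENT occurrences, each a separate path
        ("Pass", []),       # GRADE under Baker
    ]),
    ("Coe", [
        ("Inc", []),        # GRADE under Coe
    ]),
    ("Room2", []),          # PLACE
])

def hierarchic_sequence(segment):
    """Preorder walk: top to bottom, front to back, left to right."""
    name, children = segment
    yield name
    for child in children:
        yield from hierarchic_sequence(child)

print(list(hierarchic_sequence(record)))
# ['Math', 'James', 'ReportA', 'ReportB', 'Baker', 'Pass', 'Coe', 'Inc', 'Room2']
```

The output reproduces the nine-step storage order listed above, including the two REPORT occurrences picked up front to back before moving right.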
A segment is the smallest piece of data IMS can store. If an application program
issues a Get-Unique (GU) call for the student segment BAKER (see Figure 4 on
page 10), the current position is immediately after the BAKER segment occurrence.
If an application program then issues an unqualified GN call, IMS moves forward in
the database and returns the PASS segment occurrence.
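Position after a call can be pictured against that same hierarchic sequence. The helpers below are hypothetical names for illustration, not the DL/I call interface:

```python
# Hierarchic sequence of the school database record (Figure 4).
sequence = ["Math", "James", "ReportA", "ReportB",
            "Baker", "Pass", "Coe", "Inc", "Room2"]

def get_unique(target):
    """Sketch of GU: position is immediately after the retrieved occurrence."""
    position = sequence.index(target) + 1
    return target, position

def get_next(position):
    """Sketch of an unqualified GN: move forward, return the next occurrence."""
    return sequence[position], position + 1

segment, position = get_unique("Baker")   # position now follows Baker
segment, position = get_next(position)    # IMS moves forward in the database
print(segment)                            # Pass
```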
The Database
IMS allows you to define many different database types. You define the database
type that best suits your application’s processing requirements. You need to know
that each IMS database has its own access method, because IMS runs under the
control of the z/OS operating system, and the operating system does not know
what a segment is; it processes logical records, not segments. IMS access
methods therefore manipulate segments in a database record. When a logical
record needs to be read, operating system access methods (or IMS) are used.
Table 2 lists the IMS database types you can define, the IMS access methods they
use and the operating system access methods you can use with them. Although
each type of database varies slightly in its access method, they all use database
records.
Table 2. Types of IMS Databases and the z/OS Access Methods They Can Use

Type of IMS  Full Name of Database Type                      IMS or Operating System Access
Database                                                     Methods that Can Be Used
HSAM         Hierarchical Sequential Access Method           BSAM or QSAM
SHSAM        Simple Hierarchical Sequential Access Method    BSAM or QSAM
HISAM        Hierarchical Indexed Sequential Access Method   VSAM
SHISAM       Simple Hierarchical Indexed Sequential          VSAM
             Access Method
GSAM         Generalized Sequential Access Method            QSAM/BSAM or VSAM
HDAM         Hierarchical Direct Access Method               VSAM or OSAM
PHDAM        Partitioned Hierarchical Direct Access Method   VSAM or OSAM
HIDAM        Hierarchical Indexed Direct Access Method       VSAM or OSAM
PHIDAM       Partitioned Hierarchical Indexed Direct         VSAM or OSAM
             Access Method
DEDB (1)     Data Entry Database                             Media Manager
MSDB (2)     Main Storage Database                           N/A

Notes:
1. For DBCTL, only available to BMPs
2. Not applicable to DBCTL
The only other thing to understand is that a specific database record, when stored
in the database, does not need to contain all the segment types you originally
designed. To exist in a database, a database record need only contain an
occurrence of the root segment. In the school database, all four of the records
shown in Figure 7 on page 13 can be stored.
However, no segment can be stored unless its parent is also stored. For example,
you could not store the records shown in Figure 8.
Occurrences of any of the segment types can later be added to or deleted from the
database.
The Segment
A database record consists of one or more segments, and the segment is the
smallest piece of data IMS can store. Here are some additional facts you need to
know about segments:
v A database record can contain a maximum of 255 segment types. The space you
allocate for the database limits the number of segment occurrences.
v You determine the length of a segment; however, a segment cannot be larger
than the physical record length of the device on which it is stored.
v The length of segments is specified by segment type. A segment type can be
either variable or fixed in length.
| Figure 9 shows the format of a fixed-length segment. Figure 10 shows the format of
| a variable-length segment. Segments consist of two parts (a prefix and the data),
| except when using a SHSAM or SHISAM database. In SHSAM and SHISAM
| databases, the segment consists of only the data. In a GSAM database, segments
| do not exist (see “GSAM Databases” on page 76 for more information about GSAM
| databases).
IMS uses the prefix portion of the segment to “manage” the segment. The prefix
portion of a segment consists of: segment code, delete byte, and in some
databases, a pointer and counter area. Application programs do not “see” the prefix
portion of a segment. The data portion of a segment contains your data, arranged
in one or more fields.
Related Reading: For information on MSDB and DEDB segments, see “Main
Storage Databases (MSDBs)” on page 128 and “Data Entry Databases” on page
109.
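As an informal illustration of the two-part segment layout described above (not IMS internals), the structure can be sketched in Python. The field names and sizes here are assumptions for illustration only:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SegmentPrefix:
    """IMS-managed part of a segment: a segment code, a delete byte,
    and (in some database types) a pointer and counter area.
    Representation is illustrative, not the physical layout."""
    segment_code: int          # 1 to 255, assigned in hierarchic sequence
    delete_byte: int = 0x00    # tracks the deletion status of the segment
    pointers: List[int] = field(default_factory=list)  # pointer/counter area, if any

@dataclass
class Segment:
    prefix: SegmentPrefix      # application programs do not "see" this part
    data: bytes                # your data, arranged in one or more fields

seg = Segment(prefix=SegmentPrefix(segment_code=1), data=b"ADAMS   A")
print(seg.prefix.segment_code)  # 1
```

In SHSAM and SHISAM databases the prefix part would be absent, and the segment would consist of the data alone.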
Segment Code
IMS needs a way to identify each segment type stored in a database. It uses the
segment code field for this purpose. When loading a segment type, IMS assigns it a
unique identifier (an integer from 1 to 255). IMS assigns numbers in ascending
sequence, starting with the root segment type (number 1) and continuing through all
dependent segment types in hierarchic sequence.
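The hierarchic numbering scheme can be sketched as a preorder (top-to-bottom, left-to-right) traversal. This Python sketch uses a hypothetical school hierarchy, not an actual DBD:

```python
def assign_segment_codes(segment, codes=None, counter=None):
    """Assign segment codes in hierarchic (preorder) sequence,
    starting with the root segment type as number 1."""
    if codes is None:
        codes, counter = {}, [1]
    codes[segment["name"]] = counter[0]
    counter[0] += 1
    for child in segment.get("children", []):
        assign_segment_codes(child, codes, counter)
    return codes

# Hypothetical hierarchy: COURSE is the root; CLASS and INSTRUCTOR
# are its children; STUDENT is a child of CLASS.
school = {
    "name": "COURSE",
    "children": [
        {"name": "CLASS", "children": [{"name": "STUDENT"}]},
        {"name": "INSTRUCTOR"},
    ],
}

codes = assign_segment_codes(school)
print(codes)  # {'COURSE': 1, 'CLASS': 2, 'STUDENT': 3, 'INSTRUCTOR': 4}
```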
Delete Byte
When an application program deletes a segment from a database, the space it
occupies might or might not be immediately available to reuse. Deletion of a
segment is described in the discussions of the individual database types. For now,
know that IMS uses this prefix byte to track the status of a deleted segment.
Related Reading: For information on the meaning of each bit in the delete byte,
see Appendix A, “Meaning of Bits in the Delete Byte,” on page 463.
Pointer and Counter Area
The length of the pointer and counter area depends on how many addresses a
segment contains and whether logical relationships are used. These topics are
covered in more detail later in this book.
The Field
The application program accesses segments in a database using the name of the
segment type. If an application program needs to reference part of a segment, a
field name can be defined to IMS for that part of the segment. Field names are
used in segment search arguments (SSAs) to qualify calls. An application program
can see data even if you do not define it as a field. But an application program
cannot qualify an SSA on the data unless it is defined as a field.
The maximum number of fields that you can define for a segment type is 255. The
maximum number of fields that can be defined for a database is 1000. Note that
1000 refers to types of fields in a database, not occurrences. The number of
occurrences of fields in a database is limited only by the amount of storage you
have defined for your database.
You can use a sequence field, often referred to as a key, to keep occurrences of a
segment type in key sequence under a given parent. For example, in the database
record shown in Figure 11 on page 16, there are three segment occurrences of the
STUDENT segment, and the STUDENT segment has three data elements.
Figure 11. Three Segment Occurrences and Three Data Elements of the STUDENT Segment
Suppose you need the STUDENT segment, when stored in the database, to be in
alphabetic order by student name. If you define a field on the NAME data as a
unique sequence field, IMS stores STUDENT segment occurrences in alphabetical
sequence. Figure 12 shows three occurrences of the STUDENT segment in
alphabetical sequence.
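The effect of a unique sequence field can be illustrated with a small Python sketch that keeps occurrences in key order as they are inserted. This is a simplified model of the behavior, not how IMS physically stores segments:

```python
import bisect

class SegmentChain:
    """Keeps occurrences of a segment type in sequence-field order
    under one parent, modeling a unique sequence field."""
    def __init__(self, seq_field):
        self.seq_field = seq_field
        self.keys = []
        self.occurrences = []

    def insert(self, segment):
        key = segment[self.seq_field]
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            raise ValueError("duplicate value on a unique sequence field")
        self.keys.insert(i, key)
        self.occurrences.insert(i, segment)

# Hypothetical STUDENT occurrences inserted out of order
students = SegmentChain(seq_field="NAME")
for name in ["MILLER", "ADAMS", "JONES"]:
    students.insert({"NAME": name})

print([s["NAME"] for s in students.occurrences])
# ['ADAMS', 'JONES', 'MILLER'] -- kept in alphabetical sequence
```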
When you define a sequence field in a root segment of a HISAM, HDAM, PHDAM,
HIDAM, or PHIDAM database, an application program can use it to access a
specific root segment, and thus a specific database record. By using a sequence
field, an application program does not need to search the database sequentially to
find a specific database record, but can retrieve records sequentially (for HISAM,
HIDAM, and PHIDAM databases).
You can also use a sequence field in other ways when using the IMS optional
functions of logical relationships or secondary indexing. These other uses are
discussed in detail later in this book.
The important things to know now about sequence fields are that:
v You do not always need to define a sequence field. This book describes cases
where a sequence field is necessary.
v The sequence field value can be defined as unique or non-unique.
v The data or value in the sequence field is called the “key” of the segment.
Optional Functions
IMS has several optional functions you can use for your database. These are
discussed briefly below and described in detail in Chapter 8, “Choosing Optional
Database Functions,” on page 151. You need a cursory understanding of these
functions before reading this book because they may be referred to before they are
actually described.
If you have the IBM DB/DC (database/data communication) Data Dictionary, you
can use it to define your database (except for DEDBs and MSDBs). The DB/DC
Data Dictionary may contain all the information you need to produce a DBD.
If you have the IBM DB/DC Data Dictionary, you can use it to define an application
program’s access to the database. It can contain all the information needed to
produce a PSB.
You must set up and test procedures and standards for database design,
application development, application programs’ use of the database, application
design, and batch operation. These standards are guidelines that change when
installation requirements change.
In the area of database design, for example, you can establish standard practices
for handling the following items:
v Database structure and segmentation
Number of segments within a database
Placement of segments
Size of segments
Use of variable-length segments
When to use segment edit/compression
When to use secondary data set groups
Number of databases within an application
When and how to use field-level sensitivity
Database size
v Access methods
When to use HISAM
Choice of record size for HISAM
HISAM organization using VSAM
When to use GSAM
Use of physical child/physical twin pointers
Use of twin backward pointers
Use of child last pointers
HIDAM or PHIDAM index organization using VSAM
In the area of application programs’ use of the database, establish standards for the
following:
v Putting update and read functions in separate programs
v How many transaction types to allow per application program
v When applications are to issue a deliberate abnormal termination and the range
of abend codes permitted to applications
v Whether application programs are permitted to issue messages to the master
terminal
v The method of referencing data in the IOAREA, and referencing IMS variables
(such as PCBs and SSAs)
v Use of predefined structures (PCB masks, SSAs, or database segment formats)
by applications
v Use of GU calls to the message queue
v Re-usability of MPP and BMP programs
v Use of qualified calls and SSAs
v Use of path calls
v Use of the CHANGE call
v Use of the “system” calls (PURG, LOG, STAT, SNAP, GCMD, and CMD)
v Use of the dictionary or COPY or STRUCTURE libraries for data elements and
structures
v The holding of design reviews and inspections
Naming Conventions
This topic contains information about:
v “General Rules for Establishing Naming Conventions”
v “HALDB Naming Conventions” on page 22
HALDB DD names
| IMS constructs the DD names for each partition by adding a 1-byte suffix to the
| partition name for the data sets in that partition. The suffix for the first DD name is
| A, the suffix for the second DD name is B, and so on up to J.
| For a PSINDEX database, there is only one data set per partition, so only one DD
| name with a suffix of A is required.
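The suffixing rule can be expressed as a short Python sketch; the partition names below are hypothetical:

```python
import string

def haldb_ddnames(partition_name, data_set_count):
    """Build the DD names for a HALDB partition by appending a
    one-byte suffix (A, B, ... up to J) to the partition name."""
    if not 1 <= data_set_count <= 10:
        raise ValueError("a partition uses suffixes A through J (1 to 10 data sets)")
    return [partition_name + string.ascii_uppercase[i]
            for i in range(data_set_count)]

# Hypothetical partition PART01 with three data sets
print(haldb_ddnames("PART01", 3))   # ['PART01A', 'PART01B', 'PART01C']
# A PSINDEX partition has one data set, so only the A suffix
print(haldb_ddnames("PSIPART", 1))  # ['PSIPARTA']
```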
| The resulting DD names with the suffix might match DD names that already exist;
| you must avoid duplicate DD names. Also, specify DD names in uppercase: DD
| names specified in lowercase in batch jobs can result in JCL errors.
| In a PHDAM database, HALDB OLR increases the maximum number of data sets
| associated with a partition to twenty-one. In a PHIDAM database, which includes a
| primary index, HALDB OLR increases the maximum number of data sets
| associated with a partition to twenty-three. In either case, HALDB OLR only needs
| as many new data sets as exist in the partition at the time the reorganization
| process begins.
| Related Reading: For more information on HALDB OLR, see “HALDB Online
| Reorganization” on page 364.
| PHDAM, or PHIDAM—you are defining. The naming convention for HALDB data
| sets within a partition is designed to simplify the naming of multiple data sets.
DL/I pointers within the segment prefix that point into another partition use a
halfword binary number as the target partition identification. DL/I must be able to
correlate this number to the correct partition. By using a data set naming
convention, DL/I can correlate the halfword binary number to the data set name for
the partition. You specify the base name and the suffix is assigned by DL/I.
| Extended Naming Convention for Data Set When Using HALDB Online
| Reorganization: To distinguish between the data sets that HALDB OLR
| reorganizes and the data sets into which HALDB OLR moves the reorganized data,
| HALDB OLR uses the same extended naming convention it uses for DD names: the
| characters M through V for the primary data sets and the character Y for the
| additional primary index. For data sets, IMS combines these characters with the
| partition ID to form the suffix that uniquely identifies each HALDB data set.
| Related Reading: For more information about HALDB OLR naming conventions,
| see “Data Set Naming Conventions for HALDB Online Reorganization” on page
| 372.
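As a rough illustration of the convention, the following Python sketch forms data set names from a base name, a data set letter, and a partition ID. The exact layout shown (a period, then the letter, then a 5-digit partition ID) is an assumption for illustration; see the referenced topic for the authoritative format:

```python
def haldb_dsname(base_name, letter, partition_id):
    """Form an illustrative HALDB data set name from the base name,
    the data set letter (A-J and X for the active data sets; M-V and
    Y for data sets created by online reorganization), and the
    partition ID that uniquely identifies the data set."""
    if letter not in "ABCDEFGHIJX" + "MNOPQRSTUVY":
        raise ValueError("unexpected data set letter")
    return f"{base_name}.{letter}{partition_id:05d}"

# Hypothetical base name and partition ID
print(haldb_dsname("IMS.PAYROLL", "A", 1))  # 'IMS.PAYROLL.A00001'
print(haldb_dsname("IMS.PAYROLL", "M", 1))  # 'IMS.PAYROLL.M00001'
```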
In this chapter:
v “The Design Review”
v “Design Review 1” on page 26
v “Design Review 2” on page 26
v “Design Review 3” on page 27
v “Design Review 4” on page 27
v “Code Inspection 1” on page 28
v “Who Attends Code Inspection 1” on page 28
v “Code Inspection 2” on page 28
v “Security Inspection” on page 29
v “Post-Implementation Review” on page 29
Your role in the review process is to ensure that a good database design is
developed and then effectively implemented. The role is ongoing and provides a
supporting framework for the other database administration tasks described in this
book.
in the review will differ slightly from one installation to the next. What you need to
understand is the importance of the reviews and the tasks performed at them. Here
is some general information about reviews:
v People attending all reviews (in addition to database administrators) include a
review team and the system designer. The review team generally has no
responsibility for developing the system. The review team consists of a small
group of people whose purpose is to ensure continuity and objectivity from one
review to the next. The system designer writes the initial functional specifications.
v At the end of each review, make a list of issues raised during the review. These
issues are generally change requirements. Assign each issue to a specific person
for resolution, and set a target date for resolution. If certain issues require major
changes to the system, schedule other reviews until you resolve all major issues.
v If you have a data dictionary, update it at the end of each review to reflect any
decisions that you made. The dictionary is an important aid in keeping
information current and available especially during the first four reviews when you
make design decisions.
Design Review 1
The first design review takes place after initial functional specifications for the
system are complete. Its purpose is to ensure that all user requirements have been
identified and that design assumptions are consistent with objectives. No detailed
design for the system is or should be available at this point. The review of the
specifications will determine whether the project is ready to proceed to a more
detailed design. When design review 1 concludes successfully, its output is an
approved set of initial functional specifications.
People who attend design review 1, in addition to the regular attendees, include
someone from the organization that developed the requirement and anyone
participating in the development of detailed design. You are at the review primarily
for information. You also look at:
The relationship between data elements
Whether any of the needed data already exists
Design Review 2
The second design review takes place after final functional specifications for the
system are complete. This means the overall logic for each program in the system
is defined, as well as the interface and interactions between programs. Audit and
security requirements are defined at this point, along with most data requirements.
When design review 2 is successfully concluded, its output is an approved set of
final functional specifications.
Everyone who attended design review 1 should attend design review 2. People
from test and maintenance groups attend as observers to begin getting information
for test case design and maintenance. Those concerned with auditing and security
can also attend.
Your role in this review is still primarily to gather information. You also look at:
v Whether the specifications meet user requirements
v Whether the relationship between data items is correct
v Whether any of the required data already exists
v Whether audit and security requirements are consistent with user requirements
Design Review 3
The third design review takes place after initial logic specifications for the system
are complete. At this point, high level pseudo code or flowcharts are complete.
These can only be considered complete when major decision points in the logic are
defined, calls or references to external data and modules are defined, and the
general logic flow is known. All modules and external interfaces are defined at this
point, definition of data requirements is complete, and database and data files are
designed. Initial test and recovery plans are available; however, no code has been
written. When design review 3 concludes successfully, its output is an approved set
of initial logic specifications.
Everyone who attended design review 2 should attend design review 3. If the
project is large, those developing detailed design need only be present during the
review of their portion of the project.
Your role in this review is to ensure that the flow of transactions is consistent with
the database design you are creating.
At this point in the design review process, you are designing hierarchies and
starting to design the database. These tasks are described in Chapter 5, “Analyzing
Data Requirements,” on page 45, Chapter 6, “Choosing Full-Function Database
Types,” on page 55, Chapter 8, “Choosing Optional Database Functions,” on page
151, and Chapter 9, “Designing Full-Function Databases,” on page 241.
Design Review 4
The fourth design review takes place after design review 3 is completed and all
interested parties are satisfied that system design is essentially complete. No
special document is examined at this review, although final functional specifications
and either initial or final logic specifications are available. The primary objective of
this review is to make sure that system performance will be acceptable.
The people who attend all design reviews (moderator, review team, database
administrator, and system designer) should attend design review 4. Others attend
only as specific detail is required.
At this point in the review process, you are almost finished with the database
administration tasks along with designing and testing your database. These tasks
are described in Chapter 5, “Analyzing Data Requirements,” on page 45, Chapter 6,
“Choosing Full-Function Database Types,” on page 55, and Chapter 12,
“Developing Test Databases,” on page 307.
Code Inspection 1
The first code inspection takes place after final logic specifications for the system
are complete.
At this point, no code is written but the final functional specifications have been
interpreted. Both pseudo code and flowcharts have a statement or logic box for
every 5 to 25 lines of assembler language code, 5 to 15 lines of COBOL code, or 5
to 15 lines of PL/I code that needs writing. In addition, module prologues are
written, and entry and exit logic along with all data areas are defined.
The objective of this review is to ensure that the logic developed correctly interprets
the functional specification. Code inspection 1 also provides an opportunity to
review the logic flow for any performance implications or problems. When code
inspection 1 successfully concludes, its output is an approved set of final logic
specifications.
Your role in this review is now a less active one than it has been. You are there to
ensure that everyone adheres to the use of data and access sequences defined in
the previous reviews.
At this point in the review process, you are starting the database administration
tasks defined in Chapter 12, “Developing Test Databases,” on page 307,
Chapter 11, “Implementing Database Design,” on page 291, and Chapter 13,
“Loading Databases,” on page 311.
Code Inspection 2
Code inspection 2 takes place after coding is complete and before testing by the
test organization begins. The objective of the second code inspection is to make
sure module logic matches pseudo code or flowcharts. Interface and register
conventions along with the general quality of the code are checked. Documentation
and maintainability of the code are evaluated.
Your role in this review is the same as your role in code inspection 1.
At this point in the review process, you are almost finished with the database
administration tasks of developing a test database, implementing the database
design, and loading the database.
During your testing of the database, you should run the DB monitor (described in
Chapter 14, “Monitoring Databases,” on page 335) to make sure your database still
meets the performance expectations you have established.
Security Inspection
The security inspection is optional but highly recommended if security is a
significant concern. Security inspections can take place at any appropriate point in
the system development process. Define security strategy early, and check its
implementation during design reviews. This particular security inspection takes
place after all unit and integration testing is complete. The purpose of the review is
to look for any code that violates the security of system interfaces, secured
databases, tables, or other high-risk items.
People who attend the security inspection review include the moderator, system
designer, designated security officer, and database administrator. Because the
database administrator is responsible for implementing and monitoring the security
of the database, you might, in fact, be the designated security officer. If security is a
significant concern, you might prefer that the review team not attend this inspection.
During this and other security inspections, you are involved in the database
administration task of establishing security defined in Chapter 4, “Security,” on page
31.
Post-Implementation Review
It is highly recommended that you conduct a post-implementation review. The
post-implementation review is typically held about six months after the database
system is running. Its objective is to make sure the system is meeting user
requirements.
Everyone who has been involved in design and implementation of the database
system should attend the post-implementation review. If the system is not meeting
user requirements, the output of this review should be a plan to correct design or
performance problems to meet user requirements.
This chapter deals primarily with how you can control a user’s view of data and the
user’s actions with respect to the data.
Related Reading: If you use CICS, see CICS RACF® Security Guide for
information on establishing security.
Figure 14 on page 32 shows an example. The top of the figure shows the
hierarchical structure for a PAYROLL database as seen by you and defined by the
DBD. For certain applications, it is not necessary (nor desirable) to access the
SALARY segment. By omitting the SENSEG statement for the SALARY segment in
the DB PCB, you can make it seem that this segment simply does not exist. By doing
this, you have denied unauthorized users access to the segment, and you have
denied users knowledge of its very existence.
For this method to be successful, the segment being masked off must not be in the
search path of an accessed segment. If it is, then the application is made aware of
at least the key of the segment to be “hidden.”
With field-level sensitivity, you can achieve the same masking effect at the field
level. If SALARY and NAME were in the same segment, you could still restrict
access to the SALARY field without denying access to other fields in the segment.
PCB statement. The PROCOPT parameter tells IMS what actions you will permit
against the database. A program can do only what is declared in the PROCOPT.
For example, the DBD in Figure 13 describes a payroll database that stores the
name, address, position, and salary of employees. The hierarchical structure of the
database record is shown in Figure 14.
DBD     NAME=PAYROLL,...
DATASET ...
SEGM    NAME=NAME,PARENT=0,...
FIELD   NAME=...
SEGM    NAME=ADDRESS,PARENT=NAME,...
FIELD   NAME=...
SEGM    NAME=POSITION,PARENT=NAME,...
FIELD   NAME=...
SEGM    NAME=SALARY,PARENT=NAME,...
FIELD   NAME=...
  .
  .
  .
PCB     TYPE=DB,DBDNAME=PAYROLL,...
SENSEG  NAME=NAME,PARENT=0,...
SENSEG  NAME=ADDRESS,PARENT=NAME,...
SENSEG  NAME=POSITION,PARENT=NAME,...
  .
  .
  .
Figure 16 shows what the payroll database record looks like to the application
based on the DB PCB. It looks just like the database record in Figure 14 on page
32 except that the SALARY segment is hidden.
This method is only useful in the batch environment, and VSAM password checking
is bypassed entirely in the online system. (If you have RACF installed, you can use
it to protect VSAM data sets.)
Details of the PASSWD parameter of the DBD statement can be found in IMS
Version 9: Utilities Reference: System.
Do not change the key or the location of the key field in index databases or in root
segments of HISAM databases.
Using the Dictionary to Help Establish Security
You can use the dictionary to define your authorization matrixes. Through the
extensibility feature, you can define terminals, programs, users, data, and their
relationships to each other. In this way, you can produce reports that show:
dangerous trends, who uses what from which terminal, and which user gets what
data. For each user, the dictionary could be used to list the following information:
v Programs that can be used
v Types of transactions that can be entered
v Data sets that can be read
v Data sets that can be modified
v Categories of data within a data set that can be read
v Categories of data that can be modified
A business process, in an application, is one of the tasks your end user needs
done. For example, in an education application, printing a class roster is a business
process.
A local view describes a conceptual data structure and the relationships between
the pieces of data in the structure for one business process.
To understand the method explained in this chapter, you need to be familiar with the
terminology and examples explained in the introductory chapter on application
design in IMS Version 9: Application Programming: Design Guide. The chapter of
the design guide explains how to develop local views for the business processes in
an application.
Local View
Designing a structure that satisfies the data requirements of the business processes
in an application requires an understanding of the requirements for each of those
business processes. A local view of the business process describes these
requirements because the local view provides:
v A list of all the data elements the process requires and their controlling keys
v The conceptual data structure developed for each process, showing how the data
elements are grouped into data aggregates
v The mappings between the data aggregates in each process
This chapter uses a company that provides technical education to its customers as
an example. The education company has one headquarters, called HQ, and several
local education centers, called Ed Centers. HQ develops the courses offered at
each of the Ed Centers. Each Ed Center is responsible for scheduling classes it will
offer and for enrolling students for those classes.
The local views used in this chapter are for the following business processes in an
education application:
Current Roster
Schedule of Classes
Instructor Skills Report
Instructor Schedules
The information in the subtopics of this topic summarizes the local views developed
in the introductory chapter on application design in IMS Version 9: Application
Programming: Design Guide.
Figure 17 shows the conceptual data structure for the current roster.
Figure 18 shows the conceptual data structure for the class schedule.
Figure 19 shows the conceptual data structure for the instructor skills report.
Figure 20 shows the conceptual data structure for the instructor schedules.
A one-to-many mapping means that for each segment A there are one or more
segment Bs; shown like this: A ────────►► B. For example, in the Current Roster
(Figure 17 on page 47), there is a one-to-many relationship between course and
class. For each course, there can be several classes scheduled, but a class is
scheduled for only one course.
A many-to-many mapping means that for each segment A there are many segment
Bs, and for each segment B there are many segment As. This is shown like this:
A ◄◄────────►► B. A many-to-many relationship is not a dependent relationship, since
it usually occurs between data aggregates in two separate data structures and
indicates a conflict in the way two business processes need to process that data.
When you implement a data structure with DL/I, there are three strategies you can
apply to solve data conflicts:
Defining logical relationships
Establishing secondary indexes
Storing the data in two places (also known as carrying duplicate data).
Related Reading: “Resolving Data Conflicts” on page 52 explains the kinds of data
conflicts that secondary indexes and logical relationships can resolve.
The first step in designing a conceptual data structure is to combine the mappings
of all the local views. To do this, go through the mappings for each local view and
make a consolidated list of mappings (see Table 5). As you review the mappings:
v Do not record duplicate mappings. At this stage you need to cover each
variation, not each occurrence.
v If two data aggregates in different local views have opposite mappings, use the
more complex mapping. This will include both mappings when they are
combined. For example, if local view #1 has the mapping A ────────►► B, and
local view #2 has the mapping A ◄◄──────── B, use a mapping that includes
both these mappings. In this case, this is A ◄◄────────►► B.
Table 5. Combined Mappings for Local Views

Mapping                                  Local View
Course ────────►► Class                  1, 2, 4
Class ────────►► Student                 1
Class ────────►► Instructor              1
Customer/location ────────►► Student     1
Instructor ────────►► Course             3, 4
Using the combined mappings, you can construct the data structures shown in
Figure 21.
Two conflicts exist in these data structures. First, STUDENT is dependent on both
CUST and CLASS. Second, there is an opposite mapping between COURSE and
INSTR, and INSTR and COURSE. If you implemented these structures with DL/I,
you could use logical relationships to resolve the conflicts. “Analyzing Requirements
for Logical Relationships” on page 52 explains how.
When you use DL/I, consider how each of the data elements in the structure you
have developed should be grouped into segments. Also, consider how DL/I can
solve any existing data conflicts in the structure. The topics “Assigning Data
Elements to Segments” and “Resolving Data Conflicts” on page 52 in this chapter
explain how you assign data elements to segments, and how DL/I can resolve data
conflicts.
List the data elements next to their keys, as shown in Table 6. The key and its
associated data elements become the segment content.
Table 6. Keys and Associated Data Elements

Data Aggregate        Key             Data Elements
COURSE                CRSCODE         CRSNAME, LENGTH, PRICE
CUSTOMER/LOCATION     CUST, LOCTN
CLASS                 EDCNTR, DATE
STUDENT               STUSEQ#         STUNAME, ABSENCE, STATUS, GRADE
INSTRUCTOR            INSTR
If a data element is associated with different keys in different local views, you must
either decide which segment will contain the data element or store the data
element in both segments as duplicate data. To avoid storing duplicate data, store
the data element with the key that is highest in the hierarchy. For example, if the
keys ALPHA and BETA were both associated with the data element XYZ (one in
local view 1 and one in local view 2), and ALPHA were higher in the hierarchy,
store XYZ with ALPHA to avoid having to repeat it.
Suppose that you are part of our technical education company and need to
determine (from a terminal) whether a particular student is enrolled in a class. If you
are unsure about the student’s enrollment status, you probably do not know the
student’s sequence number. The key of the STUDENT segment, however, is
STUSEQ#. Let’s say you issue a request for a STUDENT segment and identify the
segment you need by the student’s name (STUNAME) instead of the student’s
sequence number (STUSEQ#). IMS searches through all STUDENT segments to
find that one. Assuming the STUDENT segments are stored in order of student
sequence numbers, IMS has no way of knowing where the STUDENT segment is
just by having the STUNAME.
Using a secondary index in this example is like making STUNAME the key field of
the STUDENT segment for this business process. Other business processes can
still process this segment with STUSEQ# as the key.
To do this, you can index the STUDENT segment on STUNAME in a secondary
index. You can index any field in a segment. When you index a field, and thereby
indicate to IMS that you are using a secondary index for that segment, IMS
processes the segment as though the indexed field were the key.
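The effect of a secondary index can be modeled with a simple Python sketch: a primary organization keyed on STUSEQ#, plus an index that maps STUNAME back to that key. This is a conceptual model only, not the IMS implementation:

```python
class SegmentStore:
    """Stores STUDENT segments keyed by STUSEQ# and maintains a
    secondary index on STUNAME, so a request qualified on the name
    can go straight to the segment instead of scanning them all."""
    def __init__(self):
        self.by_seq = {}    # primary organization: key is STUSEQ#
        self.by_name = {}   # secondary index: STUNAME -> STUSEQ#

    def insert(self, stuseq, stuname, **fields):
        segment = {"STUSEQ#": stuseq, "STUNAME": stuname, **fields}
        self.by_seq[stuseq] = segment
        self.by_name[stuname] = stuseq  # index entry points at the key

    def get_by_name(self, stuname):
        """Retrieve as though STUNAME were the key for this process."""
        return self.by_seq[self.by_name[stuname]]

# Hypothetical student data
db = SegmentStore()
db.insert(17, "ADAMS", GRADE="A")
db.insert(42, "MILLER", GRADE="B")
print(db.get_by_name("MILLER")["STUSEQ#"])  # 42
```

Other business processes can still retrieve through `by_seq`, just as they can still process the segment with STUSEQ# as the key.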
Defining logical relationships lets you create a hierarchic structure that does not
exist in storage but can be processed as though it does. You can relate segments
in separate hierarchies. The data structure created from these logical relationships
is called a logical structure. To relate segments in separate hierarchies, store the
segment in the path by which it is accessed most frequently. Store a pointer to the
segment in the path where it is accessed less frequently.
In the hierarchy shown in Figure 21 on page 51, two possible parents exist for the
STUDENT segment. If the CUST segment is part of an existing database, you can
define a logical relationship between the CUST segment and the STUDENT
segment. You would then have the hierarchies shown in Figure 22. The
CUST/STUDENT hierarchy would be a logical structure.
The other conflict you can see in Figure 21 on page 51 is the one between
COURSE and INSTR. For one course there are several classes, and for one class
there are several instructors (COURSE ─────►► CLASS ─────►► INSTR), but
each instructor can teach several courses (INSTR ─────►► COURSE). You can
resolve this conflict by using a bidirectional logical relationship. You can store the
INSTR segment in a separate hierarchy, and store a pointer to it in the INSTR
segment in the course hierarchy. You can also store the COURSE segment in the
course hierarchy, and store a pointer to it in the COURSE segment in the INSTR
hierarchy. This bidirectional logical relationship would give you the two hierarchies
shown in Figure 23, eliminating the need to carry duplicate data.
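As a rough model (plain Python, not IMS internals), a bidirectional logical relationship can be pictured as two segment sets that each carry a pointer to the other instead of a duplicate copy of the data:

```python
# Illustrative model: COURSE and INSTR are each stored once, in their own
# hierarchy, and linked with pointers in both directions.

courses = {"MATH": {"name": "MATH", "instructors": []}}
instructors = {"SMITH": {"name": "SMITH", "courses": []}}

def relate(course_key, instr_key):
    # Pointer in the course hierarchy to the INSTR segment ...
    courses[course_key]["instructors"].append(instr_key)
    # ... and a pointer in the instructor hierarchy back to COURSE.
    instructors[instr_key]["courses"].append(course_key)

relate("MATH", "SMITH")
```

Either hierarchy can now be traversed to the related segment without carrying duplicate COURSE or INSTR data.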
IMS allows you to define twelve database types. Each type has different
organization and processing characteristics. Except for DEDB and MSDB, all of the
database types are discussed in this chapter.
In this chapter:
v “Sequential Storage Method” on page 56
v “Direct Storage Method” on page 56
v “Databases Supported with DBCTL” on page 56
v “Databases Supported with DCCTL” on page 57
v “Performance Considerations Overview” on page 57
v “HSAM Databases” on page 60
v “HISAM Databases” on page 65
v “SHSAM, SHISAM and GSAM Databases” on page 74
v “HDAM, PHDAM, HIDAM, and PHIDAM Databases” on page 78
v “Managing I/O Errors” on page 107
Related Reading: For information on DEDBs and MSDBs, see “Data Entry
Databases” on page 109 and “Main Storage Databases (MSDBs)” on page 128.
Understanding how the database types differ enables you to pick the type that best
suits your application’s processing requirements.
Each database type has its own access method. The following list shows each
type and the access method it uses:
Type of Database Access Method
HSAM Hierarchical Sequential Access Method
HISAM Hierarchical Indexed Sequential Access Method
SHSAM Simple Hierarchical Sequential Access Method
SHISAM Simple Hierarchical Indexed Sequential Access Method
GSAM Generalized Sequential Access Method (Restriction: GSAM does not apply to CICS applications.)
HDAM Hierarchical Direct Access Method
PHDAM Partitioned Hierarchical Direct Access Method
HIDAM Hierarchical Indexed Direct Access Method
PHIDAM Partitioned Hierarchical Indexed Direct Access Method
PSINDEX Partitioned Secondary Index Database
DEDB Data Entry Database (Hierarchical Direct Access)
MSDB Main Storage Database (Hierarchical Direct Access)
© Copyright IBM Corp. 1974, 2004 55
Based on the access method used, the various databases can be classified into two
groups: sequential storage and direct storage.
For quick reference, see Table 7 on page 59 for a summary of HSAM, HISAM,
HDAM, PHDAM, HIDAM, PHIDAM, DEDB, and MSDB database characteristics.
Databases can be accessed through DBCTL from IMS BMP regions, as well as
from independent transaction-management subsystems. Only batch-oriented BMP
programs are supported because DBCTL provides no message or transaction
support.
CICS online programs can access the same IMS database concurrently; however,
an IMS batch program must have exclusive access to the database (if you are not
participating in IMS data sharing).
If you have batch jobs that currently access IMS databases through IMS data
sharing, you can convert them to run as BMPs directly accessing databases
through DBCTL, thereby improving performance. You can additionally convert
current batch programs to BMPs to access DEDBs.
Related Reading: For more information on converting a batch job to a BMP, see
IMS Version 9: Application Programming: Design Guide and IMS Version 9:
Administration Guide: System.
Related Reading:
| v For more information on ESAF, see IMS Version 9: Customization Guide
| v For more information on RRSAF, see DB2 Universal Database for z/OS
| Administration Guide
Related Reading: For information on DEDBs and MSDBs, see “Data Entry
Databases” on page 109 and “Main Storage Databases (MSDBs)” on page 128.
General Sequential (GSAM)
v Supported by DCCTL
v No hierarchy, database records, segments, or keys
v No DLET or REPL
v ISRT adds records at end of data set
v GN and GU processed in batch or BMP applications only
v Allows IMS symbolic checkpoint calls and restart from checkpoint (except
VSAM-loaded databases)
v Good for converting data to IMS and for passing data
v Not accessible from an MPP or JMP region
v Space efficient
v Not time efficient
VSAM
v Fixed- or variable-length records are usable
v VSAM ESDS DASD stored
v IMS symbolic checkpoint call allowed
v Restart from checkpoint not allowed
BSAM/QSAM
v Stored on DASD
v VSAM accessible
v All DL/I calls allowed
v Good for converting data to IMS and for passing data
v Not space efficient
v Time efficient
Hierarchic Direct
Segments are linked by pointers
HDAM and PHDAM
v Supported by DBCTL
v Hashing access to roots
v Sequential access by secondary index to segments
v All DL/I calls allowed
v Stored on DASD in VSAM ESDS or OSAM data set
v Good for direct access to records
v Hierarchic pointers allowed
– Hierarchic sequential access to dependent segments
– Better performance than child and twin pointers
– Less space required than child and twin pointers
v Child and twin pointers allowed
– Direct access to pointers
– More space required by additional index VSAM ESDS
database
HIDAM and PHIDAM
v Supported by DBCTL
v Indexed access to roots
v Pointer access to dependent segments
v All DL/I calls allowed
v Stored on DASD in VSAM ESDS or OSAM data set
v Good for random and sequential access to records
v Good for random access to segment paths
v Hierarchic pointers allowed
– Hierarchic sequential access to dependent segments
– Better performance than child and twin pointers
– Less space required than child and twin pointers
v Child and twin pointers allowed
– Direct access to pointers
– More space required by additional index VSAM ESDS
database
Table 7 gives a summary of database characteristics, functions, and options for the
different database types.
Table 7. Summary of Database Characteristics and Options for Database Types
Characteristic HSAM HISAM HDAM PHDAM HIDAM PHIDAM DEDB MSDB
Hierarchical Structures Y Y Y Y Y Y Y N
Direct Access Storage Y Y Y Y Y Y Y N
Multiple Data Set Groups N N Y Y Y Y N N
Logical Relationships N Y Y Y Y Y N N
Variable-Length Segments N Y Y Y Y Y Y N
Segment Edit/Compression N Y Y Y Y Y Y N
Data Capture Exit Routines N Y Y Y Y Y Y N
Field-Level Sensitivity Y Y Y Y Y Y N N
Primary Index N Y N N Y Y N N
Secondary Index N Y Y Y Y Y N N
Logging, Recovery, Offline Reorganization N Y Y Y Y Y Y Y
VSAM N Y Y Y Y Y Y N/A
OSAM N N Y Y Y Y N N/A
QSAM/BSAM Y N N N N N N N/A
Boolean Operators Y Y Y Y Y Y Y N
Command Codes Y Y Y Y Y Y Y N
Subset Pointers N N N N N N Y N
Uses Main Storage N N N N N N N Y
High Parallelism (field call) N N N N N N N Y
Compaction Y Y Y Y Y Y Y N
DBRC Support Y Y Y Y Y Y Y N/A
Partitioning Support N N N Y N Y Y N
Data Sharing Y Y Y Y Y Y Y N
Partition Sharing N N N Y N Y Y N
Block Level Sharing Y Y Y Y Y Y Y N
Area Sharing N/A N/A N/A N/A N/A N/A Y N/A
Record Deactivation N N N N N N Y N/A
Database Size med med med lg med lg lg sml
Online Utilities N N N N N N Y N
| Online Reorganization N N N Y N Y Y N
Batch Y Y Y Y Y Y N N
HSAM Databases
| Hierarchical sequential access method (HSAM) databases use the sequential
| method of accessing data. All database records and all segments within each
| database record are physically adjacent in storage. An HSAM database can be
| stored on tape or on a direct-access storage device, and is processed using
| BSAM or QSAM as the operating system access method.
HSAM data sets are loaded with root segments in ascending key sequence (if keys
exist for the root) and dependent segments in hierarchic sequence. You do not
need to define a key field in root segments. You must, however, present segments
to the load program in the order in which they must be loaded. HSAM data sets use
a fixed-length, unblocked record format (RECFM=F), which means that the logical
record length is the same as the physical block size.
HSAM databases can only be updated by rewriting them. Delete (DLET) and
replace (REPL) calls are not allowed, and insert (ISRT) calls are only allowed when
the database is being loaded. Although the field-level sensitivity option can be used
with HSAM databases, the following options cannot:
v Multiple data set groups
v Logical relationships
v Secondary indexing
v Variable-length segments
v Segment edit/compression facility
v Data Capture exit routines
v Asynchronous data capture
v Logging, recovery, or reorganization
Figure 25 shows how the example HSAM database, shown in Figure 24, would be
stored in blocks.
In the data set, a database record is stored in one or more consecutive blocks. You
define what the block size will be. Each block is filled with segments of the
database record until there is not enough space left in the block to store the next
segment. When this happens, the remaining space in the block is padded with
zeros and the next segment is stored in the next consecutive block. When the last
segment of a database record has been stored in a block, any unused space, if
sufficient, is filled with segments from the next database record.
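The block-filling rule can be sketched as follows (an illustrative Python model, assuming no single segment is larger than the block size):

```python
# Illustrative model of HSAM block packing: each fixed-length block is
# filled with whole segments; when the next segment does not fit, the
# remaining space is padded with zeros and a new block is started.

def pack_blocks(segments, block_size):
    blocks, current = [], b""
    for seg in segments:
        if len(current) + len(seg) > block_size:
            blocks.append(current.ljust(block_size, b"\x00"))  # zero padding
            current = b""
        current += seg
    if current:
        blocks.append(current.ljust(block_size, b"\x00"))
    return blocks
```

For example, `pack_blocks([b"AAAA", b"BBB", b"CC"], 6)` places the second and third segments together in a second block because the first block cannot hold the second segment.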
In storage, an HSAM segment consists of a 2-byte prefix followed by user data. The
first byte of the prefix is the segment code, which identifies the segment type to
IMS. This number can be from 1 to 255. The segment code is assigned to the
segment by IMS in ascending sequence, starting with the root segment and
continuing through all dependents in hierarchic sequence. The second byte of the
prefix is the delete byte. Because DLET calls cannot be used against an HSAM
database, the second byte is not used.
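A minimal sketch of this prefix layout (Python, for illustration only; the data content is invented):

```python
# Illustrative model of the 2-byte HSAM segment prefix: byte 1 is the
# segment code (1 to 255), byte 2 is the delete byte, which is unused
# in HSAM because DLET calls are not allowed.
import struct

def build_segment(segment_code, data):
    assert 1 <= segment_code <= 255
    return struct.pack("BB", segment_code, 0) + data

seg = build_segment(1, b"ROOTDATA")   # segment code 1 = root segment
```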
| After position in an HSAM database has been established, the way in which GU
| calls are handled depends on whether a sequence field is defined for the root
| segment and what processing options are in effect. Figure 26 shows a flow chart of
| the actions taken based on whether a sequence field is defined and what
| processing options are in effect.
When a GU call is issued and no sequence field is defined for the root segment,
IMS searches forward from the beginning of the database. If a sequence field is
defined for the root and the SSA key is less than or equal to the SSA key on the
last call, IMS also searches forward from the beginning of the database. If a
sequence field is defined for the root and the SSA key is greater than the SSA key
on the last call, the GU call searches forward from the current position in the
database.
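Assuming the standard positioning rules for a sequential data set (a lower or equal key forces a restart from the beginning, because a sequential data set cannot be read backward, while a higher key allows the search to continue forward), the decision can be sketched as:

```python
# Illustrative sketch of GU positioning for a sequential (HSAM-like)
# data set; this is a simplified model, not the Figure 26 flow chart.

def gu_start_position(seq_field_defined, ssa_key, last_key):
    if not seq_field_defined:
        return "beginning"            # no root sequence field: scan from start
    if last_key is None or ssa_key <= last_key:
        return "beginning"            # cannot back up: restart from start
    return "current position"         # higher key: continue forward
```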
As stated previously, DLET and REPL calls cannot be issued against an HSAM
database. ISRT calls are allowed only when the database is being loaded. To
update an HSAM database, you must write a program that merges the current
HSAM database and the update data. The update data can be in one or more files.
The output data set created by this process is the new updated HSAM database.
Figure 27 illustrates this process.
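The merge-style update can be sketched as follows (an illustrative Python model in which root keys identify records and an update record with a matching key replaces the old one; the sample keys are invented):

```python
# Illustrative model of rewriting an HSAM database: sequentially merge
# the current database with a sorted update file to produce the new
# database. heapq.merge performs a stable sequential merge, so for equal
# keys the update record (from the second iterable) arrives last and wins.
import heapq

def rewrite_hsam(old_records, updates):
    merged = {}
    for key, data in heapq.merge(old_records, updates):
        merged[key] = data            # later (update) record replaces old
    return sorted(merged.items())     # the new, updated database
```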
HISAM Databases
In a hierarchical indexed sequential access method (HISAM) database, as with an
HSAM database, segments in each database record are related through physical
adjacency in storage. Unlike HSAM, however, each HISAM database record is
indexed, allowing direct access to a database record. In defining a HISAM
database, you must define a unique sequence field in each root segment. These
sequence fields are then used to construct an index to root segments (and
therefore database records) in the database.
Except for logging and recovery, each of these options is discussed in detail in later
parts of this book. For detailed discussions of logging and recovery, see the IMS
Version 9: Database Recovery Control (DBRC) Guide and Reference.
There are several things you need to know about storage of HISAM database
records:
v You define the logical record length of both the primary and overflow data set
(subject to the rules listed in this chapter). The logical record length can be
different for each data set. This allows you to define the logical record length in
the primary data set as large enough to hold an “average” database record or the
most frequently accessed segments in the database record. Logical record length
in the overflow data set can then be defined (subject to some restrictions) as
whatever is most efficient given the characteristics of your database records.
v Logical records are grouped into control intervals (CIs). A control interval is the
unit of data transferred between an I/O device and storage. You define the size
of CIs.
v Each database record starts at the beginning of a logical record in the primary
data set. A database record can only occupy one logical record in the primary
data set, but overflow segments of the database record can occupy more than
one logical record in the overflow data set.
v Segments in a database record cannot be split and stored across two logical
records. Because of this and because each database record starts a new logical
record, unused space exists at the end of many logical records. When the
database is initially loaded, IMS inserts a root segment with a key of all X'FF's as
the last root segment in the database.
Figure 29 on page 67 shows four HISAM database records (shown in Figure 28) as
they are initially stored on the primary and overflow data sets.
In storage, a HISAM segment (see Figure 29) consists of a 2-byte prefix followed by
user data. The first byte of the prefix is the segment code, which identifies the
segment type to IMS. This number can be from 1 to 255. The segment code is
assigned to the segment by IMS in ascending sequence, starting with the root
segment and continuing through all dependents in hierarchic sequence. The second
byte of the prefix is the delete byte.
Each logical record in the primary data set contains the root plus all dependents of
the root (in hierarchic sequence) for which there is enough space. The remaining
segments of the database record are put in the overflow data set (again in
hierarchic sequence). The two “parts” of the database record are chained together
with a direct-address pointer. When overflow segments in a database record use
more than one logical record in the overflow data set (the case for the first and
second database record in Figure 29), the logical records are also chained together
with a direct-address pointer. Note in the figure that HISAM indexes do not contain
a pointer to each root segment in the database. Rather, they point to the highest
root key in each block or CI.
Accessing Segments
In HISAM, when an application program issues a call with a segment search
argument (SSA) qualified on the key of the root segment, the segment is found by:
1. Searching the index for the first pointer with a value greater than or equal to the
specified root key (the index points to the highest root key in each CI)
2. Following the index pointer to the correct CI
3. Searching this CI for the correct logical record (the root key value is compared
with each root key in the CI)
4. When the correct logical record (and therefore database record) is found,
searching sequentially through it for the specified segment
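These four steps can be sketched in Python (an illustrative model only; `index_high_keys` holds the highest root key of each CI, mirroring the HISAM index described above):

```python
# Illustrative model of HISAM root retrieval: the index points to the
# highest root key in each CI, not to every root segment.
import bisect

def find_record(index_high_keys, cis, root_key):
    # Steps 1-2: the first index entry >= the requested key locates the CI.
    i = bisect.bisect_left(index_high_keys, root_key)
    if i == len(index_high_keys):
        return None                   # key is beyond the highest CI
    # Step 3: search that CI, comparing the key with each root key.
    for record in cis[i]:
        if record["root_key"] == root_key:
            # Step 4: the database record; its segments would then be
            # searched sequentially for the specified segment.
            return record
    return None
```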
Figure 31. Inserting a Root Segment into a HISAM Database (Free Logical Record Exists in
the CI)
Related Reading: For information on the OPTIONS statement, see IMS Version 9:
Installation Volume 2: System Definition and Tailoring and Chapter 9, “Designing
Full-Function Databases,” on page 241.
The split can occur at the point at which the root is inserted or midpoint in the CI.
After the CI is split, free logical records exist in each new CI and the new root is
inserted into the proper CI in root key sequence. If, as was the case in Figure 31,
logical records in the new CI contained roots with higher keys, those logical records
would be “pushed down” to create space for the new logical record.
Figure 32. Inserting a Root Segment into a HISAM Database (No Free Logical Record Exists
in the CI)
Figure 33 on page 71 shows how segment insertion takes place when there is
enough space in the logical record. The new dependent is stored in its proper
hierarchic position in the logical record by shifting the segments that hierarchically
follow it to the right in the logical record.
Figure 33. Inserting a Dependent Segment into a HISAM Database (Space Exists in the
Logical Record)
Figure 34 on page 72 shows how segment insertion takes place when there is not
enough space in the logical record. As in the previous case, new dependents are
always stored in their proper hierarchic sequence in the logical record. However, all
segments to the right of the new segment are moved to the first empty logical
record in the overflow data set.
Figure 34. Inserting a Dependent Segment into a HISAM Database (No Space Exists in the
Logical Record)
Deleting Segments
When segments are deleted from a HISAM database, they are marked as deleted
in the delete byte in their prefix. They are not physically removed from the
database; the one exception to this is discussed later in this topic. Dependent
segments of the deleted segment are not marked as deleted, but because their
parent is, the dependent segments cannot be accessed. These unmarked segments
(as well as segments marked as deleted) are deleted when the database is
reorganized.
One thing you should note is that when a segment is accessed that hierarchically
follows deleted segments in a database record, the deleted segments must still be
“searched through”. This concept is shown in Figure 35 and in Figure 36.
Segment B2 is deleted from this database record. This means that segment B2 and
its dependents (C1, C2, and C3) can no longer be accessed, even though they still
exist in the database.
A request to access segment D1 is made. Although segments B2, C1, C2, and C3
cannot be accessed, they still exist in the database. Therefore they must still be
“searched through” even though they are inaccessible as shown in Figure 36.
Figure 36. Accessing a HISAM Segment That Hierarchically Follows Deleted Segments
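This “search through” behavior can be modeled as follows (an illustrative Python sketch, simplified to a single level of dependents; segment names follow the B2/C1/D1 example above):

```python
# Illustrative model: a deleted segment is only marked in its delete
# byte, so it and its dependents stay physically present and must still
# be stepped over; `visited` counts every segment touched by the search.

def search(segments, target):
    visited = 0
    deleted_parent = None
    for seg in segments:              # hierarchic sequence
        visited += 1
        if deleted_parent is not None and seg["parent"] == deleted_parent:
            continue                  # dependent of a deleted parent: inaccessible
        if seg["deleted"]:
            deleted_parent = seg["name"]
            continue                  # marked deleted: inaccessible
        if seg["name"] == target:
            return seg, visited
    return None, visited
```

A search for D1 still touches B2 and its dependents C1, C2, and C3, and a search for C1 fails even though C1 physically remains in the database.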
In one situation, deleted segments are physically removed from the database. If the
deleted segment is a root, the logical record containing the root is erased, provided
neither the root nor any of its dependents is involved in a logical relationship. The
default is ERASE=YES, and no "mark buffer altered" takes place. Thus a
PROCOPT=G read job will not have to wait for locks after another job has set the
delete byte, and will return a segment not found condition. To be consistent with
other DB types, use ERASE=NO to cause a wait for physical delete prior to
attempted read.
Related Reading: For more information on the ERASE parameter of the DBD
statement, see the IMS Version 9: Installation Volume 2: System Definition and
Tailoring.
After the logical record is removed, its space is available for reuse. However, any
overflow logical record containing dependents of this root is not available for reuse.
Except for this special condition, you must unload and reload a HISAM database to
regain space occupied by deleted segments.
Replacing Segments
Replacing segments in a HISAM database is straightforward as long as fixed-length
segments are being used. The data in the segment, once changed, is returned to
its original location in storage. The key field in a segment cannot be changed.
SHSAM Databases
A simple HSAM (SHSAM) database is an HSAM database containing only one type
of segment, a root segment. The segment has no prefix, because no need exists
for a segment code (there is only one segment type) or for a delete byte (deletes
are not allowed).
SHSAM databases can be accessed by z/OS BSAM and QSAM because SHSAM
segments contain user data only (no IMS prefixes). The ISRT, DLET, and REPL
calls cannot be used to update. However, ISRT can be used to load an SHSAM
database. Only GET calls are valid for processing an SHSAM database. These
allow retrieval only of segments from the database. To update an SHSAM database,
it must be reloaded. The situations in which SHSAM is typically used are explained
in the introduction to this topic. Before deciding to use SHSAM, read the topic on
GSAM databases, because GSAM has many of the same functions as SHSAM.
Unlike SHSAM, however, GSAM files cannot be accessed from a message
processing region. GSAM does allow you to take checkpoints and perform restart,
though.
Although SHSAM databases can use the field-level sensitivity option, they cannot
use any of the following options:
v Logical relationships
v Secondary indexing
v Multiple data set groups
v Variable-length segments
v Segment edit/compression facility
v Data Capture exit routines
v Logging, recovery, or reorganization
SHISAM Databases
A simple HISAM (SHISAM) database is a HISAM database containing only one type
of segment, a root segment. The segment has no prefix, because no need exists
for a segment code (there is only one segment type) or for a delete byte (deletes
are done using a VSAM erase operation). SHISAM databases must be KSDSs;
they are accessed through VSAM. Because SHISAM segments contain user data
only (no IMS prefixes), they can be accessed by VSAM macros and DL/I calls. All
the DL/I calls can be issued against SHISAM databases.
The IMS symbolic checkpoint call makes restart easier than the z/OS basic
checkpoint call. If the z/OS data set the application program is using is converted to
a SHISAM database data set, the symbolic checkpoint call can be used. This allows
application programs to take checkpoints during processing and then restart their
programs from a checkpoint. The primary advantage of this is that, if the system
fails, application programs can recover from a checkpoint rather than lose all
processing that has been done. One exception applies to this: An application
program for initially loading a database that uses VSAM as the operating system
access method cannot be restarted from a checkpoint. Application programs using
GSAM databases can also issue symbolic checkpoint calls. Application programs
using SHSAM databases cannot.
Before deciding to use SHISAM, you should read the next topic on GSAM
databases. GSAM has many of the same functions as SHISAM. Unlike SHISAM,
however, GSAM files cannot be accessed from a message processing region.
SHISAM databases can use field-level sensitivity and Data Capture exit routines,
but they cannot use any of the following options:
v Logical relationships
v Secondary indexing
v Multiple data set groups
v Variable-length segments
v Segment edit/compression facility
GSAM Databases
GSAM databases are sequentially organized databases designed to be compatible
with z/OS data sets. GSAM databases can be on a data set previously created or
one later accessed by the z/OS access methods VSAM or QSAM/BSAM. GSAM
data sets can use fixed-length or variable-length records when VSAM is used, or
fixed-length, variable-length or undefined-length records when QSAM/BSAM is
used. If VSAM is used to process a GSAM database, the VSAM data set must be
entry sequenced and on a DASD. If QSAM/BSAM is used, the physical sequential
(DSORG=PS) data set can be placed on a DASD or tape unit. GSAM is designed
to be compatible with z/OS data sets. The GSAM database has no hierarchy,
database records, segments or keys.
In general, always use DISP=OLD for GSAM data sets when restarting from a
checkpoint even if you used DISP=MOD on the original execution of the job step. If
you use DISP=OLD, the data set is positioned at its beginning. If you use
DISP=MOD, the data set is positioned at its end.
Because GSAM databases are supported in a DCCTL environment, you may use
them when you need to process sequential non-IMS data sets using a BMP
program.
GSAM databases are loaded in the order in which you present records to the load
program. You cannot issue DLET and REPL calls against GSAM databases;
however, you can issue ISRT calls after the database is loaded but only to add
records to the end of the data set. Records are not randomly added to a GSAM
data set.
If you have application programs that need access to both IMS and z/OS data sets,
you can use SHSAM, SHISAM, or GSAM. Which one you use depends on what
functions you need. Table 8 compares the characteristics and functions available for
each of the three types of databases.
Table 8. Comparison of SHSAM, SHISAM, and GSAM Databases
Characteristics and Functions SHSAM SHISAM GSAM
Hierarchic structure applicable? NO NO NO
Segment prefix exist? NO NO NO
Variable-length records used? NO NO YES
Checkpoint/restart possible? NO YES1 YES1
Compatible with non-IMS data sets? YES YES YES
Can VSAM be used as the operating system access method? NO YES YES
Can BSAM be used as the operating system access method? YES NO YES
Accessible from a batch region? YES YES YES
| Table 9. Maximum Sizes for HDAM, HIDAM, PHDAM, and PHIDAM Databases (continued)
| Data Set Type Maximum Data Set Size Maximum Number of Data Sets Maximum Database Size
| VSAM PHDAM or PHIDAM Database 4 GB 10,010 data sets (10 data sets per partition; 1001 partitions per database) 40,040 GB
| Related Reading: For information on OSAM data sets, see Appendix C, “Using
| OSAM as the Access Method,” on page 507.
Related Reading:
v Except for logging and recovery, each of these options is discussed in detail in
the topics of this chapter. For information on logging and recovery, see IMS
Version 9: Operations Guide.
v For information on the online reorganization of HALDB partitions, see “HALDB
Online Reorganization” on page 364.
Several different types of direct-address pointers exist, and you will see how each
works in the topics that follow in this section. However, there are three basic types:
v Hierarchic pointers, which point from one segment to the next in either forward or
forward and backward hierarchic sequence
v Physical child pointers, which point from a parent to each of its first or first and
last children, for each child segment type
v Physical twin pointers, which point forward or forward and backward from one
segment occurrence of a segment type to the next, under the same parent
Each type of pointer is examined separately in this topic. The topic “Mixing
Pointers” on page 89, discusses how pointers can be mixed. In the subtopics in this
topic, each type of pointer is illustrated, and the database record on which each
illustration is based is shown in Figure 40.
When an application program issues a call for a segment, HF pointers are followed
until the specified segment is found. In this sense, the use of HF pointers in an HD
database is similar to using a sequentially organized database. In both, to reach a
dependent segment all segments that hierarchically precede it in the database
record must be examined. HF pointers should be used when segments in a
database record are typically processed in hierarchic sequence and processing
does not require a significant number of delete operations. If there are a lot of
delete operations, hierarchic forward and backward pointers (explained next) might
be a better choice.
Four bytes are needed in each dependent segment’s prefix for the HF pointer. Eight
bytes are needed in the root segment. More bytes are needed in the root segment
because the root points to both the next root segment and first dependent segment
in the database record. HF pointers are specified by coding PTR=H in the SEGM
statement in the DBD.
The backward pointers are useful only when all of the following are true:
v Direct pointers from logical relationships or secondary indexes point to the
segment being deleted or one of its dependent segments.
v These pointers are used to access the segment.
v The segment is deleted.
Eight bytes are needed in each dependent segment’s prefix to contain HF and HB
pointers. Twelve bytes are needed in the root segment. More bytes are needed in
the root segment because the root points:
v Forward to a dependent segment
v Forward to the next root segment in the database
v Backward to the preceding root segment in the database
HF and HB pointers are specified by coding PTR=HB in the SEGM statement in the
DBD.
With PCF pointers, the hierarchy is only partly connected. No pointers exist to
connect occurrences of the same segment type under a parent. Physical twin
pointers (explained in “Types of Pointers You Can Specify” on page 81) can be
used to form this connection. Use PCF pointers when segments in a database
record are typically processed randomly and either sequence fields are defined for
the segment type, or if not defined, the insert rule is FIRST or HERE. If sequence
fields are not defined and new segments are inserted at the end of existing
segment occurrences, the combination of PCF and physical child last (PCL)
pointers (explained next) can be a better choice.
Related Reading:
v For more information on insert rules, see IMS Version 9: Application
Programming: Database Manager.
v For information on specifying insert rules using the RULES= parameter of the
SEGM segment definition statement, see IMS Version 9: Utilities Reference:
System.
Four bytes are needed in each parent segment for each PCF pointer. PCF pointers
are specified by coding PARENT=((name,SNGL)) in the SEGM statement in the
DBD. This is the SEGM statement for the child being pointed to, not the SEGM
statement for the parent. Note, however, that the pointer is stored in the parent
segment.
Note that if only one physical child of a particular parent segment exists, the PCF
and PCL pointers both point to the same segment. As with PCF pointers, PCF and
PCL pointers leave the hierarchy only partly connected, and no pointers exist to
connect occurrences of the same segment type under a parent. Physical twin
pointers (explained in “Types of Pointers You Can Specify” on page 81) can be
used to form this connection.
PCF and PCL pointers (as opposed to just PCF pointers) are typically used when:
v No sequence field is defined for the segment type.
v New segment occurrences of a segment type are inserted at the end of all
existing segment occurrences.
On insert operations, if the ISRT rule of LAST has been specified, segments are
inserted at the end of all existing segment occurrences for that segment type. When
PCL pointers are used, fast access to the place where the segment will be inserted
is possible. This is because there is no need to search forward through all segment
occurrences stored before the last occurrence. PCL pointers also give application
programs fast retrieval of the last segment in a chain of segment occurrences.
Application programs can issue calls to retrieve the last segment by using an
unqualified SSA with the command code L. When a PCL pointer is followed to get
the last segment occurrence, any further movement in the database is forward.
A PCL pointer does not enable you to search from the last to the first occurrence of
a series of dependent child segment occurrences.
Four bytes are needed in each parent segment for each PCF and PCL pointer. PCF
and PCL pointers are specified by coding the PARENT= operand in the SEGM
statement in the DBD as PARENT=((name,DBLE)). This is the SEGM statement for
the child being pointed to, not the SEGM statement for the parent. Note, however,
that the pointers are stored in the parent segment.
A parent segment can have SNGL specified on one immediately dependent child
segment type and DBLE specified on another.
Figure 45 on page 87 shows the result of specifying PCF and PCL pointers in the
following DBD.
DBD
SEGM A
SEGM B PARENT=((name,SNGL)) (specifies PCF pointer only)
SEGM C PARENT=((name,DBLE)) (specifies PCF and PCL pointers)
| Note that, except in PHIDAM databases, PTF pointers can be specified for root
| segments. When this is done in an HDAM or PHDAM database, the root segment
| points to the next root in the database chained off the same root anchor point
| (RAP). If no more root segments are chained from this RAP, the PTF pointer is
| zero.
| When PTF pointers are specified for root segments in a HIDAM database, the root
| segment does not point to the next root in the database. For an explanation of
| where the root segment points, see “Use of RAPs in a HIDAM Database” on page
| 98.
| If you specify PTF pointers on a root segment in a HIDAM database, the HIDAM
| index must be used for all sequential processing of root segments. Using only PTF
| pointers increases access time. You can eliminate this overhead by specifying PTF
| and physical twin backward (PTB) pointers (discussed in “Physical Twin Forward
| and Backward Pointers” on page 88).
| You cannot use PTF pointers for root segments in a PHIDAM database. PHIDAM
| databases only support PTF pointers for dependent segments.
With PTF pointers, the hierarchy is only partly connected. No pointers exist to
connect parent and child segments. Physical child pointers can be used to form this
connection. PTF pointers should be used when segments in a database record are
typically processed randomly, and you do not need sequential processing of
database records.
Four bytes are needed for the PTF pointer in each segment occurrence of a given
segment type. PTF pointers are specified by coding PTR=T in the SEGM statement
in the DBD. This is the SEGM statement for the segment containing the pointer.
The combination of PCF and PTF pointers is used as the default when pointers are
not specified in the DBD. Figure 46 shows PTF pointers:
Note that PTF and PTB pointers can be specified for root segments. When this is
done, the root segment points to both the next and the previous root segment in the
database. As with PTF pointers, PTF and PTB pointers leave the hierarchy only
partly connected. No pointers exist to connect parent and child segments. Physical
child pointers (explained previously) can be used to form this connection.
PTF and PTB pointers (as opposed to just PTF pointers) should be used on the
root segment of a HIDAM or a PHIDAM database when you need fast sequential
processing of database records. By using PTB pointers in root segments, an
application program can sequentially process database records without IMS’ having
to refer to the HIDAM or PHIDAM index. For HIDAM databases, PTB pointers
improve performance when deleting a segment in a twin chain accessed by a
virtually paired logical relationship. Such twin-chain access occurs when a delete
from the logical access path causes DASD space to be released.
Eight bytes are needed for the PTF and PTB pointers in each segment occurrence
of a given segment type. PTF and PTB pointers are specified by coding PTR=TB in
the SEGM statement in the DBD.
Mixing Pointers
Because pointers are specified by segment type, the various types of pointers can
be mixed within a database record. However, only hierarchic or physical, but not
both, can be specified for a given segment type. The types of pointers that can be
specified for a segment type are:
HF Hierarchic forward
HF and HB Hierarchic forward and backward
PCF Physical child first
PCF and PCL Physical child first and last
PTF Physical twin forward
PTF and PTB Physical twin forward and backward
Figure 48 on page 90 shows a database record in which pointers have been mixed.
Note that, in some cases, for example, dependent segment B, many pointers exist
even though only one type of pointer is or can be specified. Also note that if a
segment is the last segment in a chain, its last pointer field is set to zero (the case
for segment E1, for instance). One exception is noted in the rules for mixing
pointers. Figure 48 has a legend that explains what specification in the PTR= or
PARENT= operand causes a particular pointer to be generated.
Or:
1. PTF
2. PTB
3. PCF
4. PCL
The databases referred to here are the HDAM or PHDAM and the HIDAM or
PHIDAM databases. HIDAM and PHIDAM each have an additional database, the
primary index database; for HIDAM, you allocate it; for PHIDAM, IMS allocates it;
for both, IMS maintains the index. This topic examines the index database when
dealing with the storage of HIDAM records. Figure 49 shows the general format of
an HD database and some of the special fields used in it.
HD databases use a single data set that is either a VSAM ESDS or an OSAM data
set. The data set contains one or more CIs (VSAM ESDS) or blocks (OSAM).
Database records in the data set are in unblocked format. Logical record length is
the same as the block size when OSAM is used. When VSAM is used, logical
record length is slightly less than CI size. (VSAM requires some extra control
information in the CI.) You can either specify logical record length yourself or have it
done by the Database Description Generation (DBDGEN) utility. The utility
generates logical record lengths equal to a quarter, third, half, or full track block.
Note that the database in Figure 49 contains areas of free space. This free space
could be the result of delete or replace operations done on segments in the data
set. Remember, space can be reused in HD databases. Or it could be free space
you set aside when loading the database. HD databases allow you to set aside free
space by specifying that periodic blocks or CIs be left free or by specifying that a
percentage of space in each block or CI be left free.
Examine the four fields illustrated in Figure 49. Three of the fields are used to
manage space in the database. The remaining one, the anchor point area, contains
the addresses of root segments. The fields are:
v Bit map. Bit maps contain a string of bits. Each bit describes whether enough
space is available in a particular CI or block to hold an occurrence of the longest
segment defined in the data set group. The first bit says whether the CI or block
that the bit map is in has free space. Each consecutive bit says whether the next
consecutive CI or block has free space. When the bit value is one, it means the
CI or block has enough space to store an occurrence of the longest segment
type you have defined in the data set group. When the bit value is zero, not
enough space is available.
The first bit map in an OSAM data set is in the first block of the first extent of the
data set. In VSAM data sets, the second CI is used for the bit map and the first
CI is reserved. The first bit map in a data set contains n bits that describe space
availability in the next n-1 consecutive CIs or blocks in the data set. After the first
bit map, another bit map is stored at every nth CI or block to describe whether
space is available in the next group of CIs or blocks in the data set.
| For a HALDB partition, the first bit map block stores the partition ID (2 bytes) and
| the reorganization number (2 bytes). These are stored before the FSEAP at the
| beginning of the block.
An example bit map is shown in Figure 50.
v Free space element anchor point (FSEAP). FSEAPs are made up of two 2-byte
fields. The first contains the offset, in bytes, to the first free space element (FSE)
in the CI or block. FSEs describe areas of free space in a block or CI. The
second field identifies whether this block or CI contains a bit map. If the block or
CI does not contain a bit map, the field is zeros. One FSEAP exists at the
beginning of every CI or block in the data set. IMS automatically generates and
maintains FSEAPs.
An FSEAP is shown in Figure 51 on page 93.
The FSEAP in the first bit map block in an OSAM data set has a special use. It
is used to contain the DBRC usage indicator for the database. The DBRC usage
indicator is used at database open time for update processing to verify usage of
the correct DBRC RECON data set.
v Free space element (FSE). An FSE describes each area of free space in a CI or
block that is 8 or more bytes in length. IMS automatically generates and
maintains FSEs. FSEs occupy the first 8 bytes of the area that is free space.
FSEs consist of three fields:
– Free space chain pointer (CP) field. This field contains, in bytes, the offset
from the beginning of this CI or block to the next FSE in the CI or block. This
field is 2 bytes long. The CP field is set to zero if this is the last FSE in the
block or CI.
– Available length (AL) field. This field contains, in bytes, the length of the free
space identified by this FSE. The value in this field includes the length of the
FSE itself. The AL field is 2 bytes long.
– Task ID (ID) field. This field contains the task ID of the program that freed the
space identified by the FSE. The task ID allows a given program to free and
reuse the same space during a given scheduling without contending for that
space with other programs. The ID field is 4 bytes long.
An FSE is shown in Figure 52.
v Anchor point area. The anchor point area is made up of one or more 4-byte root
anchor points (RAPs). Each RAP contains the address of a root segment. For
HDAM, you specify the number of RAPs you need on the RMNAME parameter in
the DBD statement. For PHDAM, you specify the number of RAPs you need on
the RMNAME parameter in the DBD statement, or by using the HALDB Partition
Definition utility, or on the DBRC INIT.PART command. For HIDAM (but not
PHIDAM), you specify whether RAPs exist by specifying PTR=T or PTR=H for a
root segment type. Only one RAP per block or CI is generated. How RAPs are
used in HDAM, PHDAM, and HIDAM differs. Therefore, RAPs are examined
further in the topics that follow.
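The layout of the three space-management fields lends itself to a compact illustration. The following sketch, in Python purely for illustration, decodes a hypothetical block laid out as described above: a 4-byte FSEAP at the start of the block, FSEs made of CP, AL, and ID fields, and a bit map tested bit by bit. The big-endian byte order and the helper names are assumptions for this sketch, not IMS internals.

```python
import struct

def parse_fseap(block: bytes):
    """Return (offset_to_first_fse, bitmap_flag) from the FSEAP.

    The FSEAP is two 2-byte fields at the start of every block or CI:
    the offset to the first FSE, and a flag that is nonzero when this
    block contains a bit map. (Big-endian layout assumed here.)
    """
    offset, bitmap_flag = struct.unpack_from(">HH", block, 0)
    return offset, bitmap_flag

def walk_fses(block: bytes):
    """Yield (offset, available_length, task_id) for each FSE.

    An FSE is CP (2 bytes), AL (2 bytes), ID (4 bytes); a CP value of
    zero marks the last FSE in the free-space chain.
    """
    offset, _ = parse_fseap(block)
    while offset != 0:
        cp, al, task_id = struct.unpack_from(">HHI", block, offset)
        yield offset, al, task_id
        offset = cp

def block_has_room(bitmap: bytes, n: int) -> bool:
    """True when bit n of the bit map is 1, that is, when the n-th
    block or CI can hold an occurrence of the longest segment type
    defined in the data set group."""
    return bool(bitmap[n // 8] & (0x80 >> (n % 8)))
```

For example, a block whose FSEAP points to an FSE at offset 8 with AL = 497 (as in the insert example later in this chapter) yields a single chain entry from `walk_fses`.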
Figure 54 shows sample Skills database records. Figure 55 on page 95 shows how
these records are stored in an HDAM or PHDAM database.
When the database is initially loaded, the root and each dependent segment are put
in the root addressable area until the next segment to be stored will cause the total
space used to exceed the amount of space you specified in the BYTES operand. At
this point, all remaining dependent segments in the database record are stored in
the overflow area.
In an HDAM or a PHDAM database, the order in which you load database records
does not matter. The user randomizing module determines where each root is
stored. However, as with all types of databases, when the database is loaded, all
dependents of a root must be loaded in hierarchic sequence following the root.
When the database is initially loaded, IMS puts the root and segments in the first
available space in the specified CI or block, if this is possible. IMS then puts the
4-byte address of the root in the RAP of the CI or block designated by the
randomizing module. RAPs only exist in the root addressable area. If space is not
available in the root addressable area for a root, it is put in the overflow area. The
root, however, is chained from a RAP in the root addressable area.
If the randomizing module generates the same relative block and RAP number for
more than one root, the RAP points to a single root and all additional roots with the
same relative block and RAP number are chained to each other using physical twin
pointers. Roots are always chained in ascending key sequence. If non-unique keys
exist, the ISRT rules of FIRST, LAST, and HERE determine the sequence in which
roots are chained. (These ISRT rules are explained in IMS Version 9: Application
Programming: Database Manager.) All roots chained like this from a single anchor
point area are called synonyms.
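The chaining of synonyms can be sketched as follows. The randomizing function below is a toy hash, not an actual IMS randomizing module such as DFSHDC40; it only illustrates how a key maps to a (relative block, RAP number) pair and how colliding roots are kept in ascending key sequence on one anchor chain. All names are invented for illustration.

```python
import bisect

def randomize(key: str, blocks: int, raps_per_block: int):
    """Toy stand-in for a randomizing module: map a root key to a
    1-based (relative block, RAP number) pair."""
    h = sum(key.encode())              # illustrative hash only
    block = h % blocks + 1
    rap = (h // blocks) % raps_per_block + 1
    return block, rap

class AnchorPoint:
    """Models one RAP: a chain of synonym root keys that IMS keeps in
    ascending key sequence via physical twin pointers."""
    def __init__(self):
        self.chain = []

    def insert(self, key):
        bisect.insort(self.chain, key)  # ascending key sequence
```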
Figure 55 on page 95 shows two HDAM or PHDAM database records and how they
appear in storage after initial load. In this example, enough space exists in the
specified block or CI to store the roots, and the randomizing module generated
unique relative block and RAP numbers for each root. The bytes parameter
specifies enough space for five segments of the database record to fit in the root
addressable area. All remaining segments are put in the overflow area. When
HDAM or PHDAM database records are initially loaded, dependent segments that
cannot fit in the root addressable area are simply put in the first available space in
the overflow area.
Note how segments in the database record are chained together. In this case,
hierarchic pointers are used instead of the combination of physical child/physical
twin pointers. Each segment points to the next segment in hierarchic sequence.
Also note that two RAPs were specified per CI or block and each of the roots
loaded is pointed to by a RAP. For simplicity, Figure 55 on page 95 does not show
the various space management fields.
Note how segments in a database record are chained together. In this case,
hierarchic pointers were used instead of the combination of physical child/physical
twin pointers. Each segment points to the next segment in hierarchic sequence. No
RAPs exist in Figure 56. Although HIDAM databases can have RAPs, you probably
do not need to use them. The reason for not using RAPs is explained in “Use of
RAPs in a HIDAM Database” on page 98.
The prefix portion of the index segment contains the delete byte and the root’s
address. The data portion of the index segment contains the key field of the root
being indexed. This key field identifies which root segment the index segment is for
and is the reason why root segments in a HIDAM or PHIDAM database must
have unique sequence fields. Each index segment is a separate logical record.
Figure 58 shows the index database that IMS would generate when the two
database records in Figure 56 on page 97 were loaded.
| In HIDAM databases, RAPs are generated only if you specify PTR=T or PTR=H for
| a root segment. When either of these is specified, one RAP is put at the beginning
| of each CI or block, and root segments within the CI or block are chained from the
| RAP in reverse order based on the time they were inserted. By this method, the
| RAP points to the last root inserted into the block or CI, and the hierarchic or twin
| forward pointer in the first root inserted into the block or CI is set to zero. The
| hierarchic or twin forward pointer in each of the other root segments in the block
| points to the previous root inserted in the block. Figure 59 shows what happens if
| you specify PTR=T or PTR=H for root segments in a HIDAM database.
Figure 59. Specifying PTR=T or PTR=H for Root Segments in a HIDAM Database
| Note that if you specify PTR=H for a PHIDAM root, you get an additional hierarchic
| pointer to the first dependent in the hierarchy. In Figure 59, a “1” indicates where
| this additional hierarchic pointer would appear.
| The implication of using PTR=T or PTR=H is that the pointer from one root to the
| next cannot be used to process roots sequentially. Instead, the HIDAM index must
| be used for all sequential root processing, and this increases access time. Specify
| PTR=TB or PTR=HB for root segments in a HIDAM database. Then no RAP is
| generated, and GN calls against root segments proceed along the normal physical
| twin forward chain. If no pointers are specified for HIDAM root segments, the
| default is PTR=T.
Accessing Segments
The way in which a segment in an HD database is accessed depends on whether
the DL/I call for the segment is qualified or unqualified.
Qualified Calls
When a call is issued for a root segment and the call is qualified on the root
segment’s key, the way in which the database record containing the segment is
found depends on whether the database is HDAM, PHDAM, HIDAM, or PHIDAM. In
an HDAM or a PHDAM database, the randomizing module generates the root
Once the root segment is found, if the qualified call is for a dependent segment,
IMS searches for the dependent by following the pointers in each dependent
segment’s prefix. The exact way in which the search proceeds depends on the type
of pointers you are using. Figure 60 shows how a dependent segment is found
when PCF and PTF pointers are used.
Figure 60. How Dependent Segments Are Found Using PCF and PTF Pointers
Unqualified Calls
When an unqualified call is issued for a segment, the way in which the search
proceeds depends on:
v Whether the database is HDAM, PHDAM, HIDAM, or PHIDAM
v Whether a root or dependent segment is being accessed
v Where position in the database is currently established
v What type of pointers are being used
v Where parentage is set (if the call is a GNP)
Because of the many variables, it is not practical to generalize on how a segment is
accessed.
3. Once the index segment is created, the root segment is stored in the database
at the location specified by the HD space search algorithm. How this algorithm
works is described in “How the HD Space Search Algorithm Works” on page
103.
The “before” picture shows the CI containing the bit map (in VSAM, the bit map is
always in the second CI in the database). The second bit in the bit map is set to 1,
which says there is free space in the next CI. In the next CI (CI #3):
v The FSEAP says there is an FSE (which describes an area of free space) 8
bytes from the beginning of this CI.
v The anchor point area (which has one RAP in this case) contains zeros because
no root segments are currently stored in this CI.
v The FSE AL field says that 497 bytes of free space are available starting at the
beginning of this FSE.
The SKILL1 root segment to be inserted is only 32 bytes long; therefore CI #3 has
plenty of space to store SKILL1.
The “after” picture shows how the space management fields in CI #3 are updated
when SKILL1 is inserted.
v The FSEAP now says there is an FSE 40 bytes from the beginning of this CI.
v The RAP points to SKILL1. The pointer value in the RAP is derived using the
following formula:
Pointer value = (CI size) * (CI number - 1) + (offset of the root segment within the CI)
Chapter 6. Choosing Full-Function Database Types 101
HDAM, PHDAM, HIDAM, and PHIDAM
In this case, the pointer value is 1032 (pointer value = 512 x 2 + 8).
v The FSE has been “moved” to the beginning of the remaining area of free space.
The FSE AL field says that 465 bytes (497 - 32) of free space are available,
starting at the beginning of this FSE.
Figure 62. Updating the Space Management Fields in an HDAM or PHDAM Database
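The RAP pointer value derived above can be checked with a one-line computation. This is a direct transcription of the formula in the text; the function name is illustrative.

```python
def rap_pointer(ci_size: int, ci_number: int, offset_in_ci: int) -> int:
    """RAP pointer value = CI size * (CI number - 1) + offset of the
    root segment within the CI (formula from the text)."""
    return ci_size * (ci_number - 1) + offset_in_ci

# The SKILL1 example: 512-byte CIs, root stored in CI #3 at offset 8.
print(rap_pointer(512, 3, 8))  # → 1032
```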
As with the insertion of root segments into an HD database, the various space
management fields in the database need to be updated. (This process was
explained and illustrated in “Updating the Space Management Fields When a Root
Segment Is Inserted” on page 101.)
Deleting Segments
When a segment is deleted in an HD database, it is physically removed from the
database. The space it occupied can be reused when new segments are inserted.
As with the insertion of segments into an HD database, the various space
management fields need to be updated. (This process was explained and illustrated
in “Updating the Space Management Fields When a Root Segment Is Inserted” on
page 101.)
v The bit map needs to be updated if the block or CI from which the segment is
deleted now contains enough space for a segment to be inserted. (Remember,
the bit map says whether enough space exists in the block or CI to hold a
segment of the longest type defined. Thus, if the deleted segment did not free up
enough space for the longest segment type defined, the bit map is not changed.)
v The FSEAP needs to be updated to show where the first FSE in the block or CI
is now located.
v When a segment is deleted, a new FSE might be created or the AL field value in
the FSE that immediately precedes the deleted segment might need to be
updated.
v If the deleted segment is a root segment in an HDAM or a PHDAM database, the
value in its PTF pointer is put in the RAP or in the PTF pointer that pointed to it.
This maintains the chain off the RAP and removes the deleted segment from the
chain.
If a deleted segment is next to an already available area of space, the two areas
are combined into one unless they are created by an online task that has not yet
reached a sync point.
Replacing Segments
Replacing segments in HD databases is straightforward as long as fixed-length
segments are used. The segment data, once changed, is simply returned to its
original location in storage. The key field in a segment cannot be replaced.
Provided sufficient adjacent space is available, the segment data is returned to its
original location when a variable-length segment is replaced with a longer segment.
If adjacent space is unavailable, space is obtained from the overflow area for the
lengthened data portion of the segment. This segment is referred to as a “separated
data segment.” It has a 2-byte prefix consisting of a 1-byte segment code and a
1-byte delete flag, followed by the segment data. The delete byte of the separated
data segment is set to X'FF', indicating that this is a separated data segment. A
pointer is built immediately following the original segment to point to the separated
data. Bit 4 of the delete byte of the original segment is set ON to indicate that the
data for this segment is separated. The unused remaining space in the original
segment is available for reuse.
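The separated-data-segment layout just described can be sketched as follows. The byte values come from the text (the X'FF' delete byte, and bit 4 of the original segment's delete byte); the function names and the bit-numbering assumption (bit 0 is the leftmost bit, so bit 4 is X'08') are illustrative, not IMS internals.

```python
SEPARATED_DATA_FLAG = 0xFF   # delete byte of a separated data segment
BIT4_DATA_SEPARATED = 0x08   # bit 4, numbering bits 0-7 from the left
                             # (an assumption for this sketch)

def build_separated_segment(segment_code: int, data: bytes) -> bytes:
    """A separated data segment: a 2-byte prefix (1-byte segment code,
    1-byte delete flag set to X'FF'), followed by the segment data."""
    return bytes([segment_code, SEPARATED_DATA_FLAG]) + data

def mark_original(delete_byte: int) -> int:
    """Turn on bit 4 of the original segment's delete byte to indicate
    that its data now lives in a separated data segment."""
    return delete_byte | BIT4_DATA_SEPARATED
```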
Root Segment
The most desirable block depends on the access method. For HDAM or PHDAM
roots, the most desirable block is the one containing either the RAP or root
segment that will point to the root being inserted. For HIDAM or PHIDAM roots, if
the root does not have a twin backward pointer, the most desirable block is the one
containing the root with the next higher key. If the root has a twin backward pointer,
the most desirable block is the one containing the root with the next lower key.
Dependent Segment
The most desirable block is the one containing the segment that points to the
inserted segment. If both physical child and physical twin pointers are used, the
most desirable block is the one containing either the parent or the
immediately-preceding twin. If hierarchic pointers are used, the most desirable block
is the one containing the immediately-preceding segment in the hierarchy.
All search ranges defined in the HD space search algorithm, excluding steps 9
through 11, are limited to the physical extent that includes the most desirable block.
When the most desirable block is in the overflow area, the search ranges, excluding
steps 9 through 11, are restricted to the overflow area.
The steps in the HD space search algorithm follow. They are arranged in the
sequence in which they are performed. The first time any one of the steps in the list
results in available space, the search is ended and the segment is stored.
9. In any block or CI at the end of the data set, as determined by consulting the
bit map. The data sets will be extended as far as possible before going to the
next step.
10. In any block or CI in the data set where space exists, as determined by
consulting the bit map. (This step is not used when a HIDAM or PHIDAM
database is loaded.)
If the dependent segment being inserted is at the highest level in a secondary data
set group, the place and the way in which space is found differ:
v First, if the segment has no twins, steps 1 through 8 in the HD space search
algorithm are skipped.
v Second, if the segment has a twin that precedes it in the twin chain, the most
desirable block is the one containing that twin.
v Third, if the segment has only twins that follow it in the twin chain, the most
desirable block is the one containing the twin to which the new segment is
chained.
Locking Protocols
IMS uses locks to isolate the database changes made by concurrently executing
programs. Locking is accomplished by using either the Program Isolation (PI) lock
manager or the IRLM. The PI lock manager provides only four locking levels,
whereas the IRLM supports eleven lock states.
The IRLM also provides the required support for “feedback only” and “test”
locking, and it supplies feedback on lock requests that is compatible with the
feedback supplied by the PI lock manager.
Because data is always accessed hierarchically, when a lock on a root (or anchor)
is obtained, IMS determines if any programs hold locks on dependent segments. If
no program holds locks on dependent segments, it is not necessary to lock
dependent segments when they are accessed.
The following locking protocol allows IMS to make this determination. If a root
segment is updated, the root lock is held at update level until commit. If a
dependent segment is updated, it is locked at update level. When exiting the
database record, the root segment is demoted to read level. When a program
enters the database record and obtains the lock at either read or update level, the
lock manager provides feedback indicating whether or not another program has the
lock at read level. This determines if dependent segments will be locked when they
are accessed. For HISAM, the primary logical record is treated as the root, and the
overflow logical records are treated as dependent segments.
Related Reading: For a special case involving the HISAM delete byte with
parameter ERASE=YES, see “Deleting Segments” on page 72.
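The root-lock protocol described above (the PI lock manager case) can be modeled schematically. The two-level lock and the dictionary-based feedback below are simplifications for illustration only; real PI locking has four levels and the IRLM eleven, and the class and method names are invented.

```python
class RootLock:
    """Schematic root (or anchor) lock with read/update levels and the
    feedback that decides whether dependents must be locked."""

    def __init__(self):
        self.holders = {}          # program name -> "read" | "update"

    def acquire(self, prog, level):
        """Grant the lock and return the feedback: does any *other*
        program hold this lock at read level? If not, dependent
        segments need not be locked when they are accessed."""
        others_at_read = any(lvl == "read"
                             for p, lvl in self.holders.items()
                             if p != prog)
        self.holders[prog] = level
        return others_at_read

    def demote_on_exit(self, prog):
        """Exiting the database record demotes an update-level root
        lock to read level, as the protocol describes."""
        if self.holders.get(prog) == "update":
            self.holders[prog] = "read"
```

For example, a program that updated a root and exited the record leaves a read-level lock behind, so the next program entering the record receives feedback telling it to lock dependent segments.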
These lock protocols apply when the PI lock manager is used; however, if the IRLM
is used, no lock is obtained when a dependent segment is updated. Instead, the
root lock is held at single update level when exiting the database record. Therefore,
no additional locks are required if a dependent segment is inserted, deleted, or
replaced.
If a root segment is returned in hold status, the root lock obtained when entering
the database record prevents another user with update capability from entering the
database record. If a dependent segment is returned in hold status, a Q command
code test lock is required. An indicator is turned on whenever a Q command code
lock is issued for a database. This indicator is reset whenever the only application
scheduled against the database ends. If the indicator is not set, then no Q
command code locks are outstanding and no test lock is required to return a
dependent segment in hold status.
If a Q command code is issued on any segment, the buffer is locked. This prevents
the sharing system from updating the buffer until the Q command code lock is
released.
| When NOTWIN pointers are specified on a PHIDAM root, a lock on the next higher
| non-deleted root is required to provide data integrity. The additional lock is obtained
| by reading the index until a non-deleted index entry is found and then locking the
| RBA of the root segment as the resource name.
When you access an HDAM or a PHDAM database, the anchor of the desired root
segment is locked as long as position exists on any root chained from that anchor.
Therefore, if an update PCB has position on an HDAM or PHDAM root, no other
user can access that anchor. When a segment has been updated and the IRLM is
used, no other user can access the anchor until the user that is updating commits.
If the PI lock manager is used and an uncommitted unit of work holds the anchor,
locks are needed to access all root and dependent segments chained from the
anchor until the user that is updating commits.
When a database I/O error occurs in a sysplex environment, the local system
maintains the buffer and informs all members of the data-sharing group with
registered interest in the database that the CI is unavailable. Subsequent DL/I
requests for that CI receive a failure return code as long as the I/O error persists.
Although you do not have to register your databases with DBRC for error handling
to work, registration is required for HALDBs and highly recommended for all other
full-function databases.
If an error occurs on a database registered with DBRC and the system stops, the
database could be damaged if the system is restarted and a /DBR command is not
issued prior to accessing the database. The restart causes the error buffers to be
restored as they were when the system stopped. If the same block had been
updated during the batch run, the batch update would be overlaid.
Both DEDBs and MSDBs use the direct method of storing data. With the direct
method, the hierarchic sequence of segments is maintained by putting
direct-address pointers in each segment’s prefix.
For a summary of the different characteristics of all IMS database types, including
Fast Path databases, see Table 7 on page 59.
In this chapter:
v “Data Entry Databases”
v “Main Storage Databases (MSDBs)” on page 128
v “Fast Path Virtual Storage Option” on page 135
v “Fast Path Synchronization Points” on page 149
v “Managing I/O Errors and Long Wait Times” on page 149
| Several characteristics of DEDBs also make DEDBs useful when you must gather
| detailed and summary information. These characteristics include:
| Area format
| Area data set replication
| Record deactivation
| Non-recovery option
| You can convert MSDBs with non-terminal-related keys to VSO DEDBs. You can
| use the MSDB-to-DEDB Conversion utility to do so.
DEDB Functions
DEDBs and MSDBs have many similar functions, including:
v Virtual storage
v The field (FLD) call
v Fixed length segments
v MSDB or DEDB commit view
| DEDB Areas
| A DEDB can be organized into one or more data sets called areas. Areas increase
| the efficiency, capacity, and flexibility of DEDBs. This topic discusses DEDB areas
| and how to work with them.
The randomizing module is used to determine which records are placed in each
area. Because of the area concept, larger databases can exceed the 2³²-byte
(4 GB) limit of a single VSAM data set. Each area can have its own space
management parameters. You can choose these parameters according to the
message volume, which can vary from area to area. DEDB areas can be allocated
on different volume types.
Multiple programs, optionally together with one online utility, can access an area concurrently within a
database, as long as they are using different CIs. CI sizes can be 512 bytes, 1K,
2K, 4K, and up to 28K in 4K increments. The media manager and Integrated
Catalog Facility catalog of Data Facility Storage Management Subsystem (DFSMS)
are required.
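The permitted CI sizes listed above can be enumerated programmatically. The following sketch simply encodes the rule from the text (512 bytes, 1K, 2K, 4K, then 4K increments up to 28K); the function name is illustrative.

```python
def valid_dedb_ci_sizes():
    """DEDB CI sizes per the text: 512 bytes, 1K, 2K, 4K, and then
    4K increments from 8K up to 28K."""
    return [512, 1024, 2048, 4096] + [k * 1024 for k in range(8, 29, 4)]

print(valid_dedb_ci_sizes())
```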
| You can limit the overhead of opening areas by preopening your DEDB areas. You
| can also distribute this overhead between the startup process and online operation
| by preopening only those areas that applications use the most and by leaving all
| other areas closed until an application first accesses them.
| You specify the preopen status of an area using the PREOPEN and NOPREO
| parameters of the DBRC INIT.DBDS command or CHANGE.DBDS command.
| By default IMS preopens all DEDB areas that have been assigned preopen status
| during the startup process; however, preopening a large number of DEDB areas
| during the startup process can delay data processing. To avoid this delay, you can
| have IMS preopen DEDB areas after the startup process and asynchronously to the
| execution of your application programs. In this case, if IMS has not preopened a
| DEDB area when an application program attempts to access the area, IMS opens
| the DEDB area at that time. You can specify this behavior by using the FPOPN=
| keyword in the IMS and DBC startup procedures. Specifically, FPOPN=P causes
| IMS to preopen DEDB areas after startup and asynchronous to application program
| execution.
| The FPOPN= keyword determines how IMS reopens DEDB areas for both normal
| restarts (/NRE) and emergency restarts (/ERE).
| Related Reading:
| v For more information about the FPOPN= keyword and the IMS and DBC
| procedures, see IMS Version 9: Installation Volume 2: System Definition and
| Tailoring.
| v For more information about DBRC and DBRC commands, see the IMS Version
| 9: Database Recovery Control (DBRC) Guide and Reference.
| Reopening DEDB Areas During an Emergency Restart: You can specify how
| IMS reopens DEDB areas during an emergency restart by using the FPOPN=
| keyword in the IMS procedure or DBC procedure. The following list describes how
| the FPOPN= keyword affects the reopening of DEDB areas during an emergency
| restart:
| FPOPN=N
| During the startup process, IMS opens only those areas that have preopen
| status. This is the default.
| FPOPN=P
| After the startup process completes and asynchronous to the resumption of
| application processing, IMS opens only those areas that have preopen status.
| FPOPN=R
| After the startup process completes and asynchronous to the resumption of
| application processing, IMS opens only those areas that were open prior to the
| abnormal termination. All DEDB areas that were closed at the time of the
| abnormal termination, including DEDB areas with a preopen status, will remain
| closed when you restart IMS.
| FPOPN=D
| Suppresses the preopen process. DEDB areas that have a preopen status are
| not preopened and remain closed until they are first accessed by an application
| program or until they are manually opened with a /START AREA command.
| FPOPN=D overrides, but does not change, the preopen status of DEDB areas
| as set by the PREOPEN parameter of the DBRC commands INIT.DBDS and
| CHANGE.DBDS.
| Related Reading: For more information about the FPOPN= keyword and the IMS
| and DBC startup procedures, see IMS Version 9: Installation Volume 2: System
| Definition and Tailoring.
| You can resume access to a stopped DEDB by starting it with the /START DATABASE
| command. You can also resume access to a stopped area by starting it with the
| /START AREA command. The /START AREA command does not open areas unless
| you have specified them as PREOPEN areas.
| You can specify how IMS restarts and reopens DEDB areas after the IRLM
| reconnects, by using the FPRLM= keyword in the IMS and DBC procedures. The
| following list describes how the FPRLM= keyword affects the reopening of DEDB
| areas after an IRLM failure has been corrected:
| FPRLM=N
| All DEDB areas remain stopped and unopened until you issue a /START
| DATABASE or /START AREA command. This is the default.
| FPRLM=S
| After IRLM reconnects, IMS restarts, but does not reopen, all areas that were
| open at the time of the IRLM failure. IMS restarts the DEDB areas
| asynchronously to the resumption of application processing.
| FPRLM=R
| After IRLM reconnects, IMS restores all DEDB areas to their state at the time of
| the IRLM failure, restarting and reopening DEDB areas regardless of whether
| the DEDB areas have preopen status. IMS restores the DEDB areas
| asynchronously to the resumption of application processing.
| FPRLM=A
| After IRLM reconnects, IMS restarts and reopens all DEDB areas that were
| open at the time of the IRLM failure and all DEDB areas that have a preopen
| status, even if they were closed at the time of the IRLM failure. IMS restores
| the DEDB areas asynchronously to the resumption of application processing.
| Related Reading:
| v For more information about the FPRLM= keyword and the IMS and DBC
| procedures, see IMS Version 9: Installation Volume 2: System Definition and
| Tailoring.
| v For more information about IRLM, see:
| – IMS Version 9: Operations Guide
| – IMS Version 9: Administration Guide: System
Read Error: When a read error is detected in an area, the application program
receives an AO status code. An Error Queue Element (EQE) is created, but not
written to the second CI nor sent to the sharing system in a data sharing
environment. Application programs can continue to access that area; they are
prevented only from accessing the CI in error. After read errors on four different CIs,
the area data set (ADS) is stopped. The read errors must be consecutive; that is, if
there is an intervening write error, the read EQE count is cleared. This read error
processing applies only to a multiple area data set (MADS) environment.
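The read-error rule above can be sketched in a few lines. This is a hedged model of the described behavior only; the class and method names are illustrative, not IMS interfaces.

```python
# Sketch of the MADS read-error rule: read errors on four different
# CIs stop the ADS, and an intervening write error clears the
# read-EQE count (the read errors must be consecutive).

class AreaDataSet:
    READ_EQE_LIMIT = 4

    def __init__(self):
        self.read_error_cis = set()  # CIs with outstanding read EQEs
        self.stopped = False

    def read_error(self, ci):
        self.read_error_cis.add(ci)
        if len(self.read_error_cis) >= self.READ_EQE_LIMIT:
            self.stopped = True       # read errors on 4 different CIs

    def write_error(self, ci):
        self.read_error_cis.clear()   # intervening write error resets count
```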
Write Error: When a write error is detected in an area, an EQE is created and
application programs are allowed access to the area until the EQE count reaches
11. Even though part of a database might not be available (one or more areas are
stopped), the database is still logically available and transactions using that
database are still scheduled. If multiple data sets make up the area, chances are
that one copy of the data will always be available.
When a write error occurs to a DEDB using MADS, an EQE is created for the ADS
that had the write error. In this environment, when the maximum of 10 EQEs is
reached, the ADS is stopped.
When a write error to a recoverable DEDB area using a single ADS occurs, IMS
invokes the I/O toleration (IOT) processing. IMS allocates a virtual buffer in ECSA
and copies the control interval in error from the Fast Path common buffer to the
virtual buffer. IMS records the creation of the virtual buffer with an X’26’ log record.
If the database is registered with DBRC, an Extended Error Queue Element (EEQE)
is created and registered in DBRC. The EEQE identifies the control interval in error.
In a data sharing environment using IRLM, all sharing partners are notified of the
creation of the EEQE.
The data that is tolerated is available to the IMS system that created the EEQE.
The sharing partner will get an ’AO’ status when it requests that CI because the
data is not available. When a request is made for a control interval that is tolerated,
the data is copied from the virtual buffer to a common buffer. When an update is
performed on the data, it is copied back to the virtual buffer. A standard X’5950’ log
record is generated for the update.
Every write error is represented by an EEQE on an area basis. The EEQEs are
maintained by DBRC and logged to the IMS log as X’26’ log records. There is no
logical limit to the number of EEQEs that can exist for an area. There is a physical
storage limitation in DBRC and ECSA for the number of EEQEs that can be
maintained. This limit is installation dependent. To avoid overextending DBRC or
ECSA usage, IMS allows only a limited number of EEQEs for a DEDB: after 100
EEQEs are created for an area, the area is stopped.
During system checkpoint, /STO, and /VUN commands, IMS attempts to write back
the CIs in error. If the write is successful, the EEQE is removed. If the write is
unsuccessful, the EEQE remains.
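The EEQE bookkeeping described in the preceding two paragraphs can be modeled as follows. This is a minimal sketch under the stated assumptions; the class and method names are hypothetical.

```python
# Sketch of EEQE accounting for a DEDB area: each write error adds an
# EEQE for the CI in error; at 100 EEQEs the area is stopped. At
# system checkpoint (or /STO, /VUN), IMS retries the writes and
# removes the EEQE for each CI written back successfully.

EEQE_LIMIT = 100

class Area:
    def __init__(self):
        self.eeqes = set()   # CIs tolerated in virtual buffers
        self.stopped = False

    def write_error(self, ci):
        self.eeqes.add(ci)
        if len(self.eeqes) >= EEQE_LIMIT:
            self.stopped = True

    def retry_writes(self, write_ok):
        # write_ok(ci) -> True when the CI was written back to DASD
        self.eeqes = {ci for ci in self.eeqes if not write_ok(ci)}
```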
Record Deactivation
If an error occurs while an application program is updating a DEDB, it is not
necessary to stop the database or even the area. IMS continues to allow application
programs to access that area. It only prevents them from accessing the control
interval in error by creating an EQE for the error CI. If there are multiple copies of
the area, chances are that one copy of the data will always be available. It is
unlikely that the same control interval will be in error in all copies of the area. IMS
automatically makes an area data set unavailable when a count of 11 errors has
been reached for that data set.
Record deactivation minimizes the effect of database failure and errors to the data
in these ways:
v If multiple copies of an area data set are used, and an error occurs while an
application program is trying to update that area, the error does not need to be
corrected immediately. Other application programs can continue to access the
data in that area through other available copies of that area.
v If a copy of an area has a number of I/O errors, you can create a new copy from
existing copies of the area using the DEDB Area Data Set Create utility. The
copy with the errors can then be destroyed.
Non-Recovery Option
By specifying a VSO or non-VSO DEDB as nonrecoverable, you can improve online
performance and reduce database change logging of your DEDBs. IMS does not
log any changes from a nonrecoverable DEDB, nor does it keep any updates in the
DBRC RECON data set. All areas are nonrecoverable in a nonrecoverable DEDB.
SDEPs are not supported for nonrecoverable DEDBs. After IMS calls DBRC to
authorize the areas, IMS checks for SDEPs. If IMS finds any, it calls DBRC to
unauthorize the areas and stops them. You must remove the SDEP segment
type from the DEDB design before IMS will authorize the DEDB.
Related Reading: For information on how IMS handles nonrecoverable DEDB write
errors, which can happen during restart or XRF takeover, see “Write Error” on page
113.
Before changing the recoverability of a DEDB, issue a /STOP DB, /STO AREA, or /DBR
DB command. To change a recoverable DEDB to a nonrecoverable DEDB, use the
DBRC command CHANGE.DB DBD() NONRECOV. To change a nonrecoverable DEDB to a
recoverable DEDB, use the command CHANGE.DB DBD() RECOVABL.
The Create utility can create its new copy on a different device, as specified in its
job control language (JCL). If your installation was migrating data to other storage
devices, then this process could be carried out while the online system was still
executing, and the data would remain current.
To ensure all copies of a DEDB remain identical, IMS updates all copies when a
change is made to only one copy.
If an ADS fails to open during normal open processing of a DEDB with multiple data
sets (MADS), none of the copies of the ADS can be allocated, and the area is
stopped. However, when open failure occurs during emergency restart, only the
failed ADS is unallocated and stopped. The other copies of the ADS remain
available for use.
| If you specify that a DEDB does not allow data sharing, only one IMS system can
| access a DEDB area at a time; however, other IMS systems can still access the
| other areas the DEDB contains.
| If you specify that a DEDB allows data sharing, multiple IMS systems can access
| the same DEDB area at the same time. Sharing a single DEDB area is equivalent
| to block-level sharing of full-function databases.
| You can specify the level of data sharing that a DEDB allows by using the
| SHARELVL parameter in the DBRC commands INIT.DB and CHANGE.DB. If any IMS
| has already authorized the database, changing the SHARELVL does not modify the
| database record. The SHARELVL parameter applies to all areas in a DEDB.
| You can share DEDB areas directly from DASD or from a coupling facility structure
| using the Virtual Storage Option (VSO).
| Related Reading:
| v For general information on VSO, including its benefits and use, see “Fast Path
| Virtual Storage Option” on page 135.
| v For specific information on sharing VSO DEDB areas, see “Sharing of VSO
| DEDB Areas” on page 138.
| v For more information on the SHARELVL parameter, see the IMS Version 9:
| Database Recovery Control (DBRC) Guide and Reference.
| v For general information on data sharing, see IMS Version 9: Administration
| Guide: System.
To define fixed-length segments, specify a single value for the BYTES= parameter
during DBDGEN in the SEGM macro. To define variable-length segments, specify
two values for the BYTES= parameter during DBDGEN in the SEGM macro.
Application programs for fixed-length-segment DEDBs, like MSDBs, do not see the
length (LL) field at the beginning of each segment. Application programs for
variable-length-segment DEDBs do see the length (LL) field at the beginning of
each segment, and must use it to process the segment properly.
Fixed-length-segment application programs using REPL and ISRT calls can omit the
length (LL) field.
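The LL-field distinction above can be illustrated with a small parsing sketch. It assumes the common IMS convention of a 2-byte binary LL prefix whose value includes the LL field itself; the buffer layout and function names are illustrative, not an IMS API.

```python
# Sketch: extracting segment data from fixed- and variable-length
# segment images. Assumes a 2-byte big-endian LL field that counts
# itself (illustrative convention only).
import struct

def variable_segment_data(buf):
    """Return the data portion of a variable-length segment image."""
    (ll,) = struct.unpack_from(">H", buf, 0)  # read the LL prefix
    return buf[2:ll]                          # LL includes its own 2 bytes

def fixed_segment_data(buf, length):
    """Fixed-length segments have no LL field; the DBD gives the length."""
    return buf[:length]
```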
Figure 65 on page 118 shows these parts of a DEDB area. Each part is described
in detail in the following topics:
v “Root Addressable Part” on page 119
v “Independent Overflow Part” on page 119
v “Sequential Dependent Part” on page 119
v “CI and Segment Formats” on page 119
| When a DEDB data set is initialized by the DEDB initialization utility (DBFUMIN0),
| additional CIs are created for internal use, so the DEDB area will actually contain
| more CIs than are shown in Figure 65. These extra CIs were used for the DEDB
| Direct Reorganization utility (DBFUMDR0), which went out of service with IMS Version
| 5 and was replaced by the High-Speed DEDB Direct Reorganization utility
| (DBFUHDR0). Although IMS does not use the extra CIs, DBFUMIN0 creates them
| for compatibility purposes.
Each UOW in the root addressable part is further divided into a base section and
an overflow section. The base section contains CIs of a UOW that are addressed
by the randomizing module, whereas the overflow section of the UOW is used as a
logical extension of a CI within that UOW.
Root and direct dependent segments are stored in the base section. Both can be
stored in the overflow section if the base section is full.
The following four diagrams—Figure 66, Figure 67 on page 120, Figure 68 on page
121, and Figure 69 on page 121—show the following formats:
v CI format
v Root segment format
v Sequential dependent segment format
v Direct dependent segment format
The tables that follow each figure—Table 10 on page 120, Table 11 on page 120,
Table 12 on page 121, and Table 13 on page 121, respectively—describe the
sections of the CI and segments in the order that the sections appear in the
graphic.
Figure 67. Root Segment Format (with Sequential and Direct Dependent Segments with
Subset Pointers)
Each CI in the base section of a UOW in an area has a single anchor point.
Sequential processing using GN calls processes the roots in the following order:
1. Ascending area number
2. Ascending UOW
3. Ascending key in each anchor point chain
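The three-level ordering above amounts to a simple sort key. The sketch below is illustrative only; the (area, uow, key) triples are hypothetical stand-ins for root positions.

```python
# Sketch of GN sequential retrieval order for DEDB roots:
# ascending area number, then ascending UOW, then ascending key
# within each anchor point chain.

def gn_order(roots):
    """roots: iterable of (area_number, uow, key) triples."""
    return sorted(roots, key=lambda r: (r[0], r[1], r[2]))
```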
Each root segment contains, in ascending key sequence, a PTF pointer containing
the RBA of the next root.
DDEP segments can be defined with or without a unique sequence field, and are
stored in ascending key sequence.
DEDBs allow a PCL pointer to be used. This pointer makes it possible to access
the last physical child of a segment type directly from the physical parent. The
INSERT rule LAST avoids the need to follow a potentially long physical child pointer
chain.
Subset pointers are a means of dividing a chain of segment occurrences under the
same parent into two or more groups, or subsets. You can define as many as eight
subset pointers for any segment type, dividing the chain into as many as nine
subsets. Each subset pointer points to the start of a new subset.
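Conceptually, each subset pointer marks where a new subset begins in the occurrence chain, so up to 8 pointers yield up to 9 subsets. The sketch below models that idea only; the list-of-indices representation is hypothetical and not how IMS stores subset pointers.

```python
# Illustrative sketch: a chain of segment occurrences split into
# subsets, where each "pointer" is the index of the first occurrence
# of a new subset. n pointers divide the chain into n + 1 subsets.

def split_chain(chain, pointers):
    """chain: ordered occurrences; pointers: start indices of new subsets."""
    bounds = [0] + sorted(pointers) + [len(chain)]
    return [chain[a:b] for a, b in zip(bounds, bounds[1:])]
```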
Related Reading: For more information on defining and using subset pointers, see
IMS Version 9: Application Programming: Database Manager.
If all SDEP dependents were chained from a single root segment, processing with
Get Next in Parent calls would result in a backward sequential order. (Some
applications are able to use this method.) Normally, SDEP segments are retrieved
sequentially only by using the DEDB Sequential Dependent Scan utility
(DBFUMSC0), described in IMS Version 9: Utilities Reference: Database and
Transaction Manager. SDEP segments are then processed by offline jobs.
SDEP segments are used for data collection, journaling, and auditing applications.
The level of enqueue at which ROOT and SDEP segment CIs are originally
acquired depends on the intent of the transaction. If the intent is update, all
acquired CIs (with the exception of SDEP CIs) are held at the EXCLUSIVE level. If
the intent is not update, they are held at the SHARED level. There is, however, the
potential for deadlock.
The level of enqueue, just described, is reexamined each time the buffer stealing
facility is invoked. The buffer stealing facility examines each buffer (and CI) that is
already allocated and updates its level of enqueue.
All other enqueued CIs are released and therefore can be allocated by other
regions.
Related Reading: For more information about the buffer stealing facility, see “Fast
Path Buffer Allocation Algorithm” on page 283.
The general rule for inserting a segment into a DEDB is the same as it is for an HD
database. The rule is to store the segment (root and direct dependents) into the
most desirable block.
For root segments, the most desirable block is the RAP CI. For direct dependents,
the most desirable block is the root CI. When space for storing either roots or direct
dependents is not available in the most desirable block, the DEDB insert algorithm
(described next) searches for additional space. Space to store a segment could
exist:
v In the dependent overflow
v In an independent overflow CI currently owned by this UOW
This algorithm attempts to store the data in the minimum number of CIs rather than
scatter database record segments across a greater number of RAP and overflow
CIs. The trade-off is improved performance for future database record access
versus optimum space utilization.
The DEDB insert algorithm searches for additional space when space is not
available in the most desirable block. For root segments, if the RAP CI does not
have sufficient space to hold the entire record, it contains the root and as many
direct dependents as possible. Base CIs that are not randomizer targets go unused.
The algorithm next searches for space in the first dependent overflow CI for this
UOW. From the header of the first dependent overflow CI, a determination is made
whether space exists in that CI.
Related Reading: For information on DEDB CI format and allocation, see IMS
Version 9: Diagnosis Guide and Reference.
If the CI pointed to by the current overflow pointer does not have enough space, the
next dependent overflow CI (if one exists) is searched for space. The current
overflow pointer is updated to point to this dependent overflow CI. If no more
dependent overflow CIs are available, then the algorithm searches for space in the
independent overflow part.
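The search order just described (most desirable block, then the UOW's dependent overflow CIs, then independent overflow) can be sketched as follows. CIs are modeled only as free-byte counts; the function and labels are illustrative, not the actual insert algorithm's interfaces.

```python
# Hedged sketch of the DEDB space-search order: the most desirable
# block first, then the dependent overflow CIs of this UOW in order,
# then the independent overflow part.

def find_space(need, mdb_free, dep_overflow, indep_overflow):
    """Return a label for the first CI with at least `need` free bytes."""
    if mdb_free >= need:
        return "most-desirable"
    for i, free in enumerate(dep_overflow):
        if free >= need:
            return f"dependent-overflow-{i}"
    for i, free in enumerate(indep_overflow):
        if free >= need:
            return f"independent-overflow-{i}"
    return None    # no space found in this UOW's CIs
```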
When an independent overflow CI has been selected for storing data, it can be
considered a logical extension of the overflow part for the UOW that requested it.
Figure 71 on page 126 shows how a UOW is extended into independent overflow.
This UOW, defined as 10 CIs, includes 8 Base CIs and 2 dependent overflow CIs.
Additional space is needed to store the database records that randomize to this
UOW. Two independent overflow CIs have been acquired, extending the size of this
UOW to 12 CIs. The first dependent overflow CI has a pointer to the second
independent overflow CI indicating that CI is the next place to look for space.
The DEDB free space algorithm is used to free dependent overflow and
independent overflow CIs. When a dependent overflow CI becomes entirely empty,
it becomes the CI pointed to by the current overflow pointer in the first dependent
overflow CI, indicating that this is the first overflow CI to use for overflow space if
the most desirable block is full. An independent overflow CI is owned by the UOW
to which it was allocated until every segment stored in it has been removed. When
the last segment in an independent overflow CI is deleted, the empty CI is made
available for reuse. When the last segment in a dependent overflow CI is deleted, it
can be reused as described at the beginning of this topic.
Reorganization
During online reorganization, the segments within a UOW are read in GN order and
written to the reorganization UOW. This process inserts segments into the
reorganization UOW, eliminating embedded free space. If all the segments do not fit
into the reorganization UOW (RAP CI plus dependent overflow CIs), then new
independent overflow CIs are allocated as needed. When the data in the
reorganization UOW is copied back to the correct location, then:
v The newly acquired independent overflow CIs are retained.
v The old segments are deleted.
v Previously allocated independent overflow CIs are freed.
Segment Deletion
A segment is deleted either by an application DLET call or because a segment is
REPLaced with a different length. Segment REPLace can cause a segment to
move. Full Function handles segment length increases differently from DEDBs. In
Full Function, an increased segment length that does not fit into the available free
space is split, and the data is inserted away from the prefix. For DEDBs, if the
length of the replaced segment changes, the segment is first deleted and then
reinserted. The insertion process follows the normal space allocation rules.
For more information on tuning DEDBs, see “Tuning Fast Path Systems” on page
415.
DEDB processing uses the same call interface as DL/I processing. Therefore, any
DL/I call or calling sequence executed against a DEDB has the same logical result
as if executed against an HDAM or PHDAM database.
Because of differences in sync point processing, there are differences in the way
database updates are committed. IFPs that request full function resources, or MPPs
(or BMPs) that request DEDB (or MSDB) resources operate in “mixed mode”. The
performance and resource use implications are discussed in “Fast Path
Synchronization Points” on page 149.
An MSDB is defined in the DBD in the same way as any other IMS database, by
coding ACCESS=MSDB in the DBD statement. The REL keyword in the DATASET
statement selects one of the four MSDB types.
Both dynamic and fixed terminal-related MSDBs have the following characteristics:
v The record can be updated only through processing of messages issued from the
LTERM that owns the record. However, the record can be read using messages
from any LTERM.
v The name of the LTERM that owns a segment is the key of the segment. An
LTERM cannot own more than one segment in any one MSDB.
v The key does not reside in the stored segment.
v Each segment in a fixed terminal-related MSDB is assigned to and owned by a
different LTERM.
MSDBs provide a high degree of parallelism and are suitable for applications in the
banking industry (such as general ledger). To provide fast access and allow
frequent update to this data, MSDBs reside in virtual storage during execution.
MSDBs Storage
The MSDB Maintenance utility (DBFDBMA0) creates the MSDBINIT sequential data
set in physical ascending sequence (see Figure 73 on page 130). During a cold
start, or by operator request during a normal warm start, the sequential data set
MSDBINIT is read and the MSDBs are created in virtual storage (see Figure 72).
During a warm start, the control program uses the current checkpoint data set for
initialization. The MSDB Maintenance utility can also modify the contents of an old
MSDBINIT data set. For warm start, the master terminal operator can request use
of the IMS.MSDBINIT, rather than a checkpoint data set.
Figure 73 shows the MSDBINIT record format. Table 14 on page 130 explains the
record parts.
MSDB records contain no pointers except the forward chain pointer (FCP)
connecting free segment records in the terminal-related dynamic database.
Figure 74 on page 131 shows a high-level view of how MSDBs are arranged in
priority sequence.
On a cold start (including /ERE CHKPT 0), MSDBs are loaded from the MSDBINIT
data set.
Even with the preceding restrictions, the result of a call to the database with no
SSA, an unqualified SSA, or a qualified SSA remains the same as a call to the
full-function database. For example, a retrieval call without an SSA returns the first
record of the MSDB or the full-function database, depending on the environment in
which you are working. The following list shows the type of compare or search
technique used for a qualified SSA.
Type of Compare
Figure 75 on page 133 shows a layout of the four MSDBs and the control blocks
and tables necessary to access them. The Extended Communications Node Table
(ECNT) is located by a pointer from the Extended System Contents Directory
(ESCD), which in turn is located by a pointer from the System Contents Directory
(SCD). The ESCD contains first and last header pointers to the MSDB header
queue. Each of the MSDB headers contains a pointer to the start of its respective
database area.
Figure 75 on page 133 shows the ECNT and MSDB storage layout.
Position in an MSDB
Issuing a DL/I call causes a position pointer to fix on the current segment. The
meaning of “next segment” depends on the key of the MSDB. The current segment
in a non-terminal-related database without LTERM keys is the physical segment
against which a call was issued. The next segment is the following physically
adjacent segment after the current segment. The other three databases, using
LTERM names as keys, have a current pointer fixed on a position in the ECNT
table. Each entry in the table represents one LTERM name and segment pointers to
every MSDB with which the LTERM works. A zero entry indicates no association
between an LTERM and an MSDB segment. If nonzero, the next segment is the
next entry in the table. The zero entries are skipped until a nonzero entry is found.
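For the three LTERM-keyed MSDB types, the "next segment" rule above reduces to scanning the ECNT row for the next nonzero entry. The sketch below models that rule only; the list-of-pointers representation of an ECNT row is hypothetical.

```python
# Sketch of "next segment" positioning for LTERM-keyed MSDBs: each
# ECNT row entry is a segment pointer, zero means no association
# between the LTERM and that MSDB, and zero entries are skipped.

def next_segment(ecnt_row, current_index):
    """Return the index of the next nonzero entry, or None at the end."""
    for i in range(current_index + 1, len(ecnt_row)):
        if ecnt_row[i] != 0:
            return i
    return None
```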
Modification is done with the CHANGE form of the FLD call. The value of a field
can be tested with the VERIFY form of the FLD call. These forms of the call allow
an application program to test a field value before applying the change. If a VERIFY
fails, all CHANGE requests in the same FLD call are denied. This call is described
in IMS Version 9: Application Programming: Database Manager.
The preceding differences become more critical when transactions update or refer
to both full function DL/I and MSDB data. Updates to full function DL/I databases
and DEDBs are immediately available while MSDB changes are not. For example, if
you issue a GHU and a REPL for a segment in an MSDB, then you issue another
get call for the same segment in the same commit interval, the segment IMS
returns to you is the “old” value, not the updated one.
If processing is not single mode, this difference can be magnified, because with
multiple mode processing, sync point processing is not invoked for every
transaction. One solution is to request single mode processing when MSDB data is
to be updated.
For high-end performance applications with DEDBs, defining your DEDB areas as
VSO allows you to realize the following performance improvements:
v Reduced read I/O
| After an IMS and VSAM control interval (CI) has been brought into virtual
| storage, all subsequent I/O read requests read the data from virtual storage
| rather than from DASD.
v Decreased locking contention
For VSO DEDBs, locks are released after both of the following:
– Logging is complete for the second phase of an application synchronization
(commit) point
– The data has been moved into virtual storage
For non-VSO DEDBs, locks are held at the VSAM CI-level and are released only
after the updated data has been written to DASD.
v Fewer writes to the area data set
Updated data buffers are not immediately written to DASD; instead they are kept
in the data space and written to DASD at system checkpoint or when a threshold
is reached.
In all other respects, VSO DEDBs are the same as non-VSO DEDBs. Therefore,
VSO DEDB areas are available for IMS DBCTL and LU 6.2 applications, as well as
other IMS DB or IMS TM applications. Use the DBRC commands INIT.DBDS and
CHANGE.DBDS to define VSO DEDB areas.
The virtual storage for VSO DEDB areas is housed differently depending on the
share level assigned to the area. VSO DEDB areas with share levels of 0 and 1 are
loaded into a z/OS data space. VSO DEDB areas with share levels of 2 and 3 are
loaded into a coupling facility cache structure.
| Coupling facility cache structures are defined by the system administrator and can
| accommodate either a single DEDB area or multiple DEDB areas. Cache structures
| that support multiple DEDB areas are called multi-area structures. For more
| information on multi-area structures, see IMS Version 9: Administration Guide:
| System.
| The actual size available for a VSO area is the maximum size (2 GB) minus
| amounts used by z/OS (from 0 to 4 KB) and IMS Fast Path (approximately 100
| KB). To see the size, usage, and other statistics for a VSO DEDB area, enter the
| /DISPLAY FPVIRTUAL command.
| v The DEDB Area Data Set Compare utility (DBFUMMH0) does not support VSO
| DEDB areas.
Related Reading:
v See “Accessing a Data Space” on page 143 for more information on data
spaces.
v See IMS Version 9: Command Reference for more information on the /DISPLAY
commands.
If you specify NOPREL, and you want the area to be preopened, you must
separately specify PREOPEN for the area.
| CFSTR1
| Defines the name of the cache structure in the primary coupling facility.
| Cache structure names must follow z/OS coupling facility naming
| conventions. CFSTR1 uses the name of the DEDB area as its default. This
| parameter is valid only for VSO DEDB areas that are defined with
| SHARELVL(2|3).
| Related Reading: For detailed information on coupling facility naming, see
| “Coupling Facility Structure Naming Convention” on page 140.
| CFSTR2
| Defines the secondary coupling facility cache structure name when you use
| IMS-managed duplexing of structures. The cache structure name must
| follow z/OS coupling facility naming conventions. CFSTR2 does not provide
| a default name. This parameter is valid only for VSO areas of DEDBs that
| are defined with SHARELVL(2|3) and that are single-area structures. This
| parameter cannot be used with multi-area structures, which use
| system-managed duplexing.
| Related Reading:
| v For detailed information on coupling facility naming, see “Coupling
| Facility Structure Naming Convention” on page 140.
| v For more information on multi-area structures, see IMS Version 9:
| Administration Guide: System.
| MAS Defines a VSO DEDB area as using a multi-area structure as opposed to a
| single-area structure.
| Related Reading: For more information on multi-area structures, see IMS
| Version 9: Administration Guide: System.
| NOMAS
| Defines a VSO DEDB area as using a single-area cache structure as
| opposed to a multi-area structure. NOMAS is the default.
| LKASID
| Indicates that buffer lookaside is to be performed on read requests for this
| area. For VSO DEDB areas that use a multi-area structure, lookaside can
| also be specified using the DFSVSMxx PROCLIB member. If there is a
| discrepancy between the specifications in DBRC and those in DFSVSMxx,
| the specifications in DFSVSMxx are used.
| Related Reading: For additional information on defining private buffer
| pools, see “Defining a Private Buffer Pool Using the DFSVSMxx
| IMS.PROCLIB Member” on page 141.
| NOLKASID
| Indicates that buffer lookaside is not to be performed on read requests for
| this area.
| Related Reading: For additional information on defining private buffer
| pools, see “Defining a Private Buffer Pool Using the DFSVSMxx
| IMS.PROCLIB Member” on page 141.
| When a NOPREO area is also defined as shared VSO with a share level of 2 or 3,
| you can open the area with the /START AREA command. This connects the area to
| the VSO structures.
You can use the DBRC commands to define your VSO DEDB areas at any time; it
is not necessary that IMS be active. The keywords specified on these DBRC
commands go into effect at two different points in Fast Path processing:
v Control region startup
After the initial checkpoint following control region initialization, DBRC provides a
list of areas with any of the VSO options (VSO, NOVSO, PRELOAD, and
NOPREL) or either of the PREOPEN or NOPREO options. The options are then
maintained by IMS Fast Path.
v Command processing
When you use a /START AREA command, DBRC provides the VSO options or
PREOPEN|NOPREO options for the area. If the area needs to be preopened or
preloaded, it is done at this time.
When you use a /STOP AREA command, any necessary VSO processing is
performed.
Related Reading: See IMS Version 9: Command Reference for details on start
and stop processing.
The coupling facility policy software and its cache structure services provide
interfaces and services to z/OS that allow sharing of VSO DEDB data in shared
storage. Shared storage controls VSO DEDB reads and writes:
v A read of a VSO CI brings the CI into the coupling facility from DASD.
v A write of an updated VSO CI copies the CI to the coupling facility from main
storage, and marks it as changed.
v Changed CI data is periodically written back to DASD.
The XES and z/OS services provide a way of manipulating the data within the
cache structures. They provide high performance, data integrity, and data
consistency for multiple IMS systems sharing data.
Duplexing Structures
| Duplexed structures are duplicate structures for the same area. Duplexing allows
| you to have dual structure support for your VSO DEDB areas, which helps to
| ensure the availability and recoverability of your data.
If you have dual structures, IMS systems below Version 8 cannot connect to
structures with different sizes.
System-Managed Rebuild
You can reconfigure a coupling facility while keeping all VSO structures online by
copying the structures to another coupling facility. There is no change to the VSO
definition.
DEFINE POLICY(POLICY1)
DEFINE CF(FACIL01)
ND(123456)
SIDE(0)
ID(01)
DUMPSPACE(2000)
DEFINE CF(FACIL02)
ND(123456)
SIDE(1)
ID(02)
DUMPSPACE(2000)
DEFINE STR(LIST01)
SIZE(1000)
PREFLIST(FACIL01,FACIL02)
EXCLLIST(CACHE01)
DEFINE STR(CACHE01)
SIZE(1000)
PREFLIST(FACIL02,FACIL01)
EXCLLIST(LIST01)
/*
In the example, the programmer defined one list structure (LIST01) and one cache
structure (CACHE01).
Attention: When defining a cache structure to DBRC, ensure that the name is
identical to the name used in the CFRM policy (see “Registering a
Cache Structure Name with DBRC”).
Figure 77. Defining a VSO Area Coupling Facility Structure Name in DBRC
where:
poolname The 8-character name of the pool; it is used in displays and reports.
size The buffer size of the pool. All the standard DEDB-supported buffer
sizes are supported.
pbuf The primary buffer allocation. The first allocation receives this
number of buffers. Maximum value is 99999.
sbuf The secondary buffer allocation. If the primary allocation starts to
run low, another allocation of buffers is made. This amount
indicates the secondary allocation amount. Maximum value is
99999.
maxbuf The maximum number of buffers allowed for this pool. It is a
combination of PBUF plus some iteration of SBUF. Maximum value
is 99999.
LKASID|NOLKASID
Indicates whether this pool is to be used as a local cache with
buffer lookaside capability. This value is cross-checked with the
DBRC specification of LKASID to determine which pool the area will
use. If there is an inconsistency between the DEDB statement and
DBRC, the DBRC value takes precedence.
dbname Association of the pool to a specific area or DBD. If the dbname is
an area name, then the pool is used only by that area. If the
dbname specifies a DBD name, the pool is used by all areas in that
DBD. The dbname is first checked for an area name then for a
DBD name.
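As a minimal sketch, the dbname matching and the LKASID cross-check described above amount to the following. This is an illustration of the stated rules, not IMS code; the pool and area names are hypothetical, and real pools come from DEDB= statements in the DFSVSMxx PROCLIB member.

```python
# Sketch of the pool-selection rules above (illustrative names only).

def resolve_pool(area_name, dbd_name, pools):
    """pools: (poolname, dbname) pairs. The dbname is checked first as
    an area name, then as a DBD name."""
    for poolname, dbname in pools:
        if dbname == area_name:      # an area-name match is checked first
            return poolname
    for poolname, dbname in pools:
        if dbname == dbd_name:       # then a DBD-name match
            return poolname
    return None                      # no match: a default pool is used

def effective_lkasid(stmt_value, dbrc_value):
    # On a DEDB statement/DBRC inconsistency, DBRC takes precedence.
    return dbrc_value if dbrc_value is not None else stmt_value
```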
DEDB=(POOL1,512,400,50,800,LKASID)
DEDB=(POOL2,8192,100,20,400,NOLKASID)
If the customer does not define a private buffer pool, the default parameter values
are calculated as follows:
DEDB=(poolname,XXX,64,16,512)
where:
v XXX is the CI size of the area to be opened.
v The initial buffer allocation is 64.
v The secondary allocation is 16.
v The maximum number of buffers for the pool is 512.
v The LKASID option is specified if it is specified in DBRC for the area.
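The defaulting rules above can be sketched as follows. The pool name "DEFAULT" and the tuple representation are placeholders invented for illustration; IMS does not generate them literally.

```python
# Sketch of the default private buffer pool parameters:
# DEDB=(poolname,XXX,64,16,512), where XXX is the CI size of the area
# and LKASID is carried over from the DBRC definition of the area.

def default_pool(ci_size, dbrc_lkasid):
    params = ("DEFAULT", ci_size, 64, 16, 512)   # placeholder pool name
    return params + (("LKASID",) if dbrc_lkasid else ())
```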
| Except for the following parameters, the parameters for DEDBMAS are the same as
| those in the DFSVSMxx DEDB= keyword:
| cisize The control interval size of the area. All areas that share a
| multi-area structure must have the same control interval size. If
| there is a discrepancy between the control interval size of the area
| used in creating the structure and the control interval size of the
| area attempting to share the structure, the open process for the
| area attempting to share the structure fails.
| strname The required 1- to 16-character name of the primary coupling
| facility structure. The installation must have defined the structure in
| the CFRM administrative policy. The structure name must follow the
| naming conventions of the CFRM. If the name has fewer than 16
| characters, the system pads the name with blanks. The valid
| characters are A–Z, 0–9, and the characters $, &, #, and _. Names
| must be uppercase and start with an alphabetic character.
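The naming rules quoted above can be checked mechanically. The following sketch validates a candidate structure name against those rules; it is illustrative only and is not part of IMS or CFRM.

```python
# Validate a CFRM structure name against the rules stated above:
# 1 to 16 characters, uppercase, first character alphabetic, remaining
# characters from A-Z, 0-9, $, &, #, and _.

ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789$&#_")

def valid_strname(name):
    if not 1 <= len(name) <= 16:
        return False
    if not ("A" <= name[0] <= "Z"):      # must start with A-Z
        return False
    return all(c in ALLOWED for c in name)

def padded(name):
    # Names shorter than 16 characters are padded with blanks.
    return name.ljust(16)
```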
| DREF data spaces use a combination of central storage and expanded storage, but
| no auxiliary storage. Data spaces without the DREF option use central storage,
| expanded storage, and auxiliary storage, if auxiliary storage is available.
IMS acquires additional data spaces for VSO areas, both with DREF and without,
as needed.
IMS assigns areas to data spaces using a “first fit” algorithm. The entire root
addressable portion of an area (including independent overflow) resides in the data
space. The sequential dependent portion does not reside in the data space.
The amount of space needed for an area in a data space is (CI size) × (number of
CIs per UOW) × ((number of UOWs in root addressable portion) + (number of
UOWs in independent overflow portion)) rounded to the next 4 KB.
| Expressed in terms of the parameters of the DBDGEN AREA statement, this formula is
| (SIZE parameter value) × (UOW parameter value) × (ROOT parameter value)
| rounded to the next 4 KB.
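The formula can be sketched as a small calculation. The example values in the test are invented for illustration, not taken from the manual.

```python
# Data space storage for an area, per the formula above:
# (CI size) x (CIs per UOW) x (UOWs in root addressable portion +
# UOWs in independent overflow portion), rounded up to the next 4 KB.
# In DBDGEN AREA terms this is SIZE x UOW x ROOT.

def area_dataspace_bytes(ci_size, cis_per_uow, uows_root, uows_iovf):
    raw = ci_size * cis_per_uow * (uows_root + uows_iovf)
    return -(-raw // 4096) * 4096    # round up to a 4 KB multiple
```

For example, an area with 512-byte CIs, 9 CIs per UOW, 101 UOWs in the root addressable portion, and 10 UOWs of independent overflow needs 511,488 bytes, rounded up to 512,000 bytes.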
The actual amount of space in a data space available for an area (or areas) is two
gigabytes (524,288 blocks, 4 KB each) minus an amount reserved by z/OS (from 0
to 4 KB) minus an amount used by IMS Fast Path (approximately 100 KB). You can
use the /DISPLAY FPVIRTUAL command to determine the actual storage usage of a
particular area.
Related Reading: For sample output from this command, see IMS Version 9:
Command Reference.
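A minimal sketch of the "first fit" placement described above, assuming a fixed per-data-space capacity (roughly 2 GB minus the z/OS and Fast Path reservations). The sizes and capacity in the test are arbitrary illustration values, not IMS internals.

```python
# "First fit": each area goes into the first existing data space with
# enough room; if none fits, IMS acquires another data space.

def first_fit(area_sizes, capacity):
    spaces = []                      # each entry models one data space
    for size in area_sizes:
        for space in spaces:
            if sum(space) + size <= capacity:
                space.append(size)   # fits in an existing data space
                break
        else:
            spaces.append([size])    # acquire a new data space
    return spaces
```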
Without VSO, the VSAM CI (physical block) is the smallest available resource for
DEDB resource request management and locking. If there is an update to any part
of the CI, the lock is held until the whole CI is rewritten to DASD. No other
requester is allowed access to any part of the CI until the first requester’s lock is
released.
With VSO, the database segment is the smallest available resource for DEDB
resource request management and locking. Segment-level locking is available only
for the root segment of a DEDB with a root-only structure, and when that root
segment is a fixed-length segment. If processing options R or G are specified in the
calling PCB, IMS can manage and control DEDB resource requests and serialize
change at the segment level; for other processing options, IMS maintains VSAM CI
locks. Segment locks are held only until the segment updates are applied to the CI
in the data space. Other requesters for different segments in the same CI are
allowed concurrent access.
A VSO DEDB resource request for a segment causes the entire CI to be copied
into a common buffer. VSO manages the segment request at a level of control
consistent with the request and its access intent. VSO also manages access to the
CI that contains the segment but at the share level in all cases. A different user’s
subsequent request for a segment in the same CI accesses the image of the CI
already in the buffer.
Updates to the data are applied directly to the CI in the buffer at the time of the
update. Segment-level resource control and serialization provide integrity among
multiple requesters. After an updated segment is committed and applied to the copy
of the CI in the data space, other requesters are allowed access to the updated
segment from the copy of the CI in the buffer.
If after a segment change the requester’s updates are not committed for any
reason, VSO copies the unchanged image of the segment from the data space to
the CI in the buffer. VSO does not allow other requesters to access the segment
until VSO completes the process of removing the uncommitted and cancelled
updates. Locking at the segment level is not supported for shared VSO areas. Only
CI locking is supported.
| is opened by the first IMS system to complete its control region initialization.
| IMS will not attempt to preopen the area for any other IMS.
| SHARELVL(1)
| One updater, many readers: in a data sharing environment, a
| SHARELVL(1) area with the PREOPEN option is preopened by all sharing
| IMS systems. The first IMS system to complete its control region
| initialization has update authorization; all others have read authorization.
| If the SHARELVL(1) area is a VSO area, it is allocated to a data space by
| any IMS that opens the area. If the area is defined as VSO PREOPEN or
| VSO PRELOAD, it is allocated to a data space by all sharing IMS systems.
| If the area is defined as VSO NOPREO NOPREL, it is allocated to a data
| space by all IMS systems, as each opens the area. The first IMS to access
| the area has update authorization; all others have read authorization.
| SHARELVL(2)
| Block-level sharing: a SHARELVL(2) area with at least one coupling facility
| structure name (CFSTR1) defined is shared at the block or control interval
| (CI) level within the scope of a single IRLM. Multiple IMS systems can be
| authorized for update or read processing if they are using the same IRLM.
| SHARELVL(3)
| Block-level sharing: a SHARELVL(3) area with at least one coupling facility
| structure name (CFSTR1) defined is shared at the block or control interval
| (CI) level within the scope of multiple IRLMs. Multiple IMS systems can be
| authorized for nonexclusive access.
Input Processing
When an application program issues a read request to a VSO area, IMS checks to
see if the data is in the data space. If the data is in the data space, it is copied from
the data space into a common buffer and passed back to the application. If the data
is not in the data space, IMS reads the CI from the area data set on DASD into a
common buffer, copies the data into the data space, and passes the data back to
the application.
For SHARELVL(2|3) VSO areas, Fast Path uses private buffer pools. Buffer
lookaside is an option for these buffer pools. When a read request is issued against
a SHARELVL(2|3) VSO area using a lookaside pool, a check is made to see if the
requested data is in the pool. If the data is in the pool, a validity check to XES is
made. If the data is valid, it is passed back to the application from the local buffer. If
the data is not found in the local buffer pool or XES indicates that the data in the
pool is not valid, the data is read from the coupling facility structure and passed to
the application. When the buffer pool specifies the no-lookaside option, every
request for data goes to the coupling facility.
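The lookaside decision path just described can be sketched as follows. The dictionaries stand in for the local buffer pool and the coupling facility cache structure, and xes_valid simulates the XES validity check; none of these names are real IMS interfaces.

```python
def read_ci(rba, local_pool, xes_valid, cf_structure, lookaside=True):
    """Sketch of a SHARELVL(2|3) VSO read with optional buffer lookaside."""
    if lookaside and rba in local_pool and xes_valid(rba):
        return local_pool[rba]       # valid local copy: no CF access
    data = cf_structure[rba]         # read from the coupling facility
    if lookaside:
        local_pool[rba] = data       # refresh the local copy
    return data
```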
For those areas that are defined as load-on-demand (using the VSO and NOPREL
options), the first access to the CI is from DASD. The data is copied to the data
space and then subsequent reads for this CI retrieve the data from the data space
rather than from DASD. For those areas that are defined using the VSO and PRELOAD
options, all access to CIs comes from the data space.
Whether the data comes from DASD or from the data space is transparent to the
processing done by application programs.
Output Processing
During phase 1 of synchronization point processing, VSO data is treated the same
as non-VSO data. The use of VSO is transparent to logging.
During phase 2 of synchronization point processing, VSO and non-VSO data are
treated differently. For VSO data, the updated data is copied to the data space,
the lock is released, and the buffer is returned to the available queue. The relative byte
address (RBA) of the updated CI is maintained in a bitmap. If the RBA is already in
the bitmap from a previous update, only one copy of the RBA is kept. At interval
timer, the updated CIs are written to DASD. This batching of updates reduces the
amount of output processing for CIs that are frequently updated. While the updates
are being written to DASD, they are still available for application programs to read
or update because copies of the data are made within the data space just before it
is written.
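The phase-2 bookkeeping can be sketched with a set standing in for the RBA bitmap; the class and method names are invented for illustration and do not correspond to IMS internals.

```python
class UpdatedCITracker:
    """Records each updated CI's RBA once, then hands the whole batch
    to the interval-timer write. A set stands in for the bitmap."""
    def __init__(self):
        self.pending = set()

    def record(self, rba):
        self.pending.add(rba)        # a duplicate RBA is kept only once

    def interval_timer_pop(self):
        batch, self.pending = sorted(self.pending), set()
        return batch                 # CIs to write to DASD in one batch
```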
For SHARELVL(2|3) VSO areas, the output thread process is used to write updated
CIs to the coupling facility structures. When the write is complete, the lock is
released. XES maintains the updated status of the data in the directory entry for the
CI.
If a read error occurs during preloading, an error message flags the error, but the
preload process continues. If a subsequent application program call accesses a CI
that was not loaded into the data space due to a read error, the CI request goes out
to DASD. If the read error occurs again, the application program receives an “AO”
status code, just as with non-VSO applications. If instead the access to DASD is
successful this time, the CI is loaded into the data space.
Write Errors: When a write error occurs, IMS creates an error queue element
(EQE) for the CI in error. For VSO areas, all read requests are satisfied by reading
the data from the data space. Therefore, as long as the area continues to reside in
the data space, the CI that had the write error continues to be available. When the
area is removed from the data space, the CI is no longer available and any request
for the CI receives an “AO” status code.
Read Errors: For VSO areas, the first access to a CI causes it to be read from
DASD and copied into the data space. From then on, all read requests are satisfied
from the data space. If there is a read error from the data space, z/OS abends.
For VSO areas that have been defined with the PRELOAD option, the data is
preloaded into the data space; therefore, all read requests are satisfied from the
data space.
Related Reading: See “The PRELOAD Option” on page 146 for a discussion of
read error handling during the preload process.
There is a maximum of three read errors allowed from a structure. When the
maximum is reached and there is only one structure defined, the area is stopped
and the structure is disconnected.
When the maximum is reached and there are two structures defined, the structure
in error is disconnected. The one remaining structure is used. If a write error to a
structure occurs, the CI in error is deleted from the structure and written to DASD.
One of the sharing partners deletes the CI. If none of the sharers can
delete the CI from the structure, an EQE is generated and the CI is deactivated. A
maximum of three write errors are allowed to a structure. If there are two structures
defined and one of them reaches the maximum allowed, it is disconnected.
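The error thresholds above can be sketched as a counter per structure. This is an illustration of the stated rules (three errors, then disconnect; area stopped if no structure remains), not the actual IMS error handling.

```python
def handle_read_error(error_counts, structure, structures, max_errors=3):
    """After three read errors, a structure is disconnected; if it was
    the only remaining structure, the area is stopped."""
    error_counts[structure] = error_counts.get(structure, 0) + 1
    if error_counts[structure] >= max_errors:
        structures.remove(structure)
        return "area stopped" if not structures else "structure disconnected"
    return "continue"
```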
Checkpoint Processing
During a system checkpoint, all of the VSO area updates that are in the data space
are written to DASD. All of the updated CIs in the CF structures are also written to
DASD. Only CIs that have been updated are written. Also, all updates that are in
progress are allowed to complete before checkpoint processing continues.
| During emergency restart log processing, IMS tracks VSO area updates differently
| depending on the share level of the VSO area. For share levels 0 and 1, IMS uses
| data spaces to track VSO area updates. For share levels 2 and 3, IMS uses a
| buffer in memory to track VSO area updates.
IMS also obtains a single non-DREF data space which it releases at the end of
restart. If restart log processing is unable to get the data space or main storage
resources it needs to perform VSO REDO processing, the area is stopped and
marked as “recovery needed”.
| By default, at the end of emergency restart, IMS opens areas defined with the
| PREOPEN or PRELOAD options. IMS then loads areas with the PRELOAD option into a
| data space or coupling facility structure. You can alter this behavior by using the
| FPOPN keyword of the IMS procedure to have IMS restore all VSO DEDB areas to
| their open or closed state at the time of the failure.
| Related Reading: For more information on specifying how IMS reopens DEDB
| areas during an emergency restart, see “Reopening DEDB Areas During an
| Emergency Restart” on page 111.
| VSO areas without the PREOPEN or PRELOAD options are assigned to a data space
| during the first access following emergency restart.
During tracking, the alternate uses data spaces to track VSO area updates: in
addition to the data space resources used for VSO areas, the alternate obtains a
single non-DREF data space which it releases at the end of takeover. If XRF
tracking or takeover is unable to get the data space or main storage resources it
needs to perform VSO REDO processing, the area is stopped and marked
“recovery needed”.
Following an XRF takeover, areas that were open or in the data space remain open
or in the data space. The VSO options and PREOPEN|NOPREO options that were in
effect for the active IMS before the takeover remain in effect on the alternate (the
new active) after the takeover. Note that these options may not match those defined
to DBRC. For example, a VSO area removed from virtual storage by the /VUNLOAD
command before the takeover is not restored to the data space after the takeover.
VSO areas defined with the preload option are preloaded at the end of the XRF
takeover. In most cases, dependent regions can access the area before preloading
begins, but until preloading completes, some area read requests may have to be
retrieved from DASD.
If, during application processing, a Fast Path program issues a call to a database
other than MSDB or DEDB, or to an alternate PCB, the processing is serialized with
full function events. This can affect the performance of the Fast Path program. In
the case of a BMP or MPP making a call to a Fast Path database, the Fast Path
resources are held, and the throughput for Fast Path programs needing these
resources can be affected.
Multiple Area Data Sets I/O Timing (MADSIOT) helps you avoid the excessively
long wait times (also known as a long busy) that can occur while a RAMAC® disk
array performs internal recovery processing.
Restriction: MADSIOT applies only to multiple area data sets (MADS). For single
area data sets (ADS), IMS treats the long busy condition as a permanent I/O error
handled by the Fast Path I/O toleration function. The MADSIOT function works only
on a system that supports the long busy state.
To invoke MADSIOT, you must define the MADSIOT keyword on the DFSVSMxx
PROCLIB member. The /STA MADSIOT and /DIS AREA MADSIOT commands serve to
start and monitor the MADSIOT function.
| Table 15 shows the required CFRM list structure storage sizes when the number of
| changed CIs is 1 000, 5 000, 20 000, and 30 000.
| Table 15. Required CFRM List Structure Storage Sizes
|
| Altered number of CIs (entrynum)   Required Storage Size (listheadernum=50)
| 1 000                              1 792 KB
| 5 000                              3 584 KB
| 20 000                             11 008 KB
| 30 000                             15 616 KB
|
| Note: The values for Required Storage Size in Table 15 are for CF level 12 and
might change at higher CF levels.
The CFRM list structure sizes in Table 15 were estimated using the following
formula: storage size = 24576 + 712 * listheadernum + 107 * entrynum
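The estimation formula can be evaluated directly, as sketched below. This only transcribes the formula as printed; the result is an estimate for CF level 12, and actual structure allocations vary by CF level, so do not expect the raw formula value to reproduce the table values exactly.

```python
# Evaluate the CFRM list structure size estimate quoted above.
# storage size = 24576 + 712 * listheadernum + 107 * entrynum

def cfrm_list_structure_size(listheadernum, entrynum):
    return 24576 + 712 * listheadernum + 107 * entrynum
```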
Related Reading:
v For additional information on the MADSIOT keyword, see the topic on the
DFSVSMxx PROCLIB member in IMS Version 9: Installation Volume 2: System
Definition and Tailoring.
v For an example of defining CFRM policies, see the IMS Version 9: Common
Queue Server Guide and Reference.
v For information on the /STA MADSIOT and /DIS AREA MADSIOT commands, see the
IMS Version 9: Command Reference.
This chapter explains the following functions and describes when and how to use
them:
v “Logical Relationships”
v “Secondary Indexes” on page 186
v “Variable-Length Segments” on page 209
v “Segment Edit/Compression Exit Routine” on page 212
v “Data Capture Exit Routines” on page 215
v “Field-Level Sensitivity” on page 220
v “Multiple Data Set Groups” on page 230
v “Block-Level Data Sharing and CI Reclaim” on page 237
v “HALDB Single Partition Processing” on page 237
v “Integrated HALDB Online Reorganization Function” on page 238
v “Storing XML Data in IMS Databases” on page 238
Notes:
1. These functions do not apply to GSAM, MSDB, HSAM, and SHSAM databases.
2. Only the variable-length segment function, the Segment Edit/Compression exit
routine, and the Data Capture exit routine apply to DEDBs.
Logical Relationships
The following database types support logical relationships:
v HISAM
v SHISAM
v HDAM
v PHDAM
v HIDAM
v PHIDAM
Logical relationships resolve conflicts in the way application programs need to view
segments in the database. With logical relationships, application programs can
access:
v Segment types in an order other than the one defined by the hierarchy
v A data structure that contains segments from more than one physical database.
Example: Two databases, one for orders that a customer has placed and one for
items that can be ordered, are called ORDER and ITEM. The ORDER database
contains information about customers, orders, and delivery. The ITEM database
contains information about inventory.
If an application program needs data from both databases, this can be done by
defining a logical relationship between the two databases. As shown in Figure 79, a
path can be established between the ORDER and ITEM databases using a
segment type, called a logical child segment, that points into the ITEM database.
Figure 79 is a simple implementation of a logical relationship. In this case, ORDER
is the physical parent of ORDITEM. ORDITEM is the physical child of ORDER and
the logical child of ITEM.
In a logical relationship, the logical parent is the segment type that the
logical child points to. In this example, ITEM is the logical parent of
ORDITEM. ORDITEM establishes the path or connection between the two segment
types. If an application program now enters the ORDER database, it can access
data in the ITEM database by following the pointer in the logical child segment from
the ORDER to the ITEM database.
The physical parent and logical parent are the two segment types between which
the path is established. The logical child is the segment type that establishes the
path. The path established by the logical child is created using pointers.
Like the other types of logical relationships, a physically paired relationship can be
established between two segment types in the same or different databases. The
relationship shown in Figure 82 allows either the ORDER or the ITEM database to
be entered. When either database is entered, a path exists using the logical child to
cross from one database to the other.
logical child segment. Or if a logical child segment is inserted into one database,
IMS inserts a paired logical child segment into the other database.
With physical pairing, the logical child is duplicate data, so there is some increase
in storage requirements. In addition, there is some extra maintenance required
because IMS maintains data on two paths. In the next type of logical relationship
examined, this extra space and maintenance do not exist; however, IMS still allows
you to enter either database. IMS also performs the maintenance for you.
To define a virtually paired relationship, two logical child segment types are defined
in the physical databases involved in the logical relationship. Only one logical child
is actually placed in storage. The logical child defined and put in storage is called
the real logical child. The logical child defined but not put in storage is called the
virtual logical child.
Note the trade-off between physical and virtual pairing. With virtual pairing, there is
no duplicate logical child and maintenance of paired logical children. However,
virtual pairing requires the use and maintenance of additional pointers, called logical
twin pointers.
A direct pointer consists of the direct address of the segment being pointed to, and
it can only be used to point into a database where a segment, once stored, is not
moved. This means the logical parent segment must be in an HD (HDAM, PHDAM,
HIDAM, or PHIDAM) database, since the logical child points to the logical parent
segment. The logical child segment, which contains the pointer, can be in a HISAM
or an HD database except in the case of HALDB. In the HALDB case, the logical
child segment must be in an HD (PHDAM or PHIDAM) database. A direct LP
pointer is stored in the logical child’s prefix, along with any other pointers, and is
four bytes long. Figure 84 on page 157 shows the use of a direct LP pointer. In a
HISAM database, pointers are not required between segments because they are
stored physically adjacent to each other in hierarchic sequence. Therefore, the only
time direct pointers will exist in a HISAM database is when there is a logical
relationship using direct pointers pointing into an HD database.
In Figure 84, the direct LP pointer points from the logical child ORDITEM to the
logical parent ITEM. Because it is direct, the LP pointer can only point to an HD
database. However, the LP pointer can “exist” in a HISAM or an HD database. The
LP pointer is in the prefix of the logical child and consists of the 4-byte direct
address of the logical parent.
Note: The LPCK part of the logical child segment is considered non-replaceable
and is not checked to see whether the I/O area is changed. When the LPCK
is virtual, checking for a change in the I/O area causes a performance
problem. Changing the LPCK in the I/O area does not cause the REPL call
to fail. However, the LPCK is not changed in the logical child segment.
With symbolic pointers, if the logical parent is in a HISAM or HIDAM database,
IMS uses the symbolic pointer to access the index and find the correct logical parent
segment. If the database containing the logical parent is HDAM, the symbolic
pointer must be changed by the randomizing module into a block and RAP address
to find the logical parent segment. IMS accesses a logical parent faster when direct
pointing is used.
In Figure 85, the symbolic LP pointer points from the logical child ORDITEM to the
logical parent ITEM. With symbolic pointing, the ORDER and ITEM databases can
be either HISAM or HD. The LPCK, which is in the first part of the data portion of
the logical child, functions as a pointer from the logical child to the logical parent,
and is the pointer used in the logical child.
The LCF pointer points from a logical parent to the first occurrence of each of its
logical child types. The LCL pointer points to the last occurrence of the logical child
segment type for which it is specified. An LCL pointer can only be specified in
conjunction with an LCF pointer. Figure 86 on page 159 shows the use of the LCF
pointer. These pointers allow you to cross from the ITEM database to the logical
child ORDITEM in the ORDER database. However, although you are able to cross
databases using the logical child pointer, you have only gone from ITEM to the
logical child ORDITEM. To go to the ORDER segment, use the physical parent
pointer explained in “Physical Parent Pointer” on page 159.
LCF and LCL pointers are direct pointers. They contain the 4-byte direct address of
the segment to which they point. This means the logical child segment, the segment
being pointed to, must be in an HD database. The logical parent can be in a HISAM
or HD database. If the logical parent is in a HISAM database, the logical child
segment must point to it using a symbolic pointer. LCF and LCL pointers are stored
in the logical parent’s prefix, along with any other pointers. Figure 86 shows an LCF
pointer.
Figure 86. Logical Child First (LCF) Pointer (Used in Virtual Pairing Only)
In Figure 86, the LCF pointer points from the logical parent ITEM to the logical child
ORDITEM. Because it is a direct pointer, it can only point to an HD database,
although it can exist in a HISAM or an HD database. The LCF pointer is in the
prefix of the logical parent and consists of the 4-byte RBA of the logical child.
In Figure 86, you saw that you could cross from the ITEM to the ORDER database
when virtual pairing was used, and this was done using logical child pointers.
However, the logical child pointer only got you from ITEM to the logical child
ORDITEM. Figure 87 on page 160 shows how to get to ORDER. The PP pointer in
ORDITEM points to its physical parent ORDER. If ORDER and ITEM are in an HD
database but are not root segments, they (and all other segments in the path of the
root) would also contain PP pointers to their physical parents.
PP pointers are direct pointers. They contain the 4-byte direct address of the
segment to which they point. PP pointers are stored in a logical child or logical
parent’s prefix, along with any other pointers.
In Figure 87, the PP pointer points from the logical child ORDITEM to its physical
parent ORDER. It is generated automatically by IMS for all logical child and logical
parent segments in HD databases. In addition, it is in the prefix of the segment that
contains it and consists of the 4-byte direct address of its physical parent. PP
pointers are generated in all segments from the logical child or logical parent back
up to the root.
An LTF pointer points from a specific logical twin to the logical twin stored after it.
An LTB pointer can only be specified in conjunction with an LTF pointer. When
specified, an LTB points from a given logical twin to the logical twin stored before it.
Logical twin pointers work in a similar way to the physical twin pointers used in HD
databases. As with physical twin backward pointers, LTB pointers improve
performance on delete operations. They do this when the delete that causes DASD
space release is a delete from the physical access path. Similarly, PTB pointers
improve performance when the delete that causes DASD space release is a delete
from the logical access path.
Figure 88 on page 161 shows use of the LTF pointer. In this example, ORDER 123
has two items: bolt and washer. The ITEMORD segments beneath the two ITEM
segments use LTF pointers. If the ORDER database is entered, it can be crossed to
the ITEMORD segment for bolts in the ITEM database. Then, to retrieve all items
for ORDER 123, the LTF pointers in the ITEMORD segment can be followed. In
Figure 88 only one other ITEMORD segment exists, and it is for washers. The LTF
pointer in this segment, because it is the last twin in the chain, contains zeros.
LTF and LTB pointers are direct pointers. They contain the 4-byte direct address of
the segment to which they point. This means LTF and LTB pointers can only exist in
HD databases. Figure 88 shows an LTF pointer.
Figure 88. Logical Twin Forward (LTF) Pointer (Used in Virtual Pairing Only)
In Figure 88, the LTF pointer points from a specific logical twin to the logical twin
stored after it. In this example, it points from the ITEMORD segment for bolts to the
ITEMORD segment for washers. Because it is a direct pointer, the LTF pointer can
only point to an HD database. The LTF pointer is in the prefix of a logical child
segment and consists of the 4-byte RBA of the logical twin stored after it.
Indirect Pointers
HALDBs (PHDAM, PHIDAM, and PSINDEX databases) use direct and indirect
pointers for pointing from one database record to another database record.
Figure 89 shows how indirect pointers are used.
The use of indirect pointers prevents the problem of misdirected pointers that would
otherwise occur when a database is reorganized.
The repository for the indirect pointers is the indirect list data set. Pointers
that would otherwise be misdirected after a reorganization are self-healed
through the indirect pointers.
Figure 90. Defining a Physical Parent to Logical Parent Path in a Logical Database
In addition, when LC pointers are used in the logical parent and logical twin and PP
pointers are used in the logical child, a logical parent to physical parent path is
created. To define use of the path, the logical child and physical parent are defined
as one concatenated segment type that is a physical child of the logical parent, as
shown in Figure 91. Again, definition of the path is done in a logical database.
Figure 91. Defining a Logical Parent to Physical Parent Path in a Logical Database
When use of a physical parent to logical parent path is defined, the physical parent
is the parent of the concatenated segment type. When an application program
retrieves an occurrence of the concatenated segment type from a physical parent,
the logical child and its logical parent are concatenated and presented to the
application program as one segment. When use of a logical parent to physical
parent path is defined, the logical parent is the parent of the concatenated segment
type. When an application program retrieves an occurrence of the concatenated
segment type from a logical parent, an occurrence of the logical child and its
physical parent are concatenated and presented to the application program as one
segment.
In both cases, the physical parent or logical parent segment included in the
concatenated segment is called the destination parent. For a physical parent to
logical parent path, the logical parent is the destination parent in the concatenated
segment. For a logical parent to physical parent path, the physical parent is the
destination parent in the concatenated segment.
Related Reading: For information about intersection data, see “Intersection Data” on
page 164.
To identify which logical parent is pointed to by a logical child, the concatenated key
of the logical parent must be present. Each logical child segment must be present
in the application program’s I/O area when the logical child is initially presented for
loading into the database. However, if the logical parent is in an HD database, its
concatenated key might not be written to storage when the logical child is loaded. If
the logical parent is in a HISAM database, a logical child in storage must contain
the concatenated key of its logical parent.
For logical child segments, you can define a special operand on the PARENT=
parameter of the SEGM statement. This operand determines whether a symbolic
pointer to the logical parent is stored as part of the logical child segment on the
storage device. If PHYSICAL is specified, the concatenated key of the logical parent
is stored with each logical child segment. If VIRTUAL is specified, only the
intersection data portion of each logical child segment is stored.
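As a sketch, the ORDER/ITEM relationship used later in this chapter might code this operand as follows (the segment length and pointer options are illustrative, not from this guide):

```
*  Logical child ORDITEM in the ORDER database.  PHYSICAL on the
*  logical parent part of PARENT= stores the concatenated key of
*  ITEM with each ORDITEM occurrence on the storage device.
SEGM  NAME=ORDITEM,BYTES=50,                                           X
      PARENT=((ORDER,SNGL),(ITEM,PHYSICAL,ITEMDB))
*  With VIRTUAL instead, only the intersection data portion of
*  each ORDITEM occurrence would be stored:
*     PARENT=((ORDER,SNGL),(ITEM,VIRTUAL,ITEMDB))
```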
Or:
1. TF
2. TB
3. PP
4. LTF
5. LTB
6. LP
7. PCF
8. PCL
Or:
1. TF
2. TB
3. PP
4. PCF
5. PCL
6. EPS
Multiple PCF and PCL pointers can exist in a segment type; however, no more than
one of each of the other pointer types can exist.
Intersection Data
When two segments are logically related, data can exist that is unique to only that
relationship. In Figure 93 on page 165, for example, one of the items ordered in
ORDER 123 is 5000 bolts. The quantity 5000 is specific to this order (ORDER 123)
and this item (bolts). It does not belong to either the order or item on its own.
Similarly, in ORDER 123, 6000 washers are ordered. Again, this data is concerned
only with that particular order and item combination.
This type of data is called intersection data, since it has meaning only for the
specific logical relationship. The quantity of an item could not be stored in the
ORDER 123 segment, because different quantities are ordered for each item in
ORDER 123. Nor could it be stored in the ITEM segment, because for each item
there can be several orders, each requesting a different quantity. Because the
logical child segment links the ORDER and ITEM segments together, data that is
unique to the relationship between the two segments can be stored in the logical
child.
The two types of intersection data are: fixed intersection data (FID) and variable
intersection data (VID).
Figure 94 on page 166 shows variable intersection data. In the ORDER 123
segment for the item BOLT, 3000 were delivered on March 2 and 1000 were
delivered on April 2. Because of this, two occurrences of the DELIVERY segment
exist. Multiple segment types can contain intersection data for a single logical child
segment. In addition to the DELIVERY segment shown in the figure, note the
SCHEDULE segment type. This segment type shows the planned shipping date
and the number of items to be shipped. Segment types containing VID can all exist
at the same level in the hierarchy as shown in the figure, or they can be
dependents of each other.
first model the manufacturer makes is Model 1, which is a boy’s bicycle. Table 16
lists the parts needed to manufacture this bicycle and the number of each part
needed to manufacture one Model 1 bicycle.
Table 16. Parts List for the Model 1 Bicycle Example

Part                      Number Needed
21-inch boy’s frame       1
Handlebar                 1
Seat                      1
Chain                     1
Front fender              1
Rear fender               1
Pedal                     2
Crank                     1
Front sprocket            1
26-inch tube and tire     2
26-inch rim               2
26-inch spoke             72
Front hub                 1
Housing                   1
Brake                     1
Rear sprocket             1
The same company manufactures a Model 2 bicycle, which is for girls. The parts
and assembly steps for this bicycle are exactly the same, except that the bicycle
frame is a girl’s frame.
If the manufacturer stored all parts and subassemblies for both models as separate
segments in the database, a great deal of duplicate data would exist. Figure 95 on
page 168 shows the segments that must be stored just for the Model 1 bicycle. A
similar set of segments must be stored for the Model 2 bicycle, except that it has a
girl’s bicycle frame. As you can see, this leads to duplicate data and the associated
maintenance problems. The solution to this problem is to create a recursive
structure. Figure 96 on page 169 shows how this is done using the data for the
Model 1 bicycle.
In Figure 96, two types of segments exist: PART and COMPONENT segments. A
unidirectional logical relationship has been established between them. The PART
segment for Model 1 is a root segment. Beneath it are nine occurrences of
COMPONENT segments. Each of these is a logical child that points to another
PART root segment. (Only two of the pointers are actually shown, to keep the figure
simple.) The other PART root segments show the parts required to build each
component.
For example, the pedal assembly component points to the PART root segment for
assembling the pedal. Stored beneath this segment are the following parts that
must be assembled: one front sprocket, one crank, and two pedals. With this
structure, much of the duplicate data otherwise stored for the Model 2 bicycle can
be eliminated.
Figure 97 on page 170 shows the segments, in addition to those in Figure 96, that
must be stored in the database record for the Model 2 bicycle. The logical children
in the figure, except the one for the unique component, a 21-inch girl’s frame, can
point to the same PART segments as are shown in Figure 96. A separate PART
segment for the pedal assembly, for example, need not exist. The database records
for both Model 1 and Model 2 have the same pedal assembly, and by using the
logical child, both can point to the same PART segment for the pedal assembly.
Figure 97. Extra Database Records Required for the Model 2 Bicycle
One thing to note about recursive structures is that the physical parent and the
logical parent of the logical child are the same segment type. For example, in
Figure 96 on page 169, the PART segment for Model 1 is the physical parent of the
COMPONENT segment for pedal assembly. The PART segment for pedal assembly
is the logical parent of the COMPONENT segment for pedal assembly.
At initial database load time, if logical parents with non-unique concatenated keys
exist in a database, the resolution utilities (described in Chapter 15, “Tuning
Databases,” on page 341) attach all logical children with the same concatenated
key to the first logical parent in the database with that concatenated key.
When inserting or deleting a concatenated segment, position for the logical parent
part of the concatenated segment is determined by the logical parent’s
concatenated key. Positioning for the logical parent starts at the root and stops on
the first segment at each level of the logical parent’s database that satisfies the key
equal condition for that level. If a segment is missing on the path to the logical
parent being inserted, a GE status code is returned to the application program
when using this method to establish position in the logical parent’s database.
A sequence field must be specified for a virtual logical child if, when accessing it
from its logical parent, you need real logical child segments retrieved in an order
determined by data in a field of the virtual logical child as it could be seen in the
application program I/O area. This sequence field can include any part of the
segment as it appears when viewed from the logical parent (that is, the
concatenated key of the real logical child’s physical parent followed by any
intersection data). Because it can be necessary to describe the sequence field of a
logical child as accessed from its logical parent in non-contiguous pieces, multiple
FIELD statements with the SEQ parameter present are permitted. Each statement
must contain a unique fldname1 parameter.
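For example, a virtual logical child’s sequence field could be described in two noncontiguous pieces like this (a sketch; the segment names follow the BORROW/LOANS example used later in this chapter, and the database name, field names, lengths, and offsets are hypothetical):

```
*  Virtual logical child CUST as seen from its logical parent LOANS.
SEGM   NAME=CUST,PARENT=LOANS,POINTER=PAIRED,                          X
       SOURCE=((BORROW,DATA,CUSTDB))
*  Two FIELD statements with SEQ describe one logical sequence
*  field in noncontiguous pieces; each has a unique fldname1.
FIELD  NAME=(CUSTKEY1,SEQ,U),BYTES=6,START=1
FIELD  NAME=(CUSTKEY2,SEQ,U),BYTES=4,START=12
```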
Figure 98 on page 172 shows the relationship between these three control blocks. It
assumes that the logical relationship is established between two physical
databases. The following topics explain how the physical and logical DBD are
coded when a logical relationship is defined:
v “Specifying Logical Relationships in the Physical DBD” on page 172
v “Specifying Logical Relationships in the Logical DBD” on page 176
In the SEGM statements of the examples associated with Figure 99 on page 173
and Figure 100 on page 173, only the pointers required with logical relationships
are shown. No pointers required for use with HD databases are shown. When
actually coding a DBD, you must ask for these pointers in the PTR= parameter.
Otherwise, IMS will not generate them once another type of pointer is specified.
Figure 99 shows the layout of segments. Figure 100 on page 173 shows physical
DBDs for unidirectional relationships.
This is the hierarchic structure of the two databases involved in the logical
relationship. In this example, we are defining a unidirectional relationship using
symbolic pointing. ORDITEM has an LPCK and FID, and DELIVERY and
SCHEDULE are VID.
Figure 100. Physical DBDs for Unidirectional Relationship Using Symbolic Pointing
In the ORDER database, the DBD coding that differs from normal DBD coding is
that for the logical child ORDITEM.
In the ITEM database, the DBD coding that differs from normal DBD coding is that
an LCHILD statement has been added. This statement names the logical child
ORDITEM. Because the ORDITEM segment exists in a different physical database
from ITEM, the name of its physical database, ORDDB, must be specified.
When defining a bidirectional relationship with virtual pairing, you need to code an
LCHILD statement only for the real logical child. On the LCHILD statement, you
code POINTER=SNGL or DBLE to get logical child pointers. You code the PAIR=
operand to indicate the virtual logical child that is paired with the real logical child.
When you define the SEGM statement for the real logical child, the PARENT=
parameter identifies both the physical and logical parents. You should specify logical
twin pointers (in addition to any other pointers) on the POINTER= parameter. Also,
you should define a SEGM statement for the virtual logical child even though it
does not exist. On this SEGM statement, you specify PAIRED on the POINTER=
parameter. In addition, you specify a SOURCE= parameter. On the SOURCE=
parameter, you specify the SEGM name and DBD name of the real logical child.
DATA must always be specified when defining SOURCE= on a virtual logical child
SEGM statement.
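A minimal sketch of these statements, using the CUSTOMER, BORROW, and LOANS segments from the examples later in this chapter (the database names CUSTDB and LOANDB, the byte counts, and the pointer choices are assumptions):

```
*  In LOANDB, the logical parent's database:
SEGM   NAME=LOANS,BYTES=44,PARENT=0
LCHILD NAME=(BORROW,CUSTDB),POINTER=DBLE,PAIR=CUST
*  Virtual logical child: PAIRED on POINTER=, and SOURCE= names
*  the real logical child; DATA must always be specified here.
SEGM   NAME=CUST,PARENT=LOANS,POINTER=PAIRED,                          X
       SOURCE=((BORROW,DATA,CUSTDB))
*
*  In CUSTDB, the real logical child's database:
SEGM   NAME=BORROW,BYTES=30,                                           X
       PARENT=((CUSTOMER,SNGL),(LOANS,VIRTUAL,LOANDB)),                X
       POINTER=(LTWIN,LPARNT)
```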
Related Reading: For more information on coding logical relationships, see IMS
Version 9: Utilities Reference: Database and Transaction Manager.
When defining a segment in a logical database, you can specify whether the
segment is returned to the program’s I/O area by using the KEY or DATA operand
on the SOURCE= parameter of the SEGM statement. DATA returns both the key
and data portions of the segment to the I/O area. KEY returns only the key portion,
and not the data portion of the segment to the I/O area.
Figure 101 illustrates the logical data structure you need to create in the application
program. It is implemented with a unidirectional logical relationship using symbolic
pointing. The root segment is ORDER from the ORDER database. Dependent on
ORDER is ORDITEM, the logical child, concatenated with its logical parent ITEM.
The application program receives both segments in its I/O area when a single call is
issued for ORDIT. DELIVERY and SCHEDULE are VID.
Figure 101. Logical Data Structure for a Unidirectional Relationship Using Symbolic Pointing
The following logical DBD is for the logical data structure shown in Figure 101:
DBD NAME=ORDLOG,ACCESS=LOGICAL
DATASET LOGICAL
SEGM NAME=ORDER,SOURCE=((ORDER,DATA,ORDDB))
SEGM NAME=ORDIT,PARENT=ORDER, X
SOURCE=((ORDITEM,DATA,ORDDB),(ITEM,DATA,ITEMDB))
SEGM NAME=DELIVERY,PARENT=ORDIT,SOURCE=((DELIVERY,DATA,ORDDB))
SEGM NAME=SCHEDULE,PARENT=ORDIT,SOURCE=((SCHEDULE,DATA,ORDDB))
DBDGEN
FINISH
END
Also, a logical DBD is needed only when an application program needs access to a
concatenated segment or needs to cross a logical relationship.
In Figure 102, DBD1 and DBD2 are two physical databases with a logical
relationship defined between them. DBD3 through DBD6 are four logical databases
that can be defined from the logical relationship between DBD1 and DBD2. With
DBD3, no logical relationship is crossed, because no physical parent or physical
dependent of a destination parent is included in DBD3. With DBD4 through DBD6,
a logical relationship is crossed in each case, because each contains a physical
parent or physical dependent of the destination parent.
Figure 103. The First Logical Relationship Crossed in a Hierarchic Path of a Logical
Database
In DBD5 in Figure 103, an additional concatenated segment type, GI, is defined that
was not included in DBD4. GI allows access to segments in the hierarchic path of
the destination parent if crossed. When the logical relationship made possible by
concatenated segment GI is crossed, this is an additional logical relationship
crossed. This is because, from the root of the logical database, the logical
relationship made possible by concatenated segment type BF must be crossed to
allow access to concatenated segment GI.
Figure 104. Logical Database Hierarchy Enabled by Crossing the First Logical Relationship
v A logical database must use only those segments and physical and logical
relationship paths defined in the physical DBD referenced by the logical DBD.
v The path used to connect a parent and child in a logical database must be
defined as a physical relationship path or a logical relationship path in the
physical DBD referenced by the logical DBD.
v Physical and logical relationship paths can be mixed in a hierarchic segment path
in a logical database.
v Additional physical relationship paths, logical relationship paths, or both paths
can be included after a logical relationship is crossed in a hierarchic path in a
logical database. These paths are included by going in upward directions,
downward directions, or both directions, from the destination parent. When
proceeding downward along a physical relationship path from the destination
parent, direction cannot be changed except by crossing a logical relationship.
When proceeding upward along a physical relationship path from the destination
parent, direction can be changed.
v Dependents in a logical database must be in the same relative order as they are
under their parent in the physical database. If a segment in a logical database is
a concatenated segment, the physical children of the logical child and children of
the destination parent can be in any order. The relative order of the children of
the logical child and the relative order of the children of the destination parent
must remain unchanged.
v The same concatenated segment type can be defined multiple times with
different combinations of key and data sensitivity. Each must have a distinct
name for that view of the concatenated segment. Only one of the views can have
dependent segments. Figure 105 shows the four views of the same concatenated
segment that can be defined in a logical database. A PCB for the logical
database can be sensitive to only one of the views of the concatenated segment
type.
Figure 105. Single Concatenated Segment Type Defined Multiple Times with Different
Combinations of Key and Data Sensitivity
Figure 106 and Figure 107 show example insert, delete, and replace rules. Consider
the following questions:
1. Should the CUSTOMER segment in Figure 106 be able to be inserted by both
its physical and logical paths?
2. Should the BORROW segment be replaceable using only the physical path, or
using both the physical and logical paths?
3. If the LOANS segment is deleted using its physical path, should it be erased
from the database? Or should it be marked as physically deleted but remain
accessible using its logical path?
4. If the logical child segment BORROW or the concatenated segment
BORROW/LOANS is deleted from the physical path, should the logical path
CUST/CUSTOMER also be automatically deleted? Or should the logical path
remain?
Abbreviation Explanation
PP Physical parent segment type
LC Logical child segment type
LP Logical parent segment type
VLC Virtual logical child segment type
Figure 107. Example of the Replace, Insert, and Delete Rules: Before and After
The answer to these questions depends on the application. The enforcement of the
answer depends on your choosing the correct insert, delete, and replace rules for
the logical child, logical parent, and physical parent segments. You must first
determine your application processing requirements and then the rules that support
those requirements.
For example, the answer to question 1 depends on whether the application requires
that a CUSTOMER segment be inserted into the database before accepting the
loan. An insert rule of physical (P) on the CUSTOMER segment prohibits insertion
of the CUSTOMER segment except by the physical path. An insert rule of virtual (V)
allows insertion of the CUSTOMER segment by either the physical or logical path. It
probably makes sense for a customer to be checked (past credit, time on current
job, and so on.) and the CUSTOMER segment inserted before approving the loan
and inserting the BORROW segment. Thus, the insert rule for the CUSTOMER
segment should be P to prevent the segment from being inserted logically. (Using
the insert rule in this example provides better control of the application.)
The P delete rule prohibits physically deleting a logical parent segment before all its
logical children have been physically deleted. This means the logical path to the
logical parent is deleted first.
You need to examine all your application requirements and decide who can insert,
delete, and replace segments involved in logical relationships and how those
updates should be made (physical path only, or physical and logical path). The
insert, delete, and replace rules in the physical DBD and the PROCOPT=
parameter in the PCB are the means of control.
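As a sketch of how such a choice is coded, the insert rule P for CUSTOMER from question 1 might appear as follows (the three letters of RULES= apply to insert, delete, and replace, in that order; only the insert rule P comes from the discussion above, and the delete and replace letters, byte count, and insert position are illustrative):

```
*  Insert rule P: CUSTOMER can be inserted only by its physical path.
*  Delete rule L and replace rule V shown here are illustrative.
SEGM  NAME=CUSTOMER,BYTES=100,PARENT=0,RULES=(PLV,LAST)
```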
Related Reading: These rules are explained in detail in Appendix B, “Insert, Delete,
and Replace Rules for Logical Relationships,” on page 465.
v Direct pointers usually give faster access to logical parent segments, except
possibly HDAM or PHDAM logical parent segments, which are roots. Symbolic
pointers require extra resources to search an index for a HIDAM database. Also,
with symbolic pointers, DL/I has to navigate from the root to the logical parent if
the logical parent is not a root segment.
KEY/DATA Considerations
When you include a concatenated segment as part of a logical DBD, you control
how the concatenated segment appears in the user’s I/O area. You do this by
specifying either KEY or DATA on the SOURCE= keyword of the SEGM statement
for the concatenated segment. A concatenated segment consists of a logical child
followed by a logical (or destination) parent. You specify KEY or DATA for both
parts. For example, you can access a concatenated segment and ask to see the
following segment parts in the I/O area:
v The logical child part only
v The logical (or destination) parent part only
v Both parts
By carefully choosing KEY or DATA, you can retrieve a concatenated segment with
fewer processing and I/O resources. For example:
v Assume you have the unidirectional logical relationship shown in Figure 108 on
page 185.
v Finally, assume you only need to see the data for the LINEITEM part of the
concatenated segment.
You can avoid the extra processing and I/O required to access the MODEL part of
the concatenated segment if you:
v Code the SOURCE keyword of the concatenated segment’s SEGM statement as:
SOURCE=(lcsegname,DATA,lcdbname),(lpsegname,KEY,lpdbname)
v Store a symbolic logical parent pointer in LINEITEM. If you do not store the
symbolic pointer, DL/I must access MODEL and PRODUCT to construct it.
To summarize, do not automatically choose DATA sensitivity for both the logical
child and logical parent parts of a concatenated segment. If you do not need to see
the logical parent part, code KEY sensitivity for the logical parent and store the
symbolic logical parent pointer on DASD.
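Putting this together, the concatenated segment’s SEGM statement might look like the following sketch (the segment name LINEMOD, the parent, and the database names are hypothetical; LINEITEM and MODEL are from the example above):

```
*  DATA sensitivity for the logical child part (LINEITEM), KEY
*  sensitivity for the logical parent part (MODEL): only LINEITEM's
*  data is returned, and MODEL need not be accessed if LINEITEM
*  stores a symbolic logical parent pointer.
SEGM NAME=LINEMOD,PARENT=ORDER,                                        X
     SOURCE=((LINEITEM,DATA,ORDDB),(MODEL,KEY,PRODDB))
```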
followed, DL/I usually has to access multiple database records. Accessing multiple
database records increases the resources required to process the call.
Note: You cannot store a real logical child in a HISAM database, because you
cannot have logical child pointers (which are direct pointers) in a HISAM
database.
Secondary Indexes
The following database types support secondary indexes:
v HISAM
v SHISAM
v HDAM
v PHDAM
v HIDAM
v PHIDAM
Secondary indexes are indexes that allow you to process a segment type in a
sequence other than the one defined by the segment’s key. A secondary index can
also be used to process a segment type based on a qualification in a dependent
segment.
Figure 111 shows the root segment, COURSE, and the fields it contains. The
course number field is a unique key field.
You chose COURSE as the root and course number as a unique key field partly
because most applications requested information based on course numbers. For
these applications, access to the information needed from the database record is
fast. For a few of your applications, however, the organization of the database
record does not provide such fast access. One application, for example, would be
to access the database by student name and then get a list of courses a student is
taking. Given the order in which the database record is now organized, access to
the courses a student is taking requires a sequential scan of the entire database.
Each database record has to be checked for an occurrence of the STUDENT
segment. When a database record for the specific student is found, then the
COURSE segment has to be referenced to get the name of the course the student
is taking. This type of access is relatively slow. In this situation, you can use a
secondary index that has a set of pointer segments for each student to all COURSE
segments for that student.
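In DBD terms, such an index could be sketched as follows (all of the names — SINDXDB, SINDX, XSTUDNM, STUDNM, EDUCDB — and the field lengths are hypothetical; the LCHILD and XDFLD statements go in the education database’s DBD after the SEGM statement for the target segment COURSE):

```
*  In the education database DBD, following SEGM NAME=COURSE:
LCHILD NAME=(SINDX,SINDXDB),POINTER=INDX
XDFLD  NAME=XSTUDNM,SEGMENT=STUDENT,SRCH=STUDNM
*
*  The secondary index database DBD:
DBD    NAME=SINDXDB,ACCESS=INDEX
SEGM   NAME=SINDX,BYTES=21,PARENT=0
FIELD  NAME=(XSTUDNM,SEQ,U),BYTES=21,START=1
LCHILD NAME=(COURSE,EDUCDB),INDEX=XSTUDNM,POINTER=SNGL
```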
When two PCBs are used, it enables an application program to use two paths into
the database and two sequence fields. One path and sequence field is provided by
the regular processing sequence, and one is provided by the secondary index. The
secondary index gives an application program both an alternative way to enter the
database and an alternative way to sequentially process database records.
Figure 113. Format of Pointer Segments Contained in the Secondary Index Database
The first field in the prefix is the delete byte. The second field is the address of
the segment the application program retrieves from the regular database. This
field is not present if the secondary index uses symbolic pointing. Symbolic
pointing is pointing to a segment using its concatenated key. HIDAM and HDAM
can use symbolic pointing; however, HISAM must use symbolic pointing.
Symbolic pointing is not supported for PHDAM and PHIDAM databases.
For a HALDB PSINDEX database, the segment prefix of pointer segments is
slightly different. The “RBA of the segment to be retrieved field” is part of an
Extended Pointer Set (EPS), which is longer than 4 bytes. Within the prefix the
EPS is followed by the key of the target’s root.
v Target Segment. The target segment is in the regular database, and it is the
segment the application program needs to retrieve. A target segment is the
segment to which the pointer segment points. The target segment can be at any
one of the 15 levels in the database, and it is accessed directly using the RBA or
symbolic pointer stored in the pointer segment. Physical parents of the target
segment are not examined to retrieve the target segment (except in one special
case discussed in “Concatenated Key Field” on page 195).
v Source Segment. The source segment is also in the regular database. The
source segment contains the field (or fields) that the pointer segment has as its
key field. Data is copied from the source segment and put in the pointer
segment’s key field. The source and the target segment can be the same
segment, or the source segment can be a dependent of the target segment. The
optional fields are also copied from the source segment with one exception,
which is discussed later in this topic.
Using the education database in Figure 114 on page 190, you can see how three
segments work together. In this example, the education database is a HIDAM
database that uses RBAs rather than symbolic pointers. Suppose an application
program needs to access the education database by student name and then list all
courses the student is taking:
v The segment the application is trying to retrieve is the COURSE segment,
because the segment contains the names of courses (COURSENM field).
Therefore, COURSE is the target segment, and needs retrieval.
v In this example, the application program is going to use the student’s name in its
DL/I call to retrieve the COURSE segment. The DL/I call is qualified using
student name as its qualifier. The source segment contains the fields used to
sequence the pointer segments in the secondary index. In this example, the
pointer segments must be sequenced by student name. The STUDENT segment
becomes the source segment. It is the fields in this segment that are copied into
the data portion of the pointer segment as the key field.
v The call from the application program invokes a search for a pointer segment
with a key field that matches the student name. Once the correct pointer
segment in the index is found, it contains the address of the COURSE segment
the application program is trying to retrieve.
Figure 115 shows how the pointer, target, and source segments work together.
The figure also shows the call the application program issues. XNAME is from the
NAME parameter in the XDFLD statement.
COURSE is the target segment that the application program is trying to retrieve.
STUDENT is the source segment containing the one or more fields that the
application program uses as a qualifier in its call and that the data portion of a
pointer segment contains as a key.
The BAKER segment in the secondary index is the pointer segment, whose prefix
contains the address of the segment to be retrieved and whose data fields contain
the key the application program uses as a qualifier in its call.
If the target segment is the root segment in the database record, the structure the
application program perceives does not differ from the one it can access using the
regular processing sequence. However, if the target segment is not the root
segment, the hierarchy in the database record is conceptually restructured.
Figure 117 and Figure 118 on page 192 illustrate this concept.
The target segment (as shown in the figure) is segment G. Target segment G
becomes the root segment in the restructured hierarchy. All dependents of the
target segment (segments H, J, and I) remain dependents of the target segment.
However, all segments on which the target is dependent (segments D and A) and
their subordinates become dependents of the target and are put in the left most
positions of the restructured hierarchy. Their position in the restructured hierarchy is
the order of immediate dependency. D becomes an immediate dependent of G, and
A becomes an immediate dependent of D.
If the same segment is referenced more than once (as shown in Figure 118), you
must use the DBDGEN utility to generate a logical DBD that assigns alternate
names to the additional segment references. If you do not generate the logical
DBD, the PSBGEN utility will issue the message “SEG150” for the duplicate
SENSEG names.
Each pointer segment in a secondary index is stored in one logical record. A logical
record containing a pointer segment is shown in Figure 120.
Figure 121. Secondary Index Entry for HALDB
The format of the logical record is the same in both a KSDS and ESDS data set.
The pointer field at the beginning of the logical record exists only when the key in
the data portion of the segment is not unique. If keys are not unique, some pointer
segments will contain duplicate keys. These pointer segments must be chained
together, and this is done using the pointer field at the beginning of the logical
record.
Pointer segments containing duplicate keys are stored in the ESDS in LIFO (last in,
first out) sequence. When the first duplicate key segment is inserted, it is written to
the ESDS, and the KSDS logical record containing the segment it is a duplicate of
points to it. When the second duplicate is inserted, it is inserted into the ESDS in
the next available location. The KSDS logical record is updated to point to the
second duplicate. The effect of inserting duplicate pointer segments into the ESDS
in LIFO sequence is that the original pointer segment (the one in the KSDS) is
retrieved last. This retrieval sequence should not be a problem, because duplicates,
by definition, have no special sequence.
Figure 122 on page 194 shows the fields in a pointer segment. Like all segments,
the pointer segment has a prefix and data portion. The prefix portion has a delete
byte, and when direct rather than symbolic pointing is used, it has the address of
the target segment (4 bytes). The data portion has a series of fields, and some of
them are optional. All fields in the data portion of a pointer segment contain data
taken from the source segment (with the exception of user data). These fields are
the constant field (optional), the search field, the subsequence field (optional), the
duplicate data field (optional), the concatenated key field (optional except for
HISAM), and then the data (optional).
Delete Byte
The delete byte is used by IMS to determine whether a segment has been deleted
from the database.
Pointer Field
This field, when present, contains the RBA of the target segment. The pointer field
exists when direct pointing is specified for an index pointing to an HD database.
Direct pointing is simply pointing to a segment using its actual address. The other
type of pointing that can be specified is symbolic pointing. Symbolic pointing, which
is explained under “Concatenated Key Field,” can be used to point to HD databases
and must be used to point to HISAM databases. If symbolic pointing is used, this
field does not exist.
Constant Field
This field, when present, contains a 1-byte constant. The constant is used when
more than one index is put in an index database (This topic is discussed under
“Sharing Secondary Index Databases” on page 201). The constant identifies all
pointer segments for a specific index in the shared index database. The value in the
constant field becomes part of the key.
Search Field
The data in the search field is the key of the pointer segment. All data in the search
field comes from data in the source segment. As many as five fields from the
source segment can be put in the search field. These fields do not need to be
contiguous fields in the source segment. When the fields are stored in the pointer
segment, they can be stored in any order. When stored, the fields are
concatenated. The data in the search field (the key) can be unique or non-unique.
IMS automatically maintains the search field in the pointer segment whenever a
source segment is modified.
Subsequence Field
The subsequence field, like the search field, contains from one to five fields of data
from the source segment. Subsequence fields are optional, and can be used if you
have non-unique keys. The subsequence field can make non-unique keys unique.
Making non-unique keys unique is desirable because of the many disadvantages of
non-unique keys. For example, non-unique keys require you to use an additional
data set, an ESDS, to store all index segments with duplicate keys. An ESDS
requires additional space. More important, the search for specific occurrences of
duplicates requires additional I/O operations that can decrease performance.
When a subsequence field is used, the subsequence data is concatenated with the
data in the search field. These concatenated fields become the key of the pointer
segment. If properly chosen, the concatenated fields form a unique key. (It is not
always possible to form a unique key using source data in the subsequence
field. In that case, you can use system-related fields, explained later in this
chapter, to form unique keys.)
Using subsequence fields does not change the way in which an SSA is coded. The
SSA can still specify what is in the search field, but it cannot specify what is in the
search field plus the subsequence field. Subsequence fields are not seen by the
application program unless it is processing the secondary index as a separate
database.
Up to five fields from the source segment can be put in the subsequence field.
These fields do not need to be contiguous fields in the source segment. When the
fields are stored in the pointer segment, they can be stored in any order. When
stored, they are concatenated.
IMS automatically maintains the subsequence field in the pointer segment whenever
a source segment is modified.
Duplicate Data Field
As many as five fields from the source segment can be put in the duplicate data
field. These fields do not need to be contiguous fields in the source segment. When
the fields are stored in the pointer segment, they can be stored in any order. When
stored, they are concatenated.
IMS automatically maintains the duplicate data field in the pointer segment
whenever a source segment is modified.
used, the pointer field (4 bytes long) in the prefix is not present, but the fully
concatenated key of the target segment is generally more than 4 bytes long.
IMS automatically generates the concatenated key field when symbolic pointing is
specified.
One situation exists in which symbolic pointing is specified and IMS does not
automatically generate the concatenated key field. This situation is caused by
specifying the system-related field /CK as a subsequence or duplicate data field in
such a way that the concatenated key is fully contained. In this situation, the
symbolic pointer portion of either the subsequence field or the duplicate data field is
used.
You must initially load user data. You must also maintain it. During reorganization of
a database that uses secondary indexes, the secondary index database is rebuilt by
IMS. During this process, all user data in the pointer segment is lost.
When you use the /SX operand, the XDFLD statement in the DBD must also
specify /SX (plus any of the additional characters added to the /SX operand). The
XDFLD statement, among other things, identifies fields from the source segment
that are to be put in the pointer segment. The /SX operand is specified in the
SUBSEQ= operand in the XDFLD statement.
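As a sketch, consistent with the example DBD shown later in this chapter, the
pairing of the FIELD and XDFLD statements for /SX might look like this:
FIELD NAME=/SX1
XDFLD NAME=XSTUDENT,SEGMENT=STUDENT,SRCH=STUDNM,SUBSEQ=/SX1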
additional characters. The /CK operand works like the /SX operand except that the
concatenated key, rather than the RBA, of the source segment is used. Another
difference is that the concatenated key is put in the subsequence or duplicate data
field in the pointer segment. Where the concatenated key is put depends on where
you specify the /CK.
When using /CK, you can use a portion of the concatenated key of the source
segment (if some portion will make the key unique) or all of the concatenated key.
You use the BYTES= and START= operands in the FIELD statement to specify
what you need.
For example, suppose you are using the database record shown in Figure 123.
Figure 123. Database Record Showing the Source and Target for Secondary Indexes
If you specify BYTES=21, START=1 on the FIELD statement whose name begins
with /CK, the entire concatenated key of the source segment will be put in the
pointer segment. If you specify BYTES=6, START=16, only the last six bytes of the
concatenated key (CLASSNO and SEQ) will be put in the pointer segment. The
BYTES= operand tells the system how many bytes are to be taken from the
concatenated key of the source segment in the PCB key feedback area. The
START= operand tells the system the beginning position (relative to the beginning
of the concatenated key) of the information that needs to be taken. As with the /SX
operand, the XDFLD statement in the DBD must also specify /CK.
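For the six-byte case just described, the statements might be coded as follows
(the field name /CK1 and the XDFLD name are illustrative):
FIELD NAME=/CK1,BYTES=6,START=16
XDFLD NAME=XSTUDENT,SEGMENT=STUDENT,SRCH=STUDNM,SUBSEQ=/CK1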
To summarize: /SX and /CK fields can be included on the SUBSEQ= parameter of
the XDFLD statement to make key fields unique. Making key fields unique avoids
the overhead of using an ESDS to hold duplicate keys. The /CK field can also be
specified on the DDATA= parameter of the XDFLD statement but the field will not
become part of the key field.
When making keys unique, unique sequence fields must be defined in the target
segment type if symbolic pointing is used. Also, unique sequence fields must be
defined in all segment types on which the target segment type is dependent (in the
physical rather than restructured hierarchy in the database).
| For example, suppose you have a secondary index for the education database
| discussed earlier in this chapter. STUDENT is the source segment, and
| COURSE is the target segment. You might need to create pointer segments for
| students only if they are associated with a certain customer number. This could be
| done using sparse indexing, a performance enhancement of secondary indexing.
The way in which IMS maintains the index depends on the operation being
performed. Regardless of the operation, IMS always begins index maintenance by
building a pointer segment from information in the source segment that is being
inserted, deleted, or replaced. (This pointer segment is built but not yet put in the
secondary index database.)
If you reorganize your secondary index and it contains non-unique keys, the
resulting pointer segment order can be unpredictable.
In addition to the restrictions imposed by the system to protect the secondary index
database, you can further protect it using the PROT operand in the DBD statement.
When PROT is specified, an application program can only replace user data in a
pointer segment. However, pointer segments can still be deleted when PROT is
specified. When a pointer segment is deleted, the source segment that caused the
pointer segment to be created is not deleted. Note the implication of this: IMS might
try to do maintenance on a pointer segment that has been deleted. When it finds no
pointer segment for an existing source segment, it will return an NE status code.
When NOPROT is specified, an application program can replace all fields in a
pointer segment except the constant, search, and subsequence fields. PROT is the
default for this parameter.
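For example, protection might be specified on the secondary index DBD statement;
the following sketch assumes the ACCESS=(INDEX,PROT) form, with PROT shown
explicitly even though it is the default:
DBD NAME=SINDX,ACCESS=(INDEX,PROT)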
If you are using a shared secondary index, calls issued by an application program
(for example, a series of GN calls) will not violate the boundaries of the secondary
index they are against. Each secondary index in a shared database has a unique
DBD name and root segment name.
Although using a shared index database can save some main storage, the
disadvantages of using a shared index database generally outweigh the small
amount of space that is saved by its use. For example, performance can decrease
when more than one application program simultaneously uses the shared index
database. (Search time is increased because the arm must move back and forth
between more than one secondary index.) In addition, maintenance, recovery, and
reorganization of the shared index database can decrease performance because all
secondary indexes are, to some extent, affected if one is. For example, when a
database that is accessed using a secondary index is reorganized, IMS
automatically builds a new secondary index. This means all other indexes in the
shared database must be copied to the new shared index.
If you are using a shared index database, you need to know the following
information:
v A shared index database is created, accessed, and maintained just like an index
database with a single secondary index.
v The various secondary indexes in the shared index database do not need to
index the same database.
v One shared index database could contain all secondary indexes for your
installation (if the number of secondary indexes does not exceed 16).
The use of the INDICES= parameter does not alter the processing sequence
selected for the PCB by the presence or absence of the PROCSEQ= parameter.
PCB
SENSEG NAME=COURSE, INDICES=SIDBD1
SENSEG NAME=STUDENT
Figure 126. PCB for the First Example of the INDICES= Parameter
GU COURSE COURSENM=12345&XSTUNM=JONES
Figure 127. Application Program Call Issued for the First Example of the INDICES=
Parameter
When the call shown in Figure 127 is used, IMS gets the COURSE segment with
course number 12345. Then IMS gets a secondary index entry, one in which XSTUNM is
equal to JONES. IMS checks to see if the pointer in the secondary index points to
the COURSE segment with course number 12345. If it does, IMS returns the
COURSE segment to the application program’s I/O area. If the secondary index
pointer does not point to the COURSE segment with course number equal to
12345, IMS checks for other secondary index entries with XSTUNM equal to
JONES and repeats the compare.
If all secondary index entries with XSTUNM equal to JONES result in invalid
compares, no segment is returned to the application program. By doing this, IMS
need not search the STUDENT segments for a student with NAME equal to
JONES. This technique involving use of the INDICES= parameter is useful when
source and target segments are different.
The INDICES= parameter can also be used to reference more than one secondary
index in the source call. Figure 130 on page 204 shows the use of the
INDICES= parameter.
In the second example, IMS uses the SIDBD2 secondary index to get the COURSE
segment for MATH. IMS then gets a COURSE segment using the SIDBD1. IMS can
then compare to see if the two course segments are the same. If they are, IMS
returns the COURSE segment to the application program’s I/O area. If the compare
is not equal, IMS looks for other SIDBD1 pointers to COURSE segments and
repeats the compare operations. If there are still no equal compares, IMS checks
for other SIDBD2 pointers to COURSE segments and looks for equal compares to
SIDBD1 pointers. If all possible compares result in unequal compares, no segment
is returned to the application program.
Figure 128 shows the databases for the second example of the INDICES
parameter. Following the databases is the example PCB in Figure 129 and the
application programming call in Figure 130 on page 204.
PCB PROCSEQ=SIDBD2
SENSEG NAME=COURSE, INDICES=SIDBD1
SENSEG NAME=STUDENT
Figure 129. PCB for the Second Example of the INDICES= Parameter
GU COURSE SCRSNM=MATH&XSTUNM=JONES
Figure 130. Application Program Call Issued for the Second Example of the INDICES=
Parameter
The DBDs in Figure 132 on page 207 and Figure 133 on page 207 highlight the
statements and parameters coded when a secondary index is used. (Wherever
statements or parameters are omitted, they are coded the same in the DBD
regardless of whether secondary indexing is used.) “DBD for the EDUC Database”
and “DBD for the SINDX Database” on page 208 provide a summary of how the
statements and parameters in the DBDs in Figure 132 on page 207 and Figure 133
on page 207 are used.
CONSTANT=
This parameter (not used in the example) specifies the unique constant
required when a secondary index is part of a shared database.
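In a shared index database, each XDFLD statement might be coded with a
distinguishing constant, sketched as follows (the names and the constant value A
are illustrative):
XDFLD NAME=XSTUDENT,SEGMENT=STUDENT,CONSTANT=A,SRCH=STUDNM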
SRCH=
This parameter specifies the one to five fields from the source segment that
are to be copied into the pointer segment’s search field. In this case, only
one field is being copied, the STUDNM field, which contains student names.
SUBSEQ=
This parameter specifies the one to five fields from the source segment that
are to be copied into the pointer segment’s subsequence field. These extra
fields can be used to make the key in the index unique. In this case, one
field is being copied, the /SX1 field, which contains a system-related field.
This parameter is optional.
DDATA=
This parameter (not used in the example) specifies the one to five fields from
the source segment that are to be copied into the pointer segment’s duplicate
data field. These fields can only be accessed when the secondary index is
processed as a separate database. This parameter is optional.
NULLVAL=
This parameter (not used in the example) contains a 1-byte value used to
suppress entries in the secondary index database. This parameter is
optional.
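For example, to suppress index entries for source segments whose student name
field is blank, the parameter might be coded as sketched below (BLANK is one
form this parameter accepts; the other names are illustrative):
XDFLD NAME=XSTUDENT,SEGMENT=STUDENT,SRCH=STUDNM,NULLVAL=BLANK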
EXTRTN=
This parameter (not used in the example) specifies a user-exit routine. The
user routine gets control after a source segment is built. The routine is used
to suppress entries in the secondary index database when you cannot use
the values that can be specified in the NULLVAL= parameter. This parameter
is optional.
In the example, shown in Figure 131 on page 207, a system-related field (/SX1) is
used on the SUBSEQ parameter. System-related fields must also be coded on
FIELD statements after the SEGM for the source segment. For more details, see
“Making Keys Unique Using System Related Fields” on page 196.
Figure 132 shows the EDUC DBD for the example in Figure 131.
DBD NAME=EDUC,ACCESS=HDAM,...
SEGM NAME=COURSE,...
FIELD NAME=(COURSECD,...
LCHILD NAME=(XSE,SINDX),PTR=INDX
XDFLD NAME=XSTUDENT,SEGMENT=STUDENT,SRCH=STUDNM,SUBSEQ=/SX1
SEGM NAME=CLASS,...
FIELD NAME=(EDCTR,...
SEGM NAME=INSTR,...
FIELD NAME=(INSTNO,...
SEGM NAME=STUDENT,...
FIELD NAME=SEQ,...
FIELD NAME=STUDNM,BYTES=20,START=1
FIELD NAME=/SX1
DBDGEN
FINISH
END
Figure 133 shows the SINDX DBD for the example in Figure 131.
DBD NAME=SINDX,ACCESS=INDEX
SEGM NAME=XSEG,...
FIELD NAME=(XSEG,SEQ,U),BYTES=24,START=1
LCHILD NAME=(COURSE,EDUC),INDEX=XSTUDNT,PTR=SNGL
DBDGEN
FINISH
END
Figure 135. Assembly and Parts as Examples to Demonstrate a Logical Relationship between Segments
Finally, you can have application requirements that result in a segment that appears
to have two parents. In the example shown in Figure 136, the customer database
keeps track of orders (CUSTORDN). Each order can have one or more line items
(ORDLINE), with each line item specifying one product (PROD) and model
(MODEL). In the product database, many outstanding line item requests can exist
for a given model. This type of relationship is called a many-to-many relationship
and is handled in IMS through a logical relationship.
Variable-Length Segments
Database types that support variable-length segments:
v HISAM
v SHISAM
v HDAM
v PHIDAM
v HIDAM
v PHDAM
v DEDB
Variable-length segments are segments whose length can vary between
occurrences of a segment type. A database can contain both variable-length and
fixed-length segment types. Variable-length segments can be used for HISAM,
HDAM, PHDAM, HIDAM, and PHIDAM databases.
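A variable-length segment type is defined by coding a maximum and a minimum
size on the BYTES= parameter of the SEGM statement; for example (the segment
name and sizes are illustrative):
SEGM NAME=EMPREC,BYTES=(100,20)
The first value is the maximum size of the segment, and the second is the
minimum.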
The prefix and data portion of HDAM, PHDAM, HIDAM, and PHIDAM
variable-length segments can be separated in storage when updates occur. When
this happens, the first four bytes following the prefix point to the separated data
portion of the segment.
Figure 138 shows the format of a HISAM variable-length segment. It is also the
format of an HDAM, PHDAM, HIDAM, or PHIDAM variable-length segment when
the prefix and data portion of the segment have not been separated in storage.
Figure 139 on page 211 shows the format of an HDAM, PHDAM, HIDAM, or
PHIDAM variable-length segment when the prefix and data portion of the segment
have been separated in storage.
After a variable-length segment is loaded, replace operations can cause the size of
data in it to be either increased or decreased. When the length of data in an
existing HISAM segment is increased, the logical record containing the segment is
rewritten to acquire the additional space. Any segments displaced by the rewrite are
put in overflow storage. Displacement of segments to overflow storage can affect
performance. When the length of data in an existing HISAM segment is decreased,
the logical record is rewritten so all segments in it are physically adjacent.
When a replace operation causes the length of data in an existing HDAM, PHDAM,
HIDAM, or PHIDAM segment to be increased, one of two things can happen:
v If the space allocated for the existing segment is long enough for the new data,
the new data is simply placed in the segment. This is true regardless of whether
the prefix and data portions of the segment were previously separated in the data
set.
v If the space allocated for the existing segment is not long enough for the new
data, the prefix and data portions of the segment are separated in storage. IMS
puts the data portion of the segment as close to the prefix as possible. Once the
segment is separated, a pointer is placed in the first four bytes following the
prefix to point to the data portion of the segment. This separation increases the
amount of space needed for the segment, because, in addition to the pointer
kept with the prefix, a 1-byte segment code and 1-byte delete code are added to
the data portion of the segment (see Figure 138 on page 210). In addition, if
separation of the segment causes its two parts to be stored in different blocks,
two read operations will be required to access the segment.
When a replace operation causes the length of data in an existing HDAM, PHDAM,
HIDAM, or PHIDAM segment to be decreased, one of three things can happen:
v If prefix and data are not separated, the data in the existing segment is replaced
with the new, shorter data followed by free space.
v If prefix and data are separated but sufficient space is not available immediately
following the original prefix to recombine the segment, the data in the separated
data portion of the segment is replaced with the new, shorter data followed by
free space.
v If prefix and data are separated and sufficient space is available immediately
following the original prefix to recombine the segment, the new data is placed in
the original space, overlaying the data pointer. The old separated data portion of
the segment is then available as free space in HD databases.
descriptive data you have. This saves storage space. Note, however, that if you are
using HDAM, PHDAM, HIDAM, or PHIDAM databases and your segment data
characteristically grows in size over time, segments will split. If a segment split
causes the two parts of a segment to be put in different blocks, two read operations
will be required to access the segment until the database is reorganized. So
variable-length segments work well if segment size varies but is stable (as in an
address segment). Variable-length segments might not work well if segment size
typically grows (as in a segment type containing a cumulative list of sales
commissions).
Working with the application programmer, you should devise a scheme for
accessing data in variable-length segments. You should devise a scheme because
if variable-length fields and fixed-length fields in a segment are mixed, the
application program has no way of knowing where specific fields begin. One way to
solve this problem is to put the size of a variable-length field at the beginning of the
variable-length field. If a segment has only one variable-length field, it can be made
the last field in the segment. If it is at all possible, the simplest scheme is to have
only one field in a variable-length segment.
Detailed information on how the Segment Edit/Compression exit routine works and
how you use it is in IMS Version 9: Customization Guide. This topic introduces you
to the facility.
The Segment Edit/Compression exit routine allows you to encode, edit, or compress
the data portion of a segment. You can use this facility on segment data in full
function databases and Fast Path DEDBs. You write the routine (your edit routine)
that actually manipulates the data in the segment. The IMS code gives your edit
routine information about the segment’s location and assists in moving the segment
back and forth between the buffer pool and the application program’s I/O area.
Data compression is allowed but key compression is not allowed when the segment
is:
Depending on the options you select, search time to locate a specific segment can
increase. If you are fully compressing the segment using key compression, every
segment type that is a candidate to satisfy either a fully qualified key or data field
request must be expanded or divided. IMS then examines the appropriate field. For
key field qualification, only those fields from the start of the segment through the
sequence field are expanded during the search. For data field qualification, the total
segment is expanded. In the case of data compression and a key field request, little
more processing is required to locate the segment than for non-compressed
segments. Only the segment sequence field is used to determine if this segment
occurrence satisfies the qualification.
| To prevent IMS from splitting compressed segments, you can specify a minimum
| size for the segments that includes extra padded space. This gives the compressed
| segment room to grow and decreases the chance that IMS will split the segment.
| You specify the minimum size for fixed-length full-function segments differently than
| you do for variable-length full-function segments:
| v For fixed-length segments, specify the minimum size using both the fourth and
| fifth subparameters on the COMPRTN= parameter of the SEGM statement. The
| fourth subparameter, size, only defines the minimum size if you also specify the
| fifth subparameter, PAD.
| v For variable-length segments, specify the minimum size using the second
| subparameter, min_bytes, of the BYTES= parameter of the SEGM statement.
| DEDB segments are never split by replace calls. If a DEDB segment grows beyond
| the size of its current location, the entire segment, including its prefix, is moved to a
| new location. For this reason, it is not necessary to pad compressed DEDB
| segments.
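The two cases might be coded as sketched below for the Segment
Edit/Compression exit routine DFSCMPX0 (the segment names and sizes are
illustrative, and the sketch assumes the COMPRTN= subparameter order
(routine,DATA,INIT,size,PAD)):
SEGM NAME=FIXSEG,BYTES=200,COMPRTN=(DFSCMPX0,DATA,INIT,100,PAD)
SEGM NAME=VARSEG,BYTES=(300,100),COMPRTN=(DFSCMPX0,DATA)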
The Data Capture exit routine is an installation-written exit routine. Data Capture
exit routines promote and enhance database coexistence. Data Capture exit
routines capture segment-level data from a DL/I database for propagation to DB2
UDB for z/OS databases. Installations running IMS and DB2 UDB for z/OS
databases can use Data Capture exit routines to exchange data across the two
database types.
Data Capture exit routines can be written in assembler language, C, VS COBOL II,
or PL/I. IMS Version 9: Customization Guide describes Data Capture exit routines in
detail.
Data Capture exit routines are supported by IMS Transaction Manager and
Database Manager. DBCTL support is for BMPs only.
Data Capture exit routines are compatible with the following physical database
structures:
HDAM
PHDAM
HIDAM
PHIDAM
HISAM
SHISAM
DEDB
Data Capture exit routines do not support segments in secondary indexes.
Using Data Capture exit routines requires specification of one or two DBD
parameters and subsequent DBDGEN. The EXIT= parameter identifies which Data
Capture exit routines will run against segments in a database. The VERSION=
parameter records important information about the DBD for use by Data Capture
exit routines.
Specifying EXIT= on the DBD statement applies a Data Capture exit routine to all
segments within a database structure. Specifying EXIT= on the SEGM statement
applies a Data Capture exit routine to only that segment type.
You can override Data Capture exit routines specified on the DBD statement by
specifying EXIT= on a SEGM statement. EXIT=NONE on a SEGM statement
cancels all Data Capture exit routines specified on the DBD statement for that
segment type. A physical child does not inherit an EXIT= parameter specified on the
SEGM statement of its physical parent.
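For example, an exit routine specified at the DBD level might be suppressed for a
single segment type as follows (the names are illustrative):
DBD NAME=EDUC,EXIT=(EXITA)
SEGM NAME=COURSE,EXIT=NONE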
You can specify multiple Data Capture exit routines on a single DBD or SEGM
statement. For example, you might code a DBD statement as:
DBD EXIT=((EXIT1A),(EXIT1B))
| The name of the Data Capture exit routine that you intend to use is the only
| required operand for the EXIT= parameter. Exit names can have a maximum of
| eight alphanumeric characters. For example, if you specify a Data Capture exit
| routine with the name EXITA on a SEGM statement in a database, the EXIT=
| parameter is coded as follows:
| SEGM EXIT=(EXITA,KEY,DATA,NOPATH,(CASCADE,KEY,DATA,NOPATH))
KEY, DATA, NOPATH, and (CASCADE,KEY,DATA,NOPATH) are default operands.
These defaults define what data is captured by the exit routine when a segment is
updated by an application program.
Related Reading:
v For more information about the Data Capture exit routine, see IMS Version 9:
Customization Guide.
v For a full description of the EXIT= parameter on both the DBD and SEGM
statements, see IMS Version 9: Utilities Reference: System.
The maximum length of the character string is 255 bytes. You can use VERSION=
to create a naming convention that denotes the database characteristics that affect
the proper functioning of Data Capture exit routines. You might use VERSION= to
flag DBDs containing logical relationships, or to indicate which data capture exit
routines are defined on the DBD or SEGM statements. VERSION= might be coded
as:
DBD VERSION=’DAL-&SYSDATE-&SYSTIME’
DAL, in this statement, tells you that Data Capture exit routine A is specified on the
DBD statement (D), and that the database contains logical relationships (L).
&SYSDATE and &SYSTIME tell you the date and time the DBD was generated.
| A Data Capture exit routine is invoked once per segment update for each segment
| for which the Data Capture exit routine is specified. Data Capture exit routines are
| invoked multiple times for a single call under certain conditions. These conditions
| include:
| v Path updates.
| v Cascade deletes when multiple segment types or multiple segment occurrences
| are deleted.
| v Updates on logical children.
| v Updates on logical parents.
| v Updates on a single segment when multiple Data Capture exit routines are
| specified against that segment. Each exit is invoked once, in the order it is listed
| on the DBD or SEGM statements.
When multiple segments are updated in a single application program call, Data
Capture exit routines are invoked in the same order in which IMS physically
updates the segments:
1. Path inserts are executed “top-down” in DL/I. Therefore, a Data Capture exit
routine for a parent segment is called before a Data Capture exit routine for that
parent’s dependent.
2. Cascade deletes are executed “bottom-up”. All dependent segments’ exits are
called before their respective parents’ exits on cascade deletes. IMS physically
deletes dependent segments on cascade deletes only after it has validated the
delete rules by following the hierarchy to the lowest level segment. After delete
rules are validated, IMS deletes segments starting with the lowest level segment
in a dependent chain and continuing up the chain, deleting the highest level
parent segment in the hierarchy last. Data Capture exit routines specified for
segments in a cascade delete are called in reverse hierarchical order.
3. Path replaces are performed “top-down” in IMS. In Data Capture exit routines
defined against segments in path replaces, parent segments are replaced first.
All of their descendents are then replaced in descending hierarchical order.
| Data is passed to Data Capture exit routines when an application program updates
| IMS with a DL/I insert, delete, or replace call. Segment data passed to Data
| Capture exit routines is always physical data. When the update involves logical
| children, the data passed is physical data and the concatenated key of the logical
| parent segment. For segments that use the Segment Edit/Compression exit routine
| (DFSCMPX0), the data passed is expanded data.
| When an application replaces a segment, both the existing and the replacement
| physical data are captured. In general, segment data is captured even if the
| application call does not change the data. However, for full-function databases, IMS
| compares the before and after data. If the data has not changed, IMS does not
| update the database or log the replace data. Because data is not replaced, Data
| Capture exit routines specified for that segment are not called and the data is not
| captured.
Data might be captured during replaces even if segment data does not change
when:
| 1. The application inserts a concatenation of a logical child and logical parent, IMS
| replaces the logical parent, and the parent data does not change.
2. The application issues a replace for a segment in a DEDB database.
In each case, IMS updates the database without comparing the before and after
data, and therefore the data is captured even though it does not change.
The entire segment, before and after, is passed to Data Capture exit routines when
the application replaces a segment. When the exit routine is interested in only a few
fields, it should compare the before and after replace data for those fields and
issue the SQL update request only if those fields changed.
| Data Capture exit routines are called when segment data is updated by an
| application program insert, replace, or delete call. Optionally, Data Capture exit
| routines are called when DL/I deletes a dependent segment because the application
| program deleted its parent segment, a process known as cascade delete. Data
| Capture exit routines are passed two functions to identify the following:
| 1. The action performed by the application program
| 2. The action performed by IMS
| The two functions that are passed to the Data Capture exit routines are:
| v Call function. The DL/I call, ISRT, REPL, or DLET, that is issued by the
| application program for the segment.
| v Physical function. The physical action, ISRT, REPL, or DLET, performed by IMS
| as a result of the call. The physical function is used to determine the type of SQL
| request to issue when propagating data.
The call and physical functions passed to the exit routine are always the same for
replace calls. However, the functions passed might differ for delete or insert calls:
v For delete calls resulting in cascade deletes, the call function passed is CASC (to
indicate the cascade delete) and the physical function passed is DLET.
v For insert calls resulting in the insert of a logical child and the replace of a logical
parent (because the logical parent already exists), the call function passed is
ISRT and the physical function passed is REPL. IMS physically replaces the
logical parent with data inserted by the application program even if the parent
data does not change. Both call and physical functions are then used, based on
the data propagation requirements, to determine the SQL request to issue in the
Data Capture exit routine.
If the EXIT= options specify NOCASCADE, data is not captured for cascade
deletes. However, when a cascade delete crosses a logical relationship into another
physical database to delete dependent segments, a Data Capture exit routine
needs to be called in order to issue the SQL delete for the parent of the physical
structure in DB2 UDB for z/OS. Rather than requiring the EXIT= CASCADE option,
IMS always calls the exit routine for a segment when deleting the parent segment in
a physical database record with an exit routine defined, regardless of the
CASCADE/NOCASCADE option specified on the segment. IMS bypasses the
NOCASCADE option only when crossing logical relationships into another physical
database. As with all cascade deletes, the call function passed is CASC and the
physical function passed is DLET.
Segment data passed to Data Capture exit routines is always physical data.
Consequently, you must place restrictions on delete rules in logically related
databases supporting Data Capture exit routines. Table 17 on page 220
summarizes which delete rules you can and cannot use in logically related
databases with Data Capture exit routines specified on their segments.
Table 17. Delete Rule Restrictions for Logically Related Databases Using Data Capture Exit
Routines
Segment Type       Virtual Delete Rule   Logical Delete Rule   Physical Delete Rule
Logical Children   Yes                   No                    No
Logical Parents    No                    Yes                   Yes
When a logically related database has a delete rule violation on a logical child:
v The logical child cannot have a Data Capture exit routine specified.
v No ancestor of the logical child can have a Data Capture exit routine specified.
When a logically related database has a delete rule violation on a logical parent, the
logical parent cannot have a Data Capture exit routine specified. ACBGEN validates
logical delete rule restrictions and will not allow a PSB that refers to a database that
violates these restrictions to proceed.
Field-Level Sensitivity
The following database types support field-level sensitivity:
v HSAM
v HISAM
v SHISAM
v HDAM
v PHDAM
v HIDAM
v PHIDAM
(SENFLD statements are described in “Specifying Field-Level Sensitivity in the DBD
and PSB,” but basically they determine the order of fields in a segment as seen by
an application program.)
The START= parameter defines the starting location of the field in the application
program’s I/O area. In the I/O area, fields do not need to be located in any
particular order, nor must they be contiguous. The end of the segment in the I/O
area is defined by the end of the rightmost field. All segments using field-level
sensitivity appear fixed in length in the I/O area. The length is determined by the
sum of the lengths of fields on SENFLD statements associated with a SENSEG
statement.
Figure 140 on page 222 is an example of field-level sensitivity. Following the figure
is information about coding field-level sensitivity.
Figure 141 shows the DBD for the example shown in Figure 140.
SEGM NAME=EMPREC,BYTES=100
FIELD NAME=(EMPNO,SEQ),BYTES=5,START=1,TYPE=C
FIELD NAME=EMPNAME,BYTES=20,START=6,TYPE=C
FIELD NAME=BIRTHD,BYTES=6,START=26,TYPE=C
FIELD NAME=SAL,BYTES=3,START=32,TYPE=P
FIELD NAME=ADDRESS,BYTES=60,START=41,TYPE=C
Figure 142 shows the PSB for the example shown in Figure 140.
SENSEG NAME=EMPREC,PROCOPT=A
SENFLD NAME=EMPNAME,START=1,REPL=N
SENFLD NAME=EMPNO,START=25
SENFLD NAME=ADDRESS,START=35,REPL=Y
v A SENFLD statement is coded for each field that can appear in the I/O area. A
maximum of 255 SENFLD statements can be coded for each SENSEG
statement, with a limit of 10000 SENFLD statements for a single PSB.
v The optional REPL= parameter on the SENFLD statement indicates whether
replace operations are allowed on the field. In the figure, replace is not allowed
for EMPNAME but is allowed for EMPNO and ADDRESS. If REPL= is not coded
on a SENFLD statement, the default is REPL=Y.
v The TYPE= parameter on FIELD statements in the DBD is used to determine fill
values on insert operations.
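The mapping that field-level sensitivity performs on retrieval can be sketched in Python using the example DBD and PSB above. This is an illustrative model only, not actual IMS code; the dictionaries and the function name are invented for the sketch:

```python
# Physical field layout from the example DBD: name -> (start, length), 1-based
dbd_fields = {"EMPNO": (1, 5), "EMPNAME": (6, 20), "BIRTHD": (26, 6),
              "SAL": (32, 3), "ADDRESS": (41, 60)}

# SENFLD statements from the example PSB: (name, start in the I/O area)
senflds = [("EMPNAME", 1), ("EMPNO", 25), ("ADDRESS", 35)]

def build_io_area(segment: bytes) -> bytes:
    """Copy each sensitive field from the physical segment into the
    position named on its SENFLD statement (retrieval only)."""
    # The I/O area ends at the end of the rightmost sensitive field.
    end = max(start + dbd_fields[name][1] - 1 for name, start in senflds)
    io = bytearray(end)               # unmapped gaps stay binary zeros
    for name, start in senflds:
        p_start, length = dbd_fields[name]
        io[start - 1:start - 1 + length] = segment[p_start - 1:p_start - 1 + length]
    return bytes(io)
```

For the example EMPREC segment this produces a 94-byte I/O area: EMPNAME at position 1, EMPNO at position 25, ADDRESS at position 35, with binary zeros in the unmapped gaps.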
Figure 143 shows an example of a retrieve call based on the DBD and PSB in
Figure 140.
Figure 145 shows an example of an insert operation based on the DBD and PSB in
Figure 140 on page 222.
Blanks are inserted in the BIRTHD field because its FIELD statement in the DBD
specifies TYPE=C. Packed decimal zero is inserted in the SAL field because its
FIELD statement in the DBD specifies TYPE=P. Binary zeros are inserted in
positions 35 to 40 because no FIELD statement was coded for this space in the
DBD.
v If the same field name appears in both parts of the concatenation, the first part
references the logical child. The second and all subsequent parts reference the
logical parent. This referencing sequence determines the order in which fields are
moved to the I/O area.
v When using field-level sensitivity with a virtual logical child, the field list of the
paired segment is searched after the field list of the virtual segment and before
the field list of the logical parent.
DBD
SEGM NAME=EMPREC,BYTES=(102,7)
FIELD NAME=(EMPNO,SEQ),BYTES=5,START=3,TYPE=C
FIELD NAME=EMPNAME,BYTES=20,START=8,TYPE=C
FIELD NAME=BIRTHD,BYTES=6,START=28,TYPE=C
FIELD NAME=ADDRESS,BYTES=60,START=43,TYPE=C
Figure 147. DBD Example for Field-Level Sensitivity with Variable-Length Segments
PSB
SENSEG NAME=EMPREC,PROCOPT=A
SENFLD NAME=EMPNAME,START=1,REPL=N
SENFLD NAME=EMPNO,START=25
SENFLD NAME=ADDRESS,START=35,REPL=Y
Figure 148. PSB Example for Field-Level Sensitivity with Variable-Length Segments
The length field is not present in the I/O area. Also, the address field is filled with
blanks, because TYPE=C is specified on the FIELD statement in the DBD.
The length field, maintained by IMS, does not include room for the address field,
because the field was missing and not replaced.
On a replace call, if a field returned to the application program with a fill value is
changed to a non-fill value, the segment length is increased to the minimum size
needed to hold the modified field.
v The 'LL' field is updated to include the full length of the added field and all fields
up to the added field.
v The TYPE= parameter in the DBD (see Figure 147 on page 226) determines the
fill value for non-sensitive DBD fields up to the added field.
v Binary zero is the fill value for space up to the added field that is not defined by
a FIELD statement in the DBD.
Figure 150 is an example of a missing field on a replace call based on the DBD and
PSB in Figure 147 on page 226.
The 'LL' field is maintained by IMS to include the full length of the ADDRESS field
and all fields up to the ADDRESS field. BIRTHD is filled with blanks, because
TYPE=C is specified on the FIELD statement in the DBD. Positions 34 to 42 are set
to binary zeros, because the space was not defined by a FIELD statement in the
DBD.
Figure 151 is an example of a missing field on an insert call using the DBD and
PSB in Figure 147 on page 226.
The 'LL' field is maintained by IMS to include the full length of all sensitive fields up
to and including the ADDRESS field. BIRTHD is filled with blanks, because
TYPE=C was specified on the FIELD statement in the DBD. Positions 34 to 42 are
set to binary zeros, because the space was not defined in a FIELD statement in the
DBD.
The ADDRESS field in the I/O area is padded with blanks to correspond to the
length defined on the SEGM statement in the DBD.
Figure 153 on page 230 is an example of a partially present field on a REPL call
based on the DBD and PSB in Figure 147 on page 226.
The 'LL' field is changed from 50 to 52 by DL/I to accommodate the change in the
field length of ADDRESS.
Although this book has so far described storing a database on a single data set or
a single pair of data sets, HD databases can be stored on more data sets than the
one or two required for database storage. You have seen that an HD database is
stored on an ESDS, if VSAM is being used, or on an OSAM data set, if OSAM is
being used.
In HD databases, a single data set is used for storage rather than a pair of data
sets. The primary data set group therefore consists of the one ESDS (if VSAM is
being used) or OSAM data set (if OSAM is being used) on which you must store
your database. A secondary data set group is an additional ESDS or OSAM data
set on which you can store your database.
As many as ten data set groups can be used in HISAM and HD databases, that is,
one primary data set group and a maximum of nine secondary data set groups.
Figure 154. Hierarchy of Applications That Need to Access INSTR and LOC Segments
The hierarchy on the left favors applications that need to access INSTR and LOC
segments. The hierarchy on the right favors applications that need to access
STUDENT and GRADE segments. (Favor, in this context, means that access to the
segments is faster.) If the applications that access the INSTR and LOC segments
are more important than the ones that access the STUDENT and GRADE
segments, you can use the database record on the left. But if both applications are
equally important, you can split the database record into different data set groups.
This will give both types of applications good access to the segments each needs.
To split the database record, you would use two data set groups. As shown in
Figure 155, the first data set group contains the COURSE, INSTR, REPORT, and
LOC segments. The second data set group contains the STUDENT and GRADE
segments.
In the database record shown in Figure 156 on page 233, segments COURSE (1),
INSTR (2), LOC (4), and STUDENT (5) could go in one data set group, while
segments REPORT (3) and GRADE (6) could go in a second data set group.
Examples of how this HD database record could be divided into three groups are in
Table 18.
Table 18. Examples of Multiple Data Set Grouping

Data Set Group 1       Data Set Group 2        Data Set Group 3
Segment 1              Segments 2, 5, and 6    Segments 3 and 4
Segments 1, 3, and 6   Segments 2 and 5        Segment 4
Figure 157. Connecting Segments in Multiple Data Set Groups Using Physical Child First
Pointers
Specify in the DBD which segment types need to be put in a data set group. Based
on that information, IMS automatically loads segments into the correct data set
group. In this example, the user specified that four segment types in the database
record were put in the primary data set group (COURSE, INSTR, LOC, STUDENT)
and two segment types were put in the secondary data set group (REPORT,
GRADE).
In the HDAM or PHDAM database, note that only the primary data set group has a
root addressable area. The secondary data set group is additional overflow storage.
Figure 158. HD Database Record in Storage When Multiple Data Set Groups Are Used
The following examples use the database record used in “Why Use Multiple Data
Set Groups?” on page 231 and “HD Databases Using Multiple Data Set Groups” on
page 232. The first example, Figure 159, shows two groups: data set group A
contains COURSE and INSTR, data set group B contains all of the other segments.
The second example shows a different grouping. Note the differences in DBDs
when the groups are not in sequential hierarchical order of the segments.
Figure 160 is the HDAM DBD for the first example. Note that the segments are
grouped by the DATASET statements preceding the SEGM statements and that the
segments are listed in hierarchical order. In each DATASET statement, the DD1=
parameter names the VSAM ESDS or OSAM data set that will be used. Also, each
data set group can have its own characteristics, such as device type.
DBD NAME=HDMDSG,ACCESS=HDAM,RMNAME=(DFSHDC40,8,500)
DSA DATASET DD1=DS1DD,
SEGM NAME=COURSE,BYTES=50,PTR=T
FIELD NAME=(CODCOURSE,SEQ),BYTES=10,START=1
SEGM NAME=INSTR,BYTES=50,PTR=T,PARENT=((COURSE,SNGL))
DSB DATASET DD1=DS2DD,DEVICE=2314
SEGM NAME=REPORT,BYTES=50,PTR=T,PARENT=((INSTR,SNGL))
SEGM NAME=LOC,BYTES=50,PTR=T,PARENT=((COURSE,SNGL))
SEGM NAME=STUDENT,BYTES=50,PTR=T,PARENT=((COURSE,SNGL))
SEGM NAME=GRADE,BYTES=50,PTR=T,PARENT=((STUDENT,SNGL))
DBDGEN
Figure 160. HDAM DBD for First Example of Data Set Groups
Figure 161 shows the DBD for a PHDAM database. Instead of using the DATASET
statement, use the DSGROUP parameter in the SEGM statement. The first two
segments do not have DSGROUP parameters because it is assumed that they are
in the first group.
DBD NAME=HDMDSG,ACCESS=PHDAM,RMNAME=(DFSHDC40,8,500)
SEGM NAME=COURSE,BYTES=50,PTR=T
FIELD NAME=(CODCOURSE,SEQ),BYTES=10,START=1
SEGM NAME=INSTR,BYTES=50,PTR=T,PARENT=((COURSE,SNGL))
SEGM NAME=REPORT,BYTES=50,PTR=T,PARENT=((INSTR,SNGL)),DSGROUP=B
SEGM NAME=LOC,BYTES=50,PTR=T,PARENT=((COURSE,SNGL)),DSGROUP=B
SEGM NAME=STUDENT,BYTES=50,PTR=T,PARENT=((COURSE,SNGL)),DSGROUP=B
SEGM NAME=GRADE,BYTES=50,PTR=T,PARENT=((STUDENT,SNGL)),DSGROUP=B
DBDGEN
Figure 161. PHDAM DBD for First Example of Data Set Groups
The second example, Figure 162 on page 236, differs from the first example in that
the groups do not follow the order of the hierarchical sequence. The segments must
be listed in the DBD in hierarchical sequence, so additional DATASET statements or
DSGROUP parameters are required.
Figure 163 is the DBD for an HDAM database of the second example. It is similar
to the first example, except that because the sixth segment is part of the first group,
you need another DATASET statement before it with the DSA label. The additional
DATASET label groups the sixth segment with the first three.
DBD NAME=HDMDSG,ACCESS=HDAM,RMNAME=(DFSHDC40,8,500)
DSA DATASET DD1=DS1DD,
SEGM NAME=COURSE,BYTES=50,PTR=T
FIELD NAME=(CODCOURSE,SEQ),BYTES=10,START=1
SEGM NAME=INSTR,BYTES=50,PTR=T,PARENT=((COURSE,SNGL))
SEGM NAME=REPORT,BYTES=50,PTR=T,PARENT=((INSTR,SNGL))
DSB DATASET DD1=DS2DD,DEVICE=2314
SEGM NAME=LOC,BYTES=50,PTR=T,PARENT=((COURSE,SNGL))
SEGM NAME=STUDENT,BYTES=50,PTR=T,PARENT=((COURSE,SNGL))
DSA DATASET DD1=DS1DD
SEGM NAME=GRADE,BYTES=50,PTR=T,PARENT=((STUDENT,SNGL))
DBDGEN
Figure 163. HDAM DBD for Second Example of Data Set Groups
Figure 164 is the DBD for a PHDAM database of the second example. It is similar
to the first example, except that because the sixth segment is part of the first group,
you must explicitly group it with the first two segments by using the DSGROUP
parameter.
DBD NAME=HDMDSG,ACCESS=PHDAM,RMNAME=(DFSHDC40,8,500)
SEGM NAME=COURSE,BYTES=50,PTR=T
FIELD NAME=(CODCOURSE,SEQ),BYTES=10,START=1
SEGM NAME=INSTR,BYTES=50,PTR=T,PARENT=((COURSE,SNGL))
SEGM NAME=REPORT,BYTES=50,PTR=T,PARENT=((INSTR,SNGL))
SEGM NAME=LOC,BYTES=50,PTR=T,PARENT=((COURSE,SNGL)),DSGROUP=B
SEGM NAME=STUDENT,BYTES=50,PTR=T,PARENT=((COURSE,SNGL)),DSGROUP=B
SEGM NAME=GRADE,BYTES=50,PTR=T,PARENT=((STUDENT,SNGL)),DSGROUP=A
DBDGEN
Figure 164. PHDAM DBD for Second Example of Data Set Groups
Restriction: CI reclaim does not occur for SHISAM databases. When a large
number of records in a SHISAM database are deleted, particularly a large number
of consecutive records, serious performance degradation can occur. Eliminate
empty CIs and resolve the problem by using VSAM REPRO.
Partition Selection
A partition is selected by using the root key for the DL/I call and the high key
defined for the partition. When access is restricted to a single partition and the root
key is outside the key range of the partition, status code FM or GE is returned.
If you use a partition selection exit routine, the routine is called when the DL/I call
provides a specific root key. The exit routine selects a partition based on the root
key given. If the partition selected is different from the one that the application has
access to, status code FM or GE is returned. The exit routine is not called to select
a first partition or next partition.
When access is restricted to a single partition, the first partition is always the
partition to which access is restricted, and the next partition does not exist.
XML documents can be stored in IMS databases using any combination of two
storage methods to best fit the structure of the XML document:
Decomposed XML storage
The XML tags are removed from the XML document and only the data is
extracted. The extracted data is converted into traditional IMS field types
and inserted into the database. Use this approach in the following
scenarios:
v XML applications and non-XML applications must access the same
database.
v Extensive searching of the database is needed.
v A strict XML schema is available.
Intact XML storage
The XML document is stored, with its XML structure and tags intact, in an
IMS database.
Related Reading:
v For more information about the DLIModel utility, see IMS Version 9: Utilities
Reference: System.
v For more information about storing XML data in IMS databases, see IMS Version
9: IMS Java Guide and Reference.
In this chapter:
v “Specifying Free Space (HDAM, PHDAM, HIDAM, and PHIDAM Only)”
v “Estimating the Size of the Root Addressable Area (HDAM or PHDAM Only)” on
page 242
v “Determining Which Randomizing Module to Use (HDAM and PHDAM Only)” on
page 243
v “Choosing HDAM or PHDAM Options” on page 244
v “Choosing a Logical Record Length for a HISAM Database” on page 245
v “Choosing a Logical Record Length for HD Databases” on page 248
v “Determining the Size of CIs and Blocks” on page 248
v “Buffering Options” on page 249
v “OSAM Sequential Buffering” on page 253
v “VSAM Options” on page 260
v “OSAM Options” on page 265
v “Dump Option (DUMP Parameter)” on page 265
v “Deciding Which FIELD Statements to Code in the DBD” on page 265
v “Planning for Maintenance” on page 265
To minimize the effect of insert operations after the database is loaded, allocate free
space in the database when it is initially loaded. Free space allocation in the
database will reduce the performance impact caused by insert operations, and
therefore, decrease the frequency with which HD databases must be reorganized.
For OSAM data sets and VSAM ESDS, free space is specified in the FRSPC=
keyword of the DATASET statement in the DBD. In the keyword, one or both of the
following operands can be specified:
v Free block frequency factor (fbff). The fbff specifies that every nth block or CI in a
data set group be left as free space when the database is loaded (where fbff=n).
The range of fbff includes all integer values from 0 to 100, except 1. Avoid
specifying fbff for HDAM or PHDAM databases. If you specify fbff for HDAM or
PHDAM databases and if at load time the randomizing module generates the
relative block or CI number of a block or CI marked as free space, the
randomizer must store the root segment in another block.
If you specify fbff, every nth block or CI will be considered a second-most
desirable block or CI by the HD Space Search Algorithm. This is true unless you
specify SEARCHA=1 in the DATASET macro of the DBDGEN utility. By
specifying SEARCHA=1, you are telling IMS not to search for space in the
second-most desirable block or CI.
Related Reading:
– For details on the HD Space Search Algorithm, see “How the HD Space
Search Algorithm Works” on page 103.
– For more information on the SEARCHA keyword, see IMS Version 9: Utilities
Reference: Database and Transaction Manager.
v Free space percentage factor (fspf). The fspf specifies the minimum percentage
of each block or CI in a data set group to be left as free space when the
database is loaded. The range of fspf is from 0 to 99.
Note: This free space applies to VSAM ESDS and OSAM data sets. It does not
apply to HIDAM or PHIDAM index databases or to DEDBs.
For VSAM KSDS, free space is specified in the FREESPACE parameter of the
DEFINE CLUSTER command. This VSAM parameter is disregarded for a VSAM ESDS
data set used for HIDAM, PHIDAM, HDAM, or PHDAM. This command is explained
in detail in DFSMS Access Method Services for Catalogs.
(A x B) / C = D
where:
A= the number of bytes of a database record to be stored in the root
addressable area
B= the expected number of database records
C= the number of bytes available for data in each CI or block (CI or block
size minus overhead)
D= the size you will need, in blocks or CIs, for the root addressable area.
If you have specified free space for the database, include it in your calculations for
determining the size of the root addressable area. Use the following formula to
accomplish this step:
(D x E x G) / F = H
where:
D= the size you calculated in the first formula (the necessary size of the root
addressable area in blocks or CIs)
E= how often you are leaving a block or CI in the database empty for free
space (what you specified in the fbff operand in the DBD)
F= E - 1 (that is, fbff - 1)
G= 100 / (100 - fspf), where fspf is the minimum percentage of each block or
CI you are leaving as free space (what you specified in the fspf operand in
the DBD)
H= the total size you will need, in blocks or CIs
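The two sizing formulas can be combined in a short Python sketch. The numeric values below are hypothetical and purely for illustration:

```python
import math

def raa_blocks(a_bytes_per_record, b_num_records, c_usable_bytes):
    """First formula: (A x B) / C = D, rounded up to whole blocks or CIs."""
    return math.ceil(a_bytes_per_record * b_num_records / c_usable_bytes)

def raa_blocks_with_free_space(d, fbff, fspf):
    """Second formula: (D x E x G) / F = H, with F = fbff - 1 and
    G = 100 / (100 - fspf). fbff = 0 means no free blocks (fbff = 1 is invalid)."""
    h = float(d)
    if fbff:
        h = h * fbff / (fbff - 1)     # inflate for every fbff-th block left free
    h = h * 100 / (100 - fspf)        # inflate for fspf percent free per block
    return math.ceil(h)

d = raa_blocks(1000, 50_000, 4_000)        # 12500 blocks before free space
h = raa_blocks_with_free_space(d, 2, 20)   # 31250 blocks including free space
```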
Specify the number of blocks or CIs you need in the root addressable area in the
RMNAME=rbn keyword in the DBD statement in the DBD.
Normally, one of the four randomizing modules supplied with the system will work
for your database. These modules, and the arithmetic techniques they use, are
described in detail in IMS Version 9: Customization Guide.
RMNAME=(mod,anch,rbn,bytes)
Packing density =
( Number of roots x root bytes ) /
( Number of CIs in the root addressable area x Usable space in the CI )
root bytes
The average number of bytes in each root in the root addressable area.
Usable space in the CI
The CI or block size minus (as applicable) space for the FSEAP, RAPs,
VSAM CIDF, VSAM RDF, and free space.
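As a quick check, the packing-density formula can be computed directly. The numbers below are hypothetical, chosen only to illustrate the calculation:

```python
def packing_density(num_roots, root_bytes, num_cis, usable_per_ci):
    """Packing density = (number of roots x root bytes) /
    (number of CIs in the root addressable area x usable space per CI)."""
    return (num_roots * root_bytes) / (num_cis * usable_per_ci)

# e.g. 40,000 roots averaging 120 bytes, in 1,500 CIs of 4,000 usable bytes
density = packing_density(40_000, 120, 1_500, 4_000)   # 0.8, i.e. 80% packed
```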
The database record shown in Figure 166 on page 246 is stored in three short
logical records in Figure 167 on page 246 and in two longer logical records in
Figure 168 on page 246.
In Figure 167, note the three areas of unused space. In Figure 168, there are only
two areas of unused space, rather than three, but the total size of the areas is
larger.
Segments in a database record that do not fit in the logical record in the primary
data set are put in one or more logical records in the overflow data set. More read
and seek operations, and therefore longer access time, are required to access
logical records in the overflow data set than in the primary data set. This is
especially true as the database grows in size and chains of overflow records
develop. Therefore, you should try to put the most-used segments in your database
record in the primary data set. When choosing a logical record length for the
primary data set, make it as close to the average database record length as
possible. This
results in a minimum of overflow logical records and thereby minimizes performance
problems. When you calculate the average record length, beware of unusually long
or short records that can skew the results.
A read operation reads one CI into the buffer pool. CIs contain one or more logical
records in a database record. Because of this, it takes as many read and seek
operations to access an entire database record as it takes CIs to contain it. In
Figure 170 on page 247, each CI contains two logical records, and two CIs are
required to contain the database record shown in Figure 169 on page 247.
Consequently, it takes two read operations to get these four logical records into the
buffer.
The number of read and seek operations required to access a database record
increases as the size of the logical record decreases. The question to consider is:
Do you often need access to the entire database record? If so, you should try to
choose a logical record size that will usually contain an entire database record. If,
however, you typically access only one or a few segments in a database record,
choice of a logical record size large enough to contain the average database record
is not as important.
Consider what will happen in the following setup example in which you need to read
database records, one after another:
v Your CI or block size is 2048 bytes.
v Your logical record size is 512 bytes.
v Your average database record size is 500 bytes.
v The range of your database record sizes is 300 to 700 bytes.
Because your logical and average database record sizes are about equal (512 and
500), approximately one of every two database records will be read into the buffer
pool with one read operation. (This assumption is based on the average size of
database records.) If, however, your logical record size were 650, you would access
most database records with a single read operation. An obvious trade-off exists
here, one you must consider in picking a logical record length for HISAM data sets.
If your logical record size were 650, much unused space would exist between the
end of an average database record and the last logical record containing it.
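The trade-off in the setup example can be estimated with a small sketch. Assuming record sizes are spread evenly between 300 and 700 bytes (an assumption made here only for illustration), the fraction of records retrievable with a single read is:

```python
def fraction_one_read(lr_size, min_rec, max_rec):
    """Fraction of database records that fit in one logical record,
    assuming record sizes are uniformly spread over [min_rec, max_rec]."""
    if lr_size >= max_rec:
        return 1.0
    if lr_size <= min_rec:
        return 0.0
    return (lr_size - min_rec) / (max_rec - min_rec)

fraction_one_read(512, 300, 700)   # 0.53: roughly one record in two
fraction_one_read(650, 300, 700)   # 0.875: most records
```

The larger logical record captures most records in one read, at the cost of more unused space after the average record.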
Rules to Observe
The following rules must be observed when choosing a logical record length for
HISAM data sets:
v Logical record size in the primary data set must be at least equal to the size of
the root segment, plus its prefix, plus overhead. If variable-length segments are
used, logical record size must be at least equal to the size of the longest root
segment, plus its prefix, plus overhead. Five bytes of overhead is required for
VSAM.
v Logical record size in the overflow data set must be at least equal to the size of
the longest segment in the overflow data set, plus its prefix, plus overhead. Five
bytes of overhead is required for VSAM.
v Logical record lengths in the overflow data set must be equal to or greater than
logical record length in the primary data set.
v The maximum logical record size is 30720 bytes.
v Except for SHISAM databases, logical record lengths must be an even number.
Related Reading: To determine the average size of your database records, see
“Estimating the Minimum Size of the Database” on page 311.
Related Reading: See “Determining the Size of CIs and Blocks” for information on
determining CI or block size.
As with HISAM databases, specify the length of the logical records in the
RECORD= operand of the DATASET statement in the DBD.
Track sizes vary from one device to another, and there are many different CI sizes
that you can specify. Because you can specify different CI sizes, the physical block
size that VSAM picks varies and is based on device overhead factors. For information
about using VSAM data sets, refer to DFSMS Access Method Services for
Catalogs.
Buffering Options
Database buffers are defined areas in virtual storage. When an application program
processes a segment in the database, the entire block or CI containing the segment
is read from the database into a buffer. The application program processes the
segment while it is in the buffer. If the processing involves modifying any segments
in the buffer, the contents of the buffer must eventually be written back to the
database so the database is current.
You need to choose the size and number of buffers that give you the maximum
performance benefit. If your database uses OSAM, you might also decide to use
OSAM sequential buffering. The subtopics in this topic can help you with these
decisions.
When the data an application program needs is already in a buffer, the data can be
used immediately. The application program is not forced to wait for the data to be
read from the database to the buffer. Because the application program does not
wait, performance is better. By having multiple buffers in virtual storage and by
making a buffer large enough to contain all the segments of a CI or block, you
increase the chance that the data needed by application programs is already in
virtual storage. Thus, the reason for having multiple buffers in virtual storage is to
eliminate some of an application program’s wait time.
In virtual storage, all buffers are put in a buffer pool. Separate buffer pools exist for
VSAM and OSAM. A buffer pool is divided into subpools. Each subpool is defined
with a subpool definition statement. Each subpool consists of a specified number of
buffers of the same size. With OSAM and VSAM you can specify multiple subpools
with buffers of the same size.
“Use” Chain
In the subpool, buffers are chained together in the order in which they have been
used. This organization is called a “use chain.” The most recently used buffers are
at the top of the use chain and the least recently used buffers are at the bottom.
You can also create separate subpools for VSAM KSDS index and data
components within a VSAM local shared resource pool. Creating separate subpools
can be advantageous because index and data components do not need to share
buffers or compete for buffers in the same subpool.
Hiperspace Buffering
Multiple VSAM local shared resource pools enhance the benefits provided by
Hiperspace™ buffering. Hiperspace buffering allows you to extend the buffering of
4K and multiples of 4K buffers to include buffers allocated in expanded storage in
addition to the buffers allocated in virtual storage. Using multiple local shared
resource pools and Hiperspace buffering allows data sets with certain reference
patterns (for example, a primary index data set) to be isolated to a subpool backed
by Hiperspace, which reduces the VSAM read I/O activity needed for database
processing.
Buffer Size
Pick buffer sizes that are equal to or larger than the size of the CIs and blocks that
are read into the buffer. A variety of valid buffer sizes exist. If you pick buffers larger
than your CI or block sizes, virtual storage is wasted.
For example, suppose your CI size is 1536 bytes. The smallest valid buffer size that
can hold your CI is 2048 bytes. This wastes 512 bytes (2048 - 1536) and is not a
good choice of CI and buffer size.
Buffer Numbers
Pick an appropriate number of buffers of each size so buffers are available for use
when they are needed, an optimum amount of data is kept in virtual storage during
application program processing, and application program wait time is minimized.
The trade-off in picking a number of buffers is that each buffer uses up virtual
storage.
When you initially choose buffer sizes and the number of buffers, you are making a
scientific guess based on what you know about the design of your database and
the processing requirements of your applications. After you choose and implement
buffer size and numbers, various monitoring tools are available to help you
determine how well your scientific guess worked. Monitoring is discussed in
Chapter 14, “Monitoring Databases,” on page 335.
Buffer size and number of buffers are specified when the system is initialized. Both
can be changed (tuned) for optimum performance at any time. Tuning is discussed
in Chapter 15, “Tuning Databases,” on page 341.
In order not to waste buffer space, choose a buffer size that is the same as a valid
CI size. Valid CI sizes for VSAM data clusters are:
v For data components up to 8192 bytes (or 8K bytes), the CI size must be a
multiple of 512.
v For data components over 8192 bytes (or 8K bytes), the CI size must be a
multiple of 2048 (up to a maximum of 32768 bytes).
Valid CI sizes (in bytes) for VSAM index clusters using VSAM catalogs are:
512
1024
2048
4096
Valid CI sizes for VSAM index clusters using integrated catalog facility catalogs are:
v For index components up to 8192 bytes (or 8K bytes), the CI size must be a
multiple of 512.
v For index components over 8192 bytes (or 8K bytes), the CI size must be a
multiple of 2048 (up to a maximum of 32768 bytes).
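The CI-size rules for data components (and for index components with integrated catalog facility catalogs) can be expressed as a small helper. This is a sketch for study only; the function name is invented:

```python
def smallest_valid_ci(nbytes):
    """Smallest valid VSAM CI size >= nbytes: a multiple of 512 up to
    8192 bytes, a multiple of 2048 above that, maximum 32768 bytes."""
    if nbytes <= 8192:
        return -(-nbytes // 512) * 512      # round up to a multiple of 512
    ci = -(-nbytes // 2048) * 2048          # round up to a multiple of 2048
    if ci > 32768:
        raise ValueError("exceeds the maximum CI size of 32768 bytes")
    return ci

smallest_valid_ci(1536)   # 1536: already a valid size
smallest_valid_ci(9000)   # 10240: next multiple of 2048
```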
For OSAM data sets, choose a buffer size that is the same as a valid block size so
that buffer space is not wasted. Valid block sizes for OSAM data sets are any size
from 18 to 32768 bytes.
Restriction: When using sequential buffering and the coupling facility for OSAM
data caching, the OSAM database block size must be defined in multiples of 256
bytes (decimal). Failure to define the block size accordingly can result in
ABENDS0DB from the coupling facility. This condition exists even if the IMS system
is accessing the database in read-only mode.
Specifying Buffers
Specify the number of buffers and their size when the system is initialized. Your
specifications, which are given to the system in the form of control statements, are
put in the:
v DFSVSAMP data set in batch and utility environments.
v IMS.PROCLIB data set with the member name DFSVSMnn in IMS DCCTL and
DBCTL environments.
Detailed information on how to code these control statements is located in the IMS
Version 9: Installation Volume 2: System Definition and Tailoring.
OSAM buffers can be fixed in storage using the IOBF= parameter. In VSAM, buffers
are fixed using the VSAMFIX= parameter in the OPTIONS statement. This
parameter is described under “VSAM Options” on page 260. Performance is
generally improved if buffers are fixed in storage, because page faults then do not occur. A
page fault occurs when an instruction needs a page (a specific piece of storage)
and the page is not in storage.
With OSAM, you can fix the buffers and their buffer prefixes, or the buffer prefixes
and the subpool header, in storage. In addition, you can selectively fix buffer
subpools, that is, you can choose to fix some buffer subpools and not others. Buffer
subpools are fixed using the IOBF= parameter. The format of this parameter is:
IOBF= (length,number,fix1,fix2,id)
where:
v length is the size of buffers in a subpool.
v number is the number of buffers in a subpool. If three or fewer are specified, IMS
gives you three; otherwise, it gives you the number specified. If you do not
specify a sufficient number of buffers, your application program calls could waste
time waiting for buffer space.
v fix1 is whether the buffers and buffer prefixes in this subpool need to be fixed
and is specified as Y or N (yes or no).
v fix2 is whether the buffer prefixes in this subpool and the subpool header need to
be fixed and is specified as Y or N (yes or no).
The default for the fix1 parameter is that buffers and their prefixes are not fixed.
The default for the fix2 parameter is that buffer prefixes and the subpool header
are not fixed.
v id is a parameter that specifies an identifier to be assigned to the subpool. It is
used in conjunction with the DBD statement to assign a specific subpool to a
given data set. This DBD statement is not the DBD statement used in a DBD
generation but one specified during execution, as described in IMS Version 9:
Installation Volume 2: System Definition and Tailoring. The id parameter allows
you to have more than one subpool with the same buffer size. You can use it to:
– Get better distribution of activity among subpools
– Direct new database applications to “private” subpools
– Control the contention between a BMP and MPPs for subpools
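For example, the following hypothetical IOBF= specifications (the sizes, counts, and the identifier ABC are illustrative, not recommendations) define two subpools of 2048-byte buffers, one page-fixed and assigned an identifier so that a DBD statement can direct a specific data set to it:

```
IOBF=(2048,50,Y,N,ABC)
IOBF=(2048,20,N,N)
```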
SB reduces the time needed for I/O read operations in three ways:
v By reading 10 consecutive blocks with a single I/O operation. This is called a
sequential read. Sequential reads reduce the number of I/O operations necessary
to sequentially process a database data set.
When a sequential read is issued, the block containing the segment your
program requested plus nine adjacent blocks are read from the database into an
SB buffer pool in virtual storage. When your program processes segments in any
of the other nine blocks, no I/O operations are required because the blocks are
already in the SB buffer pool.
Example: If your program sequentially processes an OSAM data set containing
100,000 consecutive blocks, 100,000 I/O operations are required using the
normal OSAM buffering method. SB can take as few as 10,000 I/O operations to
process the same data set.
v By monitoring the database I/O reference pattern and deciding if it is more
efficient to satisfy a particular I/O request with a sequential read or a random
read. This decision is made for each I/O request processed by SB.
v By overlapping sequential read I/O operations with CPC processing and other I/O
operations of the same application. When overlapped sequential reads are used,
SB anticipates future requests for blocks and reads those blocks into SB buffers
before they are actually needed by your application. (Overlapped I/O is supported
only for batch and BMP regions.)
Note: SB is possible but not recommended for short-running MPP, IFP, and
CICS programs, because SB incurs a high initialization overhead each
time such an online program is run.
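The arithmetic behind the 100,000-block example above can be checked directly. This sketch assumes every block is processed and each sequential read transfers exactly 10 blocks:

```python
# Illustrative arithmetic for the sequential-read example above.
blocks = 100_000                 # consecutive blocks in the OSAM data set
blocks_per_sequential_read = 10  # requested block plus nine adjacent blocks

normal_io = blocks                            # normal buffering: one I/O per block
sb_io = blocks // blocks_per_sequential_read  # SB: one I/O per ten blocks

print(normal_io, sb_io)
```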
v IMS utilities, including:
– Online Database Image Copy
– HD Reorganization Unload
– Partial Database Reorganization
– Surveyor
– Database Scan
– Database Prefix Update
– Batch Backout
| v HALDB Online Reorganization function
v Run additional sequential application programs within the same time period.
v Run some sequential application programs more often.
v Make online image copies much faster.
v Reduce the time needed to reorganize your databases.
Flexibility of SB Use
IMS provides several methods for requesting SB. You can request the use of SB for
specific programs and utilities during PSBGEN or by using SB control statements.
You can also request the use of SB for all or some batch and BMP programs by
using an SB Initialization Exit Routine.
These methods of controlling the use of SB are discussed in “How to Request the
Use of SB” on page 257.
What SB Buffers
As discussed in Chapter 8, “Choosing Optional Database Functions,” on page 151,
HD databases can consist of multiple data set groups. A database PCB can
therefore refer to several data set groups. A database PCB can also refer to several
data set groups when the database referenced by the PCB is involved in logical
relationships. A particular database, and therefore a particular data set group, can
be referenced by multiple database PCBs. A specific data set group referenced by a
specific database PCB is referred to in the following discussion as a DB-PCB/DSG
pair.
When SB is activated, it buffers data from the OSAM data set associated with a
specific DB-PCB/DSG pair. SB can be active for several DB-PCB/DSG pairs at the
same time, but each pair requires a separate activation.
v Temporarily deactivate monitoring of the I/O reference pattern and activity rate.
This form of temporary deactivation is implemented only if SB has been
deactivated and IMS concludes from subsequent evaluations that use of SB
would still not be beneficial.
While SB is active, all requests for database blocks not found in the OSAM buffer
pool are sent to the SB buffer handler. The SB buffer handler responds to these
requests in the following way:
v If the requested block is already in an SB buffer, a copy of the block is put into
an OSAM buffer.
v If the requested block is not in an SB buffer, the SB buffer handler analyzes a
record of previous I/O requests and decides whether to issue a sequential read
or a random read. If it decides to issue a random read, the requested block is
read directly into an OSAM buffer. If it decides to issue a sequential read, the
requested block and nine adjacent blocks are read into an SB buffer set. When
the sequential read is complete, a copy of the requested block is put into an
OSAM buffer.
v The SB buffer handler also decides when to initiate overlapped sequential reads.
Note: When processing a request from an online program, the SB buffer handler
only searches the SB buffer pools allocated to that online program.
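The buffer-handler behavior described above can be modeled with a minimal sketch. The class name and the three-consecutive-requests heuristic are illustrative assumptions; the actual decision algorithm is internal to IMS:

```python
class SBBufferHandler:
    """Toy model of the SB read decision: satisfy a request from the SB
    buffer pool if possible, otherwise choose a sequential or random read."""

    SET_SIZE = 10  # a sequential read transfers the block plus 9 adjacent blocks

    def __init__(self):
        self.sb_pool = {}   # block number -> block contents (the SB buffer pool)
        self.history = []   # recently requested block numbers
        self.io_count = 0   # physical I/O operations issued

    def _looks_sequential(self):
        # Assumed heuristic: the last three requests were consecutive blocks.
        h = self.history[-3:]
        return len(h) == 3 and h[2] - h[1] == 1 and h[1] - h[0] == 1

    def read(self, dataset, block):
        """Return the block, as if copying it into an OSAM buffer."""
        self.history.append(block)
        if block in self.sb_pool:          # already buffered: no I/O needed
            return self.sb_pool[block]
        self.io_count += 1
        if self._looks_sequential():       # sequential read: block + 9 neighbors
            for b in range(block, min(block + self.SET_SIZE, len(dataset))):
                self.sb_pool[b] = dataset[b]
            return self.sb_pool[block]
        return dataset[block]              # random read: straight to OSAM buffer

handler = SBBufferHandler()
data = [f"block-{i}" for i in range(100)]
for i in range(40):                        # a strictly sequential scan
    handler.read(data, i)
print(handler.io_count)                    # far fewer than 40 physical reads
```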
Related Reading: For information on how IMS invalidates SB buffers, see the
data-sharing chapter of IMS Version 9: Administration Guide: System.
The SB buffers are page-fixed in storage to eliminate page faults, reduce the path
length of I/O operations, and increase performance. SB buffers are page-unfixed
and page-released when a periodic evaluation temporarily deactivates SB.
You must ensure that the batch, online or DBCTL region has enough virtual storage
to accommodate the SB buffer pools. This storage requirement can be
considerable, depending upon the block size and the number of programs using
SB.
Some systems are storage-constrained only during certain periods of time, such as
during online peak times. You can use an SB Initialization Exit Routine to control
the use of SB according to specific criteria, such as the time of day.
Related Reading: For details on the SB Initialization User Exit Routine see IMS
Version 9: Customization Guide.
Determine which method you will use. Using the second method is easier because
you do not need to know which BMP and batch programs use sequential
processing. However, using SB by default can lead to an uncontrolled increase in
real and virtual storage use, which can impact system performance. Generally, if
you are running IMS in a storage-constrained z/OS environment, use the first
method. If you are running IMS in a non-storage-constrained z/OS environment, use
the second method.
The following diagram shows the syntax of the SB keyword in the PCB statement.
Detailed instructions for coding PSB statements are contained in IMS Version 9:
Utilities Reference: System.
This control statement allows you to override PSB specifications without requiring
you to regenerate the PSB.
You can specify keywords that request use of SB for all or specific DBD names, DD
names, PSB names, and PCB labels. You can also combine these keywords to
further restrict when SB is used.
By using the BUFSETS keyword of the SBPARM control statement, you can
change the number of buffer sets allocated to SB buffer pools. For details on the
SB buffer pools see “Virtual Storage Considerations for SB” on page 256. The
default number of buffer sets is four. Badly organized databases can require six or
more buffer sets for efficient sequential processing. Well-organized databases
require as few as two buffer sets. An indicator of how well-organized your database
is can be found in the optional //DFSSTAT reports.
Related Reading:
v For details on //DFSSTAT reports, see IMS Version 9: Utilities Reference:
Database and Transaction Manager.
v For information on tuning the number of buffer sets, see Chapter 15, “Tuning
Databases,” on page 341.
The example below shows the SBPARM control statement necessary to request
conditional activation of SB for all DBD names, DD names, PSB names, and PCBs.
SBPARM ACTIV=COND
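The BUFSETS keyword described earlier can be combined with ACTIV on the same statement; the value 6 here is illustrative, for a badly organized database:

```
SBPARM ACTIV=COND,BUFSETS=6
```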
Detailed instructions for coding the SBPARM control statement are contained in IMS
Version 9: Installation Volume 2: System Definition and Tailoring.
You can do this by writing your own SB exit routine or by selecting a sample SB
exit routine and copying it under the name DFSSBUX0 into IMS.SDFSRESL. An SB
exit routine allows you to dynamically control the use of SB at application
scheduling time.
Detailed instructions for the SB Initialization Exit Routine are in the IMS Version 9:
Customization Guide.
The format of the SBONLINE control statement is:
SBONLINE
or
SBONLINE,MAXSB=nnnnn
where nnnnn is the maximum storage (in kilobytes) that can be used for SB buffers.
When the MAXSB limit is reached, IMS stops allocating SB buffers to online
applications until terminating online programs release SB buffer space. By default, if
you do not specify the MAXSB= keyword, the maximum storage for SB buffers is
unlimited.
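For example, a hypothetical specification that caps SB buffer storage at 10 MB (the value is illustrative):

```
SBONLINE,MAXSB=10240
```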
Detailed instructions for coding the SBONLINE control statement are contained in
IMS Version 9: Installation Volume 2: System Definition and Tailoring.
There are three ways to disallow the use of SB. The following list describes the
three methods:
VSAM Options
Several types of options can be chosen for databases using VSAM. Options such
as free space for the ESDS data set, logical record size, and CI size are
discussed in the preceding topics in this chapter. This topic describes these
optional functions:
1. Functions specified in the OPTIONS control statement when IMS is initialized.
2. Functions specified in the POOLID, VSRBF, and DBD control statements when
IMS is initialized.
3. Functions specified in the Access Method Services DEFINE CLUSTER
command when a data set is defined.
For these reasons, when an application program needs data read into a buffer and
the buffer contains altered data, the application program waits while the buffer is
written to the database. This waiting time decreases performance. The application
program is ready to do processing, but the buffer is not available for use.
Background write is a function you can choose in the OPTIONS statement that
reduces the amount of wait time lost for this reason.
To understand how background write works, you need to know something about
how buffers are used in a subpool. You specify the number of buffers and their size.
All buffers of the same size are in the same subpool. Buffers in a subpool are on a
use chain, that is, they are chained together in the order in which they have been
most or least recently used. The most recently used buffers are at the top of the
use chain; least recently used buffers are at the bottom.
When a buffer is needed, the VSAM buffer manager selects the buffer at the bottom
of the use chain, because buffers that have not been used recently are less likely
to contain data that will be used again. If the buffer the VSAM buffer manager
picks contains altered data, the data is written to the database before the buffer is
reused. It is during this step that the application program waits.
| Background write addresses this problem as follows: when the VSAM buffer
| manager assigns a buffer in any subpool, it also looks at the next buffer on the
| use chain, which will be selected next. If that buffer contains altered data, IMS is
| notified and background write is invoked. Background write has VSAM write the
| data from some percentage of the buffers at the bottom of the use chain to the
| database, and VSAM does this for all subpools. The data that is written to the
| database still remains in the buffers, so the application program can still use any
| data in the buffers.
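A minimal model shows why cleaning the next buffer on the use chain ahead of time removes the synchronous wait. The structures and the one-buffer look-ahead are illustrative assumptions; real VSAM buffer management works on a percentage of the buffers at the bottom of the chain:

```python
from collections import deque

def acquire_buffer(use_chain, background_write=True):
    """Take the least recently used buffer (bottom of the use chain).

    Returns 1 if the caller had to wait for a synchronous write of
    altered data, 0 otherwise.  Buffers are dicts with an 'altered' flag.
    """
    waits = 0
    buf = use_chain.popleft()            # bottom of the chain = least recently used
    if buf["altered"]:
        buf["altered"] = False           # synchronous write: the program waits
        waits = 1
    if background_write and use_chain:
        nxt = use_chain[0]               # the buffer that will be selected next
        if nxt["altered"]:
            nxt["altered"] = False       # written in the background: no wait later
    use_chain.append(buf)                # reused buffer moves to the top
    return waits

chain = deque({"altered": True} for _ in range(4))
total_waits = sum(acquire_buffer(chain) for _ in range(4))
print(total_waits)                       # only the very first acquisition waits
```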
To specify free space in the DEFINE CLUSTER command, you must decide:
v Whether free space you have specified is preserved or used when more than
one root segment is inserted at the same time into the KSDS.
v Whether to split the CI at the point where the root is inserted, or midway in the
CI, when a root that causes a CI split is inserted.
These choices are specified in the INSERT= parameter in the OPTIONS statement.
INSERT=SEQ preserves the free space and splits the CI at the point where the root
is inserted. INSERT=SKP does not preserve the free space and splits the CI
at its midpoint. In most cases, specify INSERT=SEQ so that free space will be
available in the future when you insert root segments. Your application determines
which choice gives the best performance.
| ON is the default for the IMS DL/I, LOCK and retrieve traces. OFF is the default for
| all other traces. The traces can be turned on at IMS initialization time. They can
| also be started or stopped by the /TRACE command during IMS execution. Output
| from long-running traces can be saved on the system log if requested.
Related Reading: For more information on the trace parameters, see IMS Version
9: Installation Volume 2: System Definition and Tailoring.
You can specify whether buffers and IOBs are fixed in storage in the VSAMFIX=
parameter of the OPTIONS statement. If you have buffers or IOBs fixed, they are
fixed in all subpools. If you do not code the VSAMFIX= parameter, the default is
that buffers and IOBs are not fixed.
This parameter can be used in a CICS environment if the buffers were specified by
IMS.
Related Reading: Implementing the POOLID, VSRBF, and DBD control statements
and their corresponding parameters is described in detail in IMS Version 9:
Installation Volume 2: System Definition and Tailoring.
Related Reading: This command and all its parameters are described in detail in
DFSMS Access Method Services for Catalogs.
You specify free space in the FREESPACE parameter as a percentage. The format
of the parameter is FREESPACE(x,y) where:
x is the percentage of space in a CI left free when the database is loaded or
when a CI split occurs after initial load
y is the percentage of space in a control area (CA) left free when the
database is loaded or when a CA split occurs after initial load.
If you do not specify the FREESPACE parameter, the default is that no free space
is reserved in the KSDS data set when the database is loaded.
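For example, a hypothetical DEFINE CLUSTER fragment (the data set name is invented and other required parameters are omitted) that leaves 20% of each CI and 10% of each CA free at load time:

```
DEFINE CLUSTER (NAME(IMSDB.PAYROLL.KSDS) -
                INDEXED -
                FREESPACE(20,10))
```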
advantage of recovery when you specify the RECOVERY parameter, you should
specify SPEED to improve performance during initial load.
To be able to recover your data set during load, you should load it under control of
the Utility Control Facility. This utility is described in IMS Version 9: Utilities
Reference: Database and Transaction Manager.
The VSAM index consists of one or more levels, as shown in Figure 171. The first
(lowest) level is called the sequence set level. All other levels are called index set
levels. The sequence set level has a sequence set record for each CA in the
database. Each sequence set record contains a pointer to each CI in a specific CA
and the highest root segment’s key in that CI.
Index set records on the first index set level contain pointers to sequence set
records. Each pointer on the first index set level contains the address of a
sequence set record and the highest root segment key in the sequence set record
pointed to.
If no more room exists for new pointers in an index set record, a new index set
record is started on the same level. As soon as there are two index set records on
a level, a new index set record is started on the next higher level.
At the second and higher levels of the index set, the pointers are to index set
records at the next lowest level. Each pointer contains the address of an index set
record at the next lower level along with the highest key in the index set record
pointed to.
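The descent from the highest index set level down to a CI can be sketched as follows. The node layout is an illustrative model, not VSAM’s physical record format; as described above, each pointer carries the highest key of the record or CI it points to:

```python
def find_ci(node, key):
    """Descend a miniature VSAM-style index to the CI that holds `key`.

    An index-set or sequence-set record is modeled as a list of
    (high_key, child) pairs in ascending key order; at the sequence set
    level the child is a CI number.
    """
    while isinstance(node, list):
        for high_key, child in node:
            if key <= high_key:      # first entry whose high key covers `key`
                node = child
                break
        else:
            raise KeyError(key)      # key is above the highest key in the index
    return node                      # a CI number

# Two levels: one index set record over two sequence set records (one per CA).
sequence_set_ca1 = [(10, 0), (20, 1)]    # CIs 0 and 1, high keys 10 and 20
sequence_set_ca2 = [(30, 2), (40, 3)]    # CIs 2 and 3, high keys 30 and 40
index_set = [(20, sequence_set_ca1), (40, sequence_set_ca2)]

print(find_ci(index_set, 25))
```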
One option you can specify for the VSAM index that especially affects performance
is the REPLICATE | NOREPLICATE parameter in the DEFINE CLUSTER command. If
you specify REPLICATE, each record in the sequence set and the index set is
written as many times as it fits on the track. This repetition reduces rotational
delay: the disk arm is almost always close to or over a copy of the record, so very
little disk rotation is needed, and performance improves. Note, however, that
because of this repetition, the VSAM index will probably require more
direct-access space.
If you specify NOREPLICATE, records in the VSAM index are not repeated.
NOREPLICATE is the default for this parameter.
To take ’fuzzy’ image copies of a KSDS using the Database Image Copy 2 utility,
you must specify the BWO(TYPEIMS) option. BWO(TYPEIMS) takes effect only if
the KSDS is SMS-managed. Once the BWO(TYPEIMS) option has been specified,
you should also ensure that all access to the KSDS is under DFSMS 1.3 or higher.
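For example, a hypothetical DEFINE CLUSTER fragment (data set name invented; other required parameters omitted) that combines index replication with the backup-while-open option:

```
DEFINE CLUSTER (NAME(IMSDB.PAYROLL.KSDS) -
                INDEXED -
                REPLICATE -
                BWO(TYPEIMS))
```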
OSAM Options
Two types of options are available for databases using OSAM:
1. Options specified in the DBD (free space, logical record size, CI size).
These options are covered in preceding sections in this chapter.
2. Options specified in the OPTIONS control statement when IMS is initialized.
In a batch system, the options are put in the data set with the DDNAME
DFSVSAMP. In an online system, they are put in the IMS.PROCLIB data set
with the member name DFSVSMnn. Your choice of OSAM options can affect
performance, recovery, and the use of space in the database.
The OPTIONS statement is described in detail in IMS Version 9: Installation
Volume 2: System Definition and Tailoring. The statement and all its parameters
are optional.
In the online environment, the Image Copy utilities allow you to do some
maintenance without taking the database offline. These utilities let you take image
copies of databases or partitions while they are allocated to and being used by an
online IMS system.
| You can also reorganize HALDBs online, which improves the performance of your
| HALDB without disrupting access to its data. If you plan to reorganize your HALDB
| online, make sure that there is enough DASD space to accommodate the
| reorganization process.
Related Reading: DEDBs can be shared. For information on DEDB data sharing,
see IMS Version 9: Administration Guide: System and IMS Version 9: Utilities
Reference: System.
v Most of the time, SDEP segments are retrieved all at once, using the DEDB
Sequential Dependent Scan utility. If you later must relate SDEP segments to
their roots, you must plan for root identification as part of the SDEP segment
data.
v A journal can be implemented by collecting data across transactions using a
DEDB. To minimize contention, you should plan for an area with more than one
root segment. For example, a root segment can be dedicated to a
transaction/region or to each terminal. To further control resource contention, you
should assign different CIs to these root segments, because the CI is the basic
unit of DEDB allocation.
v Following is a condition you might be confronted with and a way you might
resolve it. Assume that transactions against a DEDB record are recorded in a
journal using SDEP segments and that a requirement exists to interrogate the
last 20 or so of them.
SDEP segments have a fast insert capability, but on the average, one I/O
operation is needed for each retrieved segment. The additional I/O operations
could be avoided by inserting the journal data as both an SDEP segment and a
DDEP segment and by limiting the twin chain of DDEP segments to 20
occurrences. The replace or insert calls for DDEP segments do not necessarily
cause additional I/O, because the segments can fit in the root CI. The root CI is always
accessed even if the only call to the database is an insert of an SDEP segment.
The online retrieve requests for the journal items can then be responded to by
the DDEP segments instead of the SDEP segments.
v As physical DDEP twin chains build up, I/O activity increases. The SDEP
segment type can be of some help if the application allows it.
The design calls for DDEP segments of one type to be batched and inserted as
a single segment whenever their number reaches a certain limit. An identifier
helps differentiate them from the regular journal segments. This design prevents
updates after the data has been converted into SDEP segments.
all the time. With a DEDB, the data not available is limited only to the area
affected by the failure. Because the DEDB utilities run at the level of the area,
the recovery of the failing area can be done while the rest of the database is
accessible to online processing. The currently allocated log volume must be freed
by a /DBR AREA command and used in the recovery operation. Track recovery is
also supported. The recovered area can then be dynamically allocated back to
the operational environment.
Related Reading: Make multiple copies of DEDB area data sets to make data
more available to application programs. See “Multiple Copies of an Area Data
Set” on page 272.
v Space management parameters can vary from one area to another. This
includes: CI size, UOW size, root addressable part, overflow part, and sequential
dependent part. Also, the device type can vary from one area to the other.
v It is feasible to define an area on more than one volume and have one volume
dedicated to the sequential dependent part. This implementation might save
some seek time as sequential dependent segments are continuously added at
the end of the sequential dependent part. The savings depends on the current
size of the sequential dependent part and the blocking factor used for sequential
dependent segments. If an area spans more than one volume, volumes must be
of the same type.
v Only the independent overflow part of a DEDB is extendable. Sufficient space
should be provided for all parts when DEDBs are designed. To extend the
independent overflow part of a DEDB, you must follow the procedures in
“Extending DEDB Independent Overflow Online” on page 458.
The /DISPLAY command and the POS call can help monitor the usage of auxiliary
space. Unused space in the root addressable and independent overflow parts
can be reclaimed through reorganization. Note that, in the overflow
area, space is not automatically reused by ISRT calls. To be reused at call time,
the space must amount to an entire CI, which is then made available to the ISRT
space management algorithm. Local out-of-space conditions can occur, although
some available space exists in the database.
v Adding or removing an area from a DEDB requires a DBDGEN and an ACBGEN.
Database reload is required if areas are added or deleted in the middle of
existing areas. Adding areas other than at the end changes the area sequence
number assigned to the areas. The subsequent log records written reflect this
number, which is then used for recovery purposes. If areas are added between
existing areas, prior log records will be invalid. Therefore, an image copy must be
made following the unload/reload. Be aware that the sequence of the AREA
statements in the DBD determines the sequence of the MRMB entries passed on
entry to the randomizing routine. An area does not need to be mounted if the
processing does not require it, so a DBDGEN/ACBGEN is not necessary to
logically remove an area from processing.
v Careful monitoring of the retention period of each log allows you to make an
image copy of one area at a time. Also, because the High-Speed DEDB Direct
Reorganization utility logs changes, you do not need to make an image copy
following a reorganization.
v The area concept allows randomizing at the area level, instead of randomizing
throughout the entire DEDB. This means the key might need to carry some
information to direct the randomizing routine to a specific area.
See “SDEP CI Preallocation and Reporting” for a discussion of how the size of the
UOW affects DEDB design.
Because the insert process obtains the current CI, space use and reporting are
complex. If a preallocation attempt cannot obtain the number of CIs requested, the
ISRT or sync point call receives status FS, even if there is enough space for that
particular call. The FS processing marks the area as full, and any subsequent
smaller inserts also fail.
When there are few available SDEP CIs in an area, the number that can actually be
used for SDEP inserts varies depending on the system’s insert rate. Also, the
command /DIS AREA calculates the number of SDEP CIs free as those available for
preallocation and any unused CIs preallocated to the IMS issuing the command.
Area close processing discards CIs preallocated to the IMS, and the unused CIs
are lost until the SDEP Delete utility is run. Therefore, the number of unused CIs
reported by the /DIS AREA command after area close processing is smaller because
the preallocated CIs are no longer available.
The option takes effect only if the region type is a BMP. If specified, it offers the
following advantage:
Although crossing the UOW boundary has no particular significance for most
applications, the 'GC' status code that is returned indicates this could be a
convenient time to invoke sync point processing. This is because a UOW boundary
is also a CI boundary. As explained for sequential processing, a CI boundary is a
convenient place to request a sync point.
The sync point is invoked by either a SYNC or a CHKP call, but this normally
causes position on all currently accessed databases to be lost. The application
program then has to resume processing by reestablishing position first. This
situation is not always easy to solve, particularly for unqualified G(H)N processing.
An additional advantage of this processing option is that, if a SYNC or CHKP call is
issued after a 'GC' status code, database position is kept. Database position is such
that an unqualified G(H)N call issued after a 'GC' status code returns the first root
segment of the next UOW. When a 'GC' status code is returned, no data is
presented or inserted. Therefore, the application program can request a sync point,
reissue the database call that caused the 'GC' status code, and proceed.
Alternatively, the application program can ignore the 'GC' status code, and the next
database call will work as usual.
Database recovery and change accumulation processing must buffer all log records
written between sync points. Sync points must be taken at frequent intervals to
avoid exhausting available storage. If not, database recovery might not be possible.
One or more such modules can be used with an IMS system. Only one randomizing
module can be associated with each DEDB.
Related Reading: Refer to IMS Version 9: Customization Guide for register usage
and a sample randomizing program exit (DBFHDC40).
The purpose of the randomizing module is the same as in HDAM processing. A root
search argument key field value is supplied by the application program and
converted into a relative root anchor point number. Because the entry and exit
interfaces are different, DEDB and HDAM randomizing routines are not object code
compatible. The mainline randomizing logic of an HDAM routine should not need
modification if it randomizes through the whole DEDB.
Some additional differences between DEDB and HDAM randomizing routines are as
follows:
v The ISRT algorithm attempts to put the entire database record close to the root
segment (with the exception of SDEP segments). No BYTES parameter exists to
limit the size of the record portion to be inserted in the root addressable part.
v With the DEDB, only one RAP can be defined in each root addressable CI.
v CIs that are not randomized to are left empty.
Keys that randomize to the same RAP are chained in ascending key sequence.
DEDB logic runs in parallel, so DEDB randomizing routines must be reentrant. The
randomizing routines operate out of the common storage area (CSA). If they use
operating system services like LOAD, DELETE, GETMAIN, and FREEMAIN, the
routines must abide by the same rules as described in IMS Version 9:
Customization Guide.
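The mapping a DEDB randomizing routine performs can be sketched as follows. A real routine is a reentrant assembler exit with IMS-defined register interfaces (see the DBFHDC40 sample); the hash and the parameter names here are invented for illustration:

```python
def randomize(key: bytes, num_areas: int, raps_per_area: int):
    """Map a root key to an area number and a relative root anchor
    point (RAP) within that area.  A pure function of its inputs, which
    mirrors the reentrancy requirement on real randomizing routines."""
    h = 0
    for b in key:                            # simple multiplicative hash
        h = (h * 31 + b) & 0xFFFFFFFF
    area = h % num_areas                     # randomize across the whole DEDB...
    rap = (h // num_areas) % raps_per_area   # ...then to a RAP within the area
    return area, rap

area, rap = randomize(b"CUST001234", num_areas=8, raps_per_area=500)
print(0 <= area < 8, 0 <= rap < 500)
```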
Each copy of an ADS contains exactly the same user data. Fast Path maintains
data integrity by keeping identical data in the copies during application processing.
When an application program updates data in an area, Fast Path updates that data
in each copy of the ADS. When an application program reads data from an area,
Fast Path retrieves the requested data from any one of the available copies of the
ADS. All copies of an ADS must have the same definition but can reside on
different devices and on different device types. Using copies of ADS is also helpful
in direct access device migration; for example, from a 3380 device to a 3390
device.
If an ADS fails to open during normal open processing of a DEDB, none of the
copies of the ADS can be allocated, and the area is stopped. However, when open
failure occurs during emergency restart, only the failed ADS is deallocated and
stopped. The other copies of the ADS remain available for use.
Record Deactivation
If an error occurs while an application program is updating a DEDB, it is not
necessary to stop the database or the area. IMS continues to allow application
programs to access that area, and it only prevents them from accessing the control
interval in error. If multiple copies of the ADS exist, one copy of the data is always
available. (It is unlikely that the same control interval is in error in seven copies of
the ADS.) IMS automatically deactivates a record when a count of 10 errors is
reached.
Record deactivation minimizes the effect of database failures and errors to the data
in these ways:
v If multiple copies of an area data set are used, and an error occurs while an
application program is trying to update that area, the error does not need
immediate correction. Other application programs can continue to access the
data in that area through other available copies of that area.
v If a copy of an area has errors, you can create a new copy from existing copies
of the ADS using the DEDB Data Set Create utility. The copy with the errors can
then be destroyed.
Subset Pointers
Subset pointers help you avoid unproductive get calls when you need to access the
last part of a long segment chain. These pointers divide a chain of segment
occurrences under the same parent into two or more groups, or subsets. You can
define as many as eight subset pointers for any segment type, dividing the chain
into as many as nine subsets. Each subset pointer points to the start of a new
subset.
Related Reading: For more information on defining and using subset pointers, see
the topic about Processing DEDBs with Subset Pointers in IMS Version 9:
Application Programming: Database Manager.
Restrictions: When you unload and reload a DEDB containing subset pointers,
IMS does not automatically retain the position of the subset pointers. When
unloading the DEDB, you must note the position of the subset pointers, storing the
information in a permanent place. (For example, you could append a field to each
segment, indicating which subset pointer, if any, points to that segment.) Or, if a
segment in a twin chain can be uniquely identified, identify the segment a subset
pointer is pointing to and add a temporary indication to the segment for reload.
When reloading the DEDB, you must redefine the subset pointers, setting them to
the segments to which they were previously set.
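The bookkeeping that this restriction requires can be sketched as follows, under the illustrative assumption that each segment in the twin chain carries a unique key field (the function names are invented):

```python
def note_pointer_targets(chain, subset_pointers):
    """Before unload: record the key of the segment each subset pointer
    refers to, so the position survives in a permanent place."""
    return {ptr: chain[pos]["key"] for ptr, pos in subset_pointers.items()}

def redefine_pointers(reloaded_chain, saved_targets):
    """After reload: set each subset pointer back to the segment that
    carries its recorded key."""
    position_of = {seg["key"]: i for i, seg in enumerate(reloaded_chain)}
    return {ptr: position_of[key] for ptr, key in saved_targets.items()}

# A twin chain of four segment occurrences; SSPTR1 marks the start of a subset.
chain = [{"key": k} for k in ("A", "B", "C", "D")]
saved = note_pointer_targets(chain, {"SSPTR1": 2})   # SSPTR1 -> segment "C"
print(redefine_pointers(chain, saved))
```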
v How are virtual storage requirements for the Fast Path buffer pool calculated?
v What are the storage requirements for the I/O area?
v Should FLD calls or other DL/I calls be used for improved MSDB and DEDB
performance?
v How can the difference in resource allocation between an MSDB and a DL/I
database be a key to good performance?
v What are the requirements in designing for minimum resource contention in a
mixed-mode environment?
v How is the number of MSDB segments loaded into virtual storage controlled?
v What are the auxiliary storage requirements for an MSDB?
v How can an MSDB be checkpointed?
The different enqueue levels of an MSDB record, when a record is enqueued, and
the duration are summarized in Table 19.
Table 19. Levels of Enqueue of an MSDB Record

Enqueue Level  When                                  Duration
READ           GH with no update intent              From call time until sync point (phase 1)¹
               VERIFY/get calls                      Call processing
HOLD           GH with update intent                 From call time until sync point (phase 1)¹
               At sync point, to reapply VERIFYs     Phase 1 of sync point processing, then released
UPDATE²        At sync point, to apply the results   Sync point processing, then released
               of CHANGE, REPL, DLET, or ISRT calls
Notes:
1. If there was no FLD/VERIFY call against this resource or if this resource is not
going to be updated, it is released. Otherwise, if only FLD/VERIFY logic has to
be reapplied, the MSDB record is enqueued at the HOLD level. If the same
record is involved in an update operation, it is enqueued at the UPDATE level
as shown in the table above.
2. At DLET/REPL call time, no enqueue activity takes place because it is the prior
GH call that set up the enqueue level.
Table 20 shows that the status of an MSDB record depends on the enqueue level of
each program involved. Therefore, it is possible for an MSDB record to be
enqueued with the shared and exclusive statuses at the same time. For example,
such a record can be shared between program A (GH call for update) and program
B (GU call), but cannot be shared at the same time with a third program, C, which
is entering sync point with update on the record.
Table 20. Example of MSDB Record Status: Shared (S) or Owned Exclusively (E)

                            Enqueue Level in Program A
Enqueue Level in Program B  READ  HOLD  UPDATE
READ                        S     S     E
HOLD                        S     E     E
UPDATE                      E     E     E
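The compatibility rules in Table 20 can be expressed as a simple lookup. The following sketch is illustrative only (it is not IMS code); it encodes the matrix and checks the sharing example described in the surrounding text:

```python
# Illustrative sketch (not IMS code): the shared/exclusive status from
# Table 20 as a lookup keyed by the enqueue levels held by two programs.
STATUS = {
    ("READ", "READ"): "S",
    ("READ", "HOLD"): "S",
    ("READ", "UPDATE"): "E",
    ("HOLD", "READ"): "S",
    ("HOLD", "HOLD"): "E",
    ("HOLD", "UPDATE"): "E",
    ("UPDATE", "READ"): "E",
    ("UPDATE", "HOLD"): "E",
    ("UPDATE", "UPDATE"): "E",
}

def can_share(level_a: str, level_b: str) -> bool:
    """True if programs holding these two enqueue levels can share the record."""
    return STATUS[(level_a, level_b)] == "S"

# Per the example in the text: program A (GH call for update, HOLD level) and
# program B (GU call, READ level) can share the record, but a program entering
# sync point with an update (UPDATE level) excludes both.
assert can_share("HOLD", "READ")
assert not can_share("UPDATE", "READ")
```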
If FLD/CHANGE and FLD/VERIFY calls are mixed in the same FLD call, when the
first FLD/VERIFY call is encountered, the level of enqueue is set to READ for the
remainder of the FLD call.
– The FLD/CHANGE call never waits for any resource, even if that same
resource is being updated in sync point processing.
– The FLD/VERIFY call waits only for sync point processing during which the
same resource is being updated.
– With FLD logic, the resource is held in exclusive mode only during sync point
processing.
In summary, programming with FLD logic can contribute to higher transaction rates
and shorter response times.
The following examples, Figure 172 and Figure 173, show how the MSDB record is
held in exclusive mode:
called by executing the control region startup procedure IMS. The suffix 'x' matches
the parameter supplied in the MSDB keyword of the EXEC statement in procedure
IMS.
The control information that loads and page fixes MSDBs is in 80-character record
format in member DBFMSDBx. You can supply this information yourself, or it can
be taken from the output of the MSDB maintenance utility. When the /NRE command
requests MSDBLOAD, the definition of the databases to be loaded is found in the
DBFMSDBx procedure.
| The definition in DBFMSDBx can represent a subset of the MSDBs currently on the
| sequential data set identified by DD statement MSDBINIT. Explicitly state each
| MSDB that you want IMS to load; if an MSDB is not explicitly stated, IMS
| abends.
DBD=dbd_name, NBRSEGS=nnnnnnnn
,F
dbd_name
The DBD name as specified during DBDGEN.
nnnnnnnn
The expected number of database segments for this MSDB. This number
must be equal to or greater than the number of MSDB segments loaded
during restart.
The NBRSEGS parameter is also used to reserve space for terminal-related
dynamic MSDBs for which no data has to be initially loaded.
F The optional page-fix indicator for this MSDB.
If the MSDBs are so critical to your Fast Path applications that IMS should not run
without them, place a card image at the beginning of the DBFMSDBx member. For
each card image, the characters “MSDBABND=n” must be typed without blanks,
and all characters must be within columns 1 and 72 of the card image. Five
possible card images exist, and each contains one of the following sets of
characters:
MSDBABND=Y
This card image causes the IMS control region to abend if an error occurs while
loading the MSDBs during system initialization. Errors include:
v Open failure on the MSDBINIT data set
v Error in the MSDB definition
v I/O error on the MSDBINIT data set
MSDBABND=C
This card image causes the IMS control region to abend if an error occurs while
writing the MSDBs to the MSDBCP1 or MSDBCP2 data set in the initial
checkpoint after IMS startup.
MSDBABND=I
This card image causes the IMS control region to abend if an error occurs
during the initial load of the MSDBs from the MSDBINIT data set, making one
or more of the MSDBs unusable. These errors include data errors in the
MSDBINIT data set, no segments in the MSDBINIT data set for a defined
MSDB, and those errors described under “MSDBABND=Y.”
MSDBABND=A
This card image causes the IMS control region to abend if an error occurs
during the writing of the MSDBs to the MSDBCPn data set (described in
“MSDBABND=C”), or during the initial load of the MSDBs from the MSDBINIT
data set (described in “MSDBABND=I”).
MSDBABND=B
This card image causes the IMS control region to abend if an error occurs
during the writing of the MSDBs to the MSDBCPn data set (described in
“MSDBABND=C”), or during the loading of the MSDBs in system initialization
(described in “MSDBABND=Y”).
The data sets just discussed are written in 2K-byte blocks. Because only the first
extent is used, the allocation of space must be on cylinder boundaries and be
contiguous.
The calculation of the number of records (R) to be allocated can be derived from
the formula:
(E + P + 2047)/2048
where:
E = main storage required, in bytes, for the Fast Path extension of the
CNTs (ECNTs)
E = (20 + 4D)T
where:
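As a worked sketch of the record calculation above, the following illustrative Python fragment computes R from E and P using integer (ceiling) arithmetic. The values of D, T, and P used here are hypothetical; their definitions continue in the full formula description:

```python
def ecnt_bytes(d: int, t: int) -> int:
    """E = (20 + 4D)T, the main-storage requirement in bytes for the ECNTs.
    D and T are as defined in the full formula description (illustrative here)."""
    return (20 + 4 * d) * t

def records_to_allocate(e_bytes: int, p_bytes: int) -> int:
    """R = (E + P + 2047)/2048, rounded up to whole 2048-byte records."""
    return (e_bytes + p_bytes + 2047) // 2048

# Hypothetical values for illustration only: D = 3, T = 100,
# and P = 10,000 bytes (P is defined in the full formula description).
e = ecnt_bytes(3, 100)                  # (20 + 12) * 100 = 3200 bytes
print(records_to_allocate(e, 10_000))   # ceiling of 13200/2048 = 7
```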
Why HSSP?
Some of the reasons you might choose to use HSSP are that it:
v Generally has a faster response time than regular batch processing.
v Optimizes sequential processing of DEDBs.
v Reduces program execution time.
v Typically produces less output than regular batch processing.
v Reduces DEDB updates and image copy operation times.
v Produces image copies that can assist in database recovery.
v Locks at the UOW level to ease “bottlenecking” of cross-IRLM communication.
v Uses private buffer pools, reducing the impact on NBA/OBA buffers.
v Allows for execution in both a mixed mode environment, concurrently with other
programs, and in an IRLM-using global sharing environment.
v Optimizes database maintenance by allowing the use of the image-copy option
for an updated database.
v PROCOPT=H must be used with other Fast Path processing options, such as
GH and IH.
v When a GC status code is returned, the program must cause a commit process
before any other call can be made to that PCB.
v HSSP image copying is not allowed if PROCOPT ¬=H.
v An ACBGEN must be done to activate the PROCOPT=H.
v H is compatible with all other PROCOPTs except for PROCOPT=O.
Using HSSP
To use HSSP, you must specify a new PROCOPT option, 'H', during PSBGEN (see
“HSSP Processing Option H (PROCOPT=H)”). Additionally, you need to make sure
that the programs using HSSP properly process the 'GC' status code by following it
with a commit process.
HSSP includes the image-copy option and the ability to set area ranges. To use
these functions, you need one or more of the following:
v The SETR statement
v The SETO statement
v A DFSCTL data set for the dependent regions
v DBRC
v PROCOPT=H
Related Reading: For more information about the SETR and SETO control
statements, refer to IMS Version 9: Installation Volume 2: System Definition and
Tailoring.
Related Reading:
v For information on PROCOPT=H rules, see “Limitations and Restrictions When
Using HSSP” on page 280.
v For more information on H processing, see IMS Version 9: Installation Volume 2:
System Definition and Tailoring.
Image-Copy Option
Selecting the image-copy option with HSSP reduces the total elapsed times of
DEDB updates and subsequent image-copy operations.
The image copy process can only be done if a database is registered with DBRC.
In addition, image copy data sets must be initialized in DBRC.
HSSP image copies can also be used for database recovery. However, the
Database Recovery Utility must know that an HSSP image copy is supplied.
Related Reading: For information on DBRC databases and HSSP, and on created
image copies, refer to the IMS Version 9: Operations Guide and the IMS Version 9:
Database Recovery Control (DBRC) Guide and Reference.
For information on image copies and recovery, refer to IMS Version 9: Utilities
Reference: System.
UOW Locking
In a globally shared environment, data is shared not only between IMS subsystems,
but also across central processor complexes (CPC). In such an environment,
communication between two IRLMs can become a bottleneck. To ease this
problem, HSSP locks at a UOW level in update mode,
reducing the locking overhead. Non-HSSP or DEDB online processing locks at a
UOW level in a shared mode. Otherwise, the locking for DEDB online processing is
at the CI level. For information on UOW locking, refer to IMS Version 9:
Administration Guide: System.
HSSP jobs use a combination of private buffer pools and common buffers
(NBA/OBA). HSSP dynamically allocates up to three times the number of CIs per
area in one UOW, with each buffer being a CI in size. The private buffer pools are
located in ECSA/CSA. HSSP uses the private buffers for reading RAP CIs, and
common buffers for reading IOVF CIs. An FW status code can be received during
the run of an HSSP job when the NBA has been exceeded, just as in a non-HSSP job.
DBFX
System buffer allocation.
This is a set of buffers in the Fast Path buffer pool that is page fixed at startup
of the first region with access to Fast Path resources.
BSIZ
Buffer size.
The size must be larger than or equal to the size of the largest CI of any DEDB
to be processed. The buffer size can be up to 28 KB.
Buffer Requirements
Fast Path buffers are used to hold:
v Update information such as:
– MSDB FLD/VERIFY call logic
– MSDB FLD/CHANGE call logic
– MSDB updates (results of REPL, ISRT, and DLET calls)
– Inserted SDEP segments
v Referenced DEDB CIs from the root addressable part and the sequential
dependent part.
v Updated DEDB CIs from the root addressable part.
v SDEP segments that have gone through sync point. The SDEP segments are
collected in the current SDEP segment buffer. One such buffer exists for
each area defined with the SDEP segment type. This allocation takes place
at area open time.
Before satisfying any request from the NBA allocation, an attempt is made to reuse
any already allocated buffer containing an SDEP CI. This process goes on until the
NBA limit is reached. From that point on, a warning in the form of an 'FW' status
code returned to Fast Path database calls is sent to BMP regions. MD and MPP
regions do not get this warning.
The next request for an additional buffer causes the buffer stealing facility to be
invoked and then the algorithm examines each buffer and CI already allocated. As a
result, buffers containing CIs being released are sent to a local queue (SDEP buffer
chain) to be reused by this sync interval.
If, after invoking the buffer stealing facility, no available buffer is found, a request for
the overflow buffer latch is issued. The overflow buffer latch governs the use of an
additional buffer allocation called overflow buffer allocation (OBA). This allocation is
also specified as a parameter at region start time. From that point on, any time a
request cannot be satisfied locally, a buffer is acquired from the OBA allocation until
the OBA limit is reached. At that time, BMP regions have their 'FW' status
code replaced by an 'FR' status code after an internal ROLB call is performed. In
MD and MPP regions, the transaction is abended and stopped.
where:
v DBBF: Fast Path buffer pool size as specified
v A: Number of active areas that have SDEP segments
v NBA: Normal buffer allocation of each active region
v N: Total of all NBAs
v OBA: Largest overflow buffer allocation
v DBFX: System buffer allocation
A DBFX value that is too small is likely to cause region waits and increase
response time.
An NBA value that is too small might cause the region processing to be serialized
through the overflow buffer latch and again cause delays.
An NBA value that is too large can increase the probability of contention (and
delays) for other transactions. All CIs can be acquired at the exclusive level and be
kept at that level until the buffer stealing facility is invoked. This occurrence
happens after the NBA limit is reached. Therefore, an NBA that is too large can
increase resource contention.
A (NBA + OBA) value that is too small might result in more frequent unsuccessful
processing. This means an 'FR' status code condition for BMP regions, or
transaction abend for MD and MPP regions.
| IMS logs information about buffers and their use to the X’5937’ log. This information
| can be helpful in determining how efficiently the Fast Path buffers are being used.
The first occurrence of the 'FW' status code indicates no more NBA buffers exist.
This occurrence is a convenient point at which to request a sync point. Fast Path
resources (and others) would be released and the next sync point interval would be
authorized to use a new set of NBA buffers. The overflow buffer latch serializes all
the regions in an overflow buffer state and therefore causes delays in their
processing.
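The commit-at-first-'FW' strategy suggested above can be sketched as follows. This is an illustrative fragment only; dli_call and sync_point are hypothetical stand-ins for a region's DL/I interface, not real IMS APIs:

```python
# Illustrative sketch only: 'dli_call' and 'sync_point' are hypothetical
# stand-ins used to show the commit-on-FW idea for a BMP region.
def process(calls, dli_call, sync_point):
    """Issue calls; at the first 'FW' (NBA exhausted), request a sync point
    so the next sync interval is authorized a fresh set of NBA buffers."""
    for call in calls:
        status = dli_call(call)
        if status == "FW":
            sync_point()       # release Fast Path (and other) resources
        elif status == "FR":   # NBA + OBA exhausted: interval rolled back
            raise RuntimeError("buffer allocation exceeded; interval rolled back")

# Simulated run: the third call reports 'FW', so a sync point is requested.
log = []
statuses = iter(["  ", "  ", "FW", "  "])
process(range(4), lambda c: next(statuses), lambda: log.append("SYNC"))
print(log)  # ['SYNC']
```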
Except for the following case, there is no buffer look-aside capability across
transactions or sync intervals (global buffer look-aside).
Assume that a region requests a DEDB CI resource that is currently being written
or is owned by another region that ends up being written (output thread
processing). Then, this CI and the buffer are passed to the requestor after the
write completes successfully (no read required). Any other regions must read it
from disk.
CNBA is the normal buffer allocation of each active CCTL region. FPB is the normal
buffer allocation for CCTL threads.
When the CCTL connects to DBCTL, the number of CNBA buffers is page fixed in
the fast path buffer pool. However, if CNBA buffers are not available, the connect
fails.
Each CCTL thread that requires DEDB buffers is assigned its fast path buffers
(FPB) out of the total number of CNBA buffers.
For more information about the CCTLNBA parameter, refer to IMS Version 9:
Administration Guide: System.
Before satisfying any request from the NBA allocation, an attempt is made to reuse
any already allocated buffer containing an SDEP CI. This process goes on until the
NBA limit is reached. From that point on, a warning in the form of an 'FW' status
code returned to Fast Path database calls is sent to BMP regions.
The next request for an additional buffer causes the buffer stealing facility to be
invoked and then the algorithm examines each buffer and CI already allocated. As a
result, buffers containing CIs being released are sent to a local queue (SDEP buffer
chain) to be reused by this sync interval.
If, after invoking the buffer stealing facility, no available buffer is found, a request for
the overflow buffer latch is issued. The overflow buffer latch governs the use of an
additional buffer allocation, OBA. This allocation is also specified as a parameter at
region start time. From that point on, any time a request cannot be satisfied locally,
a buffer is acquired from the OBA allocation until the OBA limit is reached. At that
time, BMP regions have their 'FW' status code replaced by an 'FR' status code after
an internal ROLB call is performed.
Before satisfying any request from the FPB allocation, an attempt is made to reuse
any already allocated buffer containing an SDEP CI. This process goes on until the
FPB limit is reached. From that point on, a warning in the form of an 'FW' status
code returned to Fast Path database calls is sent to the CCTL threads.
The next request for an additional buffer causes the buffer stealing facility to be
invoked, and then the algorithm examines each buffer and CI already allocated. As
a result, buffers containing CIs being released are sent to a local queue (SDEP
buffer chain) to be reused by this sync interval.
If, after invoking the buffer stealing facility, no available buffer is found, a request for
the overflow buffer latch is issued. The overflow buffer latch governs the use of an
additional buffer allocation, OBA (FPOB). From that point on, any time a request
cannot be satisfied locally, a buffer is acquired from the FPOB allocation until the
FPOB limit is reached. At that time, CCTL threads have their 'FW' status code
replaced by an 'FR' status code after an internal ROLB call is performed.
Determining the Size of the Fast Path Buffer Pool for DBCTL
The number of buffers required is calculated using the following formula:
DBBF ≥ A + N + LO + DBFX + CN
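As an illustrative sizing sketch of the formula above: the variable meanings are assumed from the earlier buffer descriptions (A for areas with SDEP segments, N for the total of all NBAs, LO for the largest OBA, DBFX for the system buffer allocation, and CN for the CCTL buffer total), and the values shown are hypothetical:

```python
def min_dbbf(a: int, n: int, lo: int, dbfx: int, cn: int) -> int:
    """Minimum Fast Path buffer pool size for DBCTL:
    DBBF >= A + N + LO + DBFX + CN (symbol meanings assumed as in the
    surrounding buffer descriptions; this is a sketch, not IMS code)."""
    return a + n + lo + dbfx + cn

# Hypothetical values: 4 SDEP areas, NBAs totaling 60, largest OBA of 10,
# a DBFX of 40, and a CNBA total of 30.
print(min_dbbf(4, 60, 10, 40, 30))  # 144
```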
A DBFX value that is too small is likely to cause region waits and increase
response time.
An NBA/FPB value that is too small might cause the region processing to be
serialized through the overflow buffer latch and again cause delays.
An NBA/FPB value that is too large can increase the probability of contention (and
delays) for other BMPs and CCTL threads. All CIs can be acquired at the exclusive
level and be kept at that level until the buffer stealing facility is invoked. This
happens after the NBA limit is reached. Therefore, an NBA/FPB that is too large
can increase resource contention. Also, an FPB value that is too large means that
fewer CCTL threads can concurrently schedule fast path PSBs.
A (NBA + OBA) value that is too small might result in more frequent unsuccessful
processing. This means an 'FR' status code condition for BMP regions and CCTL
threads.
Inquiry-only BMP or CCTL programs do not make use of the overflow buffer
specification logic, as buffers already allocated are reused when the NBA/FPB limit
is reached.
| IMS logs information about buffers and their use to the X’5937’ log. This information
| can be helpful in determining how efficiently the Fast Path buffers are being used.
The first occurrence of the 'FW' status code indicates no more NBA/FPB buffers
exist. This occurrence is a convenient point at which to request a sync point. Fast
Path resources (and others) would be released and the next sync point interval
would be authorized to use a new set of NBA/FPB buffers. The overflow buffer latch
serializes all the regions in an overflow buffer state and therefore causes delays in
their processing.
Consider the special case: The BMP region loads or processes a DEDB and is the
only activity in the system. For example, assume that an NBA of 20 buffers exists.
To avoid a wait-for-buffer condition, the DBFX value must be between one and two
times the NBA value. This can result in a DBBF specification of three times the
NBA number, giving 60 buffers to the Fast Path buffer pool.
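The arithmetic of this special case works out as follows (illustrative only):

```python
# Special case from the text: a lone BMP region with NBA = 20 buffers.
nba = 20
dbfx_low, dbfx_high = nba, 2 * nba   # DBFX between one and two times NBA
dbbf = 3 * nba                        # DBBF of three times NBA

print(dbfx_low, dbfx_high, dbbf)      # 20 40 60: 60 buffers in the pool
```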
Except for the following case, there is no buffer look-aside capability across BMP
regions and CCTL threads or sync intervals (global buffer look-aside).
Assume that a region requests a DEDB CI resource that is currently being written
or is owned by another region that ends up being written (output thread
processing). Then, this CI and the buffer are passed to the requestor after the
successful completion of the write (no read required). Any other BMP regions and
CCTL threads must read it from disk.
Before an application program can use the database, you must tell IMS the
application program’s characteristics and use of data and terminals. You tell IMS the
application program characteristics by coding and generating a PSB (program
specification block).
Finally, before an application program can be scheduled for execution, IMS needs
the PSB and DBD information for the application program available in a special
internal format called an ACB (application control block).
This chapter examines the following areas of implementing your database design:
v “Coding Database Descriptions as Input for the DBDGEN Utility”
| v “Implementing HALDB Design” on page 294
v “Coding Program Specification Blocks as Input to the PSBGEN Utility” on page
301
v “Building the Application Control Blocks (ACBGEN)” on page 304
v “Defining Generated Program Specification Blocks for SQL Applications” on page
305
Figure 174 illustrates the DBD generation process. Figure 175 on page 292 shows
the input to the DBDGEN utility. Separate input is required for each database being
defined.
The DATASET statement is not allowed for HALDBs. Use either the HALDB
Partition Definition utility to define HALDB partitions or the DBRC commands
INIT.DB and INIT.PART.
If the database is a DEDB, the AREA statement is used instead of the DATASET
statement. The AREA statement defines an area in the DEDB. Up to 2048 AREA
statements can be used to define multiple areas in the database. All AREA
statements must be put between the DBD statement and the first SEGM statement.
Figure 176. Example of a Date Field within a Segment Defined as Three 2–Byte Fields and
One 6–Byte Field
This technique allows application programs to access the same piece of data in a
variety of ways. To do so, you code a separate FIELD statement for each field. For
the example shown, you would code four FIELD statements: one for the total
6-byte date and one for each of the three 2-byte fields in the date.
Restriction: The LCHILD statement cannot be specified for the primary index of a
PHIDAM database because the primary index is automatically generated.
Restriction: The CONST parameter is not allowed for a HALDB. Shared secondary
indexes are not supported.
Related Reading: Detailed instructions for coding DBD statements and examples
of DBDs are contained in IMS Version 9: Utilities Reference: System.
| Related Reading: The Complete IMS HALDB Guide, published by IBM Redbooks™
| for the Version 8 release of IMS, contains a comprehensive discussion of HALDBs.
The HALDB Partition Definition utility is accessed through ISPF panels in a TSO
session. You can perform the following tasks on the HALDB master and its
partitions:
| v Register a new HALDB master database with DBRC.
v Add HALDB partitions to an existing HALDB.
v Find, view, sort, copy, modify, delete, and print HALDB partitions.
v Define and modify data set groups.
v Edit HALDB information.
| Related Reading: For information on using the DBDGEN utility to create a HALDB
| master database, see:
| v Figure 161 on page 235 for an example of the DBD for PHDAM
| v “Coding Database Descriptions as Input for the DBDGEN Utility” on page 291
| v IMS Version 9: Utilities Reference: System
| When you define the first HALDB partition, you must also register the HALDB
| master database in the DBRC RECON data set. You can use either the HALDB
| Partition Definition utility or the DBRC INIT.DB and INIT.PART commands to do this.
| The HALDB Partition Definition utility does not impact RECON data set contention
| of online IMS subsystems. The RECON data set is reserved only for the time it
| takes to process a DBRC request. It is not held for the duration of the utility
| execution.
Related Reading: For additional information on HALDB and the RECON data set,
see IMS Version 9: Database Recovery Control (DBRC) Guide and Reference.
When defining HALDB partitions using the Partition Definition utility, you must
provide information such as the partition name, data set prefix name, and high key
value. Whenever possible, the Partition Definition utility provides default values for
required fields.
If you use a logon procedure, you must log on again and specify logon with the
new procedure. If you use allocation commands, they must be issued outside of
ISPF. After you allocate the data sets and restart ISPF, restart the Install/IVP
dialog, return to this task description, and continue with the remaining steps.
3. Start the HALDB Partition Definition utility from the ISPF command line by
issuing the following command:
TSO %DFSHALDB
You can use the F2 key to split the screen and view these instructions online
while viewing the HALDB partition definition panels at the same time.
4. Specify the name of the database. Fill in the first partition name as shown in
Figure 177 on page 297. Fill in the data set name prefix using the data set
name for your data set instead of the high level qualifier shown in Figure 177 on
page 297. You should, however, specify the last qualifier as IVPDB1A to match
cluster names previously allocated.
| Recommendation: When naming your partitions, use a naming sequence that
| allows you to add new names later without disrupting the sequence. For
| example, if you name your partitions xxxx010, xxxx020 and xxxx030 and then
| later split partition xxxx020 because it has grown too large, you can name the
| new partition xxxx025 without disrupting the order of your naming sequence.
Help
---------------------------------------------------------------
Partition Default Information
Processing Options
Automatic Definition . . . .No
Input data set . . . . . . .
Use defaults for DS groups .No
Free Space
Free block freq. factor . 0
Free space percentage . . 0
DBRC options
Max. image copies . . . .2
Recovery period . . . . .0
Recovery utility JCL . . RECOVJCL
Default JCL . . . . . . .________
Image Copy JCL . . . . . ICJCL
Online image copy JCL . .OICJCL
Receive JCL . . . . . . .RECVJCL
Reusable? . . . . . . . .No
Command = = = >
| 5. Define your partitions in the Change Partition panel. Make sure that the name of
| the partition and the data set name prefix are correct and then define a high key
| for the partition.
| The high key identifies the highest root key of any record that the partition can
| contain and is represented by a hexadecimal value that you enter directly into
| the Partition High Key field of the Change Partition panel. Press F5 to accept
| the hexadecimal value and display its alphanumeric equivalent in the right
| section of the Partition High Key field.
| You can enter the partition high key value using alphanumeric characters by
| pressing F5 before making any changes in the hexadecimal section of the
| Partition High Key field. This displays the ISPF editing panel. The alphanumeric
| input you enter in the editing panel displays in both hexadecimal and
| alphanumeric formats in the Change Partition Panel when you press F3 to save
| and exit the ISPF editor.
| The last partition you define for a HALDB must have a high key value of X'FF'.
| This ensures that the keys of all records entered into the HALDB will be lower
| than the highest high key in the HALDB. The Partition Definition utility fills all
| remaining bytes in the Partition High Key field with hexadecimal X'FF'.
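The way records route to partitions by high key can be sketched as follows. This is a simplification for illustration, with hypothetical partition names and keys; actual selection is performed by IMS and can also involve a partition selection exit routine:

```python
import bisect

# Illustrative sketch: a record belongs to the first partition whose high key
# is >= the record's root key. The last partition's high key is all X'FF'.
partitions = [
    ("IVPDB11", b"\x30" * 8),   # hypothetical names and 8-byte high keys
    ("IVPDB12", b"\x60" * 8),
    ("IVPDB13", b"\xff" * 8),   # final partition: high key of all X'FF'
]

def select_partition(root_key: bytes) -> str:
    """Return the name of the partition that would hold this root key."""
    highs = [hk for _, hk in partitions]
    i = bisect.bisect_left(highs, root_key)   # first high key >= root_key
    return partitions[i][0]

print(select_partition(b"\x55" * 8))  # IVPDB12
```

Because the final high key is all X'FF', every possible root key falls into some partition, which is why the last partition must be defined that way.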
| When you finish defining the partition high key, press Enter to create the
| partition. The Change Partition panel remains active so that you can create
| additional partitions. To create additional partitions, you must change the
| partition name and the partition high key.
| Figure 178 on page 298 is an example of the Change Partition panel. The
| Partition High Key field includes sample input.
6. When you finish defining partitions, press the cancel key (F12) to exit the
Change Partition panel. A list of partitions defined in the current session
displays.
To exit the HALDB Partition Definition utility entirely, press F12 again.
| Help
| ---------------------------------------------------------------
| Change Partition
| Type the field values. Then press Enter.
|
| Database name..........IVPDB1
| Partition name.........IVPDB11
| Partition ID...........1
| Data set name prefix...IXUEXEHQ.IVPDB1A
| Partition Status......._______
|
|
| Partition High Key
| +00 57801850 00F7F4F2 40C5A585 99879985 | ...&.742 Evergre |
| +10 859540E3 85999981 | en Terra |
|
| Free Space
| Free block freq. factor...0
| Free space percentage.....0
|
| Attributes for data set group A
| Block Size................8192
|
| DBRC options
| Max. image copies.........2
| Recovery period...........0
| Recovery utility JCL......_________
| Image copy JCL............ICJCL
| Online image copy JCL.....OICJCL
| Receive JCL...............RECVJCL
| Reusable?.................No
|
|
| Command = = = >
||
| Figure 178. Change Partition Panel
|
| Automatic and Manual HALDB Partition Definition: You can choose either
automatic or manual partition definition by specifying Yes or No in the Automatic
Definition field in the Processing Options section of the Partition Default Information
| panel.
| Entering Yes in the Automatic Definition field specifies that the Partition Definition
| utility automatically defines your HALDB partitions. You must have previously
| created a data set and it must contain your HALDB partition selection strings.
| Specify the name of the data set in the Input data set field.
| Entering No in the Automatic Definition field specifies that you define your HALDB
| partitions manually. “Creating HALDB Partitions With the Partition Definition Utility”
| on page 295 explains this process. You can still use an input data set when you
| define HALDB partitions manually.
Use the HALDB Partition Definition utility to export a HALDB definition. The
database information is stored, as an ISPF table, in the partitioned data set that
you specify; the data set must therefore have the attributes of ISPTLIB data sets
(record format = fixed block, record length = 80, data set organization = PDS or
PDS/E).
The output from the export of a HALDB is a member of a PDS. The information
about the HALDB is saved in the form of an ISPF table. The ISPF table becomes
input for the import process.
The import can be performed from the HALDB Partition Definition utility or a batch
job.
To import a database using a batch job, submit a batch ISPF job similar to the job
shown in Figure 335 on page 541. All ISPF DD names are required.
The batch job executes the standard ISPF command, ISPSTART, that sets up the
ISPF environment, and then starts the DSPXRUN command. The DSPXRUN command
identifies the database, the import file to use, and the processing options.
When you specify a generic database name and use options 1 through 5 from the
DFSHALDB panel, the option to view the IMS DDNAME concatenation works only
if you use 4 or fewer DBD data sets. If you specify option 7, the data sets
concatenated to the IMS DDNAME always display.
Use the help (F1) information provided by ISRDDN and ISPF to learn more about
the ISRDDN utility. When you exit the ISRDDN utility, you return to the HALDB
Partition Definition utility panels.
You can control the RECON data sets in a configuration. If you have the IMS
DDNAME allocated from the logon procedure and the IMS.SDFSRESL libraries
allocated to the STEPLIB DDNAME, do not use the configuration option. If you
define and select a configuration, those data sets override the allocations from the
logon procedure.
The IMS DDNAME includes the data sets that contain the DBDLIB members. The
STEPLIB allocation contains the RECON1, RECON2, and RECON3 members that
name the actual RECON data sets. The RECON/DBDLIB Configurations option
re-allocates the IMS DDNAME and allocates RECON1, RECON2, and RECON3
DDNAMEs to specify the RECON data sets.
If you delete a configuration only, the configuration is deleted from the list, but the
data sets that are named in the configuration are not deleted.
Allocating an ILDS
Partitioning a database can complicate the use of pointers between database
records because, after a partition has been reorganized, the following pointers
may become invalid:
v Pointers from other database records within this partition
v Pointers from other partitions that point to this partition
v Pointers from secondary indexes
The use of indirect pointers eliminates the need to update pointers throughout other
database records when a single partition is reorganized. The Indirect List data set
(ILDS) acts as a repository for the indirect pointers. There is one ILDS per partition
in PHDAM and PHIDAM databases.
| The ILDS contains indirect list entries (ILEs). Each ILE in an ILDS has a 9-byte key
| that is the indirect list key (ILK) of the target segment appended with the segment
| code of the target segment. The ILK is a unique token that is assigned to segments
| when the segments are created.
| The sample command in Figure 179 defines an ILDS. Note that the key size is 9
| bytes at offset 0 (zero) into the logical record. Also note that the record size is
| specified as 50 bytes, the current length of an ILE.
DEFINE CLUSTER ( -
NAME (FFDBPRT1.XABCD01O.L00001) -
TRK(2,1) -
VOL(IMSQAV) -
FREESPACE(80,10) -
REUSE -
SHAREOPTIONS(3,3) -
SPEED ) -
DATA ( -
NAME(FFDBPRT1.XABCD01O.INDEXD) -
CISZ(512) -
KEYS(9,0) -
RECSZ(50,50) ) -
INDEX ( -
NAME(FFDBPRT1.XABCD01O.INDEXS) -
CISZ(2048) )
To compute the size of an ILDS, multiply the size of an ILE by the total number of
physically paired logical children, logical parents of unidirectional relationships, and
secondary index targets.
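The sizing rule above can be sketched as a small calculation. The 50-byte ILE length comes from the RECSZ value in the sample DEFINE CLUSTER command; the function name and the segment counts in the example are illustrative.

```python
# Sketch of the ILDS sizing rule described above (names are illustrative).
ILE_SIZE = 50  # current length of an ILE in bytes, per the RECSZ value above

def ilds_size_bytes(paired_logical_children, unidirectional_logical_parents,
                    secondary_index_targets):
    """Multiply the ILE size by the total number of segments that need ILEs."""
    total_iles = (paired_logical_children + unidirectional_logical_parents
                  + secondary_index_targets)
    return total_iles * ILE_SIZE

# For example, 10000 such segments in a partition need 500000 bytes:
# ilds_size_bytes(6000, 3000, 1000) -> 500000
```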
| Related Reading:
| v For information about the role of ILDS in the HALDB self-healing pointer process,
| see “The HALDB Self-Healing Pointer Process” on page 382.
| v For information about initializing an ILDS, search for “Indirect List Data Set” in
| IMS Version 9: Administration Guide: System.
After you code the PSB macro instructions, they are used as input to the PSBGEN
utility. This utility is a macro assembler that generates a PSB control block and
then stores it in the IMS.PSBLIB library for subsequent use during database
processing.
Figure 181 shows the structure of the deck used as input to the PSBGEN utility.
SENSEG statements must immediately follow the PCB statement to which they are
related. Up to 30000 SENSEG statements can be defined for each PSB generation.
Detailed instructions for coding PSB statements and examples of PSBs are
contained in IMS Version 9: Utilities Reference: System.
ACBs cannot be prebuilt for GSAM DBDs. However, ACBs can be prebuilt for PSBs
that reference GSAM databases.
The ACB maintenance utility (ACBGEN), shown in Figure 183, gets the PSB and
DBD information it needs from IMS.PSBLIB and IMS.DBDLIB.
You can have the utility prebuild ACBs for all PSBs in IMS.PSBLIB, for a specific
PSB, or for all PSBs that reference a particular DBD. Prebuilt ACBs are kept in the
IMS.ACBLIB library. (IMS.ACBLIB is not used if ACBs are not prebuilt.) When ACBs
are prebuilt and an application program is scheduled, the application program’s
ACB is read from IMS.ACBLIB directly into storage. This means that less time is
required to schedule an application program. In addition, less storage is used if
prebuilt ACBs are used. Another advantage of using the ACB maintenance utility is
the initial error checking it performs. It checks for errors in the names used in the
PSB and the DBDs associated with the PSB and, if erroneous cross-references are
found, prints appropriate error messages.
You can change ACBs or add ACBs in an “inactive” copy of ACBLIB and then make
the changed or new members available to an active IMS online system by using the
online change function. “Using the Online Change Function” in Chapter 16,
“Modifying Databases,” on page 423 describes how you effectively change ACBLIB
for an online system.
Detailed instructions for running the ACB maintenance utility and examples of its
use are contained in the IMS Version 9: Utilities Reference: System.
The I/O PCB can be used by the application program to obtain input messages and
send output to the inputting terminal. The alternate PCB can be used by the
application program to send output to other terminals or programs.
Other than the I/O PCB, an application that makes only Structured Query Language
(SQL) calls does not require any PCBs. It does, however, need to define the
application program name and language type to IMS. A GPSB can be used for this
purpose.
| IBM provides various programs that can help you develop your test database,
| including the DL/I Test Program (DFSDDLT0). For more information on the available
| IMS tools, go to www.ibm.com/ims and link to the IBM® DB2 and IMS Tools Web
| site.
Related Reading:
v For guidance information about application program testing, see IMS Version 9:
Application Programming: Design Guide.
v For information about testing an online system, see IMS Version 9:
Administration Guide: System.
In this Chapter:
“Test Requirements”
“Designing, Creating, and Loading a Test Database” on page 308
Test Requirements
Depending on your system configuration, user requirements, and the design
characteristics of your database and data communication systems, test for the
following:
v That DL/I call sequences execute and the results are correct.
– This kind of test often requires only a few records, and you can use the DL/I
Test Program, DFSDDLT0, to produce these records.
– If this is part of a unit test, consider extracting records from your existing
database. To extract the necessary records, you can use programs such as
the IMS DataRefresher™.
v That calls execute through all possible application decision paths.
– You might need to approximate your production database. To do this, you can
use programs such as the IMS DataRefresher and other IMS tools.
v How performance compares with that of a model, for system test or regression
tests, for example.
– For this kind of test, you might need a copy of a subset of the production
database. You can use IMS tools to help you.
To test for these capabilities, you need a test database that approximates, as
closely as possible, the production database. To design such a test database, you
should understand the requirements of the database, the sample data, and the
application programs.
To protect your production databases, consider providing the test JCL procedures to
those who test application programs. Providing the test JCL helps ensure that the
correct libraries are used.
Again, you might use a copy of a subset of the real database. However, first
determine which fields contain sensitive data and therefore must use fictitious data
in the test database.
– DB/TM features
– Backup and recovery
Details about using this system are in Data Extraction, Processing, and
Restructuring System, Program Description/Operations Manual.
Related Reading: For information on how to use CSP/370AD, see the Cross
System Product/370 Application Development Guide.
The DL/I Test Program cannot be used by CICS, but it can be used for stand-alone
batch programs. When used for stand-alone batch programs, it can help you
interpret database performance as it might apply to online or shared database
programs.
This topic contains the step-by-step procedure for estimating minimum database
space. To estimate the minimum size needed for your database, you must already
have made certain design decisions about the physical implementation of your
database. Because these decisions are different for each DL/I access method, they
are discussed under the appropriate access method in step 3 of the procedure.
| If you plan to reorganize your HALDBs online, consider the extra space
| reorganization requires. Although online reorganization does not need any additional
| space when you first load a HALDB, the process does require additional space at
| the time of reorganization. For more information on HALDB online reorganization,
| see “HALDB Online Reorganization” on page 364.
The prefix portion of the segment depends on the segment type and on the options
you are using. Table 23 on page 312 helps you determine, by segment type, the
size of the prefix. Using the chart, add up the number of bytes required for
necessary prefix information and for extra fields and pointers generated in the prefix
for the options you have chosen. Segments can have more than one 4-byte pointer
in their prefix. You need to factor all extra pointers of this type into your calculations.
Related Reading: For rules on using mixed pointers, see “Mixing Pointers” on page
89.
For example, in the database record in Figure 184 on page 313, the ITEMS
segment occurs an average of 10 times for each DEPOSITS segment. The
DEPOSITS segment occurs an average of four times for each CUSTOMER root
segment. The frequency of a root segment is always one.
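The frequency calculation implied by this example multiplies each segment type's average occurrence count by the frequency of its parent; a minimal sketch, with the variable names chosen for illustration:

```python
# Expected occurrences of each segment type per database record, following
# the example above (CUSTOMER root, DEPOSITS, ITEMS). Frequencies multiply
# down the hierarchy; the frequency of a root segment is always one.
def frequency(avg_per_parent, parent_frequency):
    return avg_per_parent * parent_frequency

root = 1                         # CUSTOMER: root frequency is always one
deposits = frequency(4, root)    # 4 DEPOSITS per CUSTOMER root
items = frequency(10, deposits)  # 10 ITEMS per DEPOSITS -> 40 per record
```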
Overhead is space used in a CI for two control fields. VSAM uses the control fields
to manage space in the CI. The control fields and their sizes are shown in Table 25.
Table 25. VSAM Control Fields
Field Size in Bytes
CIDF (Control interval definition field) 4
RDF (Record definition field) 3
If one logical record exists for each CI, CI overhead consists of one CIDF and one
RDF (for a total of 7 bytes). HDAM and HIDAM databases and PHDAM and
PHIDAM partitions use one logical record for each CI.
If more than one logical record exists for each CI, CI overhead consists of one
CIDF and two RDFs (for a total of 10 bytes). HISAM (KSDS and ESDS), HIDAM
and PHIDAM index, and secondary index databases can all use more than one
logical record for each CI.
Step 3 tells you when to factor CI overhead into your space calculations.
In HISAM, you should remember how logical records work, because you need to
factor logical record overhead into your calculations before you can determine how
many CIs (control intervals) are needed to hold your database records. Logical
record overhead is a combination of the overhead that is always required for a
logical record and the overhead that exists because of the way in which database
records are stored in logical records (that is, storage of segments almost always
results in residual or unused space).
Because some overhead is associated with each logical record, you need to
calculate the amount of space that is available after factoring in logical record
overhead. Once you know the amount of space in a logical record available for
data, you can determine how many logical records are needed to hold your
database records. If you know how many logical records are required, you can
determine how many CIs or blocks are needed.
For example, assume you need to load 500 database records using VSAM, and to
use a CI size of 2048 bytes for both the KSDS and ESDS. Also, assume you need
to store four logical records in each KSDS CI and two logical records in each ESDS
CI.
1. First factor in CI overhead by subtracting the overhead from the CI size: 2048 -
10 = 2038 bytes for both the KSDS and the ESDS. The 10 bytes of overhead
consists of a 4-byte CIDF and two 3-byte RDFs.
2. Then, calculate logical record size by dividing the available CI space by the
number of logical records per CI: 2038/4 = 509 bytes for the KSDS and 2038/2
= 1019 bytes for the ESDS. Because logical record size in HISAM must be an
even value, use 508 bytes for the KSDS and 1018 bytes for the ESDS.
3. Finally, factor in logical record overhead by subtracting the overhead from
logical record size: 508 - 5 = 503 bytes for the KSDS and 1018 - 5 bytes for the
ESDS. HISAM logical record overhead consists of 5 bytes for VSAM (a 4-byte
RBA pointer for chaining logical records and a 1-byte end-of-data indicator).
This means if you specify a logical record size of 508 bytes for the KSDS, you
have 503 bytes available in it for storing data. If you specify a logical record size
of 1018 bytes for the ESDS, you have 1013 bytes available in it for storing data.
Refer to the previous example. Because the average size of a database record is
1336 bytes, the space available for data in the KSDS is not large enough to contain
it. It takes the available space in one KSDS logical record plus one ESDS logical
record to hold the average database record (503 + 1013 = 1516 bytes of available
space). This record size is greater than an average database record of 1336 bytes.
Because you need to load 500 database records, you need 500 logical records in
both the KSDS and ESDS.
v To store four logical records per CI in the KSDS, you need a minimum of 500/4 =
125 CIs of 2048 bytes each for the KSDS.
v To store two logical records per CI in the ESDS, you need a minimum of 500/2 =
250 CIs of 2048 bytes each for the ESDS.
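The three steps of this worked example can be checked with a short calculation. The overhead values come from the text above; the function name is illustrative.

```python
# The HISAM worked example above, step by step (VSAM, 2048-byte CIs).
CI_SIZE = 2048
CI_OVERHEAD = 10       # one 4-byte CIDF plus two 3-byte RDFs
LREC_OVERHEAD = 5      # 4-byte RBA chain pointer plus 1-byte EOD indicator

def hisam_logical_record(lrecs_per_ci):
    """Logical record size (rounded down to an even value) and data bytes."""
    lrec = (CI_SIZE - CI_OVERHEAD) // lrecs_per_ci
    lrec -= lrec % 2   # HISAM logical record size must be an even value
    return lrec, lrec - LREC_OVERHEAD

ksds_lrec, ksds_data = hisam_logical_record(4)  # 508 bytes, 503 for data
esds_lrec, esds_data = hisam_logical_record(2)  # 1018 bytes, 1013 for data

records = 500
ksds_cis = records // 4  # 125 CIs of 2048 bytes each for the KSDS
esds_cis = records // 2  # 250 CIs of 2048 bytes each for the ESDS
```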
If you are using VSAM and you estimate the amount of space to allocate for the
database without the use of an aid, remember that the first CI in the database is
reserved for VSAM. Because of this, the bit map is in the second CI.
With HDAM or PHDAM, logical record overhead depends on the database design
options you have selected. You must choose the number of CIs or blocks in the root
addressable area and the number of RAPS for each CI or block. These choices are
based on your knowledge of the database.
Assume you need to store 500 database records using VSAM and that, because of
the way your randomizer works, you have chosen to use 300 CIs in the root
addressable area with two RAPs for each CI. This decision influences your choice
of CI size: because you are using two RAPs per CI, you expect two database
records to be stored in each CI.
You know that a 2048-byte CI is not large enough to hold two database records (2 x
1336 = 2672 bytes). And you know that a 3072-byte CI is too large for two
database records of average size. Therefore, you would probably use 2048-byte
CIs and the byte limit count to ensure that on average you would store two
database records in the CI.
Continuing our example, you know you need 300 CIs of 2048 bytes each in the root
addressable area. Now you need to calculate how many CIs you need in the
overflow area. To do this:
v Determine the average number of bytes that will not fit in the root addressable
area. Assume a byte limit count of 1000 bytes. Subtract the byte limit count from
the average database record size: 1336 - 1000 = 336 bytes. Multiply the average
number of overflow bytes by the number of database records: 500 x 336 =
168000 bytes needed in the non-root addressable area.
v Determine the number of CIs needed in the non-root addressable area by
dividing the number of overflow bytes by the bytes in a CI available for data.
Determine the bytes in a CI available for data by subtracting CI and logical
record overhead from the CI size; dividing the 168000 overflow bytes by the
resulting space per CI yields the figure below.
You have estimated you need a minimum of 300 CIs in the root addressable area
and a minimum of 83 CIs in the non-root addressable area.
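The overflow estimate can be sketched as follows. The 2041 bytes of usable space per CI is an illustrative assumption (CI size minus the 7 bytes of CI overhead for one logical record per CI, as described earlier); the text arrives at the same 83-CI figure.

```python
# HDAM overflow estimate from the example above.
avg_record = 1336   # average database record size in bytes
byte_limit = 1000   # byte limit count
records = 500

# Bytes that will not fit in the root addressable area.
overflow_bytes = records * (avg_record - byte_limit)  # 500 x 336 = 168000

# Usable data bytes per overflow CI (assumption: 2048 - 7 bytes overhead).
data_per_ci = 2048 - 7

# Round up to whole CIs for the non-root addressable area.
overflow_cis = -(-overflow_bytes // data_per_ci)      # 83 CIs
```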
Step 4. Determine the Number of Blocks or CIs Needed for Free Space
In HDAM, HIDAM, PHDAM, and PHIDAM databases, you can allocate free space
when your database is initially loaded. Free space is explained in Chapter 6,
“Choosing Full-Function Database Types,” on page 55, “Specifying Free Space”.
Free space can only be allocated for an HD VSAM ESDS or OSAM data set. Do
not confuse the free space discussed here with the free space you can allocate for
a VSAM KSDS using the DEFINE CLUSTER command.
To calculate the total number of CIs or blocks you need to allocate in the database,
you can use the following formula:
A = B x (fbff / (fbff - 1)) x (100 / (100 - fspf))
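The formula can be expressed as a function. Here B is taken to be the number of CIs or blocks needed for data, fbff the free block frequency factor, and fspf the free space percentage factor (the values specified for free space in the DBD); this reading of the variables is an assumption based on the free-space discussion the text cites.

```python
# The free-space formula above: A = B x (fbff/(fbff-1)) x (100/(100-fspf)).
# fbff must not be 1, and fspf must be less than 100.
def total_blocks(b, fbff, fspf):
    return b * (fbff / (fbff - 1)) * (100 / (100 - fspf))

# For example, 1000 data blocks with every 4th block left free and 20% free
# space in each block: 1000 x (4/3) x (100/80), about 1667 blocks in total.
```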
You need to add the number of CIs or blocks needed for bit maps to your space
calculations.
Attention: If you plan to use the Database Image Copy 2 utility to take image
copies of your database, the data sets must be allocated on hardware that supports
the DFSMS concurrent copy function.
All other data sets are allocated using normal z/OS JCL. You can use the z/OS
program IEFBR14 to preallocate data sets, except when the database is an MSDB.
For MSDBs, you should use the z/OS program IEHPROGM.
If installation control of direct-access storage space and volumes requires that
the OSAM data sets be preallocated, or if a message queue data set requires
more than one volume, you can preallocate the OSAM data sets.
Observe the following restrictions when you preallocate with any of the accepted
methods:
v DCB parameters should not be specified.
v Secondary allocation must be specified for all volumes if the data set will be
extended beyond the primary allocation.
v Secondary allocation must be specified for all volumes in order to write to
volumes pre-allocated but not written to by initial load or reload processing.
v Secondary allocation is not allowed for queue data sets because queue data sets
are not extended beyond their initial or pre-allocated space quantity. However,
queue data sets can have multivolume allocation.
v The secondary allocation size defined on the first volume will be used for all
secondary allocations on all volumes regardless of the secondary allocation size
specified on the other volumes. All volumes should be defined with the same
secondary allocation size to avoid confusion.
v If the OSAM data set will be cataloged, use IEHPROGM or Access Method
Services to ensure that all volumes are included in the catalog entry.
When a multiple-volume data set is pre-allocated, you should allocate extents on all
the volumes to be used. The suggested method of allocation is to have one
IEFBR14 utility step for each volume on which space is desired.
Restrictions:
v Do not use IEFBR14 and specify a DD card with a multivolume data set,
because this allocates an extent on only the first volume.
v Do not use this technique to allocate multi-volume OSAM databases on which
you intend to use the Image Copy 2 Utility (DFSUDMT0). All multi-volume
databases on which the Image Copy 2 Utility will be used MUST be allocated
using the standard DFP techniques.
data after the old EOF mark in the third volume instead of inserting data after
the EOF mark created by the reorganization utility in the second volume.
Subsequent processing by another utility such as the Image Copy utility uses
the EOF mark set by the reorganization utility on the second volume and
ignores new data inserted by OSAM on volume three.
3. When loading this database, the order of the DD cards determines the order in
which the data is loaded.
4. If you intend to use the Image Copy 2 utility (DFSUDMT0) to back up and
restore multi-volume databases, they MUST be allocated using the standard
DFP techniques.
Basically, an initial load program reads an existing file containing your database
records. Using the DBD, which defines the physical characteristics of the database,
and the load PSBs (see Figure 186 on page 322), the load program builds
segments for a database record and inserts them into the database in hierarchic
order. If the data to be loaded into the database already exists in one or more files
(see Figure 187 on page 323), merge and sort the data, if necessary, so that it is
presented to the load program in correct sequence. Also, if you plan to merge
existing files containing redundant data into one database, delete the redundant
data, if necessary, and correct any data that is wrong.
After you have defined the database, you load it by writing an application program
that uses the ISRT call. An initial load program builds each segment in the
program’s I/O area, then loads it into the database by issuing an ISRT call for it.
ISRT calls are the only DL/I requests allowed when you specify PROCOPT=L in the
PCB. The only time you use the “L” option is when you initially load a database.
This option is valid only for batch programs.
The FIRST, LAST, and HERE insert rules do not apply when you are loading a
database, unless you are loading an HDAM database. When you are loading an
HDAM database, the rules determine how root segments with non-unique sequence
fields are ordered. The same rules apply if you are loading a database using
HSAM.
Any load program that is run after the first load program is technically an
“add” program, not a load program. Do not specify “L” as the processing option
in the PCB for add programs.
You should review any add type of load program written to load a database to
ensure that the program’s performance will be acceptable; it usually takes longer to
add a group of segments than to load them.
For HSAM, HISAM, HIDAM, and PHIDAM, the root segments that the application
program inserts must be pre-sorted by the key fields of the root segments. The
dependents of each root segment must follow the root segment in hierarchic
sequence, and must follow key values within segment types. In other words, you
insert the segments in the same sequence in which your program would retrieve
them if it retrieved in hierarchic sequence (children after their parents, database
records in order of their key fields).
If you are loading an HDAM or PHDAM database, you do not need to pre-sort root
segments by their key fields.
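As a minimal sketch of this ordering rule, with illustrative tuples of (root key, hierarchic position) standing in for segment records:

```python
# Load order for HSAM, HISAM, HIDAM, and PHIDAM: roots sorted by key field,
# each root followed by its dependents in hierarchic sequence. Tuples here
# are (root_key, hierarchic_position); the data values are illustrative.
records = [(102, 0), (101, 0), (101, 1), (102, 1), (101, 2)]
load_order = sorted(records)  # roots by key, dependents after their root
# -> [(101, 0), (101, 1), (101, 2), (102, 0), (102, 1)]
```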
| Recommendation: You should always create an image copy immediately after you
| load, reload, or reorganize the database.
The only SSA you must supply is the unqualified SSA giving the name of the
segment type you are inserting.
Because you do not need to worry about position, you need not use SSAs for the
parents of the segment you are inserting. If you do use them, be sure they contain
only the equal (EQ, =b, or b=) relational operator. You must also use the key field of
the segment as the comparative value.
For HISAM, HIDAM, and PHIDAM, the key X'FFFF' is reserved for IMS. IMS returns
a status code of LB if you try to insert a segment with this key.
Figure 187 on page 323 illustrates loading a database using existing files.
Figure 188 on page 325 shows the logic for developing a basic initial load program.
Following Figure 188 is a sample load program (Figure 189) that satisfies the basic
IMS database loading requirements. A sample program showing how this can be
done with the Utility Control Facility is also provided.
Fast Path Data Entry Databases (DEDBs) cannot be loaded in a batch job as DL/I
databases can. DEDBs are first initialized by the DEDB Initialization utility and
then loaded by a user-written Fast Path application program that typically
executes in a BMP region.
Fast Path Main Storage Databases (MSDBs) are not loaded until the IMS control
region is initialized. These databases are then loaded by the IMS start-up procedure
when the following requirements are met:
v The MSDB= parameter on the EXEC Statement of Member Name IMS specifies
a one-character suffix to DBFMSDB in IMS.PROCLIB.
v The member contains a record for each MSDB to be loaded.
Each record identifies an MSDB and the number of segments to be loaded, and
can include an optional “F”, which indicates that the MSDB is to be fixed in
storage.
Related Reading: For a description of the record format and the DBD keyword
parameters, see the topics about member name IMS in IMS Version 9:
Installation Volume 2: System Definition and Tailoring.
v A sequential data set, part of a generation data group (GDG) with dsname
IMS.MSDBINIT(0), is generated.
This data set can be created by a user-written program or by using the INSERT
function of the MSDB Maintenance utility. Records in the data set are sequenced
by MSDB name, and within MSDBs by key.
Related Reading: For a description of the record format and information on how
to use the MSDB Maintenance utility, see IMS Version 9: Utilities Reference:
Database and Transaction Manager.
DLITCBL START
PRINT NOGEN
SAVE (14,12),,LOAD1.PROGRAM SAVE REGISTERS
USING DLITCBL,10 DEFINE BASE REGISTER
LR 10,15 LOAD BASE REGISTER
LA 11,SAVEAREA PERFORM
ST 13,4(11) SAVE
ST 11,8(13) AREA
LR 13,11 MAINT
L 4,0(1) LOAD PCB BASE REGISTER
STCM 4,7,PCBADDR+1 STORE PCB ADDRESS IN CALL LIST
USING DLIPCB,4 DEFINE PCB BASE REGISTER
OPEN (LOAD,(INPUT)) OPEN LOAD DATA SOURCE FILE
LOOP GET LOAD,CARDAREA GET SEGMENT TO BE INSERTED
INSERT CALL CBLTDLI,MF=(E,DLILINK) INSERT THE SEGMENT
AP SEGCOUNT,=P’1’ INCREMENT SEGMENT COUNT
CLC DLISTAT,=CL2’ ’ WAS COMPLETION NORMAL?
BE LOOP YES - KEEP GOING
ABEND ABEND 8,DUMP INVALID STATUS
EOF WTO ’DATABASE 1 LOAD COMPLETED NORMALLY’
UNPK COUNTMSG,SEGCOUNT UNPACK SEGMENT COUNT FOR WTO
OI COUNTMSG+4,X’F0’ MAKE SIGN PRINTABLE
WTO MF=(E,WTOLIST) WRITE SEGMENT COUNT
CLOSE (LOAD) CLOSE INPUT FILE
L 13,4(13) UNCHAIN SAVE AREA
RETURN (14,12),RC=0 RETURN NORMALLY
LTORG
SEGCOUNT DC PL3’0’
DS 0F
WTOLIST DC AL2(LSTLENGT)
DC AL2(0)
COUNTMSG DS CL5
DC C’ SEGMENTS PROCESSED’
LSTLENGT EQU (*-WTOLIST)
DLIFUNC DC CL4’ISRT’ FUNCTION CODE
DLILINK DC A(DLIFUNC) DL/I CALL LIST
PCBADDR DC A(0)
DC A(DATAAREA)
DC X’80’,AL3(SEGNAME)
CARDAREA DS 0CL80 I/O AREA
SEGNAME DS CL9
SEGKEY DS 0CL4
DATAAREA DS CL71
SAVEAREA DC 18F’0’
LOAD DCB DDNAME=LOAD1,DSORG=PS,EODAD=EOF,MACRF=(GM),RECFM=FB
DLIPCB DSECT , DATABASE PCB
DLIDBNAM DS CL8
DLISGLEV DS CL2
DLISTAT DS CL2
DLIPROC DS CL4
DLIRESV DS F
DLISEGFB DS CL8
DLIKEYLN DS CL4
DLINUMSG DS CL4
DLIKEYFB DS CL12
END
execution. If problems occur and your program is not restartable, the entire load
program has to be rerun from the beginning.
Restartable load programs differ from basic load programs in their logic. Figure 190
on page 328 shows the logic for developing a restartable initial load program. If you
already have a basic load program, usually only minor changes are required to
make it restartable. The basic program must be modified to recognize when restart
is taking place, when WTOR requests to stop processing have been made, and
when checkpoints have been taken.
| To make your initial database load program restartable under UCF, consider the
| following points when you are planning and writing the program:
v If a program is being restarted, the PCB status code will contain a UR prior to
the issuance of the first DL/I call. The key feedback area will contain the fully
concatenated key of the last segment inserted prior to the last UCF checkpoint
taken. (If no checkpoints were taken prior to the failure, this area will contain
binary zeros.)
v The UCF does not checkpoint or reposition user files. When restarting, it is the
user’s responsibility to reposition all such files.
v When restarting, the first DL/I call issued must be an insert of a root segment.
For HISAM and HIDAM Index databases, the restart will begin with a GN and a
VSAM ERASE sequence to reinsert the higher keys. The resume operation then
takes place. Space in the KSDS is reused (recovered) but not in the ESDS.
For HDAM, if the root sequence field is unique and a root segment insert is
done for a segment that already exists in the database (because of segments
inserted after the checkpoint), the data is compared. If the segment data is the
same, the old segment is overlaid and the dependent segments are dropped,
because they will be reinserted by a subsequent user/reload insert. This
occurs only until a non-duplicate root is found. Once a segment with a new key
or with different data is encountered, LB status codes are returned for any
subsequent duplicates. Therefore, space is reused for the roots, but lost for the
dependent segments.
For HDAM with non-unique keys, any root segments that were inserted after the
checkpoint at which the restart was made will remain in the database. This is
also true for their dependent segments.
v When the stop request is received, UCF will take a checkpoint just prior to
inserting the next root. If the application program fails to terminate, it will be
presented the same status code at each of the following root inserts until normal
termination of the program.
v For HISAM databases, the RECOVERY option must be specified. For HD
organizations, either RECOVERY or SPEED can be defined to Access Method
Services.
v UCF checkpoints are taken when the checkpoint count (CKPNT=) has expired
and a root insert has been requested. The count refers to the number of root
segments inserted and the checkpoint is taken immediately prior to the insertion
of the root.
The following list explains the status codes shown in Figure 190:
UR Load program being restarted under control of UCF
UC Checkpoint record written to UCF journal data set
US Initial load program prepared to stop processing
UX Checkpoint record was written and processing stopped
DLITCBL START
PRINT NOGEN
SAVE (14,12),,LOAD1.PROGRAM SAVE REGISTERS
USING DLITCBL,10 DEFINE BASE REGISTER
LR 10,15 LOAD BASE REGISTER
LA 11,SAVEAREA PERFORM
ST 13,4(11) SAVE
ST 11,8(13) AREA
LR 13,11 MAINT
L 4,0(1) LOAD PCB BASE REGISTER
STCM 4,7,PCBADDR+1 STORE PCB ADDRESS IN CALL LIST
USING DLIPCB,4 DEFINE PCB BASE REGISTER
OPEN (LOAD,(INPUT)) OPEN LOAD DATA SOURCE FILE
CLC DLISTAT,=C’UR’ IS THIS A RESTART?
BNE NORMAL NO - BRANCH
CLC DLIKEYFB(4),=X’00000000’ IS KEY FEEDBACK AREA ZERO?
BE NORMAL YES - BRANCH
RESTART WTO ’RESTART LOAD PROCESSING FOR DATABASE 1 IS IN PROCESS’
RLOOP GET LOAD,CARDAREA GET A LOAD RECORD
CLC SEGNAME(8),=CL8’SEGMA’ IS THIS A ROOT SEGMENT RECORD?
BNE RLOOP NO - KEEP LOOKING
CLC DLIKEYFB(4),SEGKEY IS THIS THE LAST ROOT INSERTED?
BNE RLOOP NO - KEEP LOOKING
B INSERT GO DO IT
NORMAL WTO ’INITIAL LOAD PROCESSING FOR DATABASE 1 IS IN PROCESS’
LOOP GET LOAD,CARDAREA GET SEGMENT TO BE INSERTED
INSERT CALL CBLTDLI,MF=(E,DLILINK) INSERT THE SEGMENT
AP SEGCOUNT,=P’1’ INCREMENT SEGMENT COUNT
CLC DLISTAT,=CL2’ ’ WAS COMPLETION NORMAL?
BE LOOP YES - KEEP GOING
CLC DLISTAT,=CL2’UC’ HAS CHECKPOINT BEEN TAKEN?
BNE POINT1 NO - KEEP CHECKING
POINT0 WTO ’UCF CHECKPOINT TAKEN FOR LOAD 1 PROGRAM’
UNPK COUNTMSG,SEGCOUNT UNPACK SEGMENT COUNT FOR WTO
OI COUNTMSG+4,X’F0’ MAKE SIGN PRINTABLE
WTO MF=(E,WTOLIST) WRITE SEGMENT COUNT
B LOOP NO - KEEP GOING
POINT1 CLC DLISTAT,=CL2’US’ HAS OPERATOR REQUESTED STOP?
BNE POINT2 NO - KEEP CHECKING
B LOOP KEEP GOING
POINT2 CLC DLISTAT,=CL2’UX’ COMBINED CHECKPOINT AND STOP?
BNE ABEND NO - GIVE UP
WTO ’LOAD1 PROGRAM STOPPING PER OPERATOR REQUEST’
B RETURN8
ABEND ABEND 8,DUMP INVALID STATUS
EOF WTO ’DATABASE 1 LOAD COMPLETED NORMALLY’
UNPK COUNTMSG,SEGCOUNT UNPACK SEGMENT COUNT FOR WTO
OI COUNTMSG+4,X’F0’ BLAST SIGN
WTO MF=(E,WTOLIST) WRITE SEGMENT COUNT
CLOSE (LOAD) CLOSE INPUT FILE
L 13,4(13) UNCHAIN SAVE AREA
RETURN (14,12),RC=0 RETURN NORMALLY
RETURN8 WTO ’DATABASE 1 LOAD STOPPING FOR RESTART’
UNPK COUNTMSG,SEGCOUNT UNPACK SEGMENT COUNT FOR WTO
OI COUNTMSG+4,X’F0’ BLAST SIGN
WTO MF=(E,WTOLIST) WRITE SEGMENT COUNT
CLOSE (LOAD) CLOSE INPUT FILE
L 13,4(13) UNCHAIN SAVE AREA
RETURN (14,12),RC=8 RETURN AS RESTARTABLE
LTORG
SEGCOUNT DC PL3’0’
DS 0F
WTOLIST DC AL2(LSTLENGT)
DC AL2(0)
COUNTMSG DS CL5
DC C’ SEGMENTS PROCESSED’
LSTLENGT EQU (*-WTOLIST)
DLIFUNC DC CL4’ISRT’ FUNCTION CODE
DLILINK DC A(DLIFUNC) DL/I CALL LIST
PCBADDR DC A(0)
DC A(DATAAREA)
DC X’80’,AL3(SEGNAME)
CARDAREA DS 0CL80 I/O AREA
SEGNAME DS CL9
SEGKEY DS 0CL4
DATAAREA DS CL71
SAVEAREA DC 18F’0’
STOPNDG DC X’00’
LOAD DCB DDNAME=LOAD1,DSORG=PS,EODAD=EOF,MACRF=(GM),RECFM=FB
DLIPCB DSECT , DATABASE PCB
DLIDBNAM DS CL8
DLISGLEV DS CL2
DLISTAT DS CL2
DLIPROC DS CL4
DLIRESV DS F
DLISEGFB DS CL8
DLIKEYLN DS CL4
DLINUMSG DS CL4
DLIKEYFB DS CL12
END
Loading an MSDB
Because MSDBs reside in main storage, you do not load them as you do other IMS
databases, that is, by means of a load program that you provide. Rather, they are
loaded during system initialization, when they are read from a data set. You first
build this data set either by using a program you provide or by running the MSDB
Maintenance utility.
Related Reading:
v See IMS Version 9: Utilities Reference: Database and Transaction Manager for
information on how to use the MSDB Maintenance utility.
v See Figure 73 on page 130 for the record format of the MSDBINIT data set.
Loading a DEDB
You load data into a DEDB database with a load program similar to that used for
loading other IMS databases. Unlike other load programs, this program runs as a
batch message program. The following five steps are necessary to load a DEDB:
1. Calculate space requirements.
The following example assumes that root and sequential dependent segment
types are loaded in one area.
Assume all root segments are 200 bytes long (198 bytes of data plus 2 bytes
for the length field) and that there are 850 root segments in the area. On the
average, there are 30 SDEP segments per record. Each is 150 bytes long (148
bytes of data and a 2-byte length field). The CI size is 1024 bytes.
A. Calculate the minimum space required to hold root segments:
After choosing a UOW size, you can determine the DBD specifications for the
root addressable and independent overflow parts using the result of the above
calculation as a base.
B. Calculate the minimum space required to hold the sequential dependent
segments:
v One control data CI for each 120 CIs in the independent overflow part
Assuming a UOW size of 20 CIs, the minimum amount of space to be
allocated is: 213 + 4250 + 20 + 2 + 1 = 4486 CIs.
2. Set up the DBD specifications according to the above results, and execute the
DBD generation.
3. Allocate the VSAM cluster using VSAM Access Method Services.
The following example shows how to allocate an area that would later be
referred to as AREA1 in a DBDGEN:
DEFINE -
CLUSTER -
(NAME (AREA1) -
VOLUMES (SER123) -
NONINDEXED -
CYLINDERS (22) -
CONTROLINTERVALSIZE (1024) -
RECORDSIZE (1017) -
SPEED) -
DATA -
(NAME(DATA1)) -
CATALOG(USERCATLG)
The following keywords have special significance when defining an area:
NAME The name supplied for the cluster is the name
subsequently referred to as the area name. The
name for the data component is optional.
NONINDEXED DEDB areas are non-indexed clusters.
CONTROLINTERVALSIZE The value supplied, because of a VSAM ICIP
requirement, must be 512, 1024, 2048, or 4096.
RECORDSIZE The record size is 7 less than the CI size.
These 7 bytes are used for VSAM control
information at the end of each CI.
SPEED This keyword is recommended for performance
reasons.
CATALOG This optional parameter can be used to specify
a user catalog.
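The CONTROLINTERVALSIZE and RECORDSIZE rules above can be expressed as a small sketch (the function name is illustrative, not part of any IMS interface):

```python
# CI sizes permitted for a DEDB area, per the VSAM ICIP requirement.
VALID_CI_SIZES = {512, 1024, 2048, 4096}

def dedb_recordsize(ci_size: int) -> int:
    """Return the RECORDSIZE for a DEDB area: 7 bytes less than the
    CI size, leaving room for VSAM control information in each CI."""
    if ci_size not in VALID_CI_SIZES:
        raise ValueError(f"CI size {ci_size} is not allowed for a DEDB area")
    return ci_size - 7

print(dedb_recordsize(1024))   # 1017, as in the DEFINE example above
```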
4. Run the DEDB initialization utility (DBFUMIN0).
This offline utility must be run to format each area to DBD specifications.
Root-addressable and independent-overflow parts are allocated accordingly. The
space left in the VSAM cluster is reserved for the sequential-dependent part. Up
to 2048 areas can be specified in one utility run; however, the area initializations
are serialized. After the run, check the statistical information report against the
space calculation results.
5. Run the user DEDB load program.
A BMP program is used to load the DEDB. The randomizing routine used during
the loading of the DEDB might have been tailored to direct specific ranges of
data to specific areas of the DEDB.
If the load operation fails, the area must be scratched, reallocated, and
initialized.
Related Reading:
v For information about these and other IMS tools, go to www.ibm.com/ims and link
to the IBM DB2 and IMS Tools Web site.
v Information about using the IMS Monitor is found in IMS Version 9:
Administration Guide: System.
v Additional information about monitoring can also be found in the topic on data
sharing in IMS Version 9: Administration Guide: System.
In this chapter:
v “IMS Monitor”
v “Monitoring Fast Path Systems” on page 337
IMS Monitor
The IMS Monitor is a tool that records data about the performance of your DL/I
databases in a batch environment. The recorded data is produced in a variety of
reports. The monitor’s usefulness is twofold. First, when you run the monitor
routinely, it gives you performance data over time. By comparing this data, you can
determine whether the performance trend is acceptable. This helps you make
decisions about tuning your database and determining when it needs to be
reorganized.
The second use of the monitor is to assess how the changes you make affect
performance. Once you have accumulated reports describing normal database
processing, you can use them as a profile against which to compare the effect of
your changes. Examples of changes you might make (then test for performance)
include:
v Changes in the structure of your databases
v A change from one DL/I access method to another
v A change in database buffer pool number and size
v Changes in application program logic
In all these cases, your primary goal is probably to minimize the number of I/Os
required to perform an operation. The monitor helps you determine whether you
have met your objective.
The following example shows how to use the IMS Monitor: suppose you are
performing a final test on a new or revised application. The monitor reports show
that some DL/I calls in the program, which should have required a single I/O
retrieval, actually required a large database scan involving many I/Os. You might be
able to correct this problem by making changes in the application program logic.
The IMS Monitor collects data from IMS control blocks (when DL/I is operating) and
records the data either on an independent data set or in the IMS log. It collects data
with minimum interference to the system. The monitor runs in the same address
space as the IMS job, and it can be turned on or off with the MON= parameter in
the execution JCL.
The IMS Monitor Report Print utility is an offline program that produces reports
summarizing information collected by the IMS Monitor. It produces the following
reports:
v VSAM Buffer Pool report
v VSAM Statistics report
v Database Buffer Pool report
v Program I/O report
v DL/I Call Summary report
v Distribution Appendix report
v Monitor Overhead report
Example output of each of these reports is in the IMS Version 9: Utilities Reference:
System. Each field in the reports is explained, followed by a summary of how you
can use the report. Many of these reports are also provided by the IMS Monitor,
which is described in IMS Version 9: Administration Guide: System. Where the
same report is produced by both the DB and IMS Monitor, the description of the
report in the IMS Version 9: Utilities Reference: System is applicable for both.
When the IMS Monitor is on, it remains on until the batch execution ends, requiring
some overhead. It cannot be turned on and off from the system console. To
minimize the monitor’s impact, use the IMS Monitor in a single-thread test
environment rather than multi-thread application environments.
This ensures that the data gathered by the IMS Monitor can be related to a
particular program.
Related Reading: For information on using the IMS Monitor for Fast Path systems,
see IMS Version 9: Utilities Reference: System.
Use the Fast Path Log Analysis utility (DBFULTA0) to prepare statistical reports for
Fast Path based on data recorded on the IMS system log. This utility is offline and
produces five reports useful for system installation, tuning, and troubleshooting:
v A detailed listing of exception transactions
v A summary of exception detail by transaction code for MPP (message-processing
program) regions
v A summary by transaction code for MPP regions
v A summary of IFP, BMP, and CCTL transactions by PSB name or transaction
code
v A summary of the log analysis
Do not confuse this utility with the IMS Monitor or the IMS Log Transaction Analysis
utility.
Related Reading:
v For more information on CCTL transactions, see the IMS Version 9:
Customization Guide.
v For more detailed information on the Fast Path Log Analysis utility, see IMS
Version 9: Utilities Reference: System.
As an administrator in the Fast Path environment, you should perform tasks such as
establishing monitoring strategies, performance profiles, and analysis procedures.
This topic highlights how to use the Fast Path Log Analysis utility to do these
tasks, and suggests some areas where tuning activities might be valuable.
v A data set of records, in the same format, that are selected based on exception
conditions (such as those transactions that exceed a certain fixed response time)
The latter data sets can be analyzed in more detail by your installation’s programs.
They can also be sorted to group critical transactions or events. The details of the
record format and meaning of the fields are given in IMS Version 9: Utilities
Reference: System.
Related Reading:
v Another way to reduce log volume is to designate the DEDB as nonrecoverable.
No changes to the database are logged and no record of database updates is
kept in the DBRC RECON data set. See “Non-Recovery Option” on page 114.
v For more information on log reduction and the LGNR parameter, see IMS Version
9: Utilities Reference: System.
The following list describes the four intervals shown in Figure 194:
1. Input queue time: reflects the transaction input queuing within the balancing
group to distribute the work.
2. Process time: records the actual elapsed processing time for the individual
transaction.
3. Output queue time: shows the effect of sync point in delaying the output
message release until after logging.
4. Output message time: shows the line and device availability for receiving the
output message. If the transaction originated from a programmable controller,
the output time could reflect a delay in dequeue caused by the output not being
acknowledged until the next input.
The sum of the first three intervals is termed the transit time. This time is slightly
different from a response time, because it excludes the line activity for the
message, message formatting, and the input edit processing up to the time the
message segment leaves the exit routine.
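As a sketch of this relationship (interval names follow Figure 194; this is an illustration, not utility output):

```python
# Transit time is the sum of the first three intervals only; output
# message time is excluded, as are line activity, message formatting,
# and input edit processing. Times are in milliseconds for clarity.
def transit_time_ms(input_queue_ms, process_ms, output_queue_ms,
                    output_message_ms=0):
    """Compute the transit time from the four Figure 194 intervals."""
    return input_queue_ms + process_ms + output_queue_ms

print(transit_time_ms(50, 300, 100, output_message_ms=400))   # 450
```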
Selecting Transactions
The analysis utility lets you select transactions to be reported in detail. You give the
transaction code and a transit time that each transaction is to exceed, up to a
maximum of 65.5 seconds. Several codes can be selected for each utility run.
There is also a way to ask for all transactions that exceed the given transit time. In
this case, the individual exception specification overrides the general one.
When you do not need to print all such occurrences of the exceptions, you can give
a maximum number of detail records to be printed. The default is 1000 individual
records, though you can specify up to 9999999 as the maximum number. Even when
you limit the number of printed records, the data set for the exception records
still contains all transactions that meet the selection criteria.
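The selection rules above (a general transit-time threshold, per-code overrides, the 65.5-second cap, and the print limit) can be modeled with the following sketch. The record layout and function name are assumptions for illustration, not the utility's actual implementation:

```python
MAX_THRESHOLD = 65.5     # seconds; the largest transit time you can specify
PRINT_DEFAULT = 1000     # default number of detail records printed

def select_exceptions(transactions, general_threshold=None,
                      per_code_thresholds=None, print_limit=PRINT_DEFAULT):
    """Return (printed, selected). Each transaction is a dict with
    'code' and 'transit' (seconds). A per-code threshold overrides the
    general one for that code."""
    per_code_thresholds = per_code_thresholds or {}
    selected = []
    for txn in transactions:
        threshold = per_code_thresholds.get(txn["code"], general_threshold)
        if threshold is None:
            continue                      # no criterion applies to this code
        threshold = min(threshold, MAX_THRESHOLD)
        if txn["transit"] > threshold:
            selected.append(txn)
    # The print limit caps the report, but the exception data set
    # still contains every selected transaction.
    return selected[:print_limit], selected
```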
You can also specify a start time and end time for the transaction reporting interval.
The start time corresponds to the earliest transaction that satisfies the clock time
(format HH:MM:SS) specified by a utility input control statement. End time is set by
the latest transaction that enters the sync point processing before the ending clock
time that is specified on an input control statement.
For those transactions selected, the terminal origin and routing code are given for
each individual occurrence of the transaction. The detail also includes the data
appearing in the overall summary.
v Summary of exception detail by transaction code
This report is based on the transactions in the exception report. The items
reported are the same as for the overall summary.
v Summary of transactions by PSB
All programs that are in non-message-driven regions, MPP regions, and BMP
regions that enter the sync point processing are reported. The items reported are
the same as the summary of exception detail.
v Recapitulation of the analysis
This is a documentation aid that gives the grand totals of transactions input to
the analysis, and the I/O for online utilities.
The combination of the interval covered by the system log input to the utility and the
exception criteria you define in the input control statements determines the content
of these reports.
Examples of the reports format and the definition of the items reported can be
found in IMS Version 9: Utilities Reference: System, within the description of the
Fast Path Log Analysis utility.
Keep in mind that when you tune your database, you are often making more than a
simple change to it. For example, you might need to reorganize your database and
at the same time change operating system access methods. This chapter has
procedures to guide you through making each type of change. If you are making
more than one change at a time, you should look at the flowchart, Figure 223 on
page 413. When used in conjunction with the individual procedures in this chapter,
the flowchart guides you in making some types of multiple changes to the database.
Also, some of the tuning changes you make can affect the logic in application
programs. You can often use the dictionary to analyze the effect before making
changes. In addition, some changes require that you code new DBDs and PSBs. If
you initialize your changes in the dictionary, you can then use the dictionary to help
create new DBDs and PSBs.
If you are using data sharing, additional information about tuning is in IMS Version
9: Administration Guide: System.
Two database types, DEDB and HALDB, support online reorganization in addition to
the offline methods of reorganization discussed here. For more information on the
online reorganization of each of these types of databases, see:
v For HALDB, see “HALDB Online Reorganization” on page 364
v For DEDB, search for High-Speed DEDB Direct Reorganization utility
(DBFUHDR0) in IMS Version 9: Utilities Reference: Database and Transaction
Manager
Related Reading: See Chapter 16, “Modifying Databases,” on page 423, for
information on making structural changes to your database.
IMS reclaims storage used for KSDS control intervals (CIs) whose erasure has
been committed in data-sharing or XRF environments. This function is not, however,
a replacement for routine reorganization of KSDS data sets. VSAM CI space
reclamation enhances the performance of database GETS or INSERTS after mass
deletes occur in data-sharing or XRF environments.
Restriction: CI reclaim does not occur for SHISAM databases. When a large
number of records in a SHISAM database are deleted, particularly a large number
of consecutive records, serious performance degradation can occur. Eliminate
empty CIs and resolve the problem by using VSAM REPRO.
The DB Monitor can aid in monitoring a database to help you determine when it is
time to reorganize your database. Information about the DB Monitor is found in
Chapter 14, “Monitoring Databases,” on page 335.
space. You should take an image copy of your database as soon as it is reloaded
and before any application programs are run against it. Taking an image copy
provides you with a backup copy of the database and establishes a point of
recovery with DBRC in case of system failure. You can create image copies of your
database using the Database Image Copy utility or the Database Image Copy 2
utility, which are described in detail in IMS Version 9: Utilities Reference: Database
and Transaction Manager.
Related Reading: For more information about reorganization utilities, see the IMS
Version 9: Utilities Reference: Database and Transaction Manager.
The reorganization utilities can be classified into three groups, based on the type of
reorganization you plan to do:
v Partial reorganization
v Reorganization using UCF
v Reorganization without UCF
If your database does not use logical relationships or secondary indexes, you
simply run the appropriate unload and reload utilities, which are as follows:
v For HISAM databases, the HISAM Reorganization Unload utility and the HISAM
Reorganization Reload utility
v For HIDAM index databases (if reorganized separately from the HIDAM
database), the HISAM Reorganization Unload utility and the HISAM
Reorganization Reload utility
v For SHISAM, HDAM, and HIDAM databases, the HD Reorganization Unload
utility and the HD Reorganization Reload utility
If your database does use logical relationships or secondary indexes, you need to
run the HD Reorganization Unload and Reload utilities (even if it is a HISAM
database). In addition, you must run a variety of other utilities to collect, sort, and
restore pointer information from a segment’s prefix. Remember, when a database is
reorganized, the location of segments changes. If logical relationships or secondary
indexes are used, prefixes must be updated to reflect the new segment locations. The various
utilities involved in updating segment prefixes are:
v Database Prereorganization utility
v Database Scan utility
v Database Prefix Resolution utility
v Database Prefix Update utility
These utilities can also be used to resolve prefix information during initial load of
the database.
In the discussion of the utilities in this section, the four unload and reload utilities
are discussed first. The four utilities used to resolve prefix information are then
discussed. When reading through the utilities for the first time, you need to
understand that, if logical relationships or secondary indexes exist (requiring use of
the latter four utilities), the sequence in which operations occur is as follows:
1. Unload
2. Collect more prefix information
3. Reload
4. Collect more prefix information
5. Update prefixes
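The data-set flow among the four prefix utilities (each described individually later in this section) can be modeled as a simple pipeline. The utility and data set names come from this discussion; the module names in parentheses are the usual ones but should be verified against IMS Version 9: Utilities Reference: Database and Transaction Manager:

```python
# Each step names the utility, the data sets it consumes, and the data
# sets it produces, following the flow described in the text.
PIPELINE = [
    ("Database Prereorganization (DFSURPR0)", [], ["DFSURCDS"]),
    ("Database Scan (DFSURGS0)", ["DFSURCDS"], ["DFSURWF1"]),
    ("HD Reorganization Reload (DFSURGL0)", ["DFSURCDS"], ["DFSURWF1"]),
    ("Database Prefix Resolution (DFSURG10)",
     ["DFSURCDS", "DFSURWF1"], ["DFSURWF3"]),
    ("Database Prefix Update (DFSURGP0)", ["DFSURWF3"], []),
]

# Every data set a step consumes must have been produced by an earlier step.
produced = set()
for utility, inputs, outputs in PIPELINE:
    missing = [ds for ds in inputs if ds not in produced]
    assert not missing, f"{utility} needs {missing} before it can run"
    produced.update(outputs)
print("pipeline dataflow is consistent")
```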
You will find, for instance, that the HD Reorganization Reload utility does not just
reload the database if a secondary index or logical relationship exists. It reloads the
database using one input as a data set containing some of the prefix information
that has been collected. It then produces a data set containing more prefix
information as output from the reload. When the various utilities do their processing,
they use data sets produced by previously executed utilities and produce data sets
for use by subsequently executed utilities. When reading through the utilities, watch
the input and output data set names, to understand what is happening.
Figure 195 shows you the sequence in which utilities are executed if logical
relationships or secondary indexes exist. Figure 196 on page 347 shows the
sequence for these utilities when using HALDB partitions.
Figure 195. Steps in Reorganizing When Logical Relationships or Secondary Indexes Exist
|
| Figure 196. Steps for Reorganizing HALDB Partitions When Logical Relationships or
| Secondary Indexes Exist
| As an alternative, where Figure 196 calls for the Partition Initialization utility, you
| can run the Prereorganization utility.
You use the HISAM Unload utility to unload a HISAM database or HIDAM index
database. (SHISAM databases are unloaded using the HD Reorganization Unload
utility.) If your database uses secondary indexes, you also use the HISAM Unload
utility (later in the reorganization process) to perform a variety of other operations
associated with secondary indexes.
You use the HISAM reload utility to reload a HISAM database. (SHISAM databases
are reloaded using the HD Reorganization Reload utility.) You also use the HISAM
reload utility to reload the primary index of a HIDAM database. If your databases
use secondary indexes, you use the HISAM reload utility (later in the reorganization
process) to perform a variety of other operations associated with secondary
indexes.
The DFSURWF1 work data set will become input to the Database Prefix Resolution
utility. Note in Figure 200 that, if the database being reloaded has a primary index, it
The Database Prereorganization utility produces the DFSURCDS control data set,
which contains information about what pointers need to be resolved later if
secondary indexing or logical relationships exist. The DFSURCDS control data set
produced by the Prereorganization utility is used as input to the following:
v The Database Scan utility, if that utility needs to be run
v The HD Reorganization Reload utility, if secondary indexing or logical
relationships exist
v The Database Prefix Resolution utility, after the database is loaded or reloaded
The Prereorganization utility also produces a list of which databases not being
initially loaded or reorganized contain segments involved in logical relationships with
the database that is being initially loaded or reorganized.
This utility is always run before the database is loaded (for initial load) or reloaded
(for reorganization).
You use the Database Scan utility to scan databases that are not being initially
loaded or reorganized but contain segments involved in logical relationships with
databases that are being initially loaded or reorganized. For input, the utility uses
the DFSURCDS control data set created by the Prereorganization utility. For output,
the utility produces the DFSURWF1 work data set, which contains prefix information
needed to resolve logical relationships. The DFSURWF1 work data set is used as
input to the Database Prefix Resolution utility.
This utility is always run before the database is loaded (for initial load) or reloaded
(for reorganization).
You use the Prefix Resolution utility to accumulate and sort the information that has
been put on DFSURWF1 work data sets up to this point in the load or reload
process. The various work data sets that could be input to this utility are:
v The DFSURCDS control data set produced by the Prereorganization utility
v The DFSURWF1 work data set produced by the scan utility
v The DFSURWF1 work data set produced by the HD Reorganization Reload utility
The DFSURWF1 work data sets must be concatenated to form an input data set for
the Prefix Resolution utility. The name of the input data set is SORTIN.
The Prefix Resolution utility uses the z/OS sort/merge programs to sort the
information that has been accumulated. For output, the utility produces the
DFSURWF3 work data set, which contains the sorted prefix information needed to
resolve logical relationships. The DFSURWF3 data set will become input to the
Database Prefix Update utility.
If secondary indexes exist, the utility produces the DFSURIDX work data set, which
contains the information needed to create a new secondary index or update a
shared secondary index database. The DFSURIDX work data set is used as input
to the HISAM unload utility. The HISAM unload utility formats the secondary index
information before the HISAM reload utility creates a secondary index or updates a
shared secondary index database.
This utility is always run after the database is loaded (for initial load) or reloaded
(for reorganization).
You use the Prefix Update utility to update the prefix of each segment whose prefix
was affected by the initial loading or reorganization of the database. The prefix
fields that are updated include the logical parent, logical twin, and logical child
pointer fields, and the counter fields for logical parents. The Prefix Update utility
uses as input the DFSURWF3 data set created by the Prefix Resolution utility.
This utility is always run after the database is loaded (for initial load) or reloaded
(for reorganization) and after the Prefix Resolution utility has been run.
Each of these operations is done separately. That is, none of them can be done in
conjunction with running the HISAM unload and reload utilities to unload or reload a
regular database.
Figure 205 on page 354 shows the input to and output from the HISAM unload and
reload utilities when performing the first three operations. The DFSURIDX work data
set used as input to the HISAM unload utility was created by the Prefix Resolution
utility. It contains the information needed to create or update a shared secondary
index database. The HISAM unload utility formats the secondary index information
for use by the HISAM reload utility. Note that the input control statement to the
HISAM unload utility has an X in position 1 when the utility is used for secondary
indexing operations rather than for unloading a regular database. Position 3
contains one of the following characters:
v M: means the operation is either to build a new secondary index database or
merge a secondary index into a shared secondary index database
v R: means the operation is to replace a secondary index into a shared secondary
index database
The HISAM reload utility uses the output from the HISAM unload utility to create the
new secondary index or merge or replace the secondary index in a shared
secondary index database.
Figure 206 on page 355 shows the input to and output from the HISAM unload
utility when an index is being extracted from a set of shared indexes. Note that the
input can be one of the following:
v The DFSURIDX work data set created by the Prefix Resolution utility
v The shared secondary index database
Figure 205. HISAM Reorganization Unload and Reload Utilities Used for Create, Merge, or
Replace Secondary Indexing Operations
Figure 206. HISAM Reorganization Unload Utility Used for Extract Secondary Indexing
Operations
Use the Surveyor utility to scan all or part of an HDAM or a HIDAM database to
determine whether a reorganization is needed. The Surveyor utility produces a
report describing the physical organization of the database. The report includes the
size and location of areas of free space. When you do a partial reorganization, you
will know where free space exists into which you can put your reorganized
database records.
You would use the Partial Database Reorganization utility to reorganize parts of
your HD database. It can be used when HD databases use secondary indexes or
logical relationships. You tell the utility what range of records you need reorganized.
v In an HDAM database, a range is a group of database records with continuous
relative block numbers.
v In a HIDAM database, a range is a group of database records with continuous
key values.
Generally, before using the Partial Database Reorganization utility, you would run
the Database Surveyor utility (described in “Database Surveyor Utility
(DFSPRSUR)” on page 355). The Surveyor utility helps you determine whether a
reorganization is needed and find the location and size of areas of free space. You
need to know the location and size of areas of free space so you will know where
to put reorganized database records.
The Partial Database Reorganization utility reorganizes the database in two steps:
1. In the first step, the utility produces control tables for use in Step 2, which is
when the actual reorganization is done. As an option, the utility can produce
PSB source statements for creating a PSB for use in Step 2. The utility also
generates reports that show which logically related segments in logically related
| Reorganizing HALDBs
| One of the primary advantages of HALDB is its simplified and shortened
| reorganization process and the ability to reorganize HALDB databases online using
| the integrated HALDB Online Reorganization function.
| Figure 209 on page 360 shows the offline processes used to reorganize a HALDB
| database with logical relationships and secondary indexes. In this case, the
| partitions are reorganized by parallel processes. Each partition can be unloaded
| and reloaded in less time than unloading and reloading the entire database. This is
| much faster than the process for a non-HALDB full-function database. Additionally,
| no time is required for updating pointers in the logically related database or
| rebuilding secondary indexes. This further shortens the process.
|
|
| Figure 209. Offline Reorganization of a HALDB database
|
| Related Reading: To compare the HALDB reorganization process illustrated in
| Figure 209 with the reorganization process for other full-function databases, see the
| flow chart of the steps for reorganizing non-HALDB databases that use logical
| relationships or secondary indexes in Figure 195 on page 346.
| Related Reading:
| Do not include DD statements for the HALDB database data sets. The HD
| Reorganization Unload utility uses dynamic allocation for HALDB data sets. This is
| not true for non-HALDB databases.
| Requirement: You must supply buffer pools for all data sets in the partitions that
| are unloaded. This includes the ILDSs.
|
| Figure 210. Example: The HD Reorganization Unload Utility Control Statement to Unload
| One Partition
|
| Figure 211. Example: The HD Reorganization Unload Utility Control Statement to Unload
| Multiple Partitions
| Figure 212 on page 362 shows a sample job that unloads a HALDB partition.
|
| Related Reading: For more information on the High Performance Unload tool, see
| IBM DB2 and IMS Tools: IMS High Performance Unload for OS/390.
| If you delete and redefine partition data sets, but do not reload data into them, you
| must initialize the partition data sets. If you reload data into the partition data sets
| after deleting and redefining them, you do not need to initialize the partition data
| sets.
| If you delete and redefine VSAM data sets, you receive a z/OS IEC161I system
| message when reloading a partition. This is not an error message. It indicates that
| a VSAM data set was empty when it was opened. Figure 213 shows the message
| for an ILDS.
|
| IEC161I 152-061,JOUKO3D,RELOAD,PEO01L,,,
| IEC161I JOUKO3.HALDB.DB.PEOPLE.L00001,
| IEC161I JOUKO3.HALDB.DB.PEOPLE.L00001.DATA,CATALOG.TOTICF2.VTOTCAT
|
| Figure 213. Example: IEC161I message during reload
|
| Related Reading: For more information on IEC system messages, see z/OS V1R4:
| MVS System Messages, Vol 7 (IEB-IEE).
| Do not include DD statements for the HALDB database data sets. The HD
| Reorganization Reload utility uses dynamic allocation for HALDB data sets. This is
| not true for non-HALDB databases.
| You must supply buffer pools for all data sets in the partitions that are reloaded.
| This includes the ILDSs.
| The HD Reorganization Reload utility sets the image copy needed flag for data sets
| in partitions that it loads. You should image copy them as you would any database
| data sets after they have been reloaded.
| Figure 214 shows a sample job that reloads HALDB partitions. The partitions it
| reloads depend on the records in the input file.
|
| //JOUKO3D JOB (999,POK),JOUKO3,CLASS=A,NOTIFY=&SYSUID,
| // MSGLEVEL=(1,1),MSGCLASS=X,REGION=0M
| //JOBLIB DD DSN=IMSPSA.IMS0.SDFSRESL,DISP=SHR
| // DD DSN=IMSPSA.IM0A.MDALIB,DISP=SHR
| //*******************************************************************
| //* HD RELOAD FOR THE PEOPLE DATABASE
| //*******************************************************************
| //RELOAD EXEC PGM=DFSRRC00,REGION=1024K,
| // PARM='ULU,DFSURGL0,PEOPLE,,,,,,,,,,,Y,N'
| //DFSRESLB DD DSN=IMSPSA.IMS0.SDFSRESL,DISP=SHR
| //IMS DD DISP=SHR,DSN=JOUKO3.HALDB.DBDLIB
| //DFSUINPT DD DSN=JOUKO3.UNLOAD.PEOPLE,DISP=OLD
| //DFSVSAMP DD *
| VSRBF=8192,50
| IOBF=(4096,50)
| /*
| //SYSPRINT DD SYSOUT=*
| //DFSSTAT DD SYSOUT=*
|
| Figure 214. Example: JCL to Reload a HALDB Partition
|
| ILDS Reorganization Updates: The HD Reorganization Reload utility updates the
| ILDS for partitions that contain targets of logical relationships or secondary indexes.
| The utility has three options for updating ILDSs:
| v No control statement
| v NOILDS control statement
| v ILDSMULTI control statement
| If you do not specify a control statement in the SYSIN data for the HD
| Reorganization Reload utility, an ILDS entry is updated or created when a target of
| a secondary index or logical relationship is inserted in the partition. An entry exists if
| a previous reorganization loaded the target segment in the partition. The updates to
| the ILDS are done in VSAM update mode. When a CI or CA is filled, it must be split
| by VSAM. Free space in the ILDS can help avoid these splits. Updates can be
| random or sequential. This depends on the order in which these segments are
| inserted and their ILKs. The ILDS keys are based on the ILK that is based on the
| location of the target segment when it was created.
| You can create free space in an ILDS by copying it using the VSAM REPRO
| command. The REPRO command honors the free space parameters in the VSAM
| DEFINE.
| You can delete and redefine the ILDS before reloading. You might want to do this to
| eliminate entries in the ILDS for target segments that are no longer in the partition.
| The HD Reorganization Reload utility never deletes an entry in the ILDS. The only
| way to delete these entries is to delete and redefine the ILDS. However, an empty
| ILDS contains no free space. A reload with a large number of target segments
| might require a large number of CI and CA splits.
| The ILDSMULTI option applies only to migration reloads. For more information
| about ILDSMULTI, see the HD Reorganization Reload utility section of IMS Version
| 9: Utilities Reference: Database and Transaction Manager.
| The HD Reorganization Unload utility and the HD Reorganization Reload utility can
| be used to reorganize PSINDEX databases. The restrictions and recommendations
| for reorganizing other HALDB databases also apply to PSINDEX databases with
| one exception: HALDB secondary indexes have no ILDSs. The HD Reorganization
| Reload utility control statements should not be used with secondary indexes.
| The steps for reorganizing a PSINDEX database are the same as those for
| reorganizing other types of HALDBs offline. See “Overview of HALDB Offline
| Reorganization” on page 359 for a list of these steps.
| The initial load or offline reorganization reload of a HALDB partition always uses the
| A-through-J (and X) data sets. Until the first time that you reorganize a HALDB
| partition online, only the A-through-J (and X) data sets are used.
| During the initialization phase, IMS updates the RECON data sets to establish the
| ownership of the online reorganization by the IMS system that is performing the
| online reorganization. This ownership means that no other IMS system can perform
| a reorganization of the HALDB partition until the current online reorganization is
| complete or until ownership is transferred to another IMS system. IMS adds the
| M-V (and Y) DBDSs to the RECON data sets if those DBDS records do not already
| exist. IMS also adds the M-V (and Y) DBDSs to any existing change accumulation
| groups and DBDS groups that include the corresponding A-J (and X) DBDSs.
| Before online reorganization begins for a HALDB partition, there is a single set of
| active data sets for the HALDB partition. These active data sets are the input data
| sets for the copying phase. There might also be a set of inactive data sets from a
| prior online reorganization that are not used by IMS application programs.
| During the initialization phase, IMS evaluates each of the inactive data sets to
| ensure that it meets the requirements for output data sets (see “HALDB Online
| Reorganization Requirements for Existing Output Data Sets” on page 545). If any of
| the output data sets does not exist, IMS creates it automatically during this phase.
| At the end of the initialization phase, IMS treats the original active set of data sets
| as the input set and the inactive data sets as the output set. This use of the input
| and output sets of data sets is represented by the cursor-active status for the
| partition, which is recorded in online reorganization records in the RECON data
| sets. A listing of the partition’s database record in the RECON data sets shows
| OLREORG CURSOR ACTIVE=YES. A listing of the partition also shows that both sets of
| DBDSs are active: the first set of DBDSs listed is for the input data set and the
| second set of DBDSs is for the output data set, for example, DBDS ACTIVE=A-J and
| M-V. While the partition is in the cursor-active status, both sets of data sets must be
| available for the partition to be processed by any application.
| Figure 215 shows part of a listing of the RECON data sets for a HALDB partition
| that has the cursor-active status.
|
| -------------------------------------------------------------------------------
| 04.174 12:30:54.1 LISTING OF RECON PAGE 0003
| -------------------------------------------------------------------------------
| DB
| DBD=POHIDKA MASTER DB=DBOHIDK5 IRLMID=*NULL CHANGE#=2 TYPE=PART
| USID=0000000004 AUTHORIZED USID=0000000004 HARD USID=0000000004
| RECEIVE USID=0000000004 RECEIVE NEEDED USID=0000000000
| DBRCVGRP=**NULL**
| DSN PREFIX=IMSTESTS.DBOHIDK5 PARTITION ID=00001
| PREVIOUS PARTITION=**NULL** NEXT PARTITION=POHIDKB
| OLRIMSID=**NULL** ACTIVE DBDS=A-J and M-V
|
| FREE SPACE:
| FREE BLOCK FREQ FACTOR=0 FREE SPACE PERCENTAGE=0
|
| PARTITION HIGH KEY/STRING (CHAR): (LENGTH=5 )
| K2000
| PARTITION HIGH KEY/STRING (HEX):
| D2F2F0F0F0404040404040404040404040404040404040404040404040404040
|
| OSAM BLOCK SIZE:
| A = 4096
| B = 4096
|
| FLAGS: COUNTERS:
| BACKOUT NEEDED =OFF RECOVERY NEEDED COUNT =0
| READ ONLY =OFF IMAGE COPY NEEDED COUNT =0
| PROHIBIT AUTHORIZATION=OFF AUTHORIZED SUBSYSTEMS =0
| HELD AUTHORIZATION STATE=0
| EEQE COUNT =0
| TRACKING SUSPENDED =NO RECEIVE REQUIRED COUNT =0
| OFR REQUIRED =NO OLR ACTIVE HARD COUNT =0
| PARTITION INIT NEEDED =NO OLR INACTIVE HARD COUNT =0
| OLREORG CURSOR ACTIVE =YES
| PARTITION DISABLED =NO
| ONLINE REORG CAPABLE =YES
|
| Figure 215. Example RECON Listing: DB Record for a HALDB in Cursor-Active Status
|
| During the initialization phase, various error conditions, such as an unacceptable
| preexisting data set or an insufficient amount of disk space for an automatically
| created data set, can cause the initialization to fail. However, if an error occurs
| during or after the data set creation and validation process, but before IMS records
| the cursor-active status in the RECON data sets, any automatically created output
| data sets are retained along with any preexisting ones.
| While IMS reorganizes a HALDB partition online, IMS applications can make
| database updates to the partition. Some of the database updates are made to the
| input data sets, while others are made to the output data sets, depending on which
| data is updated by the application. Which data sets are updated is transparent to
| the application program. Figure 216 illustrates the relationship between the input
| and output data sets at a point during the online reorganization.
|
Figure 216. The Relationship between Input Data Sets and Output Data Sets during the
Online Reorganization of a HALDB Partition
| Figure 216 shows two sets of database data sets for a HALDB partition, the input
| data sets that have not been reorganized and the output data sets that have been
| (at least partially) reorganized. The figure shows the reorganization as progressing
| from left to right, from the input data sets above to the output data sets below. The
| data sets in the figure are divided into four areas:
| 1. Data within the input data sets that has been copied to the output data sets.
| This area reflects the old data organization (prior to the reorganization), and is
| not used again by IMS applications until the data sets are reused as the output
| data sets for a later online reorganization.
| 2. Data within the output data sets that has been copied from the input data sets.
| The data in this area has been reorganized, and can be used by IMS
| applications during the reorganization.
| 3. Data within both the input and output data sets that is locked and in the process
| of being copied and reorganized from the input data sets to the output data
| sets. This area of locked records is called a unit of reorganization. From a
| recovery point of view, this unit of reorganization is equivalent to a unit of
| recovery.
| While IMS processes the current unit of reorganization, IMS applications that
| access any of the locked data records must wait until IMS completes the
| reorganization for those records. After the copying and reorganization completes
| for the unit of reorganization, IMS commits the changes and unlocks the
| records, thus making them available again for IMS applications.
| 4. Data within the input data sets that has not yet been copied to the output data
| sets. This data has also not yet been reorganized, and can be used by IMS
| applications during the reorganization.
| As the online reorganization progresses, IMS uses a kind of pointer called a cursor
| to mark the end point of those database records that have already been copied
| from the input data sets to the output data sets. As the reorganization and copying
| proceeds, this cursor moves through the partition (from left to right in Figure 216).
| When an IMS application program accesses data from a HALDB partition that is
| being reorganized online, IMS retrieves the data record:
| v From the output data sets if the database record is located “at or before” the
| cursor.
| v From the input data sets if the database record is located “after” the cursor.
| If the data record happens to fall within the unit of reorganization, IMS retries the
| data access after the records are unlocked. An application program does not
| receive an error status code for data within a unit of reorganization.
| To allow recovery of either an input data set or an output data set, all database
| changes are logged during the online reorganization, including the database records
| that are copied from the input data set to the output data sets.
| After the copying phase is complete for a HALDB partition, the output data sets
| become the active data sets, and the input data sets become the inactive data sets.
| The active data sets are used for all data access by IMS application programs. The
| inactive data sets are not used by application programs, but can be reused for a
| subsequent online reorganization. Unless you perform an initial load or a batch
| reorganization reload for the partition, successive online reorganizations for the
| partition alternate between these two sets of data sets.
| IMS updates the partition’s database record in the RECON data sets to reset the
| cursor-active status for the partition to reflect that there is now just one set of data
| sets. A listing of this record from the RECON data sets shows OLREORG CURSOR
| ACTIVE=NO and the ACTIVE DBDS field shows the active (newly reorganized) data
| sets. IMS also updates the online reorganization records in the RECON data sets
| with the timestamp of when the reorganization completed.
| If you specified the DEL keyword for the INITIATE OLREORG command (or the UPDATE
| OLREORG command), IMS deletes the inactive data sets after resetting the
| cursor-active status for the partition. Before deleting the inactive data sets, IMS
| notifies all sharing IMS systems, including batch jobs, that the online reorganization
| is complete and is recorded in the RECON data sets. The IMS system that is
| performing the online reorganization waits until it receives an acknowledgement
| from each of these sharing IMS systems that they have closed and deallocated the
| now-inactive data sets, and then it deletes these data sets. However, if the
| acknowledgements are not received within 4.5 minutes, the owning IMS system will
| attempt to delete the inactive data sets anyway. Without the acknowledgements, the
| deletion attempt is likely to fail.
| Finally, at the end of the termination phase, IMS updates the RECON data sets to
| reset the ownership of the online reorganization so that no IMS system has
| ownership. This resetting of ownership means that any IMS system can perform a
| subsequent reorganization of the HALDB.
| Figure 217 shows the normal processing steps of a successful online reorganization
| of a HALDB partition. The columns represent the flow of control through the phases
| of the online reorganization, from the user to IMS, and the status of the data sets
| as the processing events occur.
|
| Table 27 shows the IMS versions that can access HALDBs that are capable of
| being reorganized online.
| Table 27. IMS Versions that Can Access HALDBs that Are Capable of Being Reorganized
| Online
|
|   IMS Version                                  Access to HALDB partitions that are
|                                                capable of being reorganized online?
|   IMS Version 7                                No
|   IMS Version 8                                No
|   IMS Version 8 with the OLR Coexistence SPE   Yes
|   IMS Version 9                                Yes
|
| You must apply the IMS Version 8 OLR Coexistence SPE to allow full data sharing
| between IMS Version 8 and IMS Version 9 systems that have HALDBs that are
| capable of being reorganized online.
| You must use the following IMS Version 9 (or later) utilities to process HALDBs that
| are capable of being reorganized online:
| v Database Recovery
| v Database Image Copy
| v Database Image Copy 2
| v Database Change Accumulation
| For any partitions with M-through-V (and Y) data sets active, or for any partitions
| with an active HALDB Online Reorganization cursor, you must run an offline
| reorganization before you can fall back to using the IMS Version 8 utilities.
| Should fallback to a prior version become necessary, you must define all the
| HALDBs as no longer capable of being reorganized online. For IMS Version 7
| systems and IMS Version 8 systems that do not have the OLR Coexistence SPE
| applied, you can access only those HALDBs that are not capable of being
| reorganized online. After fallback, HALDBs that are capable of being reorganized
| online are unavailable until you complete the following actions:
| 1. Using the IMS Version 9 offline reorganization utility, reorganize all partitions
| that have the M-through-V (and Y) data sets active; these data sets could be
| active either because the partition has the cursor-active status or because these
| are the only data sets for the partition.
| 2. Define the partitions as no longer capable of being reorganized online by using
| the command CHANGE.DB DBD(HALDB_master) OLRNOCAP.
| v You can perform an online reorganization only for a HALDB that is defined in the
| RECON data sets as capable of being reorganized online (OLRCAP). For more
| information about the OLRCAP parameter, see the INIT.DB command or the
| CHANGE.DB command in the IMS Version 9: Database Recovery Control (DBRC)
| Guide and Reference.
| v You cannot start an online reorganization for a partition if another IMS system
| already owns an online reorganization for that partition.
| v You cannot make data definitional changes during an online reorganization of a
| partition. HALDB Online Reorganization provides only reclustering and space
| distribution advantages.
| v Image copy for a partition is not allowed if the partition is in the cursor-active
| status. This restriction applies even if the online reorganization terminated before
| the cursor-active status has been reset and the online reorganization for the
| partition is not owned by any IMS.
| v To backout in-flight work from an online reorganization, you must run a batch
| backout using a DL/I region type.
| v To use a type-2 command to start an online reorganization for a HALDB partition,
| you must have an IMS Common Service Layer that includes the Operations
| Manager and the Structured Call Interface. See the IMS Version 9: Common
| Service Layer Guide and Reference for more information.
| v HALDB Online Reorganization runs only in a local storage option-subordinate
| (LSO=S) environment. IMS rejects attempts to initiate an online reorganization for
| a HALDB partition in a local storage option-yes (LSO=Y) environment. For more
| information about the LSO specification, see the IMS Version 9: Installation
| Volume 2: System Definition and Tailoring.
| v You cannot perform an online reorganization for a HALDB partition from an
| alternate IMS system in an XRF complex. However, after an XRF takeover, the
| new active IMS system will continue a reorganization that was active when the
| takeover process began.
| v You cannot perform an online reorganization for a HALDB partition from a
| tracking IMS system in an RSR complex. However, for HALDBs that are
| registered as DBTRACK at the tracking IMS system, IMS tracks the effects of an
| online reorganization in the same way it tracks updates to any database. See
| “IMS Remote Site Recovery Processing for HALDB Online Reorganization” on
| page 378 for more information.
| v You cannot issue the following commands for a HALDB partition while it is being
| reorganized online:
| – /START DATABASE or UPDATE DATABASE NAME(name) START(ACCESS)
| – /DBRECOVERY DATABASE or UPDATE DATABASE NAME(name) STOP(ACCESS)
| – /DBDUMP DATABASE or UPDATE DATABASE NAME(name) STOP(UPDATES)
| – /STOP DATABASE or UPDATE DATABASE NAME(name) STOP(SCHD)
| If you issue any of these commands for a HALDB partition that is actively being
| reorganized online, IMS displays error message DFS0488I and does not process
| the command for the named partition. For more information about these
| commands, see the IMS Version 9: Command Reference. For more information
| about message DFS0488I, see the IMS Version 9: Messages and Codes,
| Volume 2.
| v You cannot issue the following commands for a HALDB master while any of its
| partitions is being reorganized online:
| – /START DATABASE ACCESS UP or UPDATE DATABASE NAME(name) START(ACCESS)
| – /DBRECOVERY DATABASE or UPDATE DATABASE NAME(name) STOP(ACCESS)
| The data set names for the output data sets are identical to the names of the
| corresponding input data sets, except for the IMS-assigned data set name type
| character (A-through-J, M-through-V, X, or Y). Table 28 shows example data set
| names.
| Table 28. Data Set Name Examples for HALDB Online Reorganization
|
|   Active Data Set Before   Data Set Group   Partition   Input Data Set   Output Data Set
|   Online Reorganization    or Index         ID          Name             Name
|   A-through-J (and X)      1                00003       DH41.A00003      DH41.M00003
|   A-through-J (and X)      Index            00065       ACCT.X00065      ACCT.Y00065
|   M-through-V (and Y)      2                00005       PAY.MST.N00005   PAY.MST.B00005
|   M-through-V (and Y)      8                00001       PAY.EMP.T00001   PAY.EMP.H00001
|
| Any existing output data sets must have the characteristics described in “HALDB
| Online Reorganization Requirements for Existing Output Data Sets” on page 545.
| Any data in the existing output data sets is overwritten during the copying phase of
| an online reorganization. Output data sets that IMS creates for the online
| reorganization have the characteristics described in “Attributes of
| Automatically-Created Output Data Sets” on page 545.
| Related Reading:
| v For more information about the INITIATE OLREORG, /INITIATE OLREORG, UPDATE
| OLREORG and /UPDATE OLREORG commands, see the IMS Version 9: Command
| Reference.
| v For more information about the CHANGE.DB and the INIT.DB commands, see the
| IMS Version 9: Database Recovery Control (DBRC) Guide and Reference.
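For illustration, starting an online reorganization with automatic deletion of the inactive data sets might look like the following, shown in both the type-2 and type-1 forms. The partition name POHIDKA is taken from the earlier RECON listing; check the IMS Version 9: Command Reference for the exact keyword forms.

```
INITIATE OLREORG NAME(POHIDKA) OPTION(DEL)
/INITIATE OLREORG NAME(POHIDKA) OPTION(DEL)
```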
| Related Reading: For more information about the /DISPLAY DB, /DISPLAY DB OLR,
| QUERY DB, and QUERY OLREORG commands, see the IMS Version 9: Command
| Reference.
| Table 31. Mapping Modifying and Tuning Tasks to Commands for HALDB Online
| Reorganization (continued)
|
|   Task: Specify whether to delete the inactive data sets after the copying
|   phase completes.
|   Command: /UPDATE OLREORG OPTION(DEL | NODEL)
|   Command Type: Type 1
|
| Related Reading: For more information about the TERMINATE OLREORG, /TERMINATE
| OLREORG, UPDATE OLREORG, and /UPDATE OLREORG commands, see the IMS Version 9:
| Command Reference.
| Example: Figure 218 on page 376 shows the processing steps for an online
| reorganization of a HALDB partition and how it is affected by a TERMINATE OLREORG
| command that temporarily stops the reorganization:
| v When you issue the TERMINATE OLREORG command, IMS terminates the
| reorganization by entering the termination phase.
| v Later, when you issue the INITIATE OLREORG command, IMS restarts the
| reorganization from the initialization phase, then proceeds to the copying phase.
| In the figure, the reorganization then completes successfully through the
| termination phase.
| Note that there are two sets of data sets for the second initialization phase because
| the reorganization is not complete.
| In the figure, the columns represent the flow of control through the phases of the
| online reorganization, from the user to IMS, and the status of the data sets as the
| processing events occur.
|
Figure 218. Processing Steps for an Interrupted Online Reorganization of a HALDB Partition
| The default value for the RATE parameter is 100, which allows the online
| reorganization to run as fast as possible, depending on system resources, system
| contention, and log contention, with no intentionally introduced delay. However, if
| you set the RATE value to 25, for example, IMS adds a delay to the reorganization
| processing so that 25% of the total processing time for a unit of reorganization is
| spent copying the data, and the remaining 75% is spent in an intentionally
| introduced delay. Thus, RATE(25) would cause the online reorganization to take
| approximately four times as long to run as it would have run with RATE(100).
| You can change the RATE value at any time by issuing the UPDATE OLREORG
| command.
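For example, to slow an active reorganization so that only 25% of each unit of reorganization is spent copying, and later to restore full speed, the commands might look like this (the partition name is taken from the earlier RECON listing):

```
UPDATE OLREORG NAME(POHIDKA) SET(RATE(25))
UPDATE OLREORG NAME(POHIDKA) SET(RATE(100))
```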
| If IMS terminates abnormally while any online reorganizations are running, IMS
| dynamically backs out all uncommitted changes for these reorganizations to the
| most recent sync point. After IMS restarts, IMS automatically resumes the online
| reorganizations.
| Likewise, when an XRF takeover occurs, IMS automatically resumes the online
| reorganizations on the new active IMS system.
| When you restart the IMS system, IMS does not resume the online reorganization
| because the partitions are not authorized after the Fast Database Recovery
| (FDBR) region terminates.
| IMS stops the shadow partition if errors occur during the validation or creation of
| the output data sets. The tracked partition at the active site is unaffected by errors
| at the tracking site. After you correct the problem that caused the error, restart the
| shadow partition on the tracking IMS system to initiate online forward recovery for
| the partition and to continue tracking.
| If the output data sets for the online reorganization already exist at the tracking site
| before tracking begins, ensure that these data sets have the same characteristics (such
| as block size, record size, and control interval size) as those at the active site. See
| “HALDB Online Reorganization Requirements for Existing Output Data Sets” on
| page 545 for the data set characteristics. If you change output data set
| characteristics manually at the active site, you must make the same changes at the
| tracking site.
| After an RSR takeover, IMS stops all HALDB partitions, including those that had
| online reorganizations in process. After you rebuild the primary index and indirect
| list data sets using the HALDB Index/ILDS Rebuild utility (DFSPREC0) at the new
| active site, issue the INITIATE OLREORG command to resume the online
| reorganizations, if needed. The online reorganizations are not automatically
| restarted after takeover.
| Recommendations:
| v Consider using a second subpool to relieve database buffer contention for more
| than four concurrent online reorganizations.
| v Use the IBM CFSizer to model the additional coupling facility activities to ensure
| that your coupling facility configuration is capable of handling the extra load
| introduced by the online reorganizations:
| – For IRLM 2.1 with PC=NO specified, each additional 1000 concurrently held
| locks requires 256 KB of ECSA storage.
| – For IRLM 2.2, each additional 1000 concurrently held locks requires 540 KB
| obtained from IRLM private storage. No increase in ECSA storage is
| necessary.
| v Review your LOGL latch contention rate, OLDS logging rate, IRLM lock structure
| access, and DBBP (for OSAM) latch contention.
| Related Reading: For more information about these utilities, see the IMS Version 9:
| Utilities Reference: Database and Transaction Manager.
| To recover an output data set before the online reorganization completes, perform
| the following tasks:
| 1. Stop the online reorganization by using the TERMINATE OLREORG command. If the
| online reorganization encountered an abend, it is stopped automatically.
| 2. Issue the /DBR or the UPDATE DB command for the HALDB partition.
| 3. Run database change accumulation, as necessary. You can create the JCL by
| issuing the GENJCL.CA command, or you can run the Database Change
| Accumulation utility (DFSUCUM0) from your own JCL. The purge time for the
| change accumulation must be equal to the time of the beginning of the online
| reorganization to represent restoring from the initial empty state of the data set.
| See “Specifying a Purge Time for the Database Change Accumulation Utility” on
| page 381.
| 4. Create the output data set to be recovered, either by using a JCL DD statement
| or by using Access Method Services, as appropriate.
| 5. Recover the database changes. You can create the JCL by issuing the
| GENJCL.RECOV command. Alternatively, you can run the Database Recovery utility
| (DFSURDB0) from your own JCL with the DD statement for DFSUDUMP
| specified as DUMMY to indicate that there is no image copy from which to
| restore.
| 6. Run the Batch Backout utility (DFSBBO00), because you might need to back
| out uncommitted data.
| 7. After you have recovered, and possibly backed-out, all of the required data sets
| of the HALDB partition, issue the /STA DB or the UPDATE DB command for the
| HALDB partition.
| 8. Issue the INITIATE OLREORG command to resume the online reorganization.
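As a sketch, the command sequence for the steps above might look like the following. The partition and group names are examples only; the JCL generated by the GENJCL commands must still be submitted, and steps 4 and 6 (creating the data set and running batch backout) are performed with your own JCL.

```
/TERMINATE OLREORG NAME(POHIDKA)        Step 1: stop the reorganization
/DBR DB POHIDKA                         Step 2: take the partition offline
GENJCL.CA GRPNAME(DBOHIDKCA)            Step 3: generate change accum JCL (DBRC)
GENJCL.RECOV DBD(POHIDKA)               Step 5: generate recovery JCL (DBRC)
/STA DB POHIDKA                         Step 7: restart the partition
INITIATE OLREORG NAME(POHIDKA)          Step 8: resume the reorganization
```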
| You can also recover an output data set after the online reorganization completes
| but before an image copy has been made. Follow the same steps as for recovering
| an output data set before the online reorganization completes, except the steps for
| stopping and restarting the online reorganization.
| In addition, you can recover an output data set from a point other than the
| beginning of the online reorganization, such as from a full dump of a DASD volume,
| using existing procedures if the online reorganization is either completed or
| terminated.
| Specifying this purge time is necessary if change accumulation records (or an input
| log) that involve the output data set span the time that an online reorganization was
| started. Specifying the purge time eliminates database change records from before
| this point in time and is analogous to eliminating database change records from
| prior to the start time of an image copy.
| Specifying the Active Data Sets for the Database Image Copy Utilities: The
| database image copy utilities always copy from the currently active data sets that
| are recorded in the RECON data sets. Regardless of whether the A-through-J or
| the M-through-V data sets are active, you do not need to change the JCL or control
| statements for these utilities to specify which set of data sets to use.
| On the utility control statement for the Database Image Copy utility (DFSUDMP0),
| the DDNAME does not need to refer to the currently active data set. Regardless of
| whether the A-through-J or the M-through-V data sets are active, the utility
| automatically uses currently active data sets.
| Example: Assume that the data set for a second data set group defined in the DBD
| is to be copied, and that the partition name is PARTNO3. Regardless of which set
| of data sets is active, you can code a DDNAME of either PARTNO3B or
| PARTNO3N on the control statement. If the A-through-J data sets are active,
| whether you specify PARTNO3B or PARTNO3N, the utility copies from PARTNO3B.
| Likewise, if the M-through-V data sets are active, the utility copies from
| PARTNO3N.
| In the JCL statements for the Database Image Copy utility, you should omit the DD
| statement that refers to the input data set. Based on whether the A-through-J or the
| M-through-V data sets are active, the utility dynamically allocates the appropriate
| data set. A DD statement that refers to a specific data set name can cause the
| utility job to fail because of a “Data Set Not Found” condition during job-step
| initiation. This condition occurs if an inactive data set name is coded in the JCL and
| the data set does not exist.
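Continuing the PARTNO3 example, a sketch of an image copy job might look like the following. The output data set name and JCL details are hypothetical, and the D1 control statement layout (assumed here to name the partition, the DDNAME, and the output DD) should be verified against the utility documentation. Note that no DD statement refers to the input database data set:

```jcl
//IC1      EXEC PGM=DFSRRC00,PARM='ULU,DFSUDMP0,PARTNO3'
//* No DD statement for PARTNO3B or PARTNO3N: the utility
//* dynamically allocates whichever data set is currently active.
//DATAOUT1 DD DSN=IMS.IC.PARTNO3.G2,DISP=(NEW,CATLG),
//            UNIT=TAPE
//SYSIN    DD *
D1 PARTNO3  PARTNO3B DATAOUT1
/*
```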
| Using the SBONLINE statement causes IMS to load the sequential buffering
| modules during initialization so that, whenever you start an online reorganization for
| an OSAM partition, IMS activates sequential buffering immediately. If you do not
| include the SBONLINE statement, IMS analyzes the DL/I calls to determine whether
| sequential buffering is suited for processing the reorganization.
| The SBONLINE control statement has the following format:
|
| SBONLINE[,MAXSB=nnnnn]
| where nnnnn is the maximum amount of storage (in kilobytes) that can be allocated
| to sequential buffers.
| When the maximum amount of storage is reached, IMS stops allocating sequential
| buffers to online applications (including HALDB Online Reorganization) until these
| applications release sequential buffer space. If you do not specify the MAXSB=
| keyword, the maximum amount of storage for sequential buffers is unlimited. For
| more information about the SBONLINE control statement, see the IMS Version 9:
| Installation Volume 2: System Definition and Tailoring.
|
| Figure 219. HALDB Pointer Before a Reorganization
|
| Each secondary index entry and each logical child segment contains the key of its
| target record. For secondary indexes, the key of the target’s root segment is
| included in the prefix. For logical child segments, the concatenated key of the
| logical parent is included in the segment data.
| Each segment in a PHDAM or PHIDAM database has an indirect list key (ILK). The
| ILK is unique for the segment type across the entire database. It is composed of
| the relative byte address (RBA), partition ID, and partition reorganization number of
| the segment when it was first created. The ILK for a segment never changes. It is
| maintained across reorganizations.
| Each secondary index entry or logical child segment has an extended pointer set
| (EPS). The EPS includes the ILK of its target segment. It also contains the RBA,
| partition ID, and partition reorganization number for the target segment. These parts
| of the EPS might not be accurate. That is, they might not reflect the current location
| of the target segment or the current reorganization number of the target segment’s
| partition. In Figure 219 they are accurate.
| The target segment has an indirect list entry (ILE) in the ILDS for a partition. The
| ILE contains accurate information about the target segment. This includes its
| current RBA, the correct partition ID, and the current reorganization number for the
| partition. The key of the ILE is composed of the ILK and the segment code of the
| target segment.
| The reorganization number for a partition is physically stored in the partition’s first
| database data set. This number is initialized by partition initialization or load, and
| incremented with each reorganization that reloads segments in the partition.
| When the RBA in the EPS cannot be used, IMS uses the information in the ILE to
| locate the target segment. The ILE key is found by using the ILK from the EPS and
| the target’s segment code. The ILE is read from the ILDS of the partition
| determined from the target’s key.
| Figure 220 on page 385 illustrates a situation in which the RBA in the EPS cannot
| be used. In the figure, the target partition has been reorganized three times since
| the EPS was accurate. This has moved the target segment and updated the
| reorganization number in the partition data set. The EPS still contains a
| reorganization number of 5, but the reorganization number in the partition data set
| is now 8. The information in the ILE has been updated by the HD Reorganization
| Reload utility. IMS uses the ILK from the EPS to find the ILE and uses the RBA in
| the ILE to find the target segment.
|
|
| Figure 220. HALDB Pointer After a Reorganization
|
| Even though the retrieval is indirect, often the CI containing the ILE will already be
| in IMS’s buffer pool.
| Healing Pointers
| The self-healing process updates or corrects the information in EPSs. When the ILE
| is used, the information about the current location of the segment in the ILE is
| moved to the EPS. This allows IMS to avoid the indirect process if the EPS is used
| for a later retrieval. This correction to the EPS in the database buffer pool is always
| done.
| Because of locking considerations, the update might not be written to the database
| on DASD. The buffer containing the entry or segment with the updated EPS is
| marked as altered if the application program is allowed to update the database. The
| call must be done with a PCB allowing updates, and the IMS system must have an
| access intent for the partition that allows updates. If updates are not allowed, the
| buffer is not marked as altered.
| When the application reaches a sync point, it does not write buffers to DASD if they
| are not marked as altered. If the updated EPS is not written to DASD, the next time
| it is retrieved from DASD and used to find its target, IMS must use the indirect
| process. That is, IMS must read the ILE again.
| Figure 221 shows the EPS after it has been healed. The RBA points to the current
| location. The partition ID is correct. The partition reorganization number matches
| the number stored in the partition database data set.
|
|
|
| Figure 221. HALDB Pointer After the Self-Healing Process
|
| Performance of the Self-Healing Process
| The self-healing process can often be much more efficient than you might
| anticipate.
| Many pointers can be healed with a small number of ILDS reads. This is due to the
| use of IMS database buffering. ILDSs are database data sets. They use database
| buffer pools in the same way that other database data sets use them. If a CI is
| already in its buffer pool, it does not have to be read from DASD.
| Each ILE is 50 bytes. You specify the CI sizes for your ILDSs. An 8 KB ILDS CI
| holds up to 163 ILEs and a 16 KB CI holds up to 327 ILEs, so a single CI can hold
| many ILEs. After a reorganization, IMS might need to heal many pointers to the
| reorganized partitions.
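The per-CI capacities above are simply each CI size divided by the 50-byte ILE length, rounded down:

```latex
\left\lfloor 8192 / 50 \right\rfloor = 163 \qquad\qquad \left\lfloor 16384 / 50 \right\rfloor = 327
```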
| When there are frequent uses of the CIs in an ILDS, they tend to remain in their
| buffer pool. One read of an ILDS CI might be sufficient to heal hundreds of pointers.
| As with most IMS database tuning, having a large number of buffers for frequently
| used data sets can be highly beneficial.
| Another benefit of the self-healing process is that it does not waste resources
| healing pointers that are not used. In many secondary indexes, only a small number
| of entries are actually used. With a non-HALDB database, the entire index is rebuilt
| every time the indexed database is reorganized. With HALDB, the index is not
| rebuilt and only a small number of referenced index entries are updated. HALDB
| does not use resources to update pointers that are never used.
| If you have a program that holds locks for a long time or that holds many locks
| when performing the self-healing pointer process, you have four options:
| v If the application program does not make updates, use PROCOPT=G.
| v Have your program commit frequently.
| v Invoke the pointer healing process before you run application programs that use
| PROCOPT=A but do not actually update the database. To do this, run another
| program or utility before this type of application program. The HALDB Conversion
| and Maintenance Aid tool supplies a pointer healing utility.
| v Rebuild secondary indexes with an index builder, such as the IMS Index Builder
| for z/OS. The IMS Index Builder for z/OS creates EPSs with accurate RBAs.
| This scenario is not common. Most users can let the pointer healing process occur
| without taking any special precautions.
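For the first option, the read-only intent is declared on the database PCB. The following PSB source is a minimal sketch; the database, segment, and PSB names are hypothetical and not taken from this guide:

```
PCB    TYPE=DB,DBDNAME=PARTSDB,PROCOPT=G,KEYLEN=10
SENSEG NAME=PARTROOT,PARENT=0
PSBGEN LANG=ASSEM,PSBNAME=READPSB
END
```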
| Related Reading:
| v For more information about the IMS High Availability Large Database Conversion
| and Maintenance Aid, see the IMS High Availability Large Database Conversion
| and Maintenance Aid for z/OS, User’s Guide.
| v For more information about the IMS Index Builder, see the IMS Index Builder for
| z/OS User’s Guide.
The reorganization utilities described earlier in this chapter can be used to change
DL/I access methods among the HISAM, HDAM, and HIDAM access methods. One
exception to this is that HDAM cannot be changed to HISAM or HIDAM unless
HDAM database physical records are in root key sequence. This exception exists
because HISAM and HIDAM databases must be loaded with database records in
root key sequence. When the HD Reorganization Unload utility unloads an HDAM
database, it uses GN calls. GN calls against an HDAM database retrieve
the database records in the physical sequence in which they were stored by the
randomizing module. This will not be root key sequence unless you used a
sequential randomizing module (one that put the database records into the
database in physical root key sequence).
| Related Reading: The procedures in this topic require you to reassess different
| aspects of your databases. See the following related readings for information to
| help you make the reassessments:
| v For a description of free space and how it is specified, see “Specifying Free
| Space (HDAM, PHDAM, HIDAM, and PHIDAM Only)” on page 241.
| v For a description of types of pointers and how to specify them, see “Types of
| Pointers You Can Specify” on page 81.
| v For information about what to consider in choosing a logical record length and
| how logical record lengths are specified, see “Choosing a Logical Record Length
| for HD Databases” on page 248.
| v For information about what to consider in choosing a CI or block size and how CI
| and block size are specified, see “Determining the Size of CIs and Blocks” on
| page 248.
| v For information about what to consider in choosing buffer number and size and
| how buffers are specified, see “Buffer Numbers” on page 251.
| v For information about how to calculate database size, see “Estimating the
| Minimum Size of the Database” on page 311.
| v For information about choosing HDAM or PHDAM options, see “Choosing HDAM
| or PHDAM Options” on page 244.
| v For information about choosing and specifying a randomizing module, see
| “Determining Which Randomizing Module to Use (HDAM and PHDAM Only)” on
| page 243.
Once you have determined what changes you need to make, you are ready to
change your DL/I access method from HISAM to HIDAM. To do this:
1. Unload your database using the existing DBD and the HD Reorganization
Unload utility.
2. Code a new DBD that reflects the changes you need to make. You must also
code a DBD for the HIDAM index.
3. If you need to make changes that are not specified in the DBD (such as
changing database buffer sizes or the amount of space allocated for the
database), make these changes.
4. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
5. Reload the database using the new DBD and the HD Reorganization Reload
utility. Remember to make an image copy of your database as soon as it is
reloaded.
If you are using logical relationships or secondary indexes, you will need to run
additional utilities immediately before and after reloading your database. The
flowchart in Figure 195 on page 346 tells you which utilities to use and the order
in which they must be run.
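As a sketch of step 2, a HIDAM definition needs two DBDs: one for the database itself and one for its primary index, cross-referenced by LCHILD statements. All names, lengths, and the device type below are placeholders:

```
* HIDAM database DBD
DBD     NAME=HIDAMDB,ACCESS=HIDAM
DATASET DD1=HIDAMDD,DEVICE=3390
SEGM    NAME=ROOTSEG,PARENT=0,BYTES=100
FIELD   NAME=(ROOTKEY,SEQ,U),BYTES=10,START=1
LCHILD  NAME=(INDXSEG,INDEXDB),PTR=INDX
DBDGEN
FINISH
END
* Primary index DBD
DBD     NAME=INDEXDB,ACCESS=INDEX
DATASET DD1=INDXDD,DEVICE=3390
SEGM    NAME=INDXSEG,BYTES=10
FIELD   NAME=(INDXKEY,SEQ,U),BYTES=10,START=1
LCHILD  NAME=(ROOTSEG,HIDAMDB),INDEX=ROOTKEY
DBDGEN
FINISH
END
```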
v Determine what type of pointers you are going to use in the database. Unlike
HISAM, HDAM uses direct-address pointers to point from one segment in the
database to the next.
v Determine which randomizing module you are going to use. Unlike HISAM,
HDAM uses a randomizing module. The randomizing module generates
information that determines where a database record will be stored.
v Determine which HDAM options you are going to use. Unlike HISAM, an HDAM
database is divided into two parts: a root addressable area and an overflow area.
The root addressable area contains all root segments and is the primary storage
area for dependent segments in a database record. The overflow area is for
storage of dependent segments that do not fit in the root addressable area. The
HDAM options here are the ones that pertain to choices you make about the root
addressable area. These are:
– The maximum number of bytes of a database record to be put in the root
addressable area when segments in the database record are inserted
consecutively (without intervening processing operations).
– The number of blocks or CIs in the root addressable area.
– The number of RAPS (root anchor points) in a block or CI in the root
addressable area. (A RAP is a field that points to a root segment.)
v Reassess your choice of logical record sizes. A logical record in HISAM can only
contain segments from the same database record. In HDAM, a logical record can
contain segments from more than one database record. In addition, HDAM
logical records contain RAPs and two space management fields (FSEs and
FSEAPs).
v Reassess your choice of CI or block size. In HISAM, your choice of CI or block
size should have been some multiple of the average size of a database record.
In HDAM, the size should be chosen because of the characteristics of the device
and the type of processing you plan to do.
v Reassess your choice of database buffer sizes and the number of buffers you
have allocated. If you have changed your CI or block size, you need to allocate
buffers for the new size.
v Recalculate database space. You need to do this because the changes you are
making will result in different requirements for database space.
Once you have determined what changes you need to make, you are ready to
change your DL/I access method from HISAM to HDAM. To do this:
1. Unload your database, using the existing DBD and the HD Reorganization
Unload utility.
2. Code a new DBD that reflects the changes you need to make.
3. If you need to make changes that are not specified in the DBD (such as
changing database buffer sizes or the amount of space allocated for the
database), make these changes. HDAM only requires one data set, whereas
HISAM requires two.
4. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
5. Reload the database using the new DBD and the HD Reorganization Reload
utility. Make an image copy of your database as soon as it is reloaded.
If you are using logical relationships or secondary indexes, you will need to run
additional utilities before reloading your database. The flowchart in Figure 195
on page 346 tells you which utilities to use and the order in which they must be
run.
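The HDAM options listed above (the byte limit for the root addressable area, the number of blocks or CIs, and the number of RAPs) are coded on the RMNAME keyword of the DBD statement in step 2. In this sketch, DFSHDC40 is the IBM-supplied generalized randomizing module; the counts and names are illustrative only:

```
DBD     NAME=HDAMDB,ACCESS=HDAM,RMNAME=(DFSHDC40,2,500,2000)
*       RMNAME=(randomizing module,RAPs per block or CI,
*       blocks or CIs in the root addressable area,maximum bytes
*       of a database record inserted into that area)
DATASET DD1=HDAMDD,DEVICE=3390
SEGM    NAME=ROOTSEG,PARENT=0,BYTES=100
FIELD   NAME=(ROOTKEY,SEQ,U),BYTES=10,START=1
DBDGEN
FINISH
END
```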
Once you have determined what changes you need to make, you are ready to
change your DL/I access method from HIDAM to HISAM. To do this:
1. Unload your database using the existing DBD and the HD Reorganization
Unload utility.
2. Code a new DBD that reflects the changes you need to make. You will not be
specifying direct-address pointers or free space in the DBD, because HISAM,
unlike HIDAM, does not allow use of these. Also, HISAM has only one DBD
whereas HIDAM had two.
3. If you need to make changes that are not specified in the DBD (such as
changing database buffer sizes or the amount of space allocated for the
database), make these changes.
4. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
5. Reload the database using the new DBD and the HD Reorganization Reload
utility. Remember to make an image copy of your database as soon as it is
reloaded.
If you are using logical relationships or secondary indexes, run additional utilities
right before and after reloading your database. The flowchart in Figure 195 on
page 346 tells you which utilities to use and the order in which they must be
run.
v Determine which randomizing module you are going to use. Unlike HIDAM,
HDAM uses a randomizing module. The randomizing module generates
information that determines where a database record is to be stored.
v Determine which HDAM options you are going to use. Unlike HIDAM, an HDAM
database does not have a separate index database. Instead the database is
divided into two parts: a root addressable area and an overflow area. The root
addressable area contains all root segments and is the primary storage area for
dependent segments in a database record. The overflow area is for storage of
dependent segments that do not fit in the root addressable area. The HDAM
options here are the ones that pertain to choices you make about the root
addressable area. These are:
– The maximum number of bytes of a database record to be put in the root
addressable area when segments in the database record are inserted
consecutively (without intervening processing operations).
– The number of blocks or CIs in the root addressable area.
– The number of RAPs in a block or CI in the root addressable area.
v Reassess your choice of logical record size.
v Reassess your choice of CI or block size.
v Reassess your choice of database buffer sizes and the number of buffers you
have allocated. If you have changed your CI or block size, you need to allocate
buffers for the new size.
v Recalculate database space. You need to do this because the changes you are
making will result in different requirements for database space.
After you have determined what changes you need to make, you are ready to
change your DL/I access method from HIDAM to HDAM. To do this:
1. Unload your database using the existing DBD and the HD Reorganization
Unload utility.
2. Code a new DBD that reflects the changes you need to make. You probably will
not be specifying free space, but you will be specifying HDAM options. Note
also that you’ll need only one DBD for HDAM, whereas HIDAM required two
DBDs.
3. If you need to make changes that are not specified in the DBD (such as
changing database buffer sizes or the amount of space allocated for the
database), make these changes. HDAM only requires one data set, whereas
HIDAM requires two.
4. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
5. Reload the database using the new DBD and the HD Reorganization Reload
utility. Remember to make an image copy of your database as soon as it is
reloaded.
If you are using logical relationships or secondary indexes, you will need to run
additional utilities right before and after reloading your database. The flowchart
in Figure 195 on page 346 tells you which utilities to use and the order in which
they must be run.
v Reassess your choice of logical record size. A logical record in HDAM can
contain segments from more than one database record. In HISAM, a logical
record can only contain segments from the same database record.
v Reassess your choice of CI or block size. In HDAM, your choice of CI or block
size should be based on the characteristics of the device and the type of
processing you plan to do. In HISAM, the size should be some multiple of the
average size of a database record.
v Reassess your choice of database buffer sizes and the number of buffers you
have allocated. If you have changed your CI or block size, you need to allocate
buffers for the new size.
v Recalculate database space. You need to recalculate database space because
the changes you are making will result in different requirements for database
space.
After you have determined what changes you need to make, you are ready to
change your DL/I access method from HDAM to HISAM. Remember you must write
your own unload and reload programs unless database records in the HDAM
database are in physical root key sequence. In writing your own load program, if
your HDAM database uses logical relationships, you must preserve information in
the delete byte (for example, a segment that is logically deleted in the database
might not be physically deleted).
when the database is initially loaded.) In a HIDAM database, you can set aside
periodic blocks or CIs of free space or a percentage of free space in each block
or CI (in the ESDS or OSAM data set). This free space can then be used for
inserting database records or segments into the database after initial load. In an
HDAM database, you generally get the free space you need by careful choice of
HDAM options.
v Reassess your choice of direct-address pointers. Although both HIDAM and
HDAM use direct-address pointers, you might need to change the type of
direct-address pointer used:
– Because of the changing needs of your applications.
– Because pointers are partly chosen based on the type of database you are
using. For example, you can choose to use physical twin forward and backward
pointers on root segments in your HIDAM database to get fast sequential
processing of roots.
v Reassess your choice of logical record size.
v Reassess your choice of CI or block size.
v Reassess your choice of database buffer sizes and the number of buffers you
have allocated. If you have changed your CI or block size, you need to allocate
buffers for the new size.
v Recalculate database space. You need to recalculate database space because
the changes you are making will result in different requirements for database
space.
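The HIDAM free space described above is requested on the FRSPC keyword of the DATASET statement. In this hypothetical example, every fifth block or CI is left entirely free and 20 percent of every other block or CI is reserved for later inserts:

```
DATASET DD1=HIDAMDD,DEVICE=3390,FRSPC=(5,20)
```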
Once you have determined what changes you need to make, you are ready to
change your DL/I access method from HDAM to HIDAM. Remember you must write
your own unload and reload programs unless database records in the HDAM
database are in physical root key sequence. In writing your own load program, if
your HDAM database uses logical relationships, you must preserve information in
the delete byte (for example, a segment that is logically deleted in the database
might not be physically deleted).
If you are using logical relationships or secondary indexes, you will need to run
additional utilities before reloading your database. The flowchart in Figure 195
on page 346 tells you which utilities to use and the order in which they must be
run.
Changing the DL/I Access Method From HDAM to PHDAM and HIDAM
to PHIDAM
For a logical view of HDAM and HIDAM databases before and after changing to
PHDAM and PHIDAM respectively, see Figure 222.
Figure 222. HDAM and HIDAM Databases Before and After Changing to PHDAM and
PHIDAM
Requirement: You must concurrently migrate all databases that are logically
related. All secondary indexes that point to these logically related databases must
be migrated at the same time the databases they point to are migrated.
Because non-keyed PHDAM root segments are not supported, you cannot migrate
an HDAM database with non-keyed roots to HALDB.
There are two methods for changing an HDAM or HIDAM database to PHDAM or
PHIDAM. The first method keeps the same database name. The second method
changes the name of the physical database and uses a logical database with the
old database name.
the same as the steps for unload and reload. Run the HD Reorganization Unload
and Reload utilities against the secondary index. The user data is preserved in the
secondary index.
If the new database is to have the same name as the old database:
1. Unload the old database with the migrate option before changing RECON or
DBDLIB.
2. Create a RECON list before deleting the records for the database.
3. Remove the information from the old database RECON and DBDLIB.
4. Delete all MDA members that refer to the old database.
5. Define the HALDB by using DBDGEN, ACBGEN, and either the HALDB
Partition Definition utility or the DBRC commands INIT.DB and INIT.PART.
If the new database is to have a different name from the old database:
1. Create a RECON list before deleting the records for the database. The old
information is retained in RECON as long as necessary.
2. Unload the old database.
3. Remove the DBD from DBDLIB and ACBLIB.
4. Delete all MDA members that refer to the old database.
5. Perform a DBDGEN on the old database name as a logical database with the
source being the new HALDB.
6. Define the HALDB by using DBDGEN, ACBGEN, and either the HALDB
Partition Definition utility or the DBRC commands INIT.DB and INIT.PART.
The order of physical twin segments is maintained when a fallback from HALDBs
occurs. This includes non-keyed segments and segments that have non-unique keys.
Primary indexes are recreated, not unloaded. Secondary indexes are recreated by
the reload utility process. User data is not preserved.
Logical children have some special considerations. There are three cases to
consider: unidirectional, virtually paired, and physically paired databases. Current
DL/I offers an option to not store the logical parent’s concatenated key in the logical
child (virtual key storage option); in normal retrieval the key is built and the user
application always sees the concatenated key in the data. For all logical children
unloaded, you must drop the logical parent’s concatenated key if the virtual key
storage option is chosen. The unloaded segments are reloaded as real segments
that are part of a physically paired relationship. This type of unload, dropping the
logical parent’s concatenated key, only occurs when DFSURGU0 performs a
fallback unload.
| For example, suppose a PHDAM partition has many roots that randomize to the
| same root anchor point. This causes lock contention problems that negatively
| impact performance. To remedy this problem, you can increase the number of
| RAPs in the partition.
| The following steps describe how to change the number of RAPs in a partition:
| 1. Issue the /DBRECOVERY command to take the partition offline.
| 2. Unload the data from the partition.
| 3. Use the Partition Definition utility or DBRC commands to change the number of
| RAPs in the partition.
| 4. Reload the partition.
| 5. Take an image copy of the data sets for the partition.
| 6. Issue the /START DB command or the UPDATE DB command to make the partition
| available again.
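For a hypothetical partition named PARTA, the sequence of steps might look like the following; the unload and reload steps run the HD Reorganization Unload (DFSURGU0) and Reload (DFSURGL0) utilities as batch jobs:

```
/DBRECOVERY DB PARTA      <- take the partition offline
  (unload PARTA, change the RAP count with the Partition
   Definition utility, reload PARTA, take an image copy)
/START DB PARTA           <- make the partition available again
```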
| You can change the following characteristics of HALDB partitions only after taking
| all partitions offline:
| v DBD definition
| For example, suppose a HALDB has an existing HALDB Partition Selection exit
| routine that needs to be replaced with a HALDB Partition Selection exit routine that
| selects partitions based on a new algorithm. This change requires the entire HALDB
| to be offline.
| The steps below describe how to change a HALDB Partition Selection exit routine:
| 1. Issue the /DBRECOVERY command to take the HALDB offline.
| 2. Unload the data from the HALDB using the existing HALDB Partition Selection
| exit routine.
| 3. Use the Partition Definition utility or DBRC commands to change the HALDB
| Partition Selection exit routine.
| 4. Reload the data from the HALDB using the new HALDB Partition Selection exit
| routine.
| 5. Run Image Copy for the data sets for all partitions in the HALDB.
| 6. Issue the /START DB command or the UPDATE DB command to make the HALDB
| available again.
| Online change is not used for changing HALDB partition definitions. IMS recognizes
| the version number differences and dynamically reflects the new definitions in the
| online IMS system.
| If you are using XRF, the alternate IMS system sees the dynamic change and
| automatically updates the definitions in the alternate system, requiring no action
| from you.
| There are three cases when IMS verifies the HALDB partition structure:
| v When a partition is authorized for use. This would detect a change in an existing
| partition or in a partition which was not previously authorized. This occurs
| commonly when a partition is taken offline, modified, and made available again.
| The first use of the updated partition triggers partition structure rebuild.
| v When an invalid key is detected by partition selection or by a Partition Selection
| exit routine. This can occur, for example, when a new partition is added beyond
| the high key of the last partition and all existing partitions are already authorized.
| In this case, IMS partition selection or the Partition Selection exit routine detects
| the new partition. After the new partition is detected, IMS performs partition
| structure rebuild automatically.
| v When a /START DB HALDB_Master OPEN or UPDATE DB NAME(HALDB_Master)
| OPTION(OPEN) command is issued. For example, if a new partition has been
| added beyond the high key of the last partition and all existing partitions are
| already authorized, these commands will initiate a partition structure rebuild. For
| more details on the /START DB command and the UPDATE DB command, see IMS
| Version 9: Command Reference.
| When making changes to HALDB partition definitions, consider the following points:
| v If you use a HALDB Partition Selection exit routine, you must issue the
| /DBRECOVERY command and then the /START command after making any structure
| modifications to a partition. Issuing /DBRECOVERY and then /START registers the
| changes with IMS. When the HALDB Partition Selection exit routine selects
| HALDB partition membership, IMS is not aware of HALDB partition boundaries
| and cannot automatically recognize changed definitions.
| v If you are using a HALDB Partition Selection exit routine and IMS notifies you of
| a structure modification, you might need to update the exit routine so that it
| selects partitions correctly based on the current partition structure.
| v Issuing a /START DB command with the OPEN keyword might fail after the
| definition of a partition structure has been changed. This is because structure
| rebuild is needed. To invoke structure rebuild, an application program that uses
| the partition must be run or the type-1 command /START DB HALDB_Master OPEN
| must be issued.
| v Newly added partitions will not be known by the online IMS system until partition
| structure rebuild has been invoked and the new structure has been created.
| For example, suppose a HALDB partition named PART200 has a key range from
| 101 up to a high key of 200 (KEY200). PART200 needs to be split into two HALDB
| partitions so that a new partition named PART150 is added between another
| partition, PART100, and PART200. PART150 will have a key range from 101 up to
| a high key of 150 (KEY150), a key range that used to be included in PART200.
| In this example, online IMS systems do not know of PART150 until one of the
| following events occurs:
| v A /START DB HALDB_Master OPEN command is issued
| v An UPDATE DB NAME(HALDB_Master) OPTION(OPEN) command is issued
| v A DL/I call causes an authorization call to DBRC for PART200. The first DL/I call
| goes through HALDB partition selection again to properly select and authorize
| either PART150 or PART200.
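In other words, after PART150 is defined, issuing either of the following commands against the (hypothetical) master database MASTDB triggers the partition structure rebuild that makes the new partition known:

```
/START DB MASTDB OPEN
UPDATE DB NAME(MASTDB) OPTION(OPEN)
```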
To change to DEDBs:
1. Unload your database using the existing DBD and one of the following:
v Your unload program
v The HD Reorganization Unload utility if database records are in physical root
key sequence
2. Code a new DBD for the DEDBs.
3. Execute the DBD generation.
4. For non-VSAM data sets, delete the old database space and define the new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
5. Run the DEDB initialization utility (DBFUMIN0).
6. Run the user DEDB load program.
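For step 2, a DEDB DBD replaces the DATASET statement with one or more AREA statements. The following sketch assumes a user-written randomizer named RANDMOD; all names and sizes are placeholders, and the UOW and ROOT operands (units of work and root addressable space within each area) would come from your own sizing work:

```
DBD     NAME=DEDBDB,ACCESS=DEDB,RMNAME=(RANDMOD)
AREA    DD1=AREA1,SIZE=4096,UOW=(100,10),ROOT=(200,50)
SEGM    NAME=ROOTSEG,PARENT=0,BYTES=100
FIELD   NAME=(ROOTKEY,SEQ,U),BYTES=10,START=1
DBDGEN
FINISH
END
```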
Changes involving adding and deleting segments in the hierarchy are covered in
Chapter 16, “Modifying Databases,” on page 423.
One way to determine whether the order of dependent segment types in your
hierarchy is an efficient one is to examine the IWAITS/CALL field on the DL/I Call
Summary report.
Related Reading: For detailed information on the DL/I Call Summary report, see
IMS Version 9: Utilities Reference: Database and Transaction Manager.
The IWAITS/CALL field tells you, by DL/I call against a specific segment, the
average number of times a segment had to wait for I/O operations to finish before
the segment could be processed. A high number (and high, of course, is relative to
the application) indicates that multiple I/O operations were required to process the
segment.
If the database does not need to be reorganized, the high number can mean this is
a frequently used segment type placed too far from the beginning of the database
record. If you determine this is the situation, you can change placement of the
segment type. The change can increase the value in the IWAITS/CALL field for
other segments.
To change the placement of a segment type, you must write a program to unload
segments from the database in the new hierarchic sequence. (The reorganization
utilities cannot be used to make such a change.) Then you need to load the
segments into a new database. Again, you must write a program to reload.
Combining Segments
The second type of change you might need to make in the structure of your
database record is combining segment types to maximize use of space. For
example, having two segment types, a dependent segment for college classes with
a dependent segment for instructors who teach the classes, is an inefficient use of
space if typically only one or two instructors teach a class. Rather than having a
separate instructor segment, you can combine the two segment types, thereby
saving space.
Combining segments also requires that you write an unload and reload program.
(The reorganization utilities cannot be used to make such a change.)
5. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
6. Reload your database using your load program and the new DBD. Remember
to make an image copy of your database as soon as it is reloaded.
7. If your database uses logical relationships or secondary indexes, you must run
some of the reorganization utilities before and after reloading to resolve prefix
information. The flowchart in Figure 195 on page 346 tells you which utilities to
use and the order in which they must be run.
You can change your database (or part of it) from one device to another using the
reorganization utilities. To change direct-access storage devices:
1. Unload your database using the existing DBD and the appropriate unload utility.
2. Recalculate CI or block size to maximize use of track space on the new device.
Information on calculating CI or block size is contained in Chapter 9, “Designing
Full-Function Databases,” on page 241 under “Determining the Size of CIs and
Blocks”.
3. Code a new DBD.
4. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
5. Reload your database, using the new DBD and the appropriate reload utility.
Remember to make an image copy of your database as soon as it is reloaded.
6. If your database uses logical relationships or secondary indexes, you must run
some of the reorganization utilities before and after reloading to resolve prefix
information. The flowchart in Figure 195 on page 346 tells you which utilities to
use and the order in which they must be run.
Well-Organized Database
Well-organized databases are by far the more important of these two factors. When
the databases that SB processes are well organized, you see elapsed time
improvements. This is because your programs process IMS database segments
and records, and they do not process DASD blocks directly. Processing a
well-organized database in logical-record sequence results in an I/O reference
pattern that accesses most DASD blocks in physical sequence. SB can take
advantage of this pattern by reading those blocks ahead with sequential reads.
Badly-Organized Database
Processing a badly-organized database in logical-record sequence typically results
in an I/O reference pattern that accesses many DASD blocks in a random
sequence. This happens because many segments were stored in randomly
scattered blocks after the database was loaded or reorganized. When your
database is accessed in a predominantly random pattern, most I/O operations
issued by the SB buffer handler are random reads. SB is not able to issue many
sequential reads, and the elapsed time for your job is not considerably reduced.
You can use the SB buffering statistics in the optional //DFSSTAT reports to see if
your database is well-organized. Your database is likely to be badly organized if a
large percentage of the blocks were read with random reads during sequential
processing. You can monitor this percentage over a period of time to see if it
increases as the database ages.
Related Reading: For details on //DFSSTAT reports, see IMS Version 9: Utilities
Reference: System.
You can adjust HDAM and PHDAM options using the reorganization utilities:
1. Determine whether the change you are making will affect the code in any
application programs. It should only do so if you are changing to a sequential
randomizing module.
2. Unload your database, using the existing DBD and the appropriate unload utility.
3. Code a new DBD (for non-PHDAM databases), or use the TSO Partition
Definition utility (for PHDAM databases). If you changed your CI or block size,
you need to allocate buffers for the new size.
Related Reading: See Chapter 9, “Designing Full-Function Databases,” on
page 241 for a discussion of what things to consider in choosing buffer number
and size and how they are specified.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for those application programs. If you have
the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Determine whether you need to recalculate database space.
Related Reading: See “Estimating the Minimum Size of the Database” on page
311 for a description of how to calculate space.
6. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
7. Reload your database or partition using the new DBD (if any) and the
appropriate reload utility. Make an image copy of your database as soon as it is
reloaded.
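The HDAM options referred to in step 3 are carried on the RMNAME keyword of the DBD statement. The following fragment is a sketch only; the database name and operand values are hypothetical, although DFSHDC40 is the IMS-supplied generalized randomizing module:

```
DBD   NAME=DBX,ACCESS=HDAM,RMNAME=(DFSHDC40,3,500,824)
*       DFSHDC40 = randomizing module name
*       3        = root anchor points per root addressable block
*       500      = number of blocks in the root addressable area
*       824      = maximum bytes of a record inserted into the RAA
```

Changing any of these operands requires the unload/reload sequence described in the procedure above.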
Adjusting Buffers
The size and number of buffers you can choose are described in “Multiple Buffers in
Virtual Storage” on page 249. This topic also discusses the performance
implications of choosing a buffer size and number. To improve performance, reread
that topic and reassess the original choices you made before you adjust your
buffers.
VSAM Buffers
This topic contains the following information about VSAM buffers:
v “Monitoring VSAM Buffers”
v “When to Adjust VSAM Buffers”
v “VSAM Buffer Adjustment Options”
Hiperspace parameters are valid only for buffer sizes of 4K or multiples of 4K.
Specifying Hiperspace parameters on buffers smaller than 4K causes an error. To
use Hiperspace buffering you might need to unload your database and then reload
it into 4K or multiples of 4K CI sizes to accommodate Hiperspace requirements.
If you decide to leave databases with CI sizes of less than 4K intact, do not allocate any buffers smaller than 4K. The CIs that are less than 4K are then placed in 4K or larger buffer pools, where they compete for buffers with the VSAM data sets already using those pools. This method might be expedient in the short term.
Related Reading:
v For more information on coding the HSO|HSR and HSn parameters to activate
Hiperspace buffering on VSAM buffers, see IMS Version 9: Installation Volume 2:
System Definition and Tailoring.
v For more information about VSAM buffers, including Hiperspace buffers, see
z/OS V1R4: DFSMS: Using Data Sets.
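As a hedged illustration only (the exact keyword positions are defined in the referenced installation volume), Hiperspace buffering is requested by adding the HSO or HSR keyword and an HSn buffer count to a 4K-or-larger VSAM subpool definition in DFSVSAMP or the DFSVSMnn member. The subpool sizes and counts below are hypothetical:

```
POOLID=VSM1
VSRBF=4096,50,HSO,HS2000
```

Here HS2000 is intended as an instance of the HSn keyword (2000 Hiperspace buffers); verify the positional syntax against the installation volume before use.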
OSAM Buffers
If you are using OSAM, no individual subpool buffer reports exist. However, you can monitor the number of buffers you are using by using the Enhanced OSAM Buffer Subpool Statistics function, which supports the following values:
DBESF
Provides the full OSAM Subpool statistics in a formatted form.
DBESU
Provides the full OSAM Subpool statistics in an unformatted form.
DBESS
Provides a summary of the OSAM database buffer pool statistics in a
formatted form.
DBESO
Provides the full OSAM database buffer pool statistics in a formatted form
for online statistics returned as a result of a /DIS POOL command.
Related Reading: For detailed information on these values, see the IMS Version
9: Application Programming: Design Guide.
Performance can also be improved through the use of the co (caching option) parameter of the IOBF control statement, specified either in the DFSVSMnn member of IMS.PROCLIB or in DFSVSAMP.
Related Reading:
v For detailed information about the DB Monitor Database Buffer Pool report, see
the IMS Version 9: Utilities Reference: System.
v For more information on the co (caching option) parameter of the IOBF control
statement, OSAM buffer pools and the use of the coupling facility for OSAM data
caching see the IMS Version 9: Installation Volume 2: System Definition and
Tailoring.
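The co option occupies the final positional parameter of the IOBF statement. A minimal sketch, with hypothetical values:

```
IOBF=(4096,50,N,N,POOL1,C)
```

Here 4096 is the buffer size, 50 the number of buffers, the two N values the page-fix options, POOL1 a subpool id, and C a caching option value; treat the final value as an assumption to be verified against the installation volume cited above.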
By default, four buffer sets exist in each SB buffer pool. If the reports indicate that a
large percentage of random read I/O operations were used, and you know that the
program was processing your database sequentially, increasing the number of
buffer sets to six or more can improve performance. By increasing the number of
buffer sets, it is more likely that a block is still in an SB buffer when requested, and
a read I/O operation is not necessary.
If only a few random reads were used during your program’s execution, it indicates
that the database is very well organized and most requests were satisfied from the
SB buffer pool or with sequential reads. If this happens, you can save virtual
storage space by decreasing the number of buffer sets in each SB buffer pool to
two or three.
Once you have changed the number of buffer sets, you can use the SB Test Utility
to reprocess the SB buffer handler call sequence that was issued during your
program’s execution. Then you can study the resulting //DFSSTAT reports to see
the impact of the change.
Related Reading:
v The Sequential Buffering Summary report and the Sequential Buffering Detail
reports are described and instructions on how to use the SB Test Utility are in the
IMS Version 9: Utilities Reference: Database and Transaction Manager.
v Detailed instructions on how to code an SBPARM control statement are in the
IMS Version 9: Installation Volume 2: System Definition and Tailoring.
v Details on the SB Initialization Exit Routine are in the IMS Version 9:
Customization Guide.
The only VSAM option you can specifically monitor for is background write. If you
are not using background write, you can look at the VSAM Buffer Pool report
described in IMS Version 9: Utilities Reference: System. The report, in the Number
of VSAM Writes To Make Space in the Pool field, documents the number of times
data in a buffer had to be written to the database before the buffer could be used. If
you use background write, you might find that you are able to reduce this number
and therefore the size of the buffer pool.
If you are already using background write, the VSAM Buffer Pool report tells you
how many times background write is invoked in the Number of Times Background
Write Function Invoked field. The VSAM Statistics report (another report produced by the DB monitor) tells you, in the BKG WTS field, whether background write was invoked and, in the USR WRTS field, among other things, how many writes were made on behalf of user requests.
Because it is assumed you would only change the parameter when making other
database changes that require you to unload and reload your database, no
procedure for changing it is provided here.
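Background write itself is controlled by the BGWRT parameter of the OPTIONS control statement. A hedged sketch, in which the percentage value is illustrative:

```
OPTIONS,BGWRT=(YES,20)
```

The second value is intended as the percentage of buffers in each subpool that background write examines; confirm the operand meanings in the installation volume before relying on them.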
You cannot specifically monitor any OSAM options. To adjust OSAM options,
change the appropriate parameters in the OPTIONS control statement. Then put
the new control statement in the:
v DFSVSAMP data set in a batch system
v IMS.PROCLIB data set with the member name DFSVSMnn in an online system
One way to routinely monitor use of space is by watching the IWAITS/CALL field in
the DL/I Call Summary report. The DL/I Call Summary report is described in IMS
Version 9: Utilities Reference: System. If the IWAITS/CALL field has a relatively
high number in it, the high number can be caused by space problems. If you
suspect space is the problem, you can verify such problems in two specific ways:
v For VSAM data sets, you can get a report from the VSAM catalog using the
LISTCAT command. In the report, check CI/CA splits, EXCPs, and EXTENTS
(LISTCAT ALL report is described in Chapter 14, “Monitoring Databases,” on
page 335).
v For non-VSAM data sets, you can get a report on the VTOC using the LISTVTOC command. In the report, check the NOEXT field (the LISTVTOC report is described in Chapter 14, “Monitoring Databases,” on page 335).
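The two reports can be requested as follows. LISTCAT is an Access Method Services (IDCAMS) command and LISTVTOC runs under the IEHLIST utility; the data set and volume names are hypothetical:

```
LISTCAT ENTRIES(IMS.DBX.CLUSTER) ALL

LISTVTOC FORMAT,VOL=3390=DBVOL1,DSNAME=(IMS.DBX.DATA)
```

The surrounding JCL for each utility is omitted here.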
If you decide to change the amount of space allocated for your database, do it with
JCL or with z/OS utilities. The reorganization utilities must be run to put the
database in its new space. The procedure for putting the database in its new space
is as follows:
1. Unload your database, using the existing DBD and the appropriate unload utility.
You should be familiar with these topics. You should also have decided to change to
multiple data set groups to tune your database. It is not possible for you to
specifically monitor your database to determine whether multiple data set groups
will improve performance or better utilize space. Rather, knowledge of your
application’s requirements along with many types of statistics about database use
might help you make this decision.
To change the number of data set groups in your database (see Figure 223 on page 413):
1. Unload your database using the existing DBD.
2. If your database is PHDAM or PHIDAM, delete the database definition from the
DBRC RECON data sets using the HALDB Partition Definition Utility.
3. Code a new DBD.
4. Recalculate database space. You need to recalculate database space because
the change you are making will result in different requirements for database
space.
Related Reading: See “Estimating the Minimum Size of the Database” on page
311 for a description of how to calculate database space.
5. For non-VSAM data sets, delete the old database space and define new database space. For VSAM data sets, delete the space allocated for the old clusters and define space for the new clusters.
6. If your new database is PHDAM or PHIDAM, run the HALDB Partition Definition
utility to define the partition data sets for the database.
7. Reallocate data sets because the number and size of data sets you are using
will change.
Related Reading: See “Allocating Data Sets” on page 318 for information on
allocating data sets.
8. Reload your database using the new DBD. Take an image copy of your
database as soon as the database is reloaded.
9. Run some of the reorganization utilities before and after reloading to resolve
prefix information if your database uses logical relationships or secondary
indexes. The flowchart in Figure 195 on page 346 shows you which utilities to
use and the order in which they must be run.
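In step 3, each additional data set group is defined by coding a DATASET statement ahead of the group of SEGM statements it is to hold. A minimal sketch of such a DBD, with all names and values hypothetical:

```
DBD     NAME=DBX,ACCESS=(HDAM,VSAM),RMNAME=(DFSHDC40,3,500)
DATASET DD1=DSG1,DEVICE=3390
SEGM    NAME=ROOTSEG,PARENT=0,BYTES=100
FIELD   NAME=(ROOTKEY,SEQ,U),BYTES=10,START=1
DATASET DD1=DSG2,DEVICE=3390
SEGM    NAME=DEPSEG,PARENT=ROOTSEG,BYTES=300
DBDGEN
FINISH
END
```

Each DATASET statement starts a new group; the SEGM statements that follow it are stored in that group's data set.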
Figure 223. Utility Sequence of Execution When Making Database Changes during Reorganization
example, you can reorganize one or more existing databases at the same time
that other databases are being initially loaded. Any or all of the databases
being operated on can be logically interrelated. A database operation is defined
as an initial database load, a database unload/reload (reorganization), or a
database scan.
2. If one or more segments in any or all of the databases being operated upon is
involved in either a logical relationship or a secondary index relationship, the
YES branch must be taken. You can also use the Prereorganization utility to
determine which database operations must be performed.
3. Based upon the information given to it on control statements, the database
Prereorganization utility provides a list of databases that must be initially
loaded, reorganized, or scanned. You must not change the number and
sequence of databases specified on the prereorganization control statement
between reload and prefix resolution.
4. This area of the flowchart must be followed once for each database to be
operated upon, whether the operation consists of an initial load, reorganization,
or scan. The operations can be done for all databases concurrently, or one
database at a time. If the various database operations are performed
sequentially, work data set storage space can be saved and processing
efficiency increased if DISP=(MOD,KEEP) is specified for the DFSURWF1 DD
statement associated with each database operation. The attributes of the work
data set for the database initial load, reorganization, and scan programs must
be identical.
When using the HD Reorganization Reload utility, first do all unloads and
scans of logically related databases if logical parent concatenated keys are
defined as virtual in the logical child.
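A DFSURWF1 DD statement with the disposition recommended above might look like the following sketch; the data set name, unit, and space values are hypothetical:

```
//DFSURWF1 DD DSN=IMS1.WF1,DISP=(MOD,KEEP),
//            UNIT=SYSDA,SPACE=(CYL,(5,5))
```

Specifying DISP=(MOD,KEEP) lets sequential database operations append to and retain the same work data set.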
5. You must ensure that all operations indicated by the Prereorganization utility (if
it was executed) are completed prior to taking the YES branch.
6. If any work data sets were generated during any of the database operations that you executed, the YES branch must be taken. The presence of a
logical relationship in a database does not guarantee that work data sets will
be generated during a database operation. The reorganization/load processing
utilities determine the need for work data sets dynamically, based upon the
actual segments presented during a database operation. If any segments that
participate in a logical relationship are loaded, work data sets will be generated
and the YES branch must be taken.
If no work data set was generated for a specific database operation, processing of that database is complete, and the database is ready to use.
When a HIDAM database is initially loaded or reorganized, its primary index
will be generated at database load time.
7. You must run the DB Scan utility before a database is unloaded when logical
parent concatenated keys are defined as virtual in the logical child database to
be unloaded.
This program should be executed against each database listed in the output of
the Prereorganization utility. A work data set can be generated for each
database scanned by this utility. Databases for scanning are listed after the
characters “DBS=” in one or more output messages of the Prereorganization
utility.
8. The HD Reorganization Reload utility can cause the generation of a work data
set to be later used by the Prefix Resolution utility. Databases to be
reorganized using the HD Reorganization Unload utility and the HD
Reorganization Reload utility are listed after the characters “DBR=” in one or
more output messages of the Prereorganization utility.
9. The user-provided initial database load program can automatically cause the
generation of a work data set to be later used by the Prefix Resolution utility.
You do not need to add code to the initial load program for work data set
generation. Code is added automatically by IMS through the user program
issuing ISRT requests. You must, however, provide a DD statement for this
data set along with the other JCL statements necessary to execute the initial
load program. Databases for initial loading are listed after the characters
DBIL= in one or more output messages of the Prereorganization utility.
10. The database Prefix Resolution utility combines the workfile output from the
Database Scan utility, the HD Reorganization Reload utility, and the user’s
initial database load execution to create an output data set for use by the
Prefix Update utility. The Prefix Update utility then completes all logical
relationships defined for the databases that were operated upon.
11. This path must be taken for HISAM databases with logical relationships. This
path must also be taken if structural changes are required (for example,
HISAM to HDAM, pointer changes, additional segments, or adding a secondary
index).
12. If a secondary index needs to be created or if two secondary indexes need to
be combined, you must run the HISAM Unload/Reload utilities. After the
HISAM Unload/Reload utilities are run, if logical relationships exist in the
database, you must execute the Prefix Update utility before the reorganization
or load process is considered to be complete.
13. For information on scratching and allocating OSAM data sets, see the topic
about designing the IMS online system in IMS Version 9: Administration Guide:
System.
Statistics on transaction processing and contention for CIs can be obtained from the
output of the Fast Path Log Analysis utility (DBFULTA0), which retrieves (from
system log input) data relating to the usage of Fast Path resources.
Related Reading: For information on the Fast Path Log Analysis utility, see IMS
Version 9: Utilities Reference: System.
The first three characteristics are unique to DEDBs; the last five apply generally to
databases. Data replication allows up to seven data sets for an individual area.
When reading from an area represented by multiple data sets, performance is not
impacted, unless the CI is defective. When updating, up to seven additional writes
could be required. Although the physical write is performed asynchronously to
transaction processing, there could be delays caused by access paths to a variety
of DASD devices.
comprises buffers of a size defined at system startup by the BSIZ parameter. The
buffer size selected must be capable of holding the largest CI from any DEDB area
that is to be opened. The number of buffers page-fixed is based upon the value of
supplied parameters:
v The normal buffer allocation (NBA) value causes the defined number of buffers to
be fixed in the buffer pool at startup of the dependent region. (This number can
be specified for the dependent region startup procedure using the NBA
parameter.) The application program in this dependent region is eligible to
receive up to this number of buffers within a given sync interval before one of the
following occurs:
– The buffer manager acquires unmodified buffers from the requesting
application program.
– No more buffers can be acquired on behalf of the requesting application
program (a number of buffers equal to NBA have been requested, received,
and modified). In this case, the buffer manager must acquire access to the
overflow buffer allocation (OBA) if this value was specified for this program. If
no OBA was specified, then all resources acquired for this program during
sync interval processing to date are released.
v The OBA value is the number of buffers that a program can serially acquire when
NBA is exceeded. (This number can be specified for the dependent region
startup procedure using the OBA parameter.) The overflow interlock function
serializes the overflow buffer access, and only one application program at a time
can gain access to the overflow buffer allocation. Therefore, the overflow buffer
can be involved in deadlocks.
v The DBFX value, which is a system startup parameter, defines a reserve of
buffers that are page-fixed upon start of the first Fast Path application program.
These buffers are used when asynchronous OTHREAD processing is not
releasing buffers quickly enough to support the requests made in sync interval
processing.
It follows that:
v BSIZ should be set equal to the largest DEDB CI that will be online. Because the buffer manager does not split buffers to accommodate multiple control intervals, making all DEDB CIs the same size provides the most efficient use of storage. Even though large CI sizes (up to 28K) can be used, a large BSIZ causes only partial use of each buffer if many smaller CI sizes are present.
v The NBA value should be set approximately equal to the normal number of buffer
updates made during a sync interval. The NBA value for inquiry-only programs
should be small, because the buffers that are never modified can be reused and
will all be released at sync time.
v The OBA should be used only in relation to a limited proportion of sync intervals.
OBA is not required for inquiry-only programs. In general, the user should be
careful to use the OBA value as intended. It should be used to support sync
intervals where application program logic demands a variation in total modified
buffer needs, thereby requiring access to OBA on an exceptional basis. With
BMPs, OBA values greater than 1 should be unnecessary because the 'FW'
status code that is returned when the NBA allocation is exceeded can be used to
invoke a SYNC call. Invoking a SYNC call would then release all resources.
Such application design reduces the serialization and possible deadlocks
inherent in using the overflow interlock function.
v The DBFX value should be set, taking into account the total number of buffers
that are likely to be in OTHREAD processing at peak load time. If this value is
too low, an excessive number of wait-for-buffer conditions are reflected in the
IMS Fast Path Log Analysis report.
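For a BMP, the NBA and OBA values are passed on the dependent region startup procedure. A sketch, assuming the IMS-supplied IMSBATCH procedure; the member and PSB names are hypothetical:

```
//BMP  EXEC IMSBATCH,MBR=MYBMP,PSB=MYPSB,NBA=10,OBA=2
```

BSIZ and DBFX, by contrast, are system startup parameters and are not specified on the dependent region procedure.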
To optimize the buffer usage, group message processing application programs with
similar buffer use characteristics and assign them to a particular message class, so
that the applications share the region’s buffers.
Related Reading: See IMS Version 9: Installation Volume 2: System Definition and
Tailoring for details of APPLCTN and TRANSACT class specifications.
The number of contention and deadlock situations can be decreased by taking the
following steps:
v Ensure that CIs contain no more segments than necessary. (CI size is specified
in the DBD.)
v Limit the use of the overflow buffer interlock, which, in conjunction with CI usage,
can be involved in a deadlock.
v Limit the value of NBA to the value necessary to cope with the majority of cases
and use OBA to deal with the exceptional conditions. When the full buffer
allocation (NBA or NBA and OBA) for a program has been exceeded, the buffer
manager can begin stealing unmodified buffers from this program. When all
buffers associated with a CI have been stolen, the CI can be released, providing
it is not currently in use by a PCB. The buffer stealing and associated CI
releasing is triggered by exceeding the full buffer allocation. Minimizing NBA and
OBA will assist the timely release of CIs, thereby reducing CI contention.
v Ensure that BMPs accessing DEDBs issue SYNC calls at frequent intervals.
(BMPs could be designed to issue many calls between sync points and so gain
exclusive control over a significant number of CIs.)
v BMPs that do physical-sequential processing through a DEDB should issue a
SYNC call when crossing a CI boundary (provided it is possible to calculate this
point). This ensures that the application program never holds more than a single
CI.
Reports produced by the Fast Path Log Analysis utility give statistics about CI
contention.
situations will affect the operation of the system as a whole and can necessitate
lengthy recovery procedures. The number of out-of-space conditions can be
decreased by:
v Attempting to restrict the number of uses of independent overflow CIs through
randomizing algorithm design or regular reorganization
v Deleting sequential dependent CIs on a regular basis
v Using display commands or DEDB POS calls to track space usage
An out-of-space condition can be relieved without bringing IMS down by following
the procedures in “Extending DEDB Independent Overflow Online” on page 458.
It is likely that, for performance reasons, the physical log record will be large, so
that the log record might not be written for some time during low logging activity.
However, IMS varies the interval between the periodic invoking of physical logging.
This interval is directly related to the total logging activity in the IMS system. (Low
activity causes a smaller interval to be set.)
The physical logging process can be relatively slow because of small physical log
buffers or channel or control unit contention for the WADS/OLDS data sets.
The Fast Path environment can have high transaction rates and logging activity.
Therefore, the physical configuration supporting the logging process must also be
analyzed and altered for optimum performance.
In the case of deadlocks, the application program is pseudoabended for dynamic backout. The program controller subtask is detached and subsequently reattached. For verify failures or rollback calls, rescheduling involves only releasing the resources held and returning control to the application program.
An excessive incidence of these conditions adds to response time and total overhead, as do conditions that result in abend interception followed by a dump and application program reinstatement.
One technique used by the IMS-supplied Fast Path Resource Name Hash routine
(DBFLHSH0) increases the range of values implicit with the relative CI numbers by
combining parts of the 31-bit CI number with values derived from a database’s
DMCB number and its area number as follows: Bits 11 through 15 of DMCB
number are XOR’d with bits 7, 6, 5, 4, 3 of the area number to give a combination
5-bit position number. (Using the area number’s bits in reverse order helps make
both DMCB number and area number vary the combination value.)
When you modify your database, you often make more than a simple change to it.
For example, you might need to add a segment type and a secondary index. This
topic has procedures to guide you through making each type of change. If you
make more than one change at a time, you should look at Figure 223 on page 413.
The flowchart, when used with the individual procedures in this chapter, will guide
you in making some types of multiple changes to the database.
Attention: If the DBD for an existing MSDB is changed, the header information
(BHDR) might change, even though the database segments do not. In this case,
the headers in the MSDBCPx data sets are invalid or the wrong length. A change in
the MSDB headers causes message DFS2593I. If ABND=Y is specified in the
MSDB PROCLIB member, ABENDU1012 is also issued. Correct this problem by
using the MSDBLOAD option on a warm start or cold start to load the MSDBs from
an MSDBINIT data set.
7. For non-VSAM data sets, delete the old database space and define the new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
8. Reload your database, using the new DBD. Make an image copy of your
database as soon as it is reloaded.
9. If your database uses logical relationships or secondary indexes, run some of
the reorganization utilities before and after reloading to resolve prefix
information. The flowchart in Figure 195 on page 346 tells you which utilities to
use and the order in which they must be run.
10. Code and execute an application program to insert the new segment types into
the database.
You can delete a segment type from a database, using the reorganization utilities, if:
v The existing relative order of segments in the database record does not change.
In other words, the existing parent to child relationships cannot change.
v The existing segment names do not change.
To use the reorganization utilities to delete a segment type from the database:
1. Code and execute an application program to delete all occurrences of the
segment type being deleted. You must code and execute the application
program before the database is unloaded.
2. Determine whether the change you are making affects the code in any
application programs. If the code is affected, make sure it gets changed.
3. Unload your database, using the existing DBD.
4. Code a new DBD. You need to remove SEGM= statements from the DBD for:
v The segment type being deleted
v The children of the deleted segment.
5. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for those application programs. If you
have the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
6. Recalculate database space. You need to do this because the change you are
making will result in different requirements for database space.
Related Reading: See “Estimating the Minimum Size of the Database” on
page 311 for a description of how to calculate database space.
7. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
8. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
9. Reload your database using the new DBD. Remember to make an image copy
of your database as soon as it is reloaded.
10. If your database uses logical relationships or secondary indexes, you must run
some of the reorganization utilities before and after reloading to resolve prefix
information. The flowchart in Figure 195 on page 346 tells you which utilities to
use and the order in which they must be run.
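For step 4, if a hypothetical segment DELSEG (with child CHILDSEG) is being removed, the new DBD simply omits their SEGM statements while the remaining statements stay as before. A fragment, with all names hypothetical:

```
SEGM    NAME=ROOTSEG,PARENT=0,BYTES=100
FIELD   NAME=(ROOTKEY,SEQ,U),BYTES=10,START=1
*  SEGM statements for DELSEG and its child CHILDSEG removed here
SEGM    NAME=OTHRSEG,PARENT=ROOTSEG,BYTES=60
```

Remember that the relative order of the surviving segments must not change.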
If you are increasing the size of a segment, you cannot predict what is at the end of
the segment when it is reloaded. Also, new data must be added to the end of a
segment using your own program after the database is reloaded.
3. Code a new DBD. You need to change the BYTES= operand on the SEGM
statement in the DBD to reflect the new segment size. If you are eliminating
data from a segment for which FIELD statements are coded in the DBD, you
need to eliminate the FIELD statements. If you are adding data to a segment
and the data is referenced in the SSA in application programs, you need to
code FIELD statements. No database updates are allowed between unload and
reload.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for those application programs. If you have
the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
6. Recalculate database space. You need to do this because the change you are
making results in different requirements for database space.
Related Reading: See “Estimating the Minimum Size of the Database” on page
311 for a description of how to calculate database space.
7. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
8. Reload your database, using the new DBD. Make an image copy of your
database as soon as it is reloaded.
9. If your database uses logical relationships or secondary indexes, you must run
some of the reorganization utilities before and after reloading to resolve prefix
information. The flowchart in Figure 195 on page 346 tells you which utilities to
use and the order in which they must be run.
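The DBD changes in step 3 amount to edits like the following fragment, shown for a hypothetical segment grown from 80 to 120 bytes with a new field that application SSAs will reference; all names and offsets are illustrative:

```
SEGM    NAME=SEGA,PARENT=ROOTSEG,BYTES=120
*  BYTES= raised from 80 to 120
FIELD   NAME=(SEGAKEY,SEQ,U),BYTES=10,START=1
FIELD   NAME=NEWFLD,BYTES=20,START=81
*  NEWFLD added because application SSAs reference the new data
```

If data is instead being removed, delete the FIELD statements that describe the eliminated bytes.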
Related Reading: See “Field-Level Sensitivity” on page 220 for information on how
to use field-level sensitivity.
The examples in this topic are followed by Table 32 on page 441, which tells you what to do when reorganizing a database to add a logical relationship. Following the table, “Some Restrictions on Modifying Existing Logical Relationships” on page 443 discusses some restrictions on modifying existing logical relationships.
The examples in this topic show the logical parent as a root segment, although this
is not a requirement. The examples are still valid when the logical parent is at a
lower level in the hierarchy.
When adding logical relationships to existing databases, you should always make
the change on a test database. Thoroughly test the change before implementing it
using production databases.
Related Reading: For example procedures 1 through 13, the following related readings provide more detailed information for some of the steps:
v See “Estimating the Minimum Size of the Database” on page 311 for a description of how to calculate database space.
v See “Writing a Load Program” on page 320 for a description of how to write an initial load program.
DBX must be reorganized to add the counter field to the segment prefix for A. DBIL
must be specified in the control statement for DBX. In the following “Example 1
Procedure,” the counter field for segment A is updated to show the number of C
segments because segment C is loaded with a user load program.
Example 1 Procedure
1. Determine whether the change you are making affects the code in any
application programs. If the code is affected, make sure it gets changed.
2. Unload DBX, using the existing DBD and the HD Unload utility.
3. Code a new DBD for DBX and DBY. “How to Specify Use of Logical
Relationships in the Logical DBD” in Chapter 8, “Choosing Optional Database
Functions,” on page 151, explains how the DBD is coded for logical
relationships.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for these application programs. If you
have the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
6. Recalculate database space for DBX and calculate space for DBY.
7. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
8. Run the Prereorganization utility, specifying DBIL in the control statements for
DBX and DBY.
9. Reload DBX, using the new DBD and the HD Reload utility.
10. Load DBY, using an initial load program.
11. Run the Prefix Resolution utility, using the DFSURWF1 work files that are
output from Steps 9 and 10 as input.
12. Run the Prefix Update utility, using the DFSURWF3 work file that is output
from Step 11 as input.
13. Remember to make an image copy of both databases as soon as they are
loaded.
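For reference, step 8 of this procedure corresponds to Prereorganization utility control statements like the following. This is a hedged sketch: the database names DBX and DBY come from the example, but verify the exact control statement syntax for your IMS level in IMS Version 9: Utilities Reference: Database and Transaction Manager.

```
DBIL=DBX,DBY
```

DBIL identifies databases that are being initially loaded or reloaded with DBIL processing; DBR (used in later examples) identifies databases that are only being reorganized.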
In this example, the counter exists in the segment C prefix. DBX and DBY must be
reorganized to calculate the new value for the counter in the segment C prefix.
DBIL must be specified in the control statement for DBX and DBY. In the following
“Example 2 Procedure,” the segment A counter field is updated to show the number
of C segments because segment C is loaded with a user load program.
Example 2 Procedure
1. Determine whether the change you are making affects the code in any
application programs. If the code is affected, make sure it gets changed.
2. Unload DBX and DBY, using the existing DBDs and HD Unload utility.
3. Code a new DBD for DBY and DBZ. “How to Specify Use of Logical
Relationships in the Logical DBD” in Chapter 8, “Choosing Optional Database
Functions,” on page 151 explains how the DBD is coded for logical
relationships.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for these application programs. If you
have the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
6. Recalculate database space for DBX and DBY, and calculate space for DBZ.
7. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
8. Run the Prereorganization utility, specifying DBIL in the control statements for
DBX, DBY and DBZ.
9. Reload DBX and DBY, using the new DBDs and the HD Reload utility.
10. Load DBZ, using an initial load program.
11. Run the Prefix Resolution utility, using the DFSURWF1 work files that are
output from Steps 9 and 10 as input.
12. Run the Prefix Update utility, using the DFSURWF3 work file that is output
from Step 11 as input.
13. Remember to make an image copy of all three databases as soon as they are
loaded.
DBY must be reorganized to add the counter field to the segment C prefix. DBIL
must be specified in the control statement for DBY. DBX must be reorganized
because an initial load (DBIL) of the logical parent (segment C) assumes an initial
load (DBIL) of the logical child. The procedure for this example (and all conditions
and considerations) is exactly the same as for example 2.

The procedure for this example (and all conditions and considerations) is exactly
the same as for example 2.
DBX must be reorganized to add the logical child pointers in the segment A prefix.
Procedure
1. Determine whether the change you are making affects the code in any
application programs. If the code is affected, make sure it gets changed.
2. Unload DBX, using the existing DBD and the HD Unload utility.
3. Code a new DBD for DBX and DBY. “How to Specify Use of Logical
Relationships in the Logical DBD” in Chapter 8, “Choosing Optional Database
Functions,” on page 151 explains how the DBD is coded for logical
relationships.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for these application programs. If you
have the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
6. Recalculate database space for DBX, and calculate space for DBY.
7. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
8. Run the Prereorganization utility, specifying DBR in the control statement for
DBX, and DBIL in the control statement for DBY.
9. Reload DBX, using the new DBD and the HD Reload utility.
10. Load DBY, using an initial load program.
11. Run the Prefix Resolution utility, using the DFSURWF1 work files that are
output from Steps 9 and 10 as input.
12. Run the Prefix Update utility, using the DFSURWF3 work file that is output
from Step 11 as input.
13. Remember to make an image copy of both databases as soon as they are
loaded.
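Step 8 of this procedure mixes the two control statement keywords: DBR for the database being reorganized and DBIL for the database being initially loaded. A hedged sketch, with the same caveat that the exact syntax should be confirmed in the utilities reference:

```
DBR=DBX
DBIL=DBY
```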
DBY must be reorganized to add the logical child pointers to the segment C prefix.
One of the following three procedures should be used:
v “Procedure When Reorganizing DBY (Segment B Contains a Symbolic Pointer)”
v “Procedure When Reorganizing DBY and Scanning DBX (Segment B Contains a
Direct Pointer)” on page 433
v “Procedure When Reorganizing DBX and DBY” on page 433
11. Run the Prefix Resolution utility, using the DFSURWF1 work files that are
output from Steps 9 and 10 as input.
12. Run the Prefix Update utility, using the DFSURWF3 work file that is output
from Step 11 as input.
13. Remember to make an image copy of both databases as soon as they are
loaded.
When DBY is reloaded, two type 00 records are produced for each occurrence of
segment C. One contains a logical child database named DBZ and matches the
type 10 record produced for segment E. The other contains a logical child database
named DBX. Because a scan or reorganization of DBX was not done, a matching
type 10 record was not produced for segment B. The Prefix Resolution utility
produces message DFS878 when this occurs. The message can be ignored as long
as the printed type 00 record refers to DBY and DBX. Any messages for DBY and DBZ should
be investigated.
3. Code a new DBD for DBY and DBZ. “How to Specify Use of Logical
Relationships in the Logical DBD” in Chapter 8, “Choosing Optional Database
Functions,” on page 151 explains how the DBD is coded for logical
relationships.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for these application programs. If you
have the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
6. Recalculate database space for DBX and DBY, and calculate space for DBZ.
7. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
8. Run the Prereorganization utility, specifying DBR in the control statements for
DBX and DBY, and DBIL in the control statement for DBZ. (The output from
the Prereorganization utility says that a scan of DBX is required.)
9. Reload DBX and DBY, using the new DBDs and the HD Reload utility.
10. Load DBZ, using an initial load program.
11. Run the Prefix Resolution utility, using the DFSURWF1 work files that are
output from Steps 9 and 10 as input.
12. Run the Prefix Update utility, using the DFSURWF3 work file that is output
from Step 11 as input.
13. Remember to make an image copy of all three databases as soon as they are
loaded.
DBY must be reorganized to add the logical child pointers to the segment C prefix.
Logical child pointers from segment C to segment B are not unloaded; therefore,
DBX must be reorganized or scanned. To add the logical child pointers in the
segment A prefix, DBX must be reorganized.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for these application programs. If you
have the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
6. Recalculate database space for DBY and calculate space for DBZ.
7. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
8. Run the Prereorganization utility, specifying DBR in the control statements for
DBY, and DBIL in the control statement for DBZ. (The output from the
Prereorganization utility indicates that a scan of DBX is required.)
9. Run the scan utility against DBX.
10. Reload DBY, using the new DBDs and the HD Reload utility.
11. Load DBZ, using an initial load program.
12. Run the Prefix Resolution utility, using the DFSURWF1 work files that are
output from Steps 9, 10, and 11 as input.
13. Run the Prefix Update utility, using the DFSURWF3 work file that is output
from Step 12 as input.
14. Remember to make an image copy of both databases as soon as they are
loaded.
DBY must be reorganized to add the logical child pointers in the segment C prefix.
The procedure for this example (and all conditions and considerations) is exactly
the same as the procedures for example 6.
DBY must be reorganized. DBZ must be loaded using an initial load program. DBIL
must be specified in the control statement for DBY. Do not specify DBR in the
control statement for DBY.
Procedure
1. Determine whether the change you are making affects the code in any
application programs. If the code is affected, make sure it gets changed.
2. Unload DBY, using the existing DBD and HD Unload utility.
3. Code a new DBD for DBY and DBZ. “How to Specify Use of Logical
Relationships in the Logical DBD” in Chapter 8, “Choosing Optional Database
Functions,” on page 151 explains how the DBD is coded for logical
relationships.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for these application programs. If you
have the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
6. Recalculate database space for DBY and calculate space for DBZ.
7. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
8. Run the Prereorganization utility, specifying DBIL in the control statements for
DBY and DBZ.
9. Reload DBY, using the new DBDs and the HD Reload utility.
10. Load DBZ, using an initial load program.
11. Run the Prefix Resolution utility, using the DFSURWF1 work files that are
output from Steps 9 and 10 as input.
12. Run the Prefix Update utility, using the DFSURWF3 work file that is output
from Step 11 as input.
13. Remember to make an image copy of both databases as soon as they are
loaded.
In this example, you could use symbolic or direct pointers for segment X. Do not
under any circumstances specify DBR in the control statement for DBY. If you do,
the reload utility will not generate work records for segment D, and the logical child
pointer in segment D will never be resolved. The procedure for this example (and
all conditions and considerations) is exactly the same as the procedures for
example 9.
DBX and DBY must be reorganized. DBZ must be loaded using an initial load
program. Because you must specify DBIL in the control statement for DBZ (a logical
parent database), you must also specify DBIL for DBY (a logical child database).
DBY is also a logical parent database. Therefore, you must specify DBIL in the
control statement for DBX (a logical child database). The procedure for this
example (and all conditions and considerations) is exactly the same as for Example
2.
In this example, segment B has a symbolic pointer. The procedure for this example
(and all conditions and considerations) is exactly the same as for example 2.
Example 13. DBX and DBY Exist, Segment Y and DBZ Are to Be Added
Example 13 is shown in Figure 237.
Figure 237. DBX and DBY Exist, Segment Y and DBZ Are to Be Added
Procedure
1. Determine whether the change you are making affects the code in any
application programs. If the code is affected, make sure it gets changed.
2. Unload DBX, using the existing DBD and HD Unload utility.
3. Code a new DBD for DBY and DBZ. “How to Specify Use of Logical
Relationships in the Logical DBD” in Chapter 8, “Choosing Optional Database
Functions,” on page 151 explains how the DBD is coded for logical
relationships.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for these application programs. If you
have the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
6. Recalculate database space for DBX and DBY, and calculate space for DBZ.
7. For non-VSAM data sets, delete the old database space and define new
database space. For VSAM data sets, delete the space allocated for the old
clusters and define space for the new clusters.
8. Run the Prereorganization utility, specifying DBIL in the control statements for
DBX, DBY and DBZ.
9. Reload DBX, using the new DBD and the HD Reload utility.
10. Load DBY and DBZ, using an initial load program.
11. Run the Prefix Resolution utility, using the DFSURWF1 work files that are
output from Steps 9 and 10 as input.
12. Run the Prefix Update utility, using the DFSURWF3 work file that is output
from Step 11 as input.
13. Remember to make an image copy of all three databases as soon as they are
loaded.
The figure applies to reorganizations only. When initially loading databases, you
must run the Prefix Resolution and Update utilities whenever work data sets are
generated.
Table 32 covers all reorganization situations, whether or not database pointers are
being changed. In using the table, a bidirectional physically paired relationship
should be treated as two unidirectional relationships. Unless otherwise specified,
DBR should be specified for the reorganized databases when the Prereorganization
utility is run.
Assume your database has unidirectional symbolic pointers and you are not
changing pointers. On the left side of Table 32, in the FROM column, find
unidirectional symbolic pointers. Then follow across to the right in the TO row and
find unidirectional symbolic pointers. The table tells you what you must do to
reorganize one of the following:
v The database containing the logical parent
v The database containing the logical child
v Both databases
In all three situations, it is not necessary to run the Prefix Resolution or Prefix
Update utilities (this is what is meant by “finished”).
Assume your database has bidirectional symbolic pointers and you need to change
to bidirectional direct pointers. Table 32 shows that:
v Reorganizing only the logical parent database cannot be done, because a logical
parent pointer must be created in the logical child segment in the logical child
database.
v Reorganizing the logical child database can be done, but you must also scan the
logical parent database. The control statements for the Prereorganization utility
must specify DBIL for the logical child database.
Also, the Prefix Resolution and Update utilities must be run.
v Reorganizing both databases can also be done. In this case, the control
statements for the Prereorganization utility must specify DBIL for the logical child
database and DBR for the logical parent database. Again, the Prefix Resolution
and Update utilities must be run.
Notes:
1. The Prereorganization utility says to scan the logical child database, and DFSURWF1 records will be produced
if the scan is run.
2. DFSURWF1 records are produced; however, the prefix resolution and update utilities need not be run.
Figure 239. The Position Change of a Real Logical Child from One Logically Related
Database to Another
In both of these “before” examples, occurrences of segment B can exist that are
physically, but not logically, deleted. The logical child can be accessed from the
logical path but not the physical path. When unloading DBX, the HD Unload utility
cannot access occurrences of segment B that are physically, but not logically,
deleted. Therefore, you must write your own program to do this type of
reorganization.
4. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
5. Write a program that sequentially retrieves from the database all segments that
are to be variable length. Your program must add the 2-byte size field to each
segment retrieved and then insert the segment back into the database.
3. Code a new DBD. The new DBD must specify the name of your edit routine for
the segment types you need edited.
4. If the change you are making affected the code in application programs, make
any necessary changes to the PSBs for those application programs. If you have
the DB/DC Data Dictionary, it can help you determine which application
programs and PCBs are affected by the DBD changes you have made.
5. Rebuild the ACB if you have ACBs prebuilt rather than built dynamically.
6. Recalculate database space. You need to do this because the change you are
making results in different requirements for database space.
7. Delete the old database space and define new database space. If you are using
VSAM, use the Access Method Services DEFINE CLUSTER command to define
VSAM data sets.
8. Reload the database, using the new DBD. Remember to make an image copy
of your database as soon as it is reloaded.
9. If your database uses logical relationships or secondary indexes, you must run
some of the reorganization utilities before and after reloading to resolve prefix
information. Figure 195 on page 346 tells you which utilities to use and the
order in which they must be run.
Data Capture exit routines are explained in “Data Capture Exit Routines” on page
215. To convert an existing database for use with Data Capture exit routines or
Asynchronous Data Capture:
1. Determine whether the change requires revisions to the logical delete rules in a
database. If so, change the delete rules, which might require reorganizing your
database.
2. Code a new DBD. On the DBD or SEGM statements, specify the name of each
exit routine you need called against a segment in the database.
Related Reading:
v See IMS Version 9: Utilities Reference: System for details on the DBD
parameters required for Data Capture exit routines or Asynchronous Data
Capture.
v IMS Version 9: Customization Guide explains the exit routines in detail, how
to code them, and how they work.
3. Run DBDGEN.
4. If you use prebuilt ACBs rather than dynamically built ACBs, rebuild the ACB.
The online change function for DEDBs allows both database-level and area-level
changes. A database-level change affects the structure of the DEDB and includes
such changes as adding or deleting an area, adding a segment type, or changing
the randomizer routines. An area-level change involves increasing or decreasing the
size of an area (IOVF, DOVF, CI). An area-level change requires the user to stop
only that area with the /DBRECOVERY command; a database-level change requires
the user to stop all areas of the DEDB.
Unlike standard randomizers, which distribute database records across the entire
DEDB, two-stage randomizers distribute database records within an area. By using
a two-stage randomizer, changes to an individual area’s root addressable allocation
are area-level changes, and only the areas affected need to be stopped.
Note: All changes to ACBLIB members resulting from the ACBGEN execution
are available to the online system after the online change (provided that
the changed resources—PSBs and DBDs—are defined in the online
system).
| 4. Update the security definitions of the IMS system’s security facilities to include
| any new databases. Security facilities can include RACF, another external
| security product, the IMS Security Maintenance utility, and exit routines. For
| more information on IMS security, see IMS Version 9: Administration Guide:
| System.
5. Allocate the database data sets for databases to be added.
6. Load your database.
7. For Fast Path, online change must be completed before the database can be
loaded. Also, Fast Path can only load databases online; batch jobs cannot be
used.
8. If dynamic allocation is used in a z/OS environment, run the dynamic allocation
utility.
9. Use the online change utility to copy your updated staging libraries to the
inactive libraries (see IMS Version 9: Utilities Reference: System for
information on running this utility).
10. Issue the operator commands to cause your inactive libraries to become your
active libraries (see IMS Version 9: Command Reference for the commands
used).
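Steps 9 and 10 above are typically carried out with the online change utility followed by the /MODIFY command sequence. The sketch below assumes that only ACBLIB members changed; the keyword would be MODBLKS or ALL if other libraries were also changed, and the exact command forms should be confirmed in IMS Version 9: Command Reference.

```
/MODIFY PREPARE ACBLIB
/MODIFY COMMIT
```

If errors are found after the PREPARE, /MODIFY ABORT backs out the change instead of committing it.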
Two levels of changes can be made to DEDBs. Database-level changes allow you
to:
1. Add or delete DEDBs.
2. Add or delete segment types.
3. Add, change, or delete a segment and its fields.
4. Add, change, or delete segment compression routines.
5. Add, change, or delete data capture exit routines.
6. Change randomizers.
7. Add or delete areas.
8. Change area RAP space allocation when the randomizer is not a two-stage
randomizer.
Area-level changes and items 4 through 8 of the database-level changes require a
BUILD DBD (not a BUILD PSB). In this case the PSB does not change, except for
items 4 and 5 when the defined PSB SENSEGs reference the exit routines that are
added or deleted. Changes can be made to DEDBs using online change while
maintaining the availability of IFP and MPP regions that access the DEDBs only if
there is no change to the scheduled PSB. The application then pseudoabends with
ABENDU0777, message DFS2834I is issued, and the PSB is rescheduled on the
next DL/I call to the DEDB. Other changes to the PSBs, such as items 1 through 5
of the DEDB database changes, full-function database changes, or PSB changes
using online change, require that the IFP and MPP regions be brought down.
The following procedure describes the steps necessary to make database-level
changes to a DEDB while an IFP or MPP region is running:
1. Use a specific user-developed application program or OEM utility to unload the
DEDB through existing system definitions.
2. DBDGEN, PSBGEN and ACBGEN to generate the application control blocks to
implement the DEDB structural changes. The changed or new application
control blocks must be built into the active IMS system’s staging copy of
ACBLIB, which is offline.
3. Run the online change utility, DFSUOCU0, to move the changed ACBLIB from
the staging ACBLIB to the inactive (A or B) copy of the ACBLIB that is online
to the active IMS system.
4. Enter the normal /DBR command sequence to remove access to the DEDB
from the active IMS system.
5. Enter and follow the online change command sequence for PREPARE
processing for ACBLIB changes.
6. Enter and follow the online change command sequence for COMMIT/ABORT
processing for ACBLIB changes. The online IMS system will switch from using
the active (A or B) copy of the ACBLIB to the inactive (A or B) copy.
7. Delete, define and initialize all of the DEDB AREA data sets with the new
system definitions.
8. Enter the normal /START DATABASE and /START AREA commands to make the
DEDB and its AREAs accessible to the active IMS system.
9. Use a specific user-developed application program or OEM utility to reload the
DEDB through the changed system definitions for the DEDB.
10. On the first access to the newly changed DEDB, the application will
pseudoabend and the PSB will be rescheduled. Message DFS2834I will be
displayed.
The transaction will be tried again for both IFPs and MPPs when the PSB is
rescheduled. If the application attempts to access the DEDB before commit
processing has completed, an ’FH’ status will be returned to the application.
The DEDB is inaccessible because the randomizer for the DEDB is unloaded
by the /DBR command.
If database level changes are made to DEDBs while a BMP or DBCTL thread is
active, then commit processing fails and the message DFS3452 is issued.
Related Reading: See the IMS Version 9: Messages and Codes, Volume 2 for
more information on message DFS3452 and other messages.
If area level changes are made to DEDBs while a BMP or DBCTL thread is active,
then on the next access to the newly changed area, the application should continue
processing as usual.
A randomizer change can involve introducing a brand new randomizer into the
active IMS system or changing an existing randomizer in use by one or more
DEDBs.
Adding a New Data Capture Exit Routine: To add a new Data Capture exit
routine, follow the procedure below:
1. Assemble and link edit the new exit routine into the IMS.SDFSRESL or one of
the libraries in the IMS.SDFSRESL STEPLIB concatenation.
2. Run a DBDGEN for the DEDB with the new exit routine designated on the DBD
or SEGM statement with the EXIT= parameter.
3. ACBGEN is also needed to build the application control blocks to implement the
DEDB definition that includes the new exit routine. The changed or new
application control blocks must be built into the active IMS system’s staging
copy of ACBLIB, which is offline.
4. Run the online change Utility, DFSUOCU0, to move the changed ACBLIB from
the staging ACBLIB to the inactive (A or B) copy of the ACBLIB that is online to
the active IMS system.
5. Enter the normal /DBR command sequence to remove access to the DEDB
from the active IMS system.
6. Enter and follow the online change command sequence for PREPARE
processing for ACBLIB changes.
7. Enter and follow the online change command sequence for COMMIT/ABORT
processing for ACBLIB changes. The online IMS system will switch from using
the active (A or B) copy of the ACBLIB to the inactive (A or B) copy.
8. Enter the normal /START DATABASE and /START AREA commands to make the
DEDB and its areas accessible to the active IMS system.
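Step 2 of this procedure names the exit routine on the DBD or SEGM statement. The DBDGEN source fragment below is a hedged sketch: the names DEDBX, RANDMOD, SEGA, and DCAPXIT are hypothetical, the field values are illustrative, and only the EXIT= parameter is the point of the example.

```
DBD    NAME=DEDBX,ACCESS=DEDB,RMNAME=RANDMOD
SEGM   NAME=SEGA,PARENT=0,BYTES=40,EXIT=(DCAPXIT)
```

Deleting the exit routine (the procedure that follows) uses the same fragment with the EXIT= parameter omitted.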
Deleting a Data Capture Exit Routine: To delete a Data Capture exit routine,
execute the following steps:
1. Run a DBDGEN for the DEDB with the old exit routine omitted from the DBD or
SEGM statement.
2. ACBGEN is also needed to build the application control blocks to implement the
DEDB definition that excludes the old exit routine. The changed or new
application control blocks must be built into the active IMS system’s staging
copy of ACBLIB, which is offline.
3. Run the online change utility, DFSUOCU0, to move the changed ACBLIB from
the staging ACBLIB to the inactive (A or B) copy of the ACBLIB that is online to
the active IMS system.
4. Enter the normal /DBR command sequence to remove access to the DEDB
from the active IMS system.
5. Enter and follow the online change command sequence for PREPARE
processing for ACBLIB changes.
6. Enter and follow the online change command sequence for COMMIT/ABORT
processing for ACBLIB changes. The online IMS system will switch from using
the active (A or B) copy of the ACBLIB to the inactive (A or B) copy.
7. Enter the normal /START DATABASE and /START AREA commands to make
the DEDB and its areas accessible to the active IMS system.
If, however, a two-stage randomizer is used for the DEDB, a change to an
individual area UOW root addressable definition is an area-level change. A
two-stage randomizer does not attempt to evenly distribute database records
across all areas based on the total number of root anchor points in the entire
DEDB. A two-stage randomizer is designated in the DBDGEN by coding the
randomizer name as follows:
RMNAME=(mmmmmmmm,2)
In prior releases of IMS, customers would get the following error message if a
DEDB DBD had more than one operand in the RMNAME parameter:
8, DBD130 - RMNAME OPERAND IS OMITTED OR INVALID
The same message will appear for this release of IMS if anything but a two is
specified as the second operand of RMNAME. Customers can still specify
RMNAME=(mmmmmmmm) for standard randomizer routines.
Adding or deleting a DEDB and implementing the change by means of the IMS
online change facility requires that you follow the steps described below. See
Figure 240 for an overall picture.
1. MODBLKs Level system definition (Stage 1 and Stage 2) to add or delete the
DEDB. The changed MODBLKs should be generated into the active IMS
system’s staging copy of MODBLKs, which is offline.
2. DBDGEN, PSBGEN and ACBGEN to generate the application control blocks to
add or delete the DEDB and PSBs that access it. The changed or new
application control blocks must be generated into the active IMS system’s
staging copy of ACBLIB, which is offline.
3. Run the online change utility, DFSUOCU0, to move the changed MODBLKs and
ACBLIB changes from the staging libraries to the inactive (A or B) copies of
these libraries that are online to the active IMS system.
4. Enter and follow the online change command sequence for PREPARE
processing. If a DEDB is being added to an IMS system that does not have
Fast Path installed, the DFS2833 error message will appear and the PREPARE
process will be aborted.
If a DEDB is added whose areas have CI sizes that exceed the system buffer
size (BSIZ=), then message DFS2832 appears and the PREPARE process
aborts.
Finally, if a DEDB is added to an IMS system that was initialized without any
DEDBs, then message DFS2837 appears and the PREPARE process aborts.
Output threads are initialized during Fast Path initialization only if DEDBs are
currently generated in the system. In order for the user to be able to add
DEDBs with online change, IMS must be initialized with DEDBs to begin with.
5. If the DEDB is to be deleted, any BMP region or DBCTL thread scheduled for
access to the DEDB must first be stopped. Full function transactions scheduled
for access to the DEDB will be placed in a QSTOP state and as a result, MPP
or IFP dependent regions need not be stopped to implement the online change
to delete the DEDB.
6. If the DEDB is to be deleted, access to it from the active IMS system must be
removed by means of a /DBR DB command. The commit will fail with a
DFS3452 message if the DEDB has not had the /DBR command successfully
run against it beforehand.
7. Execute the online change command sequence for COMMIT/ABORT
processing.
8. If the DEDB is newly added, execute the following additional steps at any
appropriate time prior to making the DEDB generally available for normal user
access:
a. Execute the normal procedures for defining the new DEDB and its areas
and area data sets to DBRC and the RECON data sets.
b. Define and initialize all of the area data sets belonging to the new DEDB.
c. Execute the procedures to include the required Dynamic Allocation
definitions that will enable the DEDB and its areas to be allocated to the
active IMS system. Or register the DEDB and its areas to DBRC, and DBRC
will dynamically allocate them during IMS initialization.
d. Enter the /START DATABASE and /START AREA commands to make the DEDB
and its areas accessible to the active IMS system.
e. Run the necessary application load programs.
Related Reading: See the IMS Version 9: Messages and Codes, Volume 2 for
information on the types of messages you might receive while adding or deleting
DEDBs.
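The /DBR and /START commands referenced in steps 6 and 8d take forms like the following. This is a hedged sketch: DEDBX and AREA1 are hypothetical names, /DBR is the short form of /DBRECOVERY, and the operands should be confirmed in IMS Version 9: Command Reference. The /DBR command is entered before the commit; the /START commands are entered after the online change completes.

```
/DBR DB DEDBX
/START DATABASE DEDBX
/START AREA AREA1
```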
4. Enter the normal /DBR command sequence to remove access to the DEDB
from the active IMS system. This command may be issued any time prior to the
/MODIFY COMMIT.
5. Enter and follow the online change command sequence for PREPARE
processing for ACBLIB changes.
6. Enter and follow the online change command sequence for COMMIT/ABORT
processing for ACBLIB changes.
7. Delete, define and initialize all of the AREA data sets belonging to the DEDB
with the new system definitions.
8. Enter the normal /START DATABASE and /START AREA commands to make the
DEDB and its areas accessible to the active IMS system.
9. Use a specific customer-developed application program or OEM utility to reload
the DEDB through the changed system definitions for the DEDB.
A change to the UOW structure that changes the number of CIs defined to the root
addressable area constitutes a database-level change when a standard DEDB
randomizing routine is used. This type of change should be treated the same as a
DEDB structural change in terms of online change procedures.
Additionally, DEDB online change allows changes to the overflow
space allocation both within each UOW (Dependent Overflow) and outside the root
addressable portion (Independent Overflow) of the AREA. Both Dependent and
Independent Overflow changes are considered to be Area-level changes. However,
such changes must not alter the number of CIs defined to the root addressable
portion. Changing the number of root addressable CIs changes the number of
root anchor points and could affect the ability of the DEDB randomizing routine to
locate database records.
Changing DEDB AREA overflow allocation requires the same procedural steps as
those defined for changing the root addressable area.
Related Reading: See “Changing the DEDB AREA UOW Structural Definition” on
page 454 for details on changing the DEDB AREA overflow.
Changing CI Size
DEDB online change can be used to change DEDB AREA control interval size.
However, CI size changes must not alter the number of CIs allocated to the root
addressable portion of an AREA because this could affect the DEDB randomizer in
locating database records across the DEDB. The SIZE= parameter on the AREA
statement of DBDGEN defines the CI size of the data set that constitutes the
AREA.
You cannot decrease the size of the IOVF with this procedure. However, the size of
the sequential dependent part might increase or decrease depending on the total
amount of space allocated to the area. The steps in this procedure also reorganize
the area.
To increase the size of the IOVF portion of a DEDB online you must:
1. Run the DBDGEN utility to obtain an updated DBD. Update only the following
operands on the ROOT= keyword of the AREA statement:
number
Specifies the total number of units of work (UOWs) allocated to the root
addressable and the IOVF parts of the area. Increase number to reflect
the number of UOWs you need to add to the IOVF.
overflow
Specifies the space reserved for the IOVF, expressed as the number of
UOWs. Increase the number on this operand by the same amount you
increase the number operand. For example, if the original values were
number=x and overflow=y, and if number is changed to x + 2, then
overflow must be changed to y + 2.
All other control statements must remain identical to those on the existing
DBD. Changing other control statements might damage data and create
unpredictable results.
2. Run the ACBGEN utility using the updated DBD. You should run PSB=ALL to
create a new and complete ACBLIB with the new ROOT= parameters. The
output should be a different data set from the one currently used by the control
region. The new ACBLIB is identical to the old ACBLIB, except for the ROOT=
changes. You can use the staging ACBLIB, but do not switch with the online
change function.
3. Ensure that the area is in good condition. The area must not have any
in-doubts, and must not be in a recovery-needed condition. Also, at least one
copy of the area (one area data set) must have no error queue elements
(EQEs). Use the /DIS AREA command to display EQEs and the condition. Use
the /DIS CCTL INDOUBT command to display all in-doubt threads. Eliminate
potential defects before continuing to the next step so that data is not lost or
damaged.
4. Process SDEPs using the SDEP scan and delete utilities. This step is required
because the IOVF extension procedure requires an unload and load of the
area. Some unload and load utilities are unable to process SDEPs.
Unload/load utilities that do process SDEPs might reload them in root order
rather than time order, which can interfere with subsequent SDEP scan and
delete operations.
Related Reading:
v For more information on the DBRC definitions for the shared AREAs with
SDEP segments, see the IMS Version 9: Database Recovery Control
(DBRC) Guide and Reference.
v For more information on DEDB Sequential Dependent Scan utility keywords
and change boundaries, see the IMS Version 9: Utilities Reference:
Database and Transaction Manager.
v For more information on the DEDB Sequential Dependent Scan utility
user-written exit routine parameter interface, see the IMS Version 9:
Customization Guide.
5. If multiple copies of the area (MADS) exist, stop all copies of the area except
one using the /STOP ADS command. Ensure that the remaining copy does not
have any EQEs and is not in a recovery-needed condition. Multiple ADSs must
be stopped to ensure that DBRC has accurate information when the area is
brought online after the IOVF is extended.
6. Issue a /DBR or /STO AREA command against the area.
7. Take an image copy of the area.
8. If the area is registered with DBRC, set the recovery-needed flag on for the
area. This flag is required by the DEDB Initialization utility and can be set
using a CHANGE.DBDS RECOV command.
9. Unload the area.
10. Execute the IDCAMS utility to delete and redefine the data set. The amount of
space you allocate for the area in the Define procedure should reflect the
increased size of the IOVF. The number of SDEP CIs in the area might change
because this number represents the difference between the total amount of
space allocated to the area and the amount used by the other parts. These
other parts are the root addressable part, the IOVF, the reorganization UOW,
and two control CIs.
Related Reading: See DFSMS Access Method Services for Catalogs for a
description of the IDCAMS Delete and Define functions.
11. Execute the Fast Path initialization utility against the new area using the new
ACBLIB.
12. Issue the /START AREA command to bring the area online.
13. Reload the area.
Note: It is recommended that you reload the area in batch. If you reload the
area using a BMP, the BMP might fail with message DFS3709A and
reason code 5. If this failure occurs, issue the CHANGE.DBDS command to
set ICOFF and restart the BMP.
IMS Version 9: Messages and Codes, Volume 2 explains message DFS3709A
and the reason for this failure.
14. Take an image copy of the area after the reload.
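As a check on step 1 of this procedure, the relationship between the two ROOT= operands can be sketched in Python. This is an illustrative sketch only (the function name and values are invented); it simply shows that number and overflow must grow by the same amount, so the root addressable UOW count is unchanged:

```python
def extend_iovf_root_operands(number, overflow, added_uows):
    """Compute new ROOT=(number,overflow) operands after adding
    added_uows units of work to the IOVF. Both operands grow by the
    same amount, so the root addressable UOW count (number - overflow)
    is unchanged and the randomizer sees the same root layout."""
    return number + added_uows, overflow + added_uows

# Hypothetical example: ROOT=(100,20), adding 2 UOWs to the IOVF.
new_number, new_overflow = extend_iovf_root_operands(100, 20, 2)
print(new_number, new_overflow)  # -> 102 22
assert new_number - new_overflow == 100 - 20  # root addressable UOWs unchanged
```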
When the area is next accessed, message DFS3703I is issued. This message
alerts you that discrepancies were found during open processing. However, open
processing continues because the discrepancies indicate to IMS that you used an
accepted procedure to increase the size of the IOVF. DFS3703I is not issued during
subsequent opens of the area as long as IMS remains online. DFS3703I is also
issued by any sharing subsystem the first time the area is opened on that
subsystem after the IOVF is extended.
During emergency restart or extended recovery facility (XRF) takeover, the updated
area information is picked up from the log. Therefore, DFS3703I is not issued.
Use the new ACBLIB for any subsequent normal restarts of the online system.
Ensure that the new ACBLIB reflects only the changes made to the ROOT=
keyword. Any other changes you make might cause damage to the area. If you do
not use the new ACBLIB, open logic allows the discrepancy between information
from the old ACBLIB and information from the area data set, but issues message
DFS3703I each time the discrepancy is encountered.
Note: Remember that you cannot use the online change function to update the
ACBLIB with the altered ROOT= parameter.
The meaning of each bit in the delete byte, when turned on, is as follows:
Bit Meaning When Delete Byte is Turned On
0 Segment has been marked for deletion. This bit is used for segments in a
HISAM or secondary index database or segments in a primary index.
1 Database record has been marked for deletion. This bit is used for
segments in a HISAM or secondary index database or segments in a
primary index.
2 Segment has been processed by the delete routine.
3 This bit is reserved.
4 Prefix and data portion of the segment are separated in storage. (The
delete byte preceding the separated data portion of the segment has all bits
turned on.)
5 Segment has been marked for deletion from a physical path. This bit is
called the PD (physical delete) bit.
6 Segment has been marked for deletion from a logical path. This bit is called
the LD (logical delete) bit.
7 Segment has been marked for removal from its logical twin chain. This bit
should be set on only if bits 5 and 6 are also on.
The delete byte is also used for the root segment of a DEDB, but there it is called
a prefix descriptor byte. The meaning of each bit, when turned on, is as follows:
Bit Meaning When Root Segment Prefix Descriptor is Turned On
0 Sequential dependent segment is defined.
1-3 These bits are reserved.
4-7 If the number of defined segments is 8 or less, bits 4 through 7 contain the
highest defined segment code. Otherwise, these bits are set to zeros.
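As an illustration of the delete-byte layout above, the following Python sketch decodes which bits are on. The function name is invented, and the bit masks assume the IBM convention that bit 0 is the leftmost (most significant) bit of the byte:

```python
# Bit meanings, summarized from the delete-byte description above.
DELETE_BYTE_BITS = {
    0: "segment marked for deletion (HISAM/secondary index/primary index)",
    1: "database record marked for deletion",
    2: "processed by the delete routine",
    3: "reserved",
    4: "prefix and data separated in storage",
    5: "PD (physical delete) bit",
    6: "LD (logical delete) bit",
    7: "removed from logical twin chain",
}

def decode_delete_byte(byte):
    """Return the set of IBM-numbered bits (0 = leftmost) on in a delete byte."""
    return {bit for bit in range(8) if byte & (0x80 >> bit)}

# PD and LD both on: bit 5 (mask 0x04) and bit 6 (mask 0x02).
print(sorted(decode_delete_byte(0x06)))  # -> [5, 6]
```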
Appendix B, “Insert, Delete, and Replace Rules for Logical Relationships,” on page
465, discusses replacing, inserting, and deleting rules for logical relationships,
which includes how to specify rules in a physical DBD and a rules summary.
For example, RULES=P,L,V says the insert rule is physical, the delete rule is
logical, and the replace rule is virtual. The B rule is only applicable for delete. In
general, the P rule is the most restrictive, the V rule is least restrictive, and the L
rule is somewhere in between.
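The positional meaning of the RULES= list can be illustrated with a small Python sketch (the function name is invented for illustration; B is shown as the bidirectional virtual delete rule):

```python
def parse_rules(spec):
    """Parse a RULES= value such as 'P,L,V' into its insert, delete,
    and replace rules, in that positional order."""
    names = {"P": "physical", "L": "logical", "V": "virtual",
             "B": "bidirectional virtual"}  # B applies only to delete
    insert, delete, replace = (names[c] for c in spec.split(","))
    return {"insert": insert, "delete": delete, "replace": replace}

print(parse_rules("P,L,V"))
# -> {'insert': 'physical', 'delete': 'logical', 'replace': 'virtual'}
```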
Insert Rules
The insert rules apply to the destination parent segments, but not to the logical child
segment. A destination parent can be a logical or physical parent. The insert rule
has no meaning for the logical child segment except to satisfy the RULES= macro’s
coding scheme. Therefore, any insert rule (P, L, V) can be coded for a logical child.
A logical child can be inserted provided:
v The insert rule of the destination parent is not violated
v The logical child being inserted does not already exist (it cannot be a duplicate)
A description of how the insert rules work for the destination parent is as follows:
v When RULES=P is specified, the destination parent can be inserted only using
the physical path. This means the destination parent must exist before inserting a
logical path. A concatenated segment is not needed, and the logical child is
inserted by itself. Figure 242 on page 467 shows an example of the physical
insert rule.
v When RULES=L is specified, the destination parent can be inserted either using
the physical path or concatenated with the logical child and using the logical
path. When a logical child/destination parent concatenated segment is inserted,
the destination parent is inserted if it does not already exist and the I/O area key
check does not fail. If the destination parent does exist, it will remain unchanged
and the logical child will be connected to it. Figure 245 on page 468 shows an
example of the logical insert rule.
v When RULES=V is specified, the destination parent can be inserted either using
the physical path or concatenated with the logical child and using the logical
path. When a logical child/destination parent concatenated segment is inserted,
the destination parent is replaced if it already exists. If it does not already exist,
the destination parent is inserted. Figure 247 on page 469 shows an example of
the virtual insert rule.
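The three destination-parent behaviors above can be summarized as decision logic for an insert attempted through the logical path. The following Python sketch is an illustrative simplification (names are invented; the duplicate check on the logical child and the I/O area key check are omitted); it returns a DL/I-style status code and the resulting parent value:

```python
def insert_concatenated(rule, parent_exists, new_parent):
    """Outcome of inserting a logical child concatenated with its
    destination parent, per the destination parent's insert rule.
    Returns (status, parent): '  ' is a blank (successful) status,
    'IX' means the P rule was violated."""
    if rule == "P":
        # Physical rule: the parent can be created only via its
        # physical path, so it must already exist.
        if not parent_exists:
            return "IX", None
        return "  ", "existing"
    if rule == "L":
        # Logical rule: insert the parent if absent; leave it
        # unchanged if it already exists.
        return "  ", "existing" if parent_exists else new_parent
    if rule == "V":
        # Virtual rule: insert the parent if absent; replace it
        # with the inserted data if it already exists.
        return "  ", new_parent
    raise ValueError(rule)

print(insert_concatenated("P", False, "CUST01"))  # -> ('IX', None)
print(insert_concatenated("L", True,  "CUST01"))  # -> ('  ', 'existing')
print(insert_concatenated("V", True,  "CUST01"))  # -> ('  ', 'CUST01')
```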
For all DL/I calls, either an error is detected and an error status code returned (in
which case no data is changed), or the required changes are made to all segments
affected by the call. Therefore, if the required function cannot be performed for both
parts of the concatenated segment, an error status code is returned, and no change
is made to either the logical child or the destination parent.
Status Codes
The nonblank status codes that can be returned to an application program after an
ISRT call are as follows:
v AM—An insert was attempted and the processing option does not permit inserts (PROCOPT does not include I)
v GE—The parent of the destination parent or logical child was not found
v II—An attempt was made to insert a duplicate segment
v IX—The rule specified was P, but the destination parent was not found
One reason for getting an IX status code is that the I/O area key check failed.
Concatenated segments must contain the destination parent’s key twice—once
as part of the logical child’s LPCK and once as a field in the parent. These keys
must be equal.
Figure 242, Figure 243, and Figure 244 on page 468 show a physical insert rule
example.
Appendix B. Insert, Delete, and Replace Rules for Logical Relationships 467
Figure 244. ISRT and Status Codes for Physical Insert Rule Example
Figure 245 and Figure 246 show a logical insert rule example.
Figure 246. ISRT and Status Codes for Logical Insert Rule Example
The IX status code shown in Figure 246 is the result of omitting the concatenated
segment CUST/CUSTOMER in the second call. IMS checked for the key of the
CUSTOMER segment (in the I/O area) and failed to find it. With the L insert rule,
the concatenated segment must be inserted to create a logical path.
Figure 247 on page 469 and Figure 248 on page 469 show a virtual insert rule
example.
Figure 248. ISRT and Status Codes for Virtual Insert Rule Example
The code shown in Figure 248 will replace the LOANS segment if present, and
insert the LOANS segment if not. The V insert rule is a powerful option.
Specifying the insert rule as L on the logical and physical parent allows insertion
using either the physical path or the logical path as part of a concatenated
segment. When inserting a concatenated segment, if the destination parent already
exists it remains unchanged and the logical child is connected to it. If the
destination parent does not exist, it is inserted. In either case, the logical child is
inserted if it is not a duplicate, and the destination parent’s insert rule is not
violated.
The V insert rule is the most powerful of the three because, when a concatenated
segment is inserted using the logical path, it inserts the destination parent if the
parent did not previously exist, or replaces the existing destination parent with the
inserted destination parent.
Replace Rules
The replace rules are applicable to the physical parent, logical parent, and logical
child segments of a logical path. The following is a description of how the replace
rules work:
v When RULES=P is specified, the segment can only be replaced when retrieved
using a physical path. If this rule is violated, no data is replaced and an RX
status code is returned. Figure 249 shows an example of the physical replace
rule.
v When RULES=L is specified, the segment can only be replaced when retrieved
using a physical path. If this rule is violated, no data is replaced; however,
instead of an RX status code, a blank status code is returned. Figure 251 on page
471 shows an example of the logical replace rule.
v When RULES=V is specified, the segment can be replaced when retrieved by
either a physical or logical path. Figure 253 on page 472 shows an example of
the virtual replace rule.
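The three replace-rule outcomes for a destination parent retrieved through a concatenated segment can be sketched as follows (an illustrative simplification with invented names; a blank status code is shown as two spaces):

```python
def replace_segment(rule, via_logical_path):
    """Outcome of a REPL against a segment, per its replace rule.
    Returns (replaced, status): 'RX' marks a P-rule violation,
    blank ('  ') otherwise."""
    if not via_logical_path:
        return True, "  "   # physical path: replace always allowed
    if rule == "P":
        return False, "RX"  # violated: RX returned, no data replaced
    if rule == "L":
        return False, "  "  # violated: no replace, but blank status
    if rule == "V":
        return True, "  "   # virtual: replace on either path
    raise ValueError(rule)

print(replace_segment("P", True))  # -> (False, 'RX')
print(replace_segment("L", True))  # -> (False, '  ')
print(replace_segment("V", True))  # -> (True, '  ')
```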
For all DL/I calls, either an error is detected and an error status code returned (in
which case no data is changed), or the required changes are made to all segments
affected by the call. Therefore, if the required function cannot be performed for both
parts of the concatenated segment, an error status code is returned, and no change
is made to either the logical child or the destination parent.
Status Codes
The status code returned to an application program indicates the first violation of
the replace rule that was detected. These status codes are as follows:
v AM—a replace was attempted and the processing option does not permit replaces (PROCOPT does not include R)
v DA—the key field of a segment or a non-replaceable field was changed
v RX—the replace rule was violated
Figure 249 and Figure 250 on page 471 show a physical replace rule example.
Figure 250. Calls and Status Codes for Physical Replace Rule Example
Figure 251 and Figure 252 show a logical replace rule example.
GHU ’CUSTOMER’
’BORROW/LOANS’ STATUS CODE=’ ’
REPL STATUS CODE=’ ’
Figure 252. Calls and Status Codes for Logical Replace Rule Example
As shown in Figure 251, the L replace rule prevents replacing the LOANS segment
as part of a concatenated segment. Replacement must be done using the
segment’s physical path. However, the status code returned is blank. The
BORROW segment, accessed by its physical path, is replaced. Because the logical
child is accessed by its physical path, it does not matter which replace rule is
selected.
The L replace rule allows replacing only the logical child half of the concatenation,
and the return of a blank status code.
Figure 253 on page 472 and Figure 254 on page 472 show a virtual replace rule
example.
GHU ’LOANS’
’CUST/CUSTOMER’ STATUS CODE=’ ’
REPL STATUS CODE=’ ’
Figure 254. Calls and Status Codes for Virtual Replace Rule Example
As shown in Figure 254, the V replace rule allows replacing the CUSTOMER
segment using its logical path as part of a concatenated segment.
Table 33 on page 473 and Table 34 on page 474 show all of the possible
combinations of replace rules that can be specified. They show what actions take
place for each combination when a call is issued to replace a concatenated
segment in a logical database. Table 33 on page 473 and Table 34 on page 474 are
based on the databases and logical views shown in Figure 255 on page 473 and
Figure 256 on page 473.
Logically deleting a logical child prevents further access to the logical child using its
logical parent. Unidirectional logical child segments are assumed to be logically
deleted. A logical parent is considered logically deleted when all its logical children
are physically deleted. For physically paired logical relationships, the physical child
paired to the logical child must also be physically deleted before the logical parent
is considered logically deleted.
These paths are called “full-duplex” paths, which means accessibility to segments in
the paths is in two directions (up and down). Two delete bits that control access
along the paths exist, but they are “half-duplex,” which means they only block half
of each respective path. No bit that blocks the third path exists. If SEG4 were both
physically and logically deleted (in which case the PD and LD bits in SEG4 would
be set), SEG4 would still be accessible from the third path, and so would both of its
parents.
Neither physical nor logical deletion prevents access to a segment from its physical
or logical children. Logically deleting SEG4 prevents access to SEG4 from its
logical parent SEG7, and it does not prevent access from SEG4 to SEG7.
Physically deleting SEG4 prevents access to SEG4 from its physical parent SEG3,
but it does not prevent access from SEG4 to SEG3.
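The half-duplex behavior of the PD and LD bits described above can be sketched in Python (invented names; 'child' stands for the third, unblockable path from a segment's physical or logical children):

```python
def accessible_from(prefix_bits, via):
    """Whether a segment can still be reached along a given path.
    prefix_bits is a subset of {'PD', 'LD'}; via is 'physical_parent',
    'logical_parent', or 'child'."""
    if via == "physical_parent":
        return "PD" not in prefix_bits  # PD blocks the physical-parent path
    if via == "logical_parent":
        return "LD" not in prefix_bits  # LD blocks the logical-parent path
    if via == "child":
        return True  # no delete bit blocks access from dependents
    raise ValueError(via)

# Even with both bits set, the segment is reachable from its children.
print(accessible_from({"PD", "LD"}, "child"))  # -> True
```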
A DLET call issued against a concatenated segment requests deletion of the logical
child in the path that is accessed. If a concatenated segment or a logical child is
accessed from its logical parent, the DLET call requests logical deletion. In all other
cases, a delete call requests physical deletion.
Physical deletion of a segment generates a request for logical deletion of all the
segment’s logical children and generates a request for physical deletion of all the
segment’s physical children. Physical deletion of a segment also generates a
request to delete any index pointer segments for which the physically deleted
segment is the source segment.
Delete sensitivity must be specified in the PCB for each segment against which a
delete call can be issued. Delete sensitivity does not need to be specified for the
physical dependents of those segments. Delete operations are not affected by KEY
or DATA sensitivity as specified in either the PCB or logical DBD.
Status Codes
The nonblank status codes that can be returned to an application program after a
DLET call are as follows:
v DX—A delete rule was violated
v DA—The key was changed in the I/O area
v AM—The call function was not compatible with the processing option or segment
sensitivity
Delete Rules
The following is a description of how the delete rules work for the logical parent,
physical parent, and logical child.
Logical Parent
v When RULES=P is specified, the logical parent must be logically deleted before
a DLET call is effective against it or any of its physical parents. Otherwise, the
call results in a DX status code, and no segments are deleted. However, if a
delete request is made against a segment as a result of propagation across a
logical relationship, then the P rule acts like the L rule that follows.
v When RULES=L is specified, either physical or logical deletion can occur first.
When the logical parent is processed by a DLET call, all logical children are
logically deleted, but the logical parent remains accessible from its logical
children.
v When RULES=V is specified, a logical parent is deleted along its physical path
explicitly when deleted by a DLET call. All of its logical children are logically
deleted, although the logical parent remains accessible from these logical
children.
A logical parent is deleted along its physical path implicitly when it is no longer
involved in a logical relationship. A logical parent is no longer involved in a logical
relationship when:
– It has no logical children pointing to it (its logical child counter is zero, if it has
any)
– It points to no logical children (all of its logical child pointers are zero, if it has
any)
– It has no physical children that are also real logical children
Logical Child
v When RULES=P is specified, the logical child segment must be logically deleted
first and physically deleted second. If physical deletion is attempted first, the
DLET call issued against the segment or any of its physical parents results in a
DX status code, and no segments are deleted. If a delete request is made
against the segment as a result of propagation across a logical relationship, or if
the segment is one of a physically paired set, then the rule acts like the L rule
that follows.
v When RULES=L is specified, deletion of a logical child is effective for the path for
which the delete was requested. Physical and logical deletion of the logical child
can be performed in any order. The logical child and any physical dependents
remain accessible from the non-deleted path.
v When RULES=V is specified, a logical child is both logically and physically
deleted when it is deleted through either its logical or physical path (setting either
the PD or LD bits sets both bits). If this rule is coded on only one logical child
segment of a physically paired set, it acts like the L rule.
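The ordering constraints for deleting a logical child under each rule can be sketched as follows (an illustrative simplification with invented names; propagation to dependents is omitted):

```python
def delete_logical_child(rule, path, pd, ld):
    """Apply a DLET along the 'physical' or 'logical' path to a
    logical child whose PD/LD bits are given. Returns (status, pd, ld);
    'DX' means the P rule was violated."""
    if rule == "P" and path == "physical" and not ld:
        return "DX", pd, ld          # P rule: logical delete must come first
    if rule == "V":
        return "  ", True, True      # V rule: either path sets both bits
    if path == "physical":
        return "  ", True, ld        # L/P rule: delete only the requested path
    return "  ", pd, True

print(delete_logical_child("P", "physical", False, False))  # -> ('DX', False, False)
print(delete_logical_child("V", "logical",  False, False))  # -> ('  ', True, True)
```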
Figure 260. Logical Parent, Physical Pairing—Physical Delete Rule Example: Before and
After
Figure 261. Logical Parent, Physical Pairing—Physical Delete Rule Example: Database Calls
The physical delete rule requires that all logical children be previously physically
deleted. Physical dependents of the logical parent are physically deleted.
The DLET status code is ’DX’ if not all of the logical children were previously
physically deleted. All logical children are logically deleted. The LD bit is set on in
the physical logical child BORROW.
Figure 263. Logical Parent, Physical Pairing—Physical Delete Rule Example: Before and
After
Figure 264. Logical Parent, Physical Pairing—Physical Delete Rule Example: Calls and
Status Codes
CUSTOMER, the logical parent, has been physically deleted. Both the logical child
and its pair had previously been physically deleted. (The PD and LD bits are set on
the before figure of the BORROW/LOANS.)
Figure 266. Logical Parent, Virtual Pairing—Logical Delete Rule Example: Before and After
Figure 267. Logical Parent, Virtual Pairing—Logical Delete Rule Example: Calls and Status
Codes
The logical delete rule allows either physical or logical deletion first; neither causes
the other. Physical dependents of the logical parent are physically deleted.
The logical parent LOANS remains accessible from its logical children. All logical
children are logically deleted. The LD bit is set on in the physical child BORROW.
The processing and results shown in Figure 265 on page 481 would be the same if
the logical parent LOANS delete rule were virtual instead of logical. The following
example further illustrates the logical delete rule.
Figure 269. Logical Parent, Physical Pairing—Logical Delete Rule Example: Before and After
Figure 270. Logical Parent, Physical Pairing—Logical Delete Rule Example: Calls and Status
Codes
The logical delete rule allows either physical or logical deletion first; neither causes
the other. Physical dependents of the logical parent are physically deleted.
The logical parent LOANS remains accessible from its logical children. All physical
children are physically deleted. Paired logical children are logically deleted.
The processing and results shown in Figure 268 on page 483 would be the same if
the logical parent LOANS delete rule were virtual instead of logical. An additional
example to explain the virtual delete rule follows in Figure 271.
Figure 272. Logical Parent, Virtual Pairing—Virtual Delete Rule Example: Before and After
GHU ’CUSTOMER’
’BORROW/LOANS’ STATUS=’ ’
DLET STATUS=’ ’
Figure 273. Logical Parent, Virtual Pairing—Virtual Delete Rule Example: Calls and Status
Codes
The virtual delete rule allows explicit and implicit deletion. Explicit deletion is the
same as using the logical rule. Implicit deletion causes the logical parent to be
physically deleted when the last logical child is physically deleted.
Physical dependents of the logical child are physically deleted. The logical parent is
physically deleted. Physical dependents of the logical parent are physically deleted.
The LD bit is set on in the physical logical child BORROW.
Figure 275. Logical Parent, Physical Pairing—Virtual Delete Rule Example: Before and After
GHU ’CUSTOMER’
’BORROW/LOANS’ STATUS=’ ’
DLET STATUS=’ ’
Figure 276. Logical Parent, Physical Pairing—Virtual Delete Rule Example: Calls and Status Codes
The virtual delete rule allows explicit and implicit deletion. Explicit deletion is the
same as using the logical rule. Implicit deletion causes the logical parent to be
physically deleted when the last logical child is physically and logically deleted.
The logical parent is physically deleted. Any physical dependents of the logical
parent are physically deleted.
Note: The CUST segment must be physically deleted before the DLET call is
issued. The LD bit is set on in the BORROW segment.
Figure 278. Physical Parent, Virtual Pairing—Bidirectional Virtual Example: Before and After
GHU ’LOANS’
’CUSTOMER’ STATUS=’ ’
DLET STATUS=’ ’
The bidirectional virtual rule for the physical parent has the same effect as the
virtual rule for the logical parent.
When the last logical child is logically deleted, the physical parent is physically
deleted. The logical child (as a dependent of the physical parent) is physically
deleted. All physical dependents of the physical parent are physically deleted. That
is, ACCOUNTS (not shown), BORROW, and PAYMENT are physically deleted.
Figure 281. Logical Child, Virtual Pairing—Physical Delete Rule Example: Deleting the
Logical Child
The physical delete rule requires that the logical child be logically deleted first. The
LD bit is now set in the BORROW segment.
The logical child can be physically deleted only after being logically deleted. After
the second delete, the LD and PD bits are both set. The physical delete of the
logical child also physically deleted the physical dependents of the logical child. The
PD bit is set.
Figure 282. Logical Child, Virtual Pairing—Physical Delete Rule Example: Before and After
Figure 284. Logical Child, Virtual Pairing—Logical Delete Rule Example: Calls and Status Codes
The logical delete rule allows the logical child to be deleted physically or logically
first. Physical dependents of the logical child are physically deleted, but they remain
accessible from the logical path that is not logically deleted.
The delete of the virtual logical child sets the LD bit on in the physical logical child
BORROW (BORROW is logically deleted).
Figure 285. Logical Child, Virtual Pairing—Logical Delete Rule Example: Before and After
Figure 286. Logical Child, Physical Pairing—Physical or Logical Delete Rule Example
Figure 287. Logical Child, Physical Pairing—Physical or Logical Delete Rule Example: Calls and Status Codes
With the physical or logical delete rule, each logical child must be deleted from its
physical path. Physical dependents of the logical child are physically deleted, but
they remain accessible from the paired logical child that is not deleted.
Physically deleting BORROW sets the LD bit on in CUST. Physically deleting CUST
sets the LD bit on in the BORROW segment.
Figure 288. Logical Child, Physical Pairing—Physical or Logical Delete Rule Example: Before
and After
Figure 290. Logical Child, Virtual Pairing—Virtual Delete Rule Example: Calls and Status Codes
The virtual delete rule allows the logical child to be deleted physically and logically.
Deleting either path deletes both parts. Physical dependents of the logical child are
physically deleted.
The previous delete deleted both paths because the delete rule is virtual. Deleting
either path deletes both.
Figure 291. Logical Child, Virtual Pairing—Virtual Delete Rule Example: Before and After
Figure 293. Logical Child, Physical Pairing—Virtual Delete Rule Example: Calls and Status Codes
With the virtual delete rule, deleting either logical child deletes both paired logical
children. (Notice that the PD and LD bits are set on in both.) Physical dependents
of the logical child are physically deleted.
Figure 294. Logical Child, Physical Pairing—Virtual Delete Rule Example: Before and After
A logically deleted logical child cannot be accessed from its logical parent.
Neither physical nor logical deletion prevents access to a segment from its physical
or logical children. Because logical relationships provide for inversion of the physical
structure, a segment can be physically deleted, logically deleted, or both, and still be
accessible from a dependent segment because of an active logical relationship. A
physically deleted root segment can be accessed when it is defined as a dependent
segment in a logical DBD. The logical DBD defines the inversion of the physical
DBD. Figure 295 shows the accessibility of deleted segments.
When the physical dependent of a deleted segment is a logical parent with logical
children that are not physically deleted, the logical parent and its physical parents
are accessible from those logical children.
The physical structure in Figure 295 shows that SEG3, SEG4, SEG5, and SEG6
have been physically deleted, probably by issuing a DLET call for SEG3. This
resulted in all of SEG3’s dependents being physically deleted. (SEG6’s delete rule
is not P, or a ’DX’ status code would be issued.)
SEG3, SEG4, SEG5, and SEG6 remain accessible from SEG2, the logical child of
SEG6, because SEG2 is not physically deleted. However, physical dependents of
SEG6 are not accessible, and their DASD space is released unless an active
logical relationship prohibits it.
When the physical dependent of a deleted segment is a logical child whose logical
parent is not physically deleted, the logical child, its physical parents, and its
physical dependents are accessible from the logical parent.
The logical child segment SEG4 remains accessible from its logical parent SEG7
(SEG7 is not physically deleted). Also accessible are SEG5 and SEG6, which are
variable intersection data. The physical parent of the logical child (SEG3) is also
accessible from the logical child (SEG4).
A physically and logically deleted logical child can be accessed from its physical
dependents (Figure 296 on page 496).
The physical structure in Figure 296 shows that logical child SEG4 is both physically
and logically deleted.
The third path cannot be blocked because no delete bit exists for this path.
Therefore, the logical child SEG4 is accessible from its dependents even though it
has been physically and logically deleted.
When a segment accessed by its third path is deleted, it is physically deleted in its
physical database, but it remains accessible from its third path (Figure 297 and
Figure 298 on page 497).
Figure 298. (Part 4 of 5). Example of Deleted Segments Accessibility: Database Calls
SEG5 is physically deleted by the DLET call, and SEG6 is physically deleted by
propagation. SEG2/SEG6 has unidirectional pointers, so SEG2 was considered
logically deleted before the DLET call was issued. The LD bit is only assumed to be
set on (Figure 299).
The results are interesting. SEG5 is inaccessible from its physical parent path (from
SEG4) unless SEG4 is accessed by its logical parent SEG7 (SEG5 and SEG6 are
accessible as variable intersection data). SEG5 is still accessible from its third path
(from SEG6) because SEG6 is still accessible from its logical child. Thus, a
segment can be physically deleted by an application program and still be accessible
to that application program, using the same PCB used to delete the segment.
The logical parent SEG7 has been physically and logically deleted (the LD bit is
never actually set, but is assumed to be set; it is shown only for illustration). All
of the logical children of the logical parent have also been physically
and logically deleted. However, the logical parent has had its segment space
released, whereas the logical child (SEG4) still exists. The logical child still exists
because it has a physical dependent that has an active logical relationship that
precludes releasing its space.
The second method requires breaking the logical path whenever the logical child
is physically deleted. Breaking the logical path with this method is done for
subordinate logical child segments using the V delete rule. Subordinate logical
parent segments need to have bidirectional logical children with the V rule (must
be able to reach the logical children) or physically paired logical children with the
V rule. This method will not work with subordinate logical parents pointed to by
unidirectional logical children.
Figure 302. Example of Violation of the Physical Delete Rule: Database Calls
SEG7 (the logical child of SEG2) uses the physical delete rule and has not been
logically deleted (the LD bit has not been set on). Therefore, the physical delete
rule is violated. A ’DX’ status code is returned to the application program, and no
segments are deleted.
Figure 304. Example of Treating the Physical Delete Rule as Logical: Database Calls
SEG8 and SEG9 are both physically deleted, and SEG9 is logically deleted (V rule).
SEG5 is physically and logically deleted because it is the physical pair to SEG9
(with physical pairing, setting the PD bit in one sets the LD bit in the other, and
vice versa). Physically deleting SEG5 causes propagation of the physical delete to
SEG5’s physical dependents; therefore, SEG6 and SEG7 are physically deleted.
Note that the physical deletion of SEG7 would have been prevented had the
physical deletion started with a DLET call for SEG4. In this case, however, the
physical rule of SEG7 is treated as logical.
For HDAM and HIDAM databases, the logical twin chain is established as required,
and existing dependents of the inserted segment remain.
For HISAM databases, if the root segment is physically and logically deleted before
the insert is done, then the first logical record for that root in primary and secondary
data set groups is reused. Remaining logical records on any OSAM chain are
dropped.
| Insert Rules for Physical Parent Segment A: The insert rules for physical parent
| (PP) segment A control the insert of PP A using the logical path to PP A. The rules
| are as follows:
| v To disallow the insert of PP A on its logical path, use the physical insert rule.
| v To allow the insert of PP A on its logical path (concatenated with virtual logical
| child segment A), use either the logical or virtual rule.
| Where PP A is already present, a logical connection is established to the existing
| PP A segment. The existing PP A can either be replaced or remain unchanged:
| – If PP A is to remain unchanged by the insert call, use the logical insert rule.
| – If PP A is to be replaced by the insert call, use the virtual insert rule.
| Delete Rules for Physical Parent Segment A: The delete rules for PP segment
| A control the deletion of PP A using the logical path to PP A. The rules are as
| follows:
| v To cause PP segment A to be deleted automatically when the last logical
| connection (through real logical child segment B to PP segment A) is broken, use
| the bidirectional virtual delete rule.
| v The other delete rules for PP A are not meaningful.
| Replace Rules for Physical Parent Segment A: The replace rules for PP
| segment A control the replacement of PP A using the logical path to PP A. The rules
| are as follows:
| v To disallow the replacement of PP A on its logical path and receive an 'RX' status
| code if the rule is violated by an attempt to replace PP A, use the physical
| replace rule.
| v To disregard the replacement of PP A on its logical path, use the logical replace
| rule.
| v To allow the replacement of PP A on its logical path, use the virtual replace rule.
| Insert Rules for Logical Parent Segment B: Note: These rules are identical to
| the insert rules for PP segment A.
| The insert rules for logical parent (LP) segment B control the insert of LP B using
| the logical path to LP B. The rules are as follows:
| v To disallow the insert of LP B on its logical path, use the physical insert rule.
| v To allow the insert of LP B on its logical path (concatenated with virtual segment
| RLC B), use either the logical or virtual rule.
| Where LP B is already present, a logical connection is established to the existing
| LP B segment. The existing LP B can either be replaced or remain unchanged:
| – If LP B is to remain unchanged by the insert call, use the logical insert rule.
| – If LP B is to be replaced by the insert call, use the virtual insert rule.
| Delete Rules for Logical Parent Segment B: The delete rules for segment LP B
| control the deletion of LP B on its physical path. A delete call for a concatenated
| segment is interpreted as a delete of the logical child only. The rules are as follows:
| v To ensure that LP B remains accessible until the last logical relationship path to
| that occurrence has been deleted, choose the physical delete rule. If an attempt
| to delete LP B is made while there are occurrences of real logical child (RLC) B
| pointing to LP B, a 'DX' status code is returned and no segment is deleted.
| v To allow segment LP B to be deleted on its physical path, choose the logical
| delete rule. When LP B is deleted, it is no longer accessible on its physical path.
| It is still possible to access LP B from PP A through RLC B as long as RLC B
| exists.
| v Use the virtual delete rule to physically delete LP B when it has been explicitly
| deleted by a delete call or implicitly deleted when all RLC Bs pointing to it have
| been physically deleted.
| Replace Rules for Logical Parent Segment B: Note: These rules are identical
| to the replace rules for PP segment A.
| The replace rules for LP segment B control the replacement of LP B using the
| logical path to LP B. The rules are as follows:
| v Use the physical replace rule to disallow the replacement of LP B on its logical
| path and receive an 'RX' status code if the rule is violated by an attempt to
| replace LP B.
| v Use the logical replace rule to disregard the replacement of LP B on its logical
| path.
| v Use the virtual replace rule to allow the replacement of LP B on its logical path.
| Insert Rules for Real Logical Child Segment B: The insert rules do not apply to
| a logical child.
| Delete Rules for Real Logical Child Segment B: The delete rules for RLC
| segment B apply to delete calls using its logical or physical path. The rules are as
| follows:
| v Use the physical delete rule to control the sequence in which RLC B is deleted
| on its logical and physical paths. The physical delete rule requires that it be
| logically deleted before it is physically deleted. A violation results in a 'DX' status
| code.
| v Use the logical delete rule to allow either physical or logical deletes to be first.
| v Use the virtual delete rule to use a single delete call from either the logical or
| physical path to both logically and physically delete RLC B.
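As a rough model of the three delete rules for RLC segment B, the sketch below encodes the behavior described above. This is an illustrative simplification in Python, not an IMS interface; the function name and arguments are hypothetical.

```python
def rlc_delete_status(rule, via, logically_deleted):
    """Model of the delete rules for a real logical child (RLC) segment.

    rule: 'P' (physical), 'L' (logical), or 'V' (virtual)
    via: path of the delete call, 'physical' or 'logical'
    logically_deleted: True if the RLC is already logically deleted
    Returns the DL/I status code; two blanks mean success.
    """
    if rule == "P" and via == "physical" and not logically_deleted:
        # Physical rule: the segment must be logically deleted
        # before it is physically deleted.
        return "DX"
    # Logical rule: either order is allowed.
    # Virtual rule: one call deletes both the logical and physical paths.
    return "  "
```

For example, a physical-path DLET against an RLC with the P rule that has not yet been logically deleted yields the 'DX' status described above.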
| Replace Rules for Real Logical Child Segment B: Note: These rules are
| identical to the replace rules for PP segment A.
| The replace rules for RLC B control the replacement of RLC B using the logical
| path to RLC B. The rules are as follows:
| v Use the physical replace rule to disallow the replacement of RLC B on its logical
| path and receive an 'RX' status code if the rule is violated by an attempt to
| replace RLC B.
| v To disregard an attempt to replace RLC B on its logical path, use the logical
| replace rule.
| v To allow the replacement of RLC B on its logical path, use the virtual replace
| rule.
|
You need to know the following information about OSAM if your database is using
OSAM as an access method:
v OSAM is a special access method supplied with IMS.
v IMS communicates with OSAM using OPEN, CLOSE, READ, and WRITE
macros.
v OSAM communicates with the I/O supervisor using the I/O driver interface.
v An OSAM data set can be read using either the BSAM or QSAM access method.
v The number of extents in an OSAM data set is limited by:
– The maximum length of the data extent block (DEB)
– The length of the sector number table that is created for rotational position
sensing (RPS) devices
The length of a DEB is held in a single byte and is expressed as a number of
double words. The sector number table exists only for RPS devices
and consists of a fixed area of eight bytes plus one byte for each block on a
track, rounded up to an even multiple of eight bytes. A minimum-sized sector
table (7 blocks per track) requires two double words. A maximum-sized sector
table (255 blocks per track) requires 33 double words.
In addition, for each extent area (two double words), OSAM requires a similar
area that contains device geometry data. Each extent requires a total of four
double words. The format and length (expressed in double words) of an OSAM
DEB are shown in Table 36.
Table 36. Length and Format of an OSAM DEB
Format                             Length (in double words)
Appendage sector table             5
Basic DEB                          4
Access method dependent section    2
Subroutine name section            1
Standard DEB extents               120 (60 extents)
OSAM extent data                   120
Minimum sector table               2
With a minimum-sized sector table, the DEB can reflect a maximum of 60 DASD
extents. With a maximum-sized sector table, the DEB can reflect a maximum of
52 DASD extents.
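The DEB arithmetic above can be checked with a short sketch. This is an illustrative computation derived from the figures in Table 36, not an IMS interface; the function names are hypothetical.

```python
def sector_table_doublewords(blocks_per_track):
    # Eight fixed bytes plus one byte per block on a track,
    # rounded up to an even multiple of eight bytes (double words).
    return (8 + blocks_per_track + 7) // 8

def max_osam_extents(blocks_per_track):
    # The DEB length byte can express at most 255 double words.
    # Fixed sections from Table 36: appendage sector table (5),
    # basic DEB (4), access method dependent section (2),
    # subroutine name section (1).
    fixed = 5 + 4 + 2 + 1 + sector_table_doublewords(blocks_per_track)
    # Each extent costs two double words (standard DEB extent entry)
    # plus two double words of OSAM device geometry data.
    return (255 - fixed) // 4
```

With 7 blocks per track the sector table takes 2 double words and 60 extents fit; with 255 blocks per track it takes 33 double words and 52 extents fit, matching the limits stated above.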
v An OSAM data set can be opened for update in place and extension to the end
through one data control block (DCB). The phrase “extension to the end” means
that records can be added to the end of the data set and that new direct-access
extents can be obtained.
v An OSAM data set does not need to be formatted before use.
v An OSAM data set can use fixed-length blocked or unblocked records.
| v The maximum size of an OSAM data set depends on the block size of the data
| set and whether it is a HALDB OSAM data set. The size limits for OSAM data
| sets are:
| – 8 GB for a non-HALDB OSAM data set that has an even-length block size
| – 4 GB for a non-HALDB OSAM data set that has an odd-length block size
| – 4 GB for a HALDB OSAM data set
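The size limits can be expressed as a small decision function. This sketch assumes the three limits listed above are exhaustive; the function name is hypothetical.

```python
def osam_size_limit_gb(block_size, is_haldb):
    # Only a non-HALDB OSAM data set with an even-length
    # block size gets the 8 GB limit; all other cases get 4 GB.
    if not is_haldb and block_size % 2 == 0:
        return 8
    return 4
```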
v File mark definition is always used to define the current end of the data set.
When new blocks are added to the end of the data set, they replace dummy
pre-formatted (by OSAM) blocks that exist on a logical cylinder basis. A file mark
is written at the beginning of the next cylinder, if one exists, during a format
logical cylinder operation. This technique is used as a reliability aid while the
OSAM data set is open.
v OSAM EXCP counts are accumulated during OSAM End of Volume (EOV) and
close processing.
| v Migrating OSAM data sets utilizing ADRDSSU and the DFSMSdss™ component
| of z/OS DFSMS: DFSMSdss will migrate the tracks of a data set up to the last
| block written value (DS1LSTAR) as specified by the DSCB for the volume being
| migrated. If the OSAM data set spans multiple volumes that have not been
| pre-allocated, the DS1LSTAR field for each DSCB will be valid and DFSMSdss
| can correctly migrate the data.
| If the OSAM data set spans multiple volumes that have been pre-allocated, the
| DS1LSTAR field in the DSCB for each volume (except the last) can be zero. This
| condition will occur during the loading operation of a pre-allocated, multi-volume
| data set. The use of pre-allocated volumes precludes EOV processing when
| moving from one volume to another, thereby allowing the DSCBs for these
| volumes to not be updated. The DSCB for the last volume loaded is updated
| during close processing of the data set.
| DFSMSdss physical DUMP or RESTORE commands with the ALLEXCP or
| ALLDATA parameters must be used when migrating OSAM data sets that span
| pre-allocated, multiple volumes. These parameters allow DFSMSdss to correctly
| migrate OSAM data sets.
| Related Reading: For more information on the z/OS DFSMSdss component of
| DFSMS and the ALLEXCP and ALLDATA parameters of the DUMP and RESTORE
| commands, see the DFSMSdss Storage Administration Reference.
Other z/OS access methods (VSAM and SAM) are used in addition to OSAM for
physical storage of data.
Related Reading: For information about defining OSAM subpools, see IMS Version
9: Installation Volume 2: System Definition and Tailoring.
The normal way to correct a bad pointer is to perform recovery. However, some
cases exist in which a bad pointer can be corrected through reorganization. A
description of the circumstances in which this can or cannot be done is as follows:
v PC/PT pointers. The HD Unload utility issues unqualified GN calls to read a
database. If the bad pointer is a PC or PT pointer, DL/I will follow the bad pointer
and the GN call will fail. Therefore, reorganization cannot be used to correct PC
or PT pointers.
v LP/LT pointers. LP and LT pointers are rebuilt during reorganization. However,
DL/I can follow the LP pointer during unload. If the logical child segment contains
a direct LP pointer and the logical parent’s concatenated key is not physically
stored in the logical child segment, DL/I follows the bad LP pointer to construct
the logical parent’s concatenated key. This causes an ABEND.
v LP pointer. When DBR= is specified for pre-reorganization and the database has
direct LP pointers, the HD Unload utility saves the old LP pointer. Bad LP
pointers produce an error message (DFS879) saying a logical child that has no
logical parent exists.
v LP pointer. When DBIL= is specified for pre-reorganization of a logical child or
parent database, the utilities that resolve LP pointers use concatenated keys to
match logical parent and logical child segments. New LP pointers are created.
Related Reading: For more information on how HALDBs are maintained in the
RECON, see IMS Version 9: Database Recovery Control (DBRC) Guide and
Reference.
Important: The HALDB Partition Definition utility will not impact online IMS
subsystems with regard to RECON contention. The RECON is only reserved for the
time it takes to process a DBRC request. It is not held for the duration of the utility
execution.
The utility consists of several panels and programs that perform various actions on
the HALDB and its partitions.
Important: The Panel IDs are shown enclosed in parentheses in the caption of
each panel image here. To enable Panel IDs to be displayed in the upper left corner
of each of your panels, enter panelid on the ISPF command line and press Enter.
In this appendix:
v “The Partitioned Databases Panel” on page 512
v “Accessing Help Information” on page 513
v “Exiting the Utility” on page 513
v “Displaying the ISPF Member List” on page 514
v “Opening HALDB Partitions” on page 515
v “Defining Data Set Group Information” on page 527
v “Displaying the List of Defined Partitions” on page 528
v “Opening Database Information” on page 536
v “Deleting Database Information” on page 537
v “Exporting Database Information” on page 537
v “Importing Database Information” on page 538
v “Displaying the IMS Concatenation” on page 538
v “Selecting an IMS Configuration” on page 539
v “Using Batch to Export or Import Partition Information” on page 541
v “DSPXRUN Command Syntax” on page 542
The Partitioned Databases panel has point-and-shoot text fields (in turquoise by
default). To use the point-and-shoot fields, just position the cursor on the text and
press the enter key.
Figure 306 on page 512 provides space for you to enter a HALDB name,
allowing the utility to gather information about that HALDB. The information can be
retrieved from DBDLIB or from RECON depending on the option you select and the
current state of the partitions. Following Figure 306 on page 512 are descriptions of
the panel fields.
Help
------------------------------------------------------------------------------
Partitioned Databases
Configuration . . : DEFAULT
Command ===>
F1=Help F3=Exit F4=Prompt
The options in Figure 306 allow you to perform the following actions:
1. Create or change HALDB partitions (see “Opening HALDB Partitions” on page
515 and “Displaying the List of Defined Partitions” on page 528).
2. View or change HALDB information (see “Opening Database Information” on
page 536).
3. Delete HALDB information (see “Deleting Database Information” on page 537).
4. Export HALDB information (see “Exporting Database Information” on page 537).
5. Import HALDB information (see “Importing Database Information” on page 538).
6. Show the IMS concatenation (see “Displaying the IMS Concatenation” on page
538).
7. Select an IMS configuration (see “Selecting an IMS Configuration” on page
539).
Configuration
| The configuration is a name you have specified that identifies a set of DBD
| libraries and a set of RECON data sets. If you already have the IMS DD
| statement allocated from the logon procedure and if you have the
Help information can also be obtained by pressing the help key. The help displayed
depends on the circumstances and on the placement of the cursor when the help
key is pressed.
v If an error message is displayed, more information on the error might be
displayed.
v If the cursor is on an input field, information about the field is displayed;
otherwise, information about the panel is displayed.
To exit pull-down panels, press the cancel key (F12); then press the exit key (F3)
if you wish to leave the HALDB Partition Definition utility panels altogether.
File Help
-------------------------------------------------------------------------------
MEMBER LIST IMSIVP81.DBDLIB Row 00001 of 00011
Name Size TTR Alias-of AC AM RM ---- Attributes ---
. DBFSAMD1 00000158 00013B 00 24 24
. DBFSAMD2 000001A0 000143 00 24 24
. DBFSAMD3 000006E0 00014B 00 24 24
. DBFSAMD4 000002C8 000207 00 24 24
. DI21PART 00000230 000133 00 24 24
. IVPDB1 00000138 000103 00 24 24
. IVPDB1I 00000138 00010B 00 24 24
. IVPDB2 00000130 000113 00 24 24
. IVPDB3 00000188 00011B 00 24 24
. IVPDB4 00000110 000123 00 24 24
. IVPDB5 000000B0 00012B 00 24 24
**End**
Command ====> Scroll ===> CSR
F1=Help F3=Exit F12=Cancel
The member list originates from the PDS directories of the IMS concatenation. The
members that are displayed can be HALDB or non-HALDB. The member list is a
standard ISPF list, so no IMS-specific information is displayed.
From the member list, you can select the HALDB name to process by typing in the
far-left column. If the name selected is not for a partitioned database, an error
message is displayed. You can select a HALDB name with the slash (/) character
and the File action to select the type of actions to perform. The same actions that
are shown on Figure 306 on page 512 are available here.
If you specify an option on the Partitioned Databases panel (Figure 306 on page
512), you do not need to use the File Action bar; just press Enter. You can use the
File Action bar to override the option that you specified on the Partitioned
Databases panel.
The list of HALDBs in the Member List panel can be manipulated by using the File
action bar (Figure 310 on page 515).
The options on the File Action bar allow you to perform the following actions:
v Create or change HALDB partitions (see “Opening HALDB Partitions” and
“Displaying the List of Defined Partitions” on page 528).
v View or change HALDB information (see “Opening Database Information” on
page 536).
v Delete HALDB information (see “Deleting Database Information” on page 537).
v Export HALDB information (see “Exporting Database Information” on page 537).
v Import HALDB information (see “Importing Database Information” on page 538).
The first time you choose a HALDB, you must set values for the HALDB master; see
Figure 311 on page 516. When you press Enter to continue, you set the defaults for
the partitions; see Figure 312 on page 518. When you press Enter to continue
again, you define partitions using those defaults. You can modify each partition
uniquely as it is created, or you can modify the partitions later from the list of
partitions.
Figure 314 on page 524 shows an example of the panel to specify the partition
information.
After the initial set of partitions is defined (and whenever you select that HALDB
again), you will see the Database Partitions display (see Figure 319 on page 529 in
Displaying the List of Defined Partitions).
Important: Most of the information initially displayed on the panel in Figure 311 on
page 516 is extracted from the DBDLIB member. You can change the displayed
information, but that information is not saved back into the DBDLIB member (the
definition is saved in the RECON data sets).
| Help
| ------------------------------------------------------------------------------
| Partitioned Database Information
|
| Type the field values. Then press Enter to continue.
|
| Database name . . . . . . . : IVPDB1
|
|
| Master Database values
| Part. selection routine . . . DFSIVD1
| RSR global service group . . . BKUPGRP1
| RSR tracking type . . . . . . DBTRACK
| Share level . . . . . . . . . 0
| Database organization . . . : PHDAM
| Recoverable? . . . . . . . . . Yes
| Number of data set groups . : 10
| Online Reorganization Capable: Yes
|
| To exit the application, press F3.
|
| Command ===>
| F1=Help F3=Exit F12=Cancel
|
|
| Figure 311. Partitioned Database Information (DSPXPOA)
|
The following are descriptions of the fields on the Partitioned Database information
screen:
Database name
Enter 1 to 8 alphanumeric characters. This is the name you selected from
the previous panel (see Figure 306 on page 512); it is the name of the
HALDB that you are defining.
Part. selection routine
Enter 1 to 8 alphanumeric characters (the first character must be
alphabetic). This is the name of the Partition Selection Exit Routine
provided by you.
RSR global service group
Enter 1 to 8 alphanumeric characters (the first character must be
alphabetic). This is an optional parameter used to specify the RSR global
service group that the HALDB is to be assigned to.
RSR tracking type
This is an optional parameter you use to specify the type of RSR tracking
(shadowing) for a partition assigned to a global service group. The type,
RCVTRACK or DBTRACK, cannot be specified without an RSR global
service group having been defined for the HALDB master.
v DBTRACK- indicates HALDB readiness tracking is to be done.
v RCVTRACK- indicates recovery readiness tracking is to be done.
Recoverable?
Yes indicates that the HALDB is recoverable. No indicates that the HALDB is
not recoverable. Yes is the default. If an RSR global service group is
specified, the recoverable field must be Yes.
Related Reading: For more information on non-recoverable databases see
the IMS Version 9: Operations Guide.
Number of data set groups
This is the number of data set groups that contain data, as specified in the
DBDGEN.
| Online Reorganization Capable
| Yes specifies that this HALDB supports online reorganization. No specifies
| that this HALDB does not support online reorganization. These
| specifications are stored in the DBRC RECON data set.
| Related Reading:
| v For more information on reorganizing HALDBs online, see “HALDB
| Online Reorganization” on page 364.
| v For more information on DBRC and the RECON data set, see IMS
| Version 9: Database Recovery Control (DBRC) Guide and Reference.
Help
------------------------------------------------------------------------------
Partition Default Information
Processing options
Automatic definition . . . . No
Input dataset . . . . . . . . ’IMS.IVPDB1.KEYS’
Use defaults for DS groups. . No
Randomizer
Module name . . . . . . . DD41DUP2
Anchor . . . . . . . . . . 2
High block number. . . . . 999
Bytes . . . . . . . . . . 2000
Free Space
Free block freq. factor. . 0
Free space percentage. . . 0
DBRC options
Max. image copies. . . . . 2
Recovery period. . . . . . 0
Recovery utility JCL . . . RECOVJCL
Default JCL. . . . . . . . ________
Image copy JCL . . . . . . ICJCL
Online image copy JCL. . . OICJCL
Receive JCL. . . . . . . . RECVJCL
Reusable? . . . . . . . . No
Command ===>
F1=Help F3=Exit F6=Groups F12=Cancel
Important:
v The Randomizer section is present only if the HALDB is PHDAM.
v The Defaults for data set groups section is present only if there is only one data
set group specified during DBDGEN. If there are multiple data set groups, use
F6=Groups to display all data set groups using the dialog described in “Defining
Data Set Group Information” on page 527.
The following are descriptions of the fields on the Partition Default Information
screen:
Database name
This is the name you selected from the previous panel (see Figure 306 on
page 512); it is the name of the HALDB that you are defining.
Automatic definition
The value can be Yes or No. Specifying yes will cause the partitions to be
defined automatically based on your choices for partition name (that must
This value specifies the maximum number of bytes of a HALDB record that
can be stored into the root addressable area in a series of inserts unbroken
by a call to another HALDB record.
| A value of 0 (zero) means that all bytes are addressable. It is equivalent to
| omitting the bytes parameter from the RMNAME keyword in the DBD macro
| statement in DBDGEN. This parameter is for PHDAM HALDBs only.
| Related Reading: For more information on the DBD macro statement in
| DBDGEN, see IMS Version 9: Utilities Reference: System.
Free block freq. factor
A numeric unsigned decimal integer from 0 to 100, except 1. The free block
frequency factor (fbff) specifies that every nth control interval or block in this
data set group is left as free space during HALDB load or reorganization
(where fbff=n). The range of fbff includes all integer values from 0 to 100
except fbff=1. The default value for fbff is 0.
Free space percentage
Two numeric unsigned decimal integer digits with a range from 0 to 99. The
fspf is the free space percentage factor. It specifies the minimum
percentage of each control interval or block that is to be left as free space
in this data set group.
The default value for fspf is 0.
Block size
A numeric unsigned even decimal integer with a range from 1 to 32,000.
The block size value is used by OSAM only. An initial value of 4096 is
displayed. If the HALDB is not OSAM, the block size field is not displayed.
Related Reading: For more information on the INIT.DBDS command, see
IMS Version 9: Database Recovery Control (DBRC) Guide and Reference.
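The field ranges described above can be validated with a sketch like the following. The function name is illustrative and is not part of the HALDB Partition Definition utility.

```python
def check_partition_defaults(fbff, fspf, osam_block_size=None):
    # Free block frequency factor: 0-100, with 1 explicitly excluded.
    if not 0 <= fbff <= 100 or fbff == 1:
        raise ValueError("free block freq. factor must be 0-100, excluding 1")
    # Free space percentage factor: two digits, 0-99.
    if not 0 <= fspf <= 99:
        raise ValueError("free space percentage must be 0-99")
    # OSAM block size: an even value from 1 to 32,000;
    # omitted (None) when the HALDB is not OSAM.
    if osam_block_size is not None:
        if osam_block_size % 2 or not 1 <= osam_block_size <= 32000:
            raise ValueError("block size must be an even value from 1 to 32,000")
```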
Max. image copies
A required parameter you use to specify the number of image copies that
DBRC maintains for the identified DBDS. The value must be an unsigned
decimal integer from 2 to 255.
Recovery period
An optional parameter you use to specify the recovery period of the image
copies for the specified DBDS.
Specify an unsigned decimal integer from 0 to 999 that represents the
number of days that information about the image copies is kept in RECON.
If you specify 0, there is no recovery period. 0 is the default.
Recovery utility JCL
Enter 1 to 8 alphanumeric characters (the first character must be
alphabetic). This is an optional parameter you use to specify the name of a
member of a partitioned data set of skeletal JCL. When you issue the
GENJCL.RECOV command, DBRC uses this member to generate the JCL to
run the Database Recovery utility for the identified DBDS.
RECOVJCL is the default member name.
Default JCL
Enter 1 to 8 alphanumeric characters (the first character must be
alphabetic). This is an optional parameter you use to specify an implicit
skeletal JCL default member for the DBDS. The specified member is used
by the GENJCL.IC, GENJCL.OIC, and GENJCL.RECOV commands to resolve
keywords that you have defined.
Each record of the input data set must contain a single partition selection string or
high key value to be used during partition definition. The value must be
left-justified, and the length of the string is determined by the last non-blank
character.
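Reading such an input data set amounts to taking one left-justified value per record and trimming trailing blanks. The sketch below (hypothetical function name) models that rule on an iterable of records.

```python
def read_partition_values(records):
    # One value per record, left-justified; the string's length is
    # determined by the last non-blank character, so trailing blanks
    # (and the record's newline) are dropped.
    values = []
    for record in records:
        value = record.rstrip()
        if value:
            values.append(value)
    return values
```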
In the partition name field, include percent signs (%) as placeholders for an
alphanumeric sequence number (A-Z, 0-9). If you type a partition name like:
Partition name . . . . . . . IVPD1%%
.
IVPD1A9
IVPD1BA
IVPD1BB
IVPD1BC
.
.
When you press Enter, as many partitions as you have key values in the input data
set are automatically generated.
| If you want to generate partition names that will allow you to preserve your naming
| sequence when expanding your database in the future, you can specify a partition
| name like IVP1%%A. The partitions would then be created in the following
| sequence:
| IVP1AAA
| IVP1ABA
| .
| .
| IVP1AZA
| IVP1A0A
| IVP1A1A
| .
| .
| IVP1A9A
| IVP1BAA
| .
| .
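The percent-sign sequencing shown above (A through Z, then 0 through 9, in each position) behaves like a base-36 counter over the alphanumeric alphabet. The sketch below models it; the function name is illustrative, and it assumes the template contains a single contiguous run of percent signs.

```python
import string

SEQ = string.ascii_uppercase + string.digits  # A-Z, then 0-9

def partition_names(template, count):
    # Replace the run of % signs in the template with an
    # alphanumeric sequence number, most significant position first.
    width = template.count("%")
    names = []
    for i in range(count):
        digits, v = [], i
        for _ in range(width):
            v, r = divmod(v, len(SEQ))
            digits.append(SEQ[r])
        names.append(template.replace("%" * width, "".join(reversed(digits))))
    return names
```

For a template like IVP1%%A, this reproduces the IVP1AAA, IVP1ABA, ... IVP1AZA, IVP1A0A, ... IVP1A9A, IVP1BAA ordering shown above.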
After automatic definition is complete, in the Database Partitions panel (Figure 319
on page 529) you can see that the partition selection string is filled in with
information from your input data set.
– If you did not specify a partition selection exit, the partition high key values
are required.
– If you did specify a partition selection exit, the partition selection string values
are optional.
After you set the defaults and press Enter, the partition definition panel is
displayed. You can modify the fields and press Enter to define the partition.
After you press Enter, the partition is defined in RECON and the partition
definition panel is displayed again so that you can define more partitions. The
partition ID is incremented each time a partition is defined. Press the cancel
key (PF12) to prevent the displayed partition from being defined.
When you press PF12 to stop defining new partitions, the Partitioned Databases
panel (Figure 306 on page 512) is displayed again. You may also choose to stop
defining new partitions by pressing F11=List; a list of defined partitions (see
“Displaying the List of Defined Partitions” on page 528) is displayed.
| Help
| ------------------------------------------------------------------------------
| Change Partition
|
| Type the field values. Then press Enter.
|
| Database name . . . . . . . : IVPDB1
| Partition name . . . . . . . IVPD101
| Partition ID. . . . . . . . : 1
| Data set name prefix. . . . . IMS.DB01.FINANCE.YEAR1998.CURR
| Partition Status. . . . . . . _______
|
|
| Partition Selection String
| +00 F2F0F0F3 4BF2F2F4 40F1F77A F2F57AF0 | 2003.224 17:25:0 |
| +10 F94BF6F3 F3F12432 00000000 00001020 | 9.6331.......... |
| +20 A840C1A5 85404040 40E28195 40D196A2 | y Ave San Jos |
| +30 856B40C3 C14040F9 F5F1F4F1 00100020 | e, CA 95141.... |
| +40 00050000 40F0F34B F0F3F440 00000100 | .... 03.034 .... |
| +50 F1F8F0F0 C9C2D4E2 C5D9E540 40C9C2D4 | 1800IBMSERV IBM |
| +60 40C39699 974B4040 F5F5F540 C2818993 | Corp. 555 Bail |
| +70 A840C1A5 85404040 40E28195 40D196A2 | y Ave San Jos |
| +80 856B40C3 C14040F9 F5F1F4F1 00403010 | e, CA 95141. .. |
| +90 00010500 40F0F34B F2F4F340 00324020 | .... 03.243 .. . |
| +A0 9201913C D2FE933D 913C1F66 4360A005 | k.j.K.l.j....-.. |
| +B0 3233A200 D996A281 6BD785A3 85996B40 | ..s.Rosa,Peter, |
| +C0 000080D4 81A3A3F9 71C4C6F8 F1F4C6C2 | ...Matt9.DF814FB |
| +D0 9311913C F6F4F8F6 943C1F66 4360A005 | l.j.6486m....-.. |
| +E0 41E3453C 06000045 10110220 10416220 | .T.............. |
| +F0 FFFFF900 00004920 18007410 94000300 | ..9.........m... |
|
| Randomizer
| Module name . . . . . . . DD41DUP2
| Anchor . . . . . . . . . . 2
| High block number. . . . . 999
| Bytes . . . . . . . . . . 2000
|
| Free Space
| Free block freq. factor. . 0
| Free space percentage. . . 0
|
| Attributes for data set group A
| Block Size . . . . . . . . 8192
|
| DBRC options
| Max. image copies. . . . 2
| Recovery period. . . . . 0
| Recovery utility JCL . . RECOVJCL
| Default JCL. . . . . . . ________
| Image copy JCL . . . . . ICJCL
| Online image copy JCL. . OICJCL
| Receive JCL. . . . . . . RECVJCL
| Reusable? . . . . . . . No
|
|
| Command ===>
| F1=Help F3=Exit F5=String F6=Groups F12=Cancel
||
| Figure 314. Change Partition (DSPXPPA)
|
Important:
v The Randomizer section is present only if the HALDB is PHDAM.
v The data set group attributes section is present only if there is only one data set
group specified during DBDGEN. If there is more than one data set group, use
F6=Groups to display all data set groups using the dialog described in “Defining
Data Set Group Information” on page 527.
The following are descriptions of the fields on the Change Partition screen:
| Partition ID
| A numeric value between 1 and 32 767, but not less than the current high
| partition ID value for this HALDB. The Partition Definition utility generates
| the partition ID for you, regardless of whether you create your partitions
| manually or automatically. DBRC records this number in the RECON data
| set. Data set names include the partition ID of the partition to which they
| belong.
| After an ID is assigned to a partition, you cannot change it.
| Partition Status
| You can disable a partition by typing disable in the Partition Status field.
| Usually, you would only disable a partition prior to deleting it.
| To enable a disabled partition, type enable in the Partition Status field.
| Partition High Key
The Partition High Key field allows you to specify the highest database
record root key that a partition can contain. The partition high key is
determined by your installation. IMS treats the partition high key as a
hexadecimal value. You must enter a value in the Partition High Key field.
The length of the Partition High Key field is determined by the root key
length you specify using the BYTES= parameter in the FIELD statement
during DBD definition. If the length of the partition high key you enter is
longer than the root key length, an error message displays and you must
reduce the length of the partition high key. If the partition high key length is
less than the defined root key length, the Partition Definition utility pads the
high key value with hex ’FF’s up to the defined root key length. The partition
high key values must be unique for each partition within a HALDB.
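A minimal sketch of the length rule, assuming the key is handled as raw bytes: a key longer than the defined root key length is rejected, and a shorter key is padded with X'FF' bytes. The helper name is hypothetical; the utility performs this processing inside ISPF.

```python
# Hypothetical helper illustrating the partition high key length rule:
# reject keys longer than the root key length (BYTES= on the FIELD
# statement); pad shorter keys with X'FF' up to that length.
def normalize_high_key(key: bytes, root_key_length: int) -> bytes:
    if len(key) > root_key_length:
        raise ValueError("partition high key longer than root key length")
    return key + b"\xff" * (root_key_length - len(key))

print(normalize_high_key(b"\xc1\xf2", 4).hex())   # c1f2ffff
```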
The Partition High Key field consists of two sections: an editable section on
the left that displays the partition high key in hexadecimal format and a
view-only section on the right that displays the partition high key in
alphanumeric format.
You can enter a hexadecimal value directly in the left section of the Partition
High Key field. The Partition Definition utility displays the alphanumeric
equivalent of this value in the right section of the Partition High Key field.
You can enter an alphanumeric value directly by using the ISPF editor. To
access the ISPF editor, press F5 (if you have already entered data in the
hexadecimal section, press F5 twice). After an alphanumeric value is
entered, its hexadecimal equivalent is displayed in the left section of the
Partition High Key field.
An alphanumeric value can consist of any character information. If the
alphanumeric value contains non-display characters, you must identify
these characters using hexadecimal notation. In the ISPF editor, a
hexadecimal character string is enclosed by single quotation marks and
| either prefixed or followed with an x, for example: X'c1f201ffff'.
| Partition Selection String
| The Change Partition panel displays the Partition Selection String field
| only when you have specified a partition selection routine in the HALDB
| master definition. A partition selection routine uses the partition selection
| string in hexadecimal format to distribute records across the partitions in
| your HALDB.
| Partition selection strings are 256 bytes long. If you enter a partition
| selection string that is less than 256 bytes in length, the Partition Definition
| utility fills the remaining bytes with X'00'.
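The X'00' padding can be sketched the same way as the high key padding. This hypothetical helper only illustrates the 256-byte rule stated above.

```python
# Hypothetical helper: a partition selection string is always 256 bytes;
# shorter input is padded on the right with X'00'.
def normalize_selection_string(value: bytes) -> bytes:
    if len(value) > 256:
        raise ValueError("partition selection string longer than 256 bytes")
    return value.ljust(256, b"\x00")

s = normalize_selection_string(b"\xc1\xf2\x01")
print(len(s), s[:4].hex())   # 256 c1f20100
```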
| The Partition Selection String field consists of two sections: an editable
| section on the left that displays the partition selection string in hexadecimal
| format and a view-only section on the right that displays the partition
| selection string in alphanumeric format.
| You can enter a hexadecimal value directly in the left section of the
| Partition Selection String field. The Partition Definition utility displays the
| alphanumeric equivalent of this value in the right section of the Partition
| Selection String field.
| You can enter the partition selection string in an alphanumeric format by
| using the ISPF editor. To access the ISPF editor, press F5 (if you have
| already entered data in the hexadecimal section, press F5 twice).
| After you enter an alphanumeric string, its hexadecimal equivalent is
| displayed in the left section of the Partition Selection String field.
| An alphanumeric string can consist of any character information. If an
| alphanumeric string contains non-display characters, you must identify
| these characters using hexadecimal notation. In the ISPF editor, a
| hexadecimal character string is enclosed by single quotation marks and
| either prefixed or followed with an x, for example: X'c1f201ffff'.
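The hexadecimal section and the alphanumeric section are two views of the same bytes. The sketch below shows the correspondence, assuming EBCDIC code page 037 for the character rendering (the code page in effect on your system may differ); non-printable bytes are shown as periods, as in the panel dumps earlier in this section.

```python
# Illustrative only: render a hex string the way the right-hand section
# of the field displays it, assuming EBCDIC code page 037 (cp037).
def ebcdic_display(hex_string: str) -> str:
    raw = bytes.fromhex(hex_string)
    text = raw.decode("cp037")
    # Non-printable characters display as '.', as on the panel.
    return "".join(c if c.isprintable() else "." for c in text)

print(ebcdic_display("f1f8f0f0c9c2d4"))   # 1800IBM
```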
F5=String
| F5 performs two functions: first, when new data is entered into the
| hexadecimal section of either the Partition High Key or the Partition
| Selection String field, F5 enters the data into the Partition Definition utility
| and displays the alphanumeric equivalent of the hexadecimal string in the
| right section of the field. Second, if there is no uncommitted data in the
| hexadecimal section, it displays the alphanumeric editor. Figure 315 is an
| example of the editor panel that is displayed for the Partition Selection
| String field.
|
EDIT Partition Selection String
F6=Groups
Pressing F6 allows you to display the Data set group dialog that is
discussed in “Defining Data Set Group Information” on page 527.
F11=List
Pressing F11 allows you to display the Database partitions panel that is
discussed in “Displaying the List of Defined Partitions” on page 528.
If your definition of the HALDB from DBDLIB only allows one data set group, the
Attributes for data set group A section is displayed. If multiple groups are
allowed, a reminder to press PF6 to work with the groups is displayed. The data set
groups dialog is discussed in “Defining Data Set Group Information” on page 527.
Related Reading: For a description of the fields shown in Figure 314 on page 524,
see the description for Figure 312 on page 518.
If you have multiple data set groups defined for your HALDB and you do not use
automatic definition, use the data set group list that is displayed in Figure 316 on
page 527 and Figure 317 on page 528.
From the data set groups list, you can change the attributes for each member by
typing over the values in the list column. A special row in the list, the all
row, allows you to make changes to an entire column. When you
type a value in the all row and press Enter, the value you typed is propagated to all
of the members of the groups. After your changes are made, the all row is blanked
out.
Important: Press F9 to save your changes and then press F12 to return to the
previous panel.
The list contains an action column. The only action allowed is to display all
information for a particular group. Select the group by typing a slash (/) in the Act
column. On the panel shown in Figure 318 on page 528, you can modify the values
by typing over the existing data and pressing Enter.
Help
------------------------------------------------------------------------------
Change Dataset Groups Row 1 to 11 of 11
Select an item by pressing a ’/’ on the desired line then press Enter.
Help
------------------------------------------------------------------------------
Change Dataset Groups Row 1 to 11 of 11
Select an item by pressing a ’/’ on the desired line then press Enter.
Help
------------------------------------------------------------------------------
Change a Dataset Group
DBRC options
Max. image copies. . . . 2
Recovery period. . . . . 0
Recovery utility JCL . . RECOVJCL
Default JCL. . . . . . . ________
Image copy JCL . . . . . ICJCL
Online image copy JCL. . OICJCL
Receive JCL. . . . . . . RECVJCL
Reusable? . . . . . . . No
Command ===>
F1=Help F3=Exit F12=Cancel
Related Reading: For descriptions of the fields on the Change Data Set Groups
panels, see the field definitions for Figure 312 on page 518.
From the Database Partitions list panel Figure 319, you can work with individual
partitions. To use the File Action bar, type a slash (/) in the Act line command
column for the partition you want to work with, then put the cursor on the File action
bar choice and press Enter. Select the action you want to perform by typing the
number or by positioning the cursor on the choice and then pressing Enter again.
You can invoke the Database Partitions panel (Figure 320) to show the values by
pressing PF11. You can also invoke the Database Partitions panel to show the
Randomizer values by pressing PF11 (Figure 321 on page 531).
Select an item by pressing a ’/’ on the desired line then press Enter.
The Database Partitions list panel has the HALDB name at the top and table
information below. Descriptions of the table columns for Figure 321 on page 531 are
presented below.
Act This is the line command input field where you can invoke commands such
as open, copy, and the other commands listed in “The Partition List Line
Commands” on page 532.
Name The name column contains the partition name provided during the definition
of the partition. This is the initial sort sequence.
Related Reading: For a more detailed description of the partition name,
see “Opening HALDB Partitions” on page 515.
Module
The module column contains the module name of the randomizing module.
Related Reading: For a more detailed description of the module name see
Figure 312 on page 518.
Anchor
The anchor column contains the number of root anchor points.
Related Reading: For a more detailed description of the anchor see
Figure 312 on page 518.
High block
The high block column contains the high block number.
Related Reading: For a more detailed description of the high block number
see Figure 312 on page 518.
Bytes For a more detailed description of the bytes field see Figure 312 on page
518.
FBFF The FBFF column contains the free block frequency factor.
To use line commands, type the command in the Act column to the right of the
partition you want to use. You can type multiple line commands (though only one
per partition) on the Database Partitions panel: the commands are executed
serially starting from the top.
New partition
You can create new partitions using the same panels that you used when
you initially created partitions. See Figure 312 on page 518.
Open partition
You can open the selected partitions and modify them as desired. See
Figure 314 on page 524.
Open data set groups
You can manipulate the data set group members using the panels
described in “Defining Data Set Group Information” on page 527.
Print partition information
Information about the selected partitions is written to the ISPF list data set.
Print partition view
The information in the currently-displayed view is written to the ISPF list
data set.
The list of partitions in the Database Partitions panel can be sorted in various
ways by using the Edit action bar choice (Figure 323).
Copy partition...
Type a slash (/) in the line command field and use the Edit - Copy partition
pull-down panel to define a new partition using the attributes of the selected
partition. The partition name and the ID must be unique.
The Change Partition panel (Figure 314 on page 524) is then displayed,
and you can create new partitions serially. The values shown in the panel
are filled in using the attributes of the selected partition.
Delete partition
Type a slash (/) in the line command field and use the Edit - Delete a
partition pull-down panel to delete partitions. A delete confirmation panel is
displayed. Press Enter to confirm the deletion, or press the cancel key to
ignore it.
Find... You can search the partition list for a selected character string. Only simple
character values can be specified. The cursor is placed on the partition that
contains the search value.
The search string is not case sensitive. The search examines every field, not
just the fields currently displayed on the Database Partitions panels (Figure 324
on page 534).
The list of partitions in the Database Partitions panel can be sorted in various
ways by using the View action bar choice (Figure 325).
Important: The same process is used for Change selected partitions except that
the changes are only applied to the partitions selected from the list with a slash (/).
If you want to change a character field to blanks, type a single slash (/) character
so that it is the only character in the field.
Help
------------------------------------------------------------------------------
Change Partition
Randomizer
Module name . . . . . . . DD41MOD3
Anchor . . . . . . . . . . ___
High block number. . . . . ________
Bytes . . . . . . . . . . ________
Free Space
Free block freq. factor. . ___
Free space percentage. . . __
Command ===>
F1=Help F3=Exit F5=String F6=Groups F12=Cancel
Important:
v The Randomizer section is present only if the HALDB is PHDAM.
v The data set groups section is present only if there is only one data set group
specified during DBDGEN. If there is more than one data set group, use
F6=Groups to display all data set groups using the dialog described in “Defining
Data Set Group Information” on page 527.
Figure 327 on page 536 shows the Change Dataset Groups panel.
Help
------------------------------------------------------------------------------
Change Dataset Groups Row 1 to 10 of 10
Select an item by pressing a ’/’ on the desired line then press Enter.
Related Reading: For a description of the fields not listed here, see the description
for Figure 311 on page 516.
| Help
| ------------------------------------------------------------------------------
| Partitioned Database Information
|
| Type the field values. Then press Enter to continue.
|
| Database name . . . . . . . : IVPDB1
|
|
| Master Database values
| Part. selection routine . . . DFSIVD1
| RSR global service group . . .
| RSR tracking type . . . . . .
| Share level . . . . . . . . . 0
| Database organization . . . : PHDAM
| Recoverable? . . . . . . . . . Yes
| Number of data set groups . : 1
| Online Reorganization Capable: Yes
|
| To exit the application, press F3.
|
| Command ===>
| F1=Help F3=Exit F12=Cancel
||
| Figure 328. Partitioned Database Information (DSPXPOA)
|
Related Reading: For a description of the fields shown in Figure 328, see the
description for Figure 311 on page 516.
You can modify the fields and press Enter to change the values in RECON. If you
press cancel or exit, any changes you entered on this panel are discarded.
There is no way to undo the delete. You may wish to perform an export prior to
deleting a HALDB from RECON. See “Exporting Database Information” for
information about performing an export.
Help
------------------------------------------------------------------------------
Delete Database Information
Type ’/’ to confirm the delete of the database information from RECON.
Then press Enter.
Command ===>
F1=Help F3=Exit F12=Cancel
Help
------------------------------------------------------------------------------
Export a Database
Command ===>
F1=Help F12=Cancel
Field Description
Database name
The HALDB name that was specified in the primary panel.
After you press Enter, the table is read and each partition is defined.
Help
------------------------------------------------------------------------------
Import a Database
Command ===>
F1=Help F3=Exit F12=Cancel
Database name
The HALDB name that was specified on the primary panel.
Input data set name
The input data set name is the name of the data set that contains the
partition information. The data set must be partitioned.
Input member name
The input member name is the name of a member within the input data set.
The member must have been exported using the HALDB Partition Definition
utility.
Processing option
Each partition in the imported table can be defined in RECON. If there are
errors, you can choose to try the remaining partitions or to stop the
process.
Use the help (F1) information provided by ISRDDN and in the ISPF manuals to
learn more about the ISRDDN utility. When you exit the ISRDDN utility, the HALDB
Partition Definition utility panels are displayed again.
A configuration is a name that you specify to identify a set of DBD libraries
and a set of RECON data sets. If you already have the IMS DD name allocated
from the logon procedure and the IMS.SDFSRESLs allocated to the STEPLIB DD
name, you do not need to use the Configuration option. If you define and select
a configuration, those data sets override the allocations from the logon
procedure.
1. IMS DD name
The IMS DD name includes the data sets that contain the DBDLIB members.
The RECON / DBDLIB Configurations panels re-allocate the IMS DD name.
2. RECON allocation
The STEPLIB allocation contains RECON1, RECON2, and RECON3 members
that name the actual RECON data sets. IMS uses those members to determine
which RECON data sets to use. There is an alternative to using a STEPLIB:
use the TSOLIB command to change the search order that TSO/E uses to find
commands and programs.
The RECON / DBDLIB Configurations panels re-allocate the IMS DD name
and allocate the RECON1, RECON2, and RECON3 DD names to explicitly
specify the RECON data sets. The STEPLIB concatenation is not modified.
To create a new configuration, fill in the first line and press Enter.
Select a default by typing '/' in the Act column and then pressing Enter.
You can use ’O’ to open or ’D’ to delete a configuration.
Command ===>
F1=Help F3=Exit F7=Up F8=Down F12=Cancel
A list of configurations can be maintained when you select option 7 from the
Partitioned Databases panel. The list is initially empty; you can add to it by
filling in the blank line. The active configuration is identified by an asterisk (*) in the
Current column. Figure 334 shows the Configurations Details panel.
Rows from the list can be deleted by using a line command of d. Only the
configuration is deleted from the list. The data sets that are named in the
configuration are not deleted.
The data sets named in the configuration are set or changed by using a line
command of o for open.
Configuration Details
Command ===>
F13=Help F15=Exit F19=Up F20=Down F22=Actions
The RECON data sets are separately allocated to the RECON1, RECON2, and
RECON3 file names.
The DBDLIB data sets are concatenated to the IMS file name.
Important: When you specify a generic HALDB name in the Partitioned Database
panel, option 6 works only if you use four or fewer DBD data sets. However,
for greater flexibility, you can specify up to ten data sets.
| The batch import of a HALDB can be done by submitting a batch ISPF job similar
| to the job shown in Figure 335. ISPF is invoked in batch, so all ISPF DDNAMES
| are required.
|
//DSPXRUN JOB ...
//*
//DSPXRUN EXEC PGM=IKJEFT01,DYNAMNBR=50,REGION=6M
//STEPLIB DD DSN=IMSIVP91.SDFSRESL,DISP=SHR /* IMS.SDFSRESL */
//SYSPROC DD DSN=IMSIVP91.SDFSEXEC,DISP=SHR /* IMS rexx execs */
//IMS DD DSN=your.local.DBDLIB,DISP=SHR
//RECON1 DD DSN=IMSIVP91.RECON1,DISP=SHR
//RECON2 DD DSN=IMSIVP91.RECON2,DISP=SHR
//RECON3 DD DSN=IMSIVP91.RECON3,DISP=SHR
//ISPPROF DD DSN=&&PROFILE, /* dummy ISPF profile */
// UNIT=SYSDA,DISP=(NEW,DELETE),
// SPACE=(3200,(30,30,1)),DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//ISPPLIB DD DSN=IMSIVP91.SDFSPLIB,DISP=SHR /* IMS ISPF panels */
//ISPSLIB DD DSN=IMSIVP91.SDFSSLIB,DISP=SHR /* IMS ISPF skeletons */
//ISPMLIB DD DSN=IMSIVP91.SDFSMLIB,DISP=SHR /* IMS ISPF messages */
// DD DSN=ISP.ISPMLIB,DISP=SHR
//ISPTLIB DD DSN=IMSIVP91.SDFSTLIB,DISP=SHR /* IMS ISPF tables */
// DD DSN=ISP.ISPTLIB,DISP=SHR
//ISPLOG DD SYSOUT=*,DCB=(RECFM=VA,LRECL=125,BLKSIZE=129)
//SYSPRINT DD SYSOUT=*,DCB=(RECFM=VA,LRECL=125,BLKSIZE=129)
//SYSOUT DD SYSOUT=*
//PARTLOG DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*,DCB=(RECFM=F,LRECL=255,BLKSIZE=255)
//SYSTSIN DD *
ISPSTART CMD( +
DSPXRUN IMPORT DSN(’PROD.RSR.PARTS’) +
DBN(IVPDB1) MEM(IVPDB1) OPT(2))
/*
The batch job executes the standard ISPF command ISPSTART that sets up the
ISPF environment then starts the DSPXRUN command. The DSPXRUN command
identifies the HALDB, the import file to use, and the processing options.
| Command String:
| The values are essentially the same as the values required for the foreground
| import (see “Importing Database Information” on page 538).
| EXPORT
| When you choose to export database information using a batch job, the
| information is stored in the partitioned data set that you specify. The
| information is saved as an ISPF table and so it must have the attributes of
| ISPTLIB data sets: record format = fixed block, record length = 80, and
| data set organization = PDS (or PDS/E).
| Related Reading: For more information on ISPTLIB data sets, see ISPF
| User’s Guide, Volume 1.
| IMPORT
| When you choose to import database information using a batch job, the
| partition information is read from a partitioned data set that you specify. The
| partition information is defined to the RECON data sets.
database_name
The HALDB name that was specified on the primary panel.
dataset_name
The input data set name is the name of the data set that contains the
partition information. The data set must be partitioned.
member_name
The input member name is the name of a member within the input data set.
The member must have been exported using the HALDB Partition Definition
utility.
| processing_option
| The processing option field lets you determine what the Partition Definition
| utility does in the event that an error occurs when it processes a partition
| from the imported table. The Partition Definition utility records each partition
| it imports in RECON. If there are errors, you can choose to try the
| remaining partitions or to stop the process. The valid values are 1 or 2:
| 1 Stop on first error (prior imported partitions are retained)
| 2 Try all partitions
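The difference between the two options can be pictured as the following loop. Here define_partition is a hypothetical stand-in for the RECON update that the Partition Definition utility performs; only the stop-versus-continue behavior is taken from the description above.

```python
# Sketch of the two processing options: option 1 stops on the first error
# (partitions already imported are retained); option 2 tries all partitions.
def import_partitions(partitions, define_partition, option):
    results = []
    for part in partitions:
        try:
            define_partition(part)            # hypothetical RECON update
            results.append((part, "defined"))
        except Exception as exc:
            results.append((part, f"error: {exc}"))
            if option == 1:                   # stop on first error
                break
    return results

def demo_define(name):
    # Stand-in that fails for one partition to show both behaviors.
    if name == "BAD":
        raise ValueError("duplicate partition ID")

# Option 1: P1 is defined, BAD fails, P3 is never attempted.
print(import_partitions(["P1", "BAD", "P3"], demo_define, option=1))
```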
| If you specified a secondary space amount for the input data set, IMS
| uses the same secondary amount for the output data set.
Notices
This information was developed for products and services offered in the U.S.A. IBM
may not offer the products, services, or features discussed in this document in other
countries. Consult your local IBM representative for information on the products and
services currently available in your area. Any reference to an IBM product, program,
or service is not intended to state or imply that only that IBM product, program, or
service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However,
it is the user’s responsibility to evaluate and verify the operation of any non-IBM
product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you any
license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply to
you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for this
IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes
appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of
enabling: (i) the exchange of information between independently created programs
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
Information concerning non-IBM products was obtained from the suppliers of those
products, their published announcements or other publicly available sources. IBM
has not tested those products and cannot confirm the accuracy of performance,
compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those
products.
All statements regarding IBM’s future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information is for planning purposes only. The information herein is subject to
change before the products described become available.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.
If you are viewing this information softcopy, the photographs and color illustrations
may not appear.
Trademarks
The following terms are trademarks of the IBM Corporation in the United States,
other countries, or both:
BookManager
CICS
DataPropagator
DataRefresher
DB2
DB2 Universal Database
DFSMSdss
Hiperspace
IBM
IMS
MVS
NetView
OS/390
RACF
RAMAC
Redbooks
RMF
SAA
Tivoli
WebSphere
z/OS
Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc., in the United States, other countries, or both.
UNIX is a trademark of The Open Group in the United States and other countries.
Other company, product, and service names may be trademarks or service marks
of others.
Supplementary Publications
Title (order number):
v IMS Connector for Java 2.2.2 and 9.1.0.1 Online Documentation for WebSphere
Studio Application Developer Integration Edition 5.1.1 (SC09-7869)
v IMS Version 9 Fact Sheet (GC18-7697)
v IMS Version 9: Licensed Program Specifications (GC18-7825)
Publication Collections
Title, format, and order number:
v IMS Version 9 Softcopy Library (CD, LK3T-7213)
v IMS Favorites (CD, LK3T-7144)
v Licensed Bill of Forms (LBOF): IMS Version 9 Hardcopy and Softcopy Library
(hardcopy and CD, LBOF-7789)
v Unlicensed Bill of Forms (SBOF): IMS Version 9 Unlicensed Hardcopy Library
(hardcopy, SBOF-7790)
v OS/390 Collection (CD, SK2T-6700)
v z/OS Software Products Collection (CD, SK3T-4270)
v z/OS and Software Products DVD Collection (DVD, SK3T-4271)
Index 557
data part of segment 14, 15
data requirements, analyzing 45, 53
data sensitivity 184
data set
  OSAM
    maximum size 79, 507
  VSAM
    maximum size 79
data set groups
  See multiple data set groups 18
data set statement
  description 292
  HALDB (High Availability Large Database) 292
data sets
  allocation 318
  DFSVSAMP 69
  ESDS in HD databases 91
  ESDS in secondary indexes 192
  HALDB Online Reorganization
    naming conventions 372
    output data sets 373, 545
  HALDB partitions
    maximum number of data sets 299
  HISAM 65
  KSDS in secondary indexes 192
  MSDBCP1 and MSDBCP2 279
  MSDBDUMP data set 279
  naming convention
    HALDB Online Reorganization overview 24
  naming conventions
    HALDB (High Availability Large Database) 23
    HALDB Online Reorganization 372
    PHDAM 23
    PHIDAM 23
    PSINDEX 23
  OSAM in HD databases 91
  pre-formatting space 263
data sharing
  DEDB 115
  VSO DEDB Areas 144
data space
  z/OS
    accessing for VSO DEDB areas 143
    acquiring for VSO areas 143
data structures, developing 45, 53
database
  application program's view 18
  CICS local-DL/I 56
  DBCTL support 56
  DEDB 115
  DEDB description 109
  definition 18
  design
    aids for testing 309
    what it involves 4
  design considerations 241, 267
  DL/I 56
  Fast Path types 115
  GSAM description 76
  HALDB (High Availability Large Database)
    description 78
  HD description 78
  HSAM description 60
  implementing 5, 291
  introduction to 11
  loading 5, 320
  Local-DL/I support 56
  logical 162
  modifying 5, 423
  monitoring 5, 335
  MSDB description 128
  MSDB, Areas in data sharing 115
  multiple data set groups 234
  protecting during reorganization 342
  recovery 5
  reorganizing 341
  security
    establishing 31
    for application programs 18
    introduction 6
  SHISAM description 75
  SHSAM description 75
  standards and procedures 6
  testing 5, 307
  tuning 5, 341
database administration
  task description 3
database definition
  HALDB partitions 295
  using the Partition Definition utility 295
database description
  See DBD (database description) 18
database PCB 303
Database Prefix Resolution utility (DFSURG10) 351
Database Prefix Update utility (DFSURGP0) 352
Database Prereorganization utility (DFSURPR0) 350
database record
  calculating size 311
  definition 6
  HDAM (Hierarchical Direct Access Method) 94
  HIDAM 96
  HISAM (Hierarchical Indexed Sequential Access Method) 66
  HSAM (Hierarchical Sequential Access Method) 61
  introduction to 12
  locking 105
  MSDB (main storage database) 130
  PHDAM (Partitioned Hierarchical Direct Access Method) 94
  PHIDAM 96
Database Scan utility (DFSURGS0) 350
Database Surveyor utility (DFSPRSUR) 355
databases
  XML
    overview of storing XML data 238
databases, loading
  description 311
  Fast Path initial loads 323
  JCL 325
  restartable load program, using UCF 326
delete byte (continued)
  HDAM 96
  HISAM 66
  HSAM 63
  in logical relationships 477
  in secondary indexes 194
  PHDAM (Partitioned Hierarchical Direct Access Method) 96
delete rules for logical relationships 182, 183, 475, 505
deleted randomizer routine 452
deleting segments
  DEDBs 456
  HD databases 103
  HISAM databases 72
  HSAM databases 64
dependent segment, definition 7
design aids
  for test databases 309
design reviews
  description of 25
  introduction 4
destination parent 163, 184
determining VSAM options 260
DFPXPMB 539, 540
DFSCTL data set control statements
  SB control statement 258
  SBPARM control statement 258
DFSDDLT0 (DL/I test program) 309
DFSMNTB0 (DB Monitor program) 335
DFSPRCT1 (Partial Database Reorganization utility) 356
DFSPRSUR (Database Surveyor utility) 355
DFSUOCU0 (Online Change utility) 451, 453
DFSURG10 (Database Prefix Resolution utility) 351
DFSURGL0 (HD Reorganization Reload utility) 349
DFSURGP0 (Database Prefix Update utility) 352
DFSURGS0 (Database Scan utility) 350
DFSURGU0 (HD Reorganization Unload utility) 348
DFSURPR0 (Database Prereorganization utility) 350
DFSURRL0 (HISAM Reorganization Reload utility) 348
DFSURUL0 (HISAM Reorganization Unload utility) 347
DFSVSAMP data set 69
DFSVSMxx member of IMS.PROCLIB
  MADSIOT 149
dictionary
  See DB/DC Data Dictionary
direct access methods
  HDAM (Hierarchical Direct Access Method) 78
  HIDAM (Hierarchical Indexed Direct Access Method) 78
  PHDAM (Partitioned Hierarchical Direct Access Method) 78
  PHIDAM (Partitioned Hierarchical Indexed Direct Access Method) 78
direct address pointers 78, 81
direct dependent segment types (DDEP) 122
direct pointers
  logical relationships 156, 158, 161, 183
  secondary indexes 194, 195
direct storage method 56
DISP parameter 262
distribution of DB records, random 457
DL/I access methods
  changing 388
  from HDAM to PHDAM and HIDAM to PHIDAM 395
  from PHDAM and PHIDAM to HDAM and HIDAM 396
  HDAM to HIDAM 393
  HDAM to HISAM 392
  HIDAM to HDAM 391
  HIDAM to HISAM 391
  HISAM to HDAM 389
  HISAM to HIDAM 389
DL/I and ACBs 304
DL/I Call Summary report 402
DL/I calls
  DEDBs 127
  HD databases 80
  HISAM databases 68
  HSAM databases 63
  in logical relationships
    delete call 477
    logical child insert call 466
    replace call 470
  MSDB 131, 134
DL/I Databases 56
DL/I parameter 262
DL/I test program (DFSDDLT0) 309
DLIModel utility
  storing XML data
    overview 238
DLOG parameter 262
DREF (disabled reference) option
  for VSO-area data spaces 143
DSPXPDA 537
DSPXPEA 537
  database name 537
  output data set name 538
DSPXPIA 538
  database name 538
  input data set name 538
  input member name 538
  processing option 538
DSPXPKE panel 526
DSPXPLA 529
  act 529
  data set name prefix 529
  ID 529
  name 529
DSPXPLB 530
DSPXPOA 536
DSPXRUN command 542
  database_name 542
  dataset_name 542
  member_name 542
  processing_option 542
dump option 262
DUMP parameter 262, 265
duplex paths 476
duplicate data field 195
duplicate data in logical relationships 151
format (continued)
  DEDB segments 119
  fixed-length segments 14
  HD databases 91
  HDAM segments 96
  HIDAM index segment 98
  HIDAM segments 97
  HISAM segments 66
  HSAM segments 62
  PHDAM segments 96
  PHIDAM index segment 98
  PHIDAM segments 97
  pointer segment 193
  variable-length segments 14
formula
  estimating CFRM list structure size 150
  first fit algorithm 143
formulas for
  calculating buffers for Fast Path 284, 288
  calculating space for MSDBs 279
  calculating storage for MSDB 274
  size of root addressable area 242
forward chain pointer 130
FPOPN=
  overview 111
FPRLM=
  restarting DEDB areas 112
FR status code
  for BMP regions 285
  for CCTL threads 289
  in fast path buffer allocation 284
  in fast path buffer allocation for BMPs 288
free block frequency factor (fbff) 241
free logical record 68
free space
  chain pointer (CP) field 93
  element (FSE) 93
  element anchor point (FSEAP) 92
  HD (Hierarchical Direct) 92
  HDAM (Hierarchical Direct Access Method) 241
  HIDAM 241
  HIDAM (Hierarchical Indexed Direct Access Method) 97
  KSDS 263
  percentage factor 242
  PHDAM (Partitioned Hierarchical Direct Access Method) 241
  PHIDAM 241
  PHIDAM (Partitioned Hierarchical Indexed Direct Access Method) 97
  space calculations 317
FREESPACE parameter 263
FRSPC parameter 241
FS status code 271
FSE (free space element) 93
FSEAP (free space element anchor point) 92
fspf (free space percentage factor) 242
full-duplex paths 476
full-function segments
  specifying minimum size 214
FW status code
  for CCTL threads 289
  in BMP regions 285
  in fast path buffer allocation 284
  in fast path buffer allocation for BMPs 288

G
GC status code 270, 281
GE status code 171
general format of HD databases and use of special fields 317
Generalized Sequential Access Method (GSAM)
  See GSAM (Generalized Sequential Access Method) 74
GPSB (Generated PSB)
  I/O PCB 305
  modifiable alternate response PCB 305
GSAM (Generalized Sequential Access Method) 74, 76, 331

H
HALDB (High Availability Large Database) 78
  adding partitions 298
  automatic partition definition 298
  automatic partition definition using Partition Definition utility 521
  batch import 299
  bit map block for partition 92
  Change Partition screen 524
    F11 526
    F5 526
    F6 526
    Partition high key 525
    Partition ID field 525
    Partition Selection String 525
  changing 398
    HALDB Partition Selection exit routine 399
    overview 398
    partition boundaries 400
    partition key ranges 400
    partition structure modification 399
    single partitions 398
  changing DL/I access methods
    changing from HDAM to PHDAM and HIDAM to PHIDAM 395
    from PHDAM and PHIDAM to HDAM and HIDAM 396
  changing partitions using the PDU 297
  configuration
    list 540
  copying partitions 299
  creating HALDB (High Availability Large Database) partitions 295
  creating with the Partition Definition utility (PDU) 295
  data set naming conventions 23, 372
  data set statement 292
  data sets
    maximum per partition 299
HALDB (High Availability Large Database) (continued)
  partition bit map block 92
  partition definition 295
  Partition Definition utility 295, 523
    accessing 511
    high key value 522
    impact on RECON 511
    modifying fields 523
    panels 511
  Partition Definition utility (PDU) 294
    Change Partition panel 297
    partition high key 297
    partition structure modification 399
  partitions
    changing 398
    changing boundaries 400
    changing key ranges 400
    changing using Partition Definition utility 515
    changing, overview 398
    copying using Partition Definition utility 532
    creating using Partition Definition utility 515
    data sets, maximum 299
    defining using Partition Definition utility 515
    deleting using Partition Definition utility 532
    manual definition using Partition Definition utility 522
    maximum number 515
    modifying 398
    modifying key ranges 400
    modifying, overview 398
    naming conventions 23
    opening using Partition Definition utility 515, 532
    printing information using Partition Definition utility 532
  pointers
    self-healing pointer process 382
  printing partitions 299
  reallocating data sets
    offline reorganization 362
  RECON 300
  RECON data set 539
  reloading partitions
    offline reorganization 363
  reorganizing 358
    offline 359
    reallocating data sets 362
    reloading partitions 363
    secondary indexes 364
    unloading partitions 361
    updating ILDS 363
  REUSE parameter 318
  secondary indexes
    reorganizing 364
  self-healing pointer process 382
    performance 386
  sorting partitions 299
  unloading partitions
    offline reorganization 361
  viewing DDNAME 300
  viewing partitions 299
HALDB (High Availability Large Database) partition definition utility
  registering OLR capability with DBRC 517
HALDB Online Reorganization
  coexistence considerations 369
  copying phase 366
  cursor 367
  cursor-active status 365
  Database Change Accumulation utility 381
  DD name naming convention
    overview 23
  dynamic PSB 366
  fallback considerations 370
  FDBR 378
  GENJCL.CA command 381
  GENJCL.RECOV command 381
  image copy utilities 381
  initialization phase 365
  locking 378
  log impact 377
  migration considerations 369
  modifying 374
  monitoring 374
  naming convention
    overview 24
  output data set requirements 373, 545
  overview 364
  RATE parameter of INITIATE OLREORG command 377
  recovery 380
  Remote Site Recovery (RSR) 378
  requirements for output data sets 373, 545
  restart 377, 378
  restrictions 370
  sequential buffering 382
  starting 373
  system impact 377
  termination phase 368
  tuning 374
  unit of reorganization 367
  utilities 379
  XRF 377
HALDB Partition Definition utility (%DFSHALDB)
  accessing Help 513
  exiting the utility 513
  main panel 512
    options 512
  main screen 512
    options 512
  Partitioned Databases panel 512
    options 512
  using 511
HALDB Partition Selection exit routine (DFSPSE00)
  changing 399
  modifying 399
  replacing 399
half-duplex paths 476
HB (hierarchic backward) pointers 83
HD Reorganization Reload utility
  ILDS
    control statement specifications 363
HISAM (Hierarchical Indexed Sequential Access Method) (continued)
  loading the database 331
  locking 106
  logical record format 67
  logical record length 245, 248
  options available 65
  performance 70, 74
  pointers 67
  replacing segments 74
  segment format 66
  space calculations 311
  storage of records 65
  when to use 65, 74
HISAM Reorganization Reload utility (DFSURRL0) 348
HISAM Reorganization Unload utility (DFSURUL0) 347
HSAM (Hierarchical Sequential Access Method)
  accessing segments 63
  calls against 63
  deleting segments 64
  description of 60
  inserting segments 64
  options available 61
  performance 64
  replacing segments 64
  segment format 62
  space calculations 311
  storage of records 61
  when to use 61
  z/OS access methods used 61
HSSP (high-speed sequential processing)
  description 279
  for database recovery 282
  image-copy option 281
  limits and restrictions 280
  private buffer pools 282
  processing option H 281
  reasons for choosing 280
  SETO statement 281
  SETR statement 281
  UOW locking 282
  using 281

I
I/O errors
  ADS 149
  MADS 149
I/O PCB 305
ID (task ID) field 93
IDP and Fast Path 337
IEFBR14 utility 318
IEHPROGM program 318
IFP and MPP regions
  maintaining continuous availability of 449
ILDS
  reorganization updates 363
ILDS (indirect list data set)
  allocating 300
  calculating size 301
  defining 300
  sample JCL 300
  size, calculating 301
ILE (indirect list entry) 301
ILK (indirect list key) 301
image-copy option 281
IMBED | NOIMBED parameter 264
implementing database design 5, 291
importing database definitions
  HALDB (High Availability Large Database) 299
IMS Data Capture exit
  See Data Capture exit routine
IMS High Performance Pointer Checker 243
IMS trace parameters 262
IMS.ACBLIB 305
IMS.DBDLIB 291
IMS.PSBLIB 302
in physical databases 176
in the physical DBD 175
independent overflow part of area (IOVF)
  description 119
  extending online 458
index maintenance exit routine 198
index segment 98
index set records 264
indexed databases 79
  HIDAM 96
  HISAM 65
  PHIDAM 96
INDICES parameter 201
indirect list data set (ILDS)
  allocating 300
  calculating size 301
  defining 300
  sample JCL 300
  size, calculating 301
indirect list entry (ILE) 301
indirect list key (ILK) 301
initial load program
  basic 326
  Fast Path 323
  restartable, using UCF 326
  writing 323
initialization phase of HALDB Online Reorganization 365
input for DBDGEN utility
  DBD 291
INSERT parameter
  free space for a KSDS 261, 263
  using in splitting CIs 69
insert rules for logical relationships 182, 183, 465, 469
insert strategy
  choosing 261
inserting segments
  DEDB SDEPs 271
  HD databases 100
  HISAM databases 68
  HSAM databases 64
  MSDB (main storage database) 132
inspections
  code inspections 28
logical relationships (continued)
  insert rules 182, 466, 469
  intersection data 164, 166
  ISRT call 466
  loading databases 331
  logical child 152, 156
  logical parent 152, 156
  paths 162, 163
  performance considerations 183, 186
  physical parent 152, 156
  pointers 156, 161
  procedures for adding to existing databases 427
  REPL call 470
  replace rules 182, 469, 473
  restrictions on modifying 443
  rules 505
  rules for defining 175, 176, 177, 183
  secondary indexes, with 203
  sequence fields 170, 171
  specifying in DBD 172, 175, 176, 177
  uses 151
  virtual logical children 171
logical twin backward (LTB) pointer 160
logical twin chains 185
logical twin forward (LTF) pointer 160
logical twin pointer 509
long busy 149
lookaside option
  for buffer pools 145
lookaside option for buffer pools, description 145
lookaside, defining private buffer pools 141
LP (logical parent) pointer 156
  correcting bad pointers 509
  definition 156
  performance considerations 183
LPCK (logical parent's concatenated key) 157
LTB (logical twin backward) pointer 160
LTERM 128
LTF (logical twin forward) pointer 160

M
macros
  PCB 291
  PSB 291
MADSIOT (Multiple Area Data Set I/O Timing) 149
  CFRM 149
  coupling facility 149
  long busy 149
main storage database
  See MSDB (main storage database) 331
main storage utilization, Fast Path 419
maintenance
  databases, planning 265
  secondary indexes 199
maintenance utility (DFSUACB0) 304
making keys unique using system related fields 196
many-to-many mapping 46
mapping data aggregates 46
maximum size
  HALDB (High Availability Large Database) 79
  HDAM database 79
  HIDAM database 79
  PHDAM database 79
  PHIDAM database 79
MBR parameter 177
migrating
  fallback
    from HALDB 396
    from PHDAM and PHIDAM 396
    to HDAM and HIDAM 396
  from HDAM to PHDAM and HIDAM to PHIDAM 395
  to HALDB 395
migration considerations for HALDB Online Reorganization 369
minimum size
  specifying for full-function segments 214
mixed mode 127
mixing pointers 89
modifiable alternate response PCB 305
modifying a database
  description of 423
  introduction 5
modifying data set groups
  HALDB (High Availability Large Database) 299
MON parameter 336
monitoring
  and tuning Fast Path systems 337
  description of 335
  events for Fast Path 339
  introduction 5
  reports 335
movement in hierarchy 10
MSDB (main storage database)
  calls against 131
  deleting segments 132
  description of 128
  design considerations 273, 282
  inserting segments 132
  loading the database 331, 423
  MSDB Maintenance utility (DBFDBMA0) 129
  options available 128
  page fixing 277
  position 133
  restrictions on changing DBD 423
  storage of records 130
  when to use 127, 129
MSDBCP1 data set 279
MSDBCP2 data set 279
MSDBDUMP data set 279
multi-area structure
  duplexing 139
Multiple Area Data Set I/O Timing (MADSIOT) 149
multiple area data sets (MADS)
  I/O errors 149
  MADSIOT 149
multiple data set groups
  description of 230
  HD databases 232
  introduction 18
  specifying in DBD 234
parameters
  BGWRT 260
  BSIZ
    in DB/TM environment 283
    in the DBCTL environment 286
  BWO(TYPEIMS) 263
  BYTES 197
  CNBA 287
  CONSTANT 206
  DB Monitor 336
  DBBF
    in DB/TM environment 282
    in the DBCTL environment 286
  DBFX
    in DB/TM environment 282
    in the DBCTL environment 286
  DDATA 197
  DISP 262
  DL/I 262
  DLOG 262
  DUMP 262, 265
  EXIT 216
  EXTRTN 198, 206
  FPB 287
  FPOB 287
  FREESPACE 263
  FRSPC 241
  IMBED | NOIMBED 264
  INDICES 201
  INSERT
    free space for a KSDS 261, 263
    using in splitting CIs 69
  IOBF 252
  LATC 262
  LGNR 338
  LOCK 262
  MBR 177
  MON 336
  NAME
    in a DBD 177, 205
    in the SENFLD statement 221
  NBA 274
  NBRSEGS 278
  NOPROT 200
  NULLVAL 198, 206
  PARENT 163, 177
    in logical relationships 174, 177
    to specify PCF and PCL pointers 86
    to specify PCF pointers 85
  PASSWD 33
  POINTER 175
  PROCOPT 32, 271
  PROCSEQ 188, 191
  PROT 200
  RECORD 248
  REPL 222
  REPLICATE | NOREPLICATE 264
  RMNAME 94
    HDAM options 244
    PHDAM options 244
    specifying number of blocks or CIs 243
    specifying number of RAPS 93
  RULES 465, 505
  SCHD 262
  SEGMENT 205
  SHARELVL 116
  SOURCE 175, 184
  SPEED | RECOVERY 263
  SRCH 206
  START 197
  SUBS 262
  SUBSEQ 196, 206
  TYPE 222
  VERSION 217
  VSAMFIX 252, 262
  VSAMPLS 262
PARENT parameter 85, 163, 174, 177
parent segment, definition 7
Partial Database Reorganization utility (DFSPRCT1) 356
Partition Default information screen
  anchor 519
  automatic definition 518, 521
  block size 520
  bytes 519
  data set name prefix 519
  database name 518
  default JCL 520
  free block freq. factor 520
  free space percentage 520
  high block number 519
  image copy JCL 521
  input data set 519
  max. image copies 520
  module name 519
  online image copy JCL 521
  partition ID 519
  receive JCL 521
  recovery period 520
  recovery utility JCL 520
  reusable? 521
  use defaults for DS groups 519
partition definition utility
  HALDB (High Availability Large Database)
    registering OLR capability with DBRC 517
Partition Definition utility (PDU)
  changing partitions 297
  creating HALDB partitions 295
  HALDB functions 294
  high key value, entering 297
  partition definition steps 295
  partition high key value, entering 297
partition high key 297
  entering the high key value 297
partition structure modification 399
partitioned database 78
  information screen
    database name 516
    database organization 516
    number of data set groups 517
PHIDAM (Partitioned Hierarchical Indexed Direct Access Method) (continued)
  data set naming conventions 23
  database
    reorganizing 358
  DBCTL support 56
  description of 78
  format of database 91
  index database 79, 96
  index segment 98
  inserting segments 100
  loading the database 331
  locking 107
  logical record length 248
  maximum size 79
  multiple data set groups 232
  options available 80
  pointers in 81
  pointers, introduction 15
  segment format 97
  space calculations 105, 311
  specifying free space 241
  storage of records 96
  when to use 81
physical block size 248
physical child first pointers 84, 509
physical child last pointers 85, 509
physical parent in logical relationships 152, 156
physical parent pointer
  See PP (physical parent) pointer 159
physical twin backward pointers 88, 509
physical twin forward pointers 87, 509
physically adjacent 60, 65
PI (program isolation), lock protocols 105
pointer field 194
POINTER parameter 175
pointer segment 188, 193
pointers
  correcting 509
  direct-address 78
  FCP (forward chain pointer) 130
  HALDB self-healing pointer process 382
    performance 386
  HB (hierarchic backward) 83
  HD 81
  hierarchic forward (HF) 82
  HISAM (Hierarchical Indexed Sequential Access Method) 67
  in logical relationships 161
  in secondary indexes 194, 195
  introduction 15
  LCF 158
  LCL 158
  logical relationships 156
  logical twin 509
  LP (logical parent) 156, 509
  LTB 160
  LTF 160
  mixing types 89
  PCF (physical child first) 84
  PCL (physical child last) 85
  PP 159
  PTB 88
  PTF 87
  self-healing pointer process 382
    performance 386
  sequence in a segment's prefix 90, 164
  symbolic 189, 194
  types 391
position
  hierarchy 10
  MSDB 133
post-implementation review 29
PP (physical parent) pointer 159
pre-formatting data set space 263
preallocated CIs 270
prefix descriptor byte 463
prefix part of segment 14
Prefix Resolution utility (DFSURG10) 351
Prefix Update utility (DFSURGP0) 352
preopen
  disabling for DEDB areas 112
preopening
  DEDB areas 111
Prereorganization utility (DFSURPR0) 350
primary data set groups
  See multiple data set groups
primary data set, defined 65
private buffer pool
  description 139
procedures
  adding a DEDB 455
  adding logical relationships 427
  adding secondary indexes 445
  adding segment edit/compression facility 446
  adding segment types 424
  adding variable-length segments 445
  adjusting HDAM options 404
  adjusting PHDAM options 404
  Asynchronous Data Capture 447
  calculating database size 311
  changing DASD 403
  changing hierarchic structure
    changing sequence of segment types 401
    combining segments 402
  changing segment size 426
  converting concatenated keys 448
  deleting a DEDB 455
  deleting segment types 425
  description of 19
  extending DEDB IOVF online 458
  introduction 6
  modifying a database 423
  reorganization
    HD database 358
    HISAM database 358
    primary index 358
processing option H 281
processing option P
  and NBA limit 285
  and NBA/FPB limit 289
reorganizing (continued)
  HALDB (High Availability Large Database) (continued)
    reloading partitions 363
    secondary indexes 364
    unloading partitions 361
    updating ILDS 363
  HALDB self-healing pointer process 382
  offline reorganization
    HALDB (High Availability Large Database) 359
    reallocating data sets 362
    reloading HALDB partitions 363
    unloading HALDB partitions 361
    updating ILDS 363
  PHDAM database
    overview of offline reorganization 359
  PHDAM databases 358
  PHIDAM database
    overview of offline reorganization 359
  PHIDAM databases 358
  reloading HALDB partitions 363
  secondary indexes
    HALDB (High Availability Large Database) 364
  self-healing pointer process for HALDBs 382
  unloading HALDB partitions 361, 362
  updating ILDS 363
REPL parameter 222
replace rules for logical relationships
  choosing 183
  description of 469, 473
replacing segments
  HISAM databases 74
  HSAM databases 64
REPLICATE | NOREPLICATE parameter 264
replication, area data set 115
reports
  Fast Path Analysis 339
resolution utility (DFSURG10) 351
resolving data conflicts 52
resource allocation for MSDBs 275
resource contention 276
restart 76
  emergency
    reopening DEDB areas 111
  HALDB Online Reorganization 377, 378
restrictions
  HALDB Online Reorganization 370
  HSSP, of 280
  modifying existing logical relationships 443
  segments 14
  SSA rules for DEDBs 127
  using secondary indexes with logical relationships 203
reviews 25
RMNAME parameter 244
  specifying number of blocks or CIs 243
  specifying number of RAPS 93
  usage 451
ROLB call 284, 288
root addressable area 94, 454
root addressable area 119
root anchor point (RAP) 451
root anchor points
  See RAPs (root anchor points) 93
root processing
  sequential
    HIDAM 99
root segment, definition 7
RRSAF
  See Recoverable Resource Manager Services attachment facility
RSA (record search argument) 76
rules
  defining logical relationships 176
  description of 465, 505
  in logical databases 177, 183
  in physical databases 175
  fields in a segment 15
  HD with data set groups 232
  secondary indexes with logical relationships 203
  segments 14
  sequence fields 16
  using an SSA 131
RULES parameter 465, 505
RX status code 470

S
SB (OSAM Sequential Buffering)
  benefits 254
    productivity 254
    programs 254
    utilities 254
  buffer handler 256
  buffer pools 256
  buffer set 256
  CICS 254
  conditional activation 255
  data set groups 255
  DB-PCP/DSG pair 255
  deactivation 255
  description 253, 254
  disallowing use 259
  HALDB Online Reorganization 382
  overlapped I/O 254, 256
  periodical evaluation 255
  random read 253
  requesting use 257, 260
  sequential read 253
  virtual storage 256
scan utility (DFSURGS0) 350
SCD (system contents directory) 132
SCHD parameter 262
SDEP (sequential dependent)
  CI preallocation 270
SDFSRESL 453
search field 194
secondary data set groups
  See multiple data set groups 18
secondary data structure 192
segments (continued)
  source 189
  target 189
  twin, definition 8
  type, definition 7
  variable length 14
  variable-length 209
  variable-length segments
    specifying minimum size 214
segments, adding to change DEDBs 456
segments, deleting to change DEDBs 456
self-healing pointer process 382
  performance 386
SENFLD statement 220, 303
SENSEG statement
  description 303
  field-level sensitivity 221
  restricting data access 31
sequence field
  See also keys
  HIDAM 97
  HISAM 65
  HSAM (Hierarchical Sequential Access Method) 61
  introduction to 15
  logical relationships 170, 171
  PHIDAM (Partitioned Hierarchical Indexed Direct Access Method) 97
  unique, definition 16
sequence set records 264
sequencing in hierarchy 9
sequencing logical twin chains 185
sequential access methods
  HISAM 65
  HSAM 60
sequential buffering (SB)
  See SB (OSAM Sequential Buffering) 253
sequential dependent part of area 119
sequential randomizing module 243
sequential root processing
  HIDAM 99
sequential storage method 56
SETO statement 281
SETR statement 281
shared secondary indexes 201
SHARELVL 116
SHISAM (Simple Hierarchical Indexed Sequential Access Method) 74, 331
  CI reclaim restriction 237, 342
  VSAM REPRO, using 237, 342
SHSAM (Simple Hierarchical Sequential Access Method) 74, 75
Simple Hierarchical Indexed Sequential Access Method (SHISAM)
  See SHISAM (Simple Hierarchical Indexed Sequential Access Method) 74
Simple Hierarchical Sequential Access Method (SHSAM)
  See SHSAM (Simple Hierarchical Sequential Access Method) 74
single area data sets (ADS)
  Fast Path I/O toleration 149
  I/O errors 149
size
  maximum
    HALDB (High Availability Large Database) 79
    HIDAM database 79
    PHDAM database 79
    PHIDAM database 79
size calculations
  See space calculations 311
size field in variable-length segments 210
size of DEDB estimation 270
SOURCE parameter 175, 184
source segment 189
space calculations
  CIs or blocks needed for database 314
  database size 311
  overhead for DEDB CI resources 313
space management fields, updating 101
space management in HD databases 91
space release in logical relationships 478
space search algorithm 103
sparse indexing 198
SPEED | RECOVERY parameter 263
SRCH parameter 206
SSA (segment search argument)
  restrictions for DEDBs 127
  secondary indexes 195
standards and procedures
  description of 19
  introduction 6
START parameter 197
starting
  DEDB areas 112
statements
  AREA 293
  data set
    description of 292
  DATASET
    example of 235
    specifying DDNAMEs for data sets 177
  DBD 208, 292
  DBDGEN 294
  END 294, 304
  FIELD
    definition of 196
    in the DBD 265
    position in DBD 293
  FINISH 294
  LCHILD in logical relationships 172, 175, 205, 293
  OPTIONS
    fixing buffers in VSAM 252
    for OSAM 265
    for VSAM 260, 262
    OSAM 265
    use in splitting CIs 69
  PSBGEN 304
  SEGM
    description of 293
    example of 177, 208
    in secondary indexing 208
unload utility (DFSURUL0) 347
UOW (unit of work) 119, 270
UOW locking 282
UOW structural definition 454
use chain 249
user data field in pointer segment 196
utilities
  ACB maintenance 304
  Database Change Accumulation 381
  database image copy 381
  Database Prefix Resolution utility (DFSURG10) 351
  Database Prefix Update utility (DFSURGP0) 352
  Database Prereorganization utility (DFSURPR0) 350
  Database Scan utility (DFSURGS0) 350
  Database Surveyor (DFSPRSUR) 355
  DBDGEN 291
  DBFDBMA0 129
  DBFUHDR0 270
  DFSPRCT1 356
  DFSPRSUR 355
  DFSUCF00 355
  DFSURG10 351
  DFSURGL0 349
  DFSURGP0 352
  DFSURGS0 350
  DFSURGU0 348
  DFSURPR0 350
  DFSURRL0 348
  DFSURUL0 347
  for unload and reloading secondary indexes 353
  HALDB Online Reorganization 379
  HD Reorganization Reload 349
  HD Reorganization Unload 348
  High-Speed DEDB Direct Reorganization (DBFUHDR0) 270
  HISAM Reorganization Reload 348
  HISAM Reorganization Unload 347
  MSDB Maintenance 129
  Partial Database Reorganization 356
  PSBGEN 302
  reorganization 343
  UCF 355
  Unload 348
utility control facility
  See UCF (utility control facility)

V
variable intersection data (VID) 165
variable-length segments
  definition 14
  description of 210
  introduction 17
  procedure for adding 445
  replace operations 211
  specifying in DBD 210
  specifying minimum size 214
  storage 210
  use with secondary indexes 204
  uses 211
  using 209
  what application programmers need to know 212
VERSION parameter 217
VID (variable intersection data) 165
virtual logical child 155
Virtual Storage Access Method (VSAM)
  HISAM databases 65
virtual storage option
  introduction 135
VSAM
  data set
    maximum size 79
VSAM (Virtual Storage Access Method)
  access to GSAM databases 76
  adjusting buffers 405
  adjusting options 409, 410
  and Hiperspace buffering 250
  changing access methods 411
  changing space allocation 410
  CIDF (control interval definition field) 314
  ESDS in HD databases 91
  HISAM databases 65
  index 264
  local shared resource pools
    assigning data sets 262
    defining 262
    index and data subpools 262
    subpools of same size 250
  options 260, 265
  passwords 33
  RDF (record definition field) 314
  storage of secondary indexes 192
  track space used 248
VSAMFIX parameter 252, 262
VSAMPLS parameter 262
VSO DEDB (virtual storage option data entry database)
  checkpoint processing 147
  data sharing 144
  defining a VSO Cache Structure Name 139
  defining a VSO DEDB Area 136
  emergency restart 147
  I/O error processing 146
    read errors 147
    write errors 146
  input processing 145
  locking 144
  options across restart 147
  output processing 146
  PRELOAD option 146
  resource control 144
  using data spaces 143
  with XRF 148
VSO DEDB areas
  block-level sharing of 138
  defining
    CHANGE.DBDS 135
    INIT.DBDS 135
  virtual storage
    coupling facility cache structure 135
    data space 135
X
XDFLD statement
  description 196
  in secondary indexing 205
  restrictions in use 294
  specifying sparse indexing 198
XML
  decomposed storage
    overview 238
  intact storage
    overview 238
  overview of storing in IMS databases 238
  schema
    overview of storing XML data 238

Z
z/OS access methods
  used by HD 79
  used by HSAM 61
Printed in USA
SC18-7806-00