DB2 Version 9.1 for z/OS

Application Programming and SQL

SC18-9841-11
Note Before using this information and the product it supports, be sure to read the general information under Notices at the end of this information.
Twelfth edition (June 2013) This edition applies to DB2 Version 9.1 for z/OS (DB2 V9.1 for z/OS, product number 5635-DB2), DB2 9 for z/OS Value Unit Edition (product number 5697-P12), and to any subsequent releases until otherwise indicated in new editions. Make sure you are using the correct edition for the level of the product. Specific changes are indicated by a vertical bar to the left of a change. A vertical bar to the left of a figure caption indicates that the figure has changed. Editorial changes that have no technical significance are not noted. Copyright IBM Corporation 1983, 2013. US Government Users Restricted Rights Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
About this information
   Who should read this information
   DB2 Utilities Suite
   Terminology and citations
   Accessibility features for DB2 Version 9.1 for z/OS
   How to send your comments
   How to read syntax diagrams
Make any necessary program changes for possibly different values for RETURNED_SQLSTATE and DB2_RETURNED_SQLCODE
SQLSTATE and SQLCODE SQL variables after a GET DIAGNOSTICS statement
Coding multiple SQL statements in a handler body
Unhandled warnings
Change your programs to handle any changed messages from SQL procedures
Enhanced data type checking for zero-length characters
Adding a column generates a new table space version
You cannot add a column and issue SELECT, INSERT, UPDATE, DELETE, or MERGE statements in the same commit scope
CAST FROM clause of CREATE FUNCTION statement for SQL functions is no longer supported
GRAPHIC and NOGRAPHIC SQL processing options are deprecated
Specifying ALTER DATABASE STOGROUP for work file databases
DB2 enforces restrictions about where an INTO clause can be specified
Precompilation for unsupported compilers
Changes for INSTEAD OF triggers
Qualify user-defined function names
SQLCODE changes
SQL reserved words
Determining the value of any SQL processing options that affect the design of your program
Determining the binding method
Changes that invalidate plans or packages
Determining the value of any bind options that affect the design of your program
Programming applications for performance
Designing your application for recovery
Unit of work in TSO
Unit of work in CICS
Planning for program recovery in IMS programs
Undoing selected changes within a unit of work by using savepoints
Planning for recovery of table spaces that are not logged
Designing your application to access distributed data
Remote servers and distributed data
Advantages of DRDA access
Preparing for coordinated updates to two or more data sources
Forcing restricted system rules in your program
Creating a feed in IBM Mashup Center with data from a DB2 for z/OS server
Sample RRSAF scenarios
Program examples for RRSAF
Controlling the CICS attachment facility from an application
Detecting whether the CICS attachment facility is operational
Improving thread reuse in CICS applications
Equivalent SQL and assembler data types
SQL statements in assembler programs
Delimiters in SQL statements in assembler programs
Macros for assembler applications
Programming examples in assembler
Moving stored procedures to a WLM-established environment
Creating a native SQL procedure
Migrating an external SQL procedure to a native SQL procedure
Changing an existing version of a native SQL procedure
Regenerating an existing version of a native SQL procedure
Removing an existing version of a native SQL procedure
Creating an external SQL procedure
Creating an external stored procedure
Creating multiple versions of external procedures and external SQL procedures
Saving storage when manipulating LOBs by using LOB locators
Deferring evaluation of a LOB expression to improve performance
LOB file reference variables
Referencing a sequence object
Retrieving thousands of rows
Determining when a row was changed
Checking whether an XML column contains a certain value
Accessing DB2 data that is not in a table
Ensuring that queries perform sufficiently
Items to include in a batch DL/I program
Accessing distributed data by using explicit CONNECT statements
Specifying a location alias name for multiple sites
Releasing connections
Transmitting mixed data
Identifying the server at run time
SQL limitations at dissimilar servers
Support for executing long SQL statements in a distributed environment
Distributed queries against ASCII or Unicode tables
Restrictions when using scrollable cursors to access distributed data
Restrictions when using rowset-positioned cursors to access distributed data
WebSphere MQ with DB2
WebSphere MQ messages
DB2 MQ functions and DB2 MQ XML stored procedures
Generating XML documents from existing tables and sending them to an MQ message queue
Shredding XML documents from an MQ message queue
DB2 MQ tables
Converting applications to use the MQI functions
Basic messaging with WebSphere MQ
Sending messages with WebSphere MQ
Retrieving messages with WebSphere MQ
Application to application connectivity with WebSphere MQ
Asynchronous messaging in DB2 for z/OS
Tailoring DB2-supplied JCL procedures for preparing CICS programs
DB2I primary option menu
DB2I panels that are used for program preparation
DB2 Program Preparation panel
DB2I Defaults Panel 1
DB2I Defaults Panel 2
Precompile panel
Bind Package panel
Bind Plan panel
Defaults for Bind Package and Defaults for Rebind Package panels
Defaults for Bind Plan and Defaults for Rebind Plan panels
System Connection Types panel
Panels for entering lists of values
Program Preparation: Compile, Link, and Run panel
DB2I panels that are used to rebind and free plans and packages
Bind/Rebind/Free Selection panel
Rebind Package panel
Rebind Trigger Package panel
Rebind Plan panel
Free Package panel
Free Plan panel
Chapter 19. Testing and debugging an application program on DB2 for z/OS
Designing a test data structure
Analyzing application data needs
Authorization for test tables and applications
Example SQL statements to create a comprehensive test structure
Populating the test tables with data
Methods for testing SQL statements
Executing SQL by using SPUFI
SPUFI
Content of a SPUFI input data set
The SPUFI panel
Changing SPUFI defaults
Setting the SQL terminator character in a SPUFI input data set
Controlling toleration of warnings in SPUFI
Output from SPUFI
Testing an external user-defined function
Testing a user-defined function by using the Debug Tool for z/OS
Testing a user-defined function by routing the debugging messages to SYSPRINT
Testing a user-defined function by using driver applications
Testing a user-defined function by using SQL INSERT statements
Debugging stored procedures
Debugging stored procedures with the Debug Tool and IBM VisualAge COBOL
Debugging a C language stored procedure with the Debug Tool and C/C++ Productivity Tools
Debugging stored procedures by using the Unified Debugger
Debugging stored procedures with the Debug Tool for z/OS
Recording stored procedure debugging messages in a file
Driver applications for debugging procedures
DB2 tables that contain debugging information
Debugging an application program
Locating the problem in an application
Techniques for debugging programs in TSO
Techniques for debugging programs in IMS
Techniques for debugging programs in CICS
Finding a violated referential or check constraint
Information resources for DB2 for z/OS and related products
Notices
When referring to a DB2 product other than DB2 for z/OS, this information uses the product's full name to avoid ambiguity. The following terms are used as indicated:

DB2
   Represents either the DB2 licensed program or a particular DB2 subsystem.
OMEGAMON
   Refers to any of the following products:
   v IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS
   v IBM Tivoli OMEGAMON XE for DB2 Performance Monitor on z/OS
   v IBM DB2 Performance Expert for Multiplatforms and Workgroups
   v IBM DB2 Buffer Pool Analyzer for z/OS
C, C++, and C language
   Represent the C or C++ programming language.
CICS
   Represents CICS Transaction Server for z/OS.
IMS
   Represents the IMS Database Manager or IMS Transaction Manager.
MVS
   Represents the MVS element of the z/OS operating system, which is equivalent to the Base Control Program (BCP) component of the z/OS operating system.
RACF
   Represents the functions that are provided by the RACF component of the z/OS Security Server.
Accessibility features
The following list includes the major accessibility features in z/OS products, including DB2 Version 9.1 for z/OS. These features support:
v Keyboard-only operation.
v Interfaces that are commonly used by screen readers and screen magnifiers.
v Customization of display attributes such as color, contrast, and font size.

Tip: The Information Management Software for z/OS Solutions Information Center (which includes information for DB2 Version 9.1 for z/OS) and its related publications are accessibility-enabled for the IBM Home Page Reader. You can operate all features using the keyboard instead of the mouse.
Keyboard navigation
You can access DB2 Version 9.1 for z/OS ISPF panel functions by using a keyboard or keyboard shortcut keys. For information about navigating the DB2 Version 9.1 for z/OS ISPF panels using TSO/E or ISPF, refer to the z/OS TSO/E Primer, the z/OS TSO/E User's Guide, and the z/OS ISPF User's Guide. These guides describe how to navigate each interface, including the use of keyboard shortcuts or function keys (PF keys). Each guide includes the default settings for the PF keys and explains how to modify their functions.
If an optional item appears above the main path, that item has no effect on the execution of the statement and is used only for readability.
[Syntax diagram: optional_item shown above the main path of required_item]
v If you can choose from two or more items, they appear vertically, in a stack. If you must choose one of the items, one item of the stack appears on the main path.
[Syntax diagram: required_item followed by a stack of required_choice1 and required_choice2, with one choice on the main path]
If choosing one of the items is optional, the entire stack appears below the main path.
[Syntax diagram: required_item with optional_choice1 and optional_choice2 stacked below the main path]
If one of the items is the default, it appears above the main path and the remaining choices are shown below.
[Syntax diagram: required_item with default_choice above the main path and the optional_choice items below it]
v An arrow returning to the left, above the main line, indicates an item that can be repeated.
[Syntax diagram: required_item followed by repeatable_item with a repeat arrow returning to the left above the main line]
If the repeat arrow contains a comma, you must separate repeated items with a comma.
[Syntax diagram: required_item followed by repeatable_item with a repeat arrow that contains a comma]
A repeat arrow above a stack indicates that you can repeat the items in the stack.
v Sometimes a diagram must be split into fragments. The syntax fragment is shown separately from the main syntax diagram, but the contents of the fragment should be read as if they are on the main path of the diagram.
[Syntax diagram: required_item followed by a reference to fragment-name; the fragment-name definition shows required_item followed by optional_name]
v With the exception of XPath keywords, keywords appear in uppercase (for example, FROM). Keywords must be spelled exactly as shown. XPath keywords are defined as lowercase names, and must be spelled exactly as shown. Variables appear in all lowercase letters (for example, column-name). They represent user-supplied names or values.
v If punctuation marks, parentheses, arithmetic operators, or other such symbols are shown, you must enter them as part of the syntax.
Then make sure that your program implements the appropriate recommendations so that it promotes concurrency, can handle recovery and restart situations, and can efficiently access distributed data.

Related tasks:
   Programming applications for performance (DB2 Performance)
   Programming for concurrency (DB2 Performance)
   Writing efficient SQL queries (DB2 Performance)
   Improving performance for applications that access distributed data (DB2 Performance)
Related reference:
   BIND and REBIND options (DB2 Commands)

Plan for the following changes in Version 9.1 that might affect your migration.
The default value for the ISOLATION bind option is changed from RR to CS. This change applies to the BIND PLAN and the remote BIND PACKAGE subcommands. For the BIND PACKAGE subcommand, the default remains the plan value. The default change does not apply to implicitly built CTs (for example, DISTSERV CTs).

Although you can still specify DBPROTOCOL(PRIVATE) for the DBPROTOCOL bind option, DB2 issues a new warning message, DSNT226I.

All BIND statements for plans and packages that are bound during the installation or migration process specify the ISOLATION parameter explicitly, except for routines that do not fetch data. The current settings are maintained for compatibility.
Changes to XMLNAMESPACES
In DB2 Version 8, in the XMLNAMESPACES function, if the XML-namespace-uri argument has a value of http://www.w3.org/XML/1998/namespace or http://www.w3.org/2000/xmlns/, DB2 does not issue an error. Starting in Version 9 conversion mode, DB2 issues an error.
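As an illustration, the following sketch declares an ordinary namespace, which remains valid; the ORDERS table, its ORDER_ID column, and the http://example.com/po URI are hypothetical names used only for this example:

   SELECT XMLELEMENT(NAME "po:order",
                     XMLNAMESPACES('http://example.com/po' AS "po"),
                     ORDER_ID)
     FROM ORDERS;
   -- Starting in Version 9 conversion mode, specifying
   -- http://www.w3.org/XML/1998/namespace or http://www.w3.org/2000/xmlns/
   -- as the namespace URI in XMLNAMESPACES causes an error.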
Availability of LOB or XML values in JDBC or SQLJ applications with progressive streaming
In previous releases, if a JDBC or SQLJ application retrieves LOB data into an application variable, the contents of the application variable are still available after the cursor is moved or closed. Version 9 supports streaming. The IBM Data Server Driver for JDBC and SQLJ uses progressive streaming as the default for retrieval of LOB or XML values. When progressive streaming is in effect, the contents of LOB or XML variables are no longer available after the cursor is moved or closed.
Adjust applications that depend on error information that is returned from DB2-supplied stored procedures
Adjust any applications that call one of the following stored procedures and then check and process the specific SQLCODE or SQLSTATE that is returned by the CALL statement:
v SQLJ.INSTALL_JAR
v SQLJ.REMOVE_JAR
v SQLJ.REPLACE_JAR
v SQLJ.DB2_INSTALL_JAR
v SQLJ.DB2_REPLACE_JAR
v SQLJ.DB2_REMOVE_JAR
v SQLJ.DB2_UPDATEJARINFO
Starting in Version 9, these stored procedures return more meaningful SQLCODEs and SQLSTATEs than they return in previous releases of DB2. The other input and output parameters of these stored procedures have not changed. For example, the following application needs to change because -20201 is no longer the SQLCODE that is returned. Successful execution (SQLCODE 0) is not affected.
CALL SQLJ.REMOVE_JAR(...)
IF (SQLCODE = -20201) THEN
   DO;
      ...
   END;
Changed behavior for ODBC data conversion for the SQL_BINARY type
In releases before Version 9.1, when ODBC applications used the SQL_BINARY type to bind parameter markers, ODBC mapped the SQL_BINARY type to CHAR FOR BIT DATA. In Version 9.1, when the DB2 server is in Version 9.1 new-function mode, ODBC maps SQL_BINARY to BINARY. Because CHAR FOR BIT DATA fields are padded with blanks, and BINARY fields are not padded, applications might experience differences in behavior. For example, in releases before Version 9.1, if the target CHAR FOR BIT DATA column was shorter than the SQL_BINARY input host variable, and the truncated values were blanks, DB2 did not generate an error. In Version 9.1, if the target BINARY column is shorter than the SQL_BINARY input host variable, and the truncated values are hexadecimal zeroes, DB2 generates an error.
Changed behavior of the INSERT statement with the OVERRIDING USER VALUE clause
When the INSERT statement is specified with the OVERRIDING USER VALUE clause, the value for the insert operation is ignored for columns that are defined with the GENERATED BY DEFAULT or GENERATED ALWAYS attribute.
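The following sketch illustrates the behavior with a hypothetical table that has a GENERATED ALWAYS identity column; the table, column, and inserted values are assumptions chosen for this example:

   CREATE TABLE MYSCHEMA.ORDERS
     (ORDER_ID INTEGER GENERATED ALWAYS AS IDENTITY,
      ITEM     VARCHAR(20));

   INSERT INTO MYSCHEMA.ORDERS (ORDER_ID, ITEM)
     OVERRIDING USER VALUE
     VALUES (999, 'WIDGET');
   -- The supplied value 999 is ignored for ORDER_ID because the column is
   -- defined as GENERATED ALWAYS; DB2 generates the identity value instead.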
DB2 enforces the restrictions about where a host variable array can be specified
host-variable-array is the meta-variable for host variable arrays in syntax diagrams. host-variable-array is included only in the syntax for multi-row FETCH, multi-row INSERT, multi-row MERGE, and EXECUTE in support of a dynamic multi-row INSERT or MERGE statement. host-variable-array is not included in the syntax diagram for expression, so a host variable array cannot be used in other contexts. In previous releases, if you specified host-variable-array in an unsupported context, you received no errors. In Version 9.1, if a host variable array is referenced in an unsupported context, DB2 issues an error. For more information about where you can specify the host-variable-array variable, see Host variable arrays in an SQL statement (DB2 Application programming and SQL).
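As a sketch of a supported context, the following embedded multi-row INSERT uses host-variable arrays against the sample ACT table; the host-variable names are hypothetical and would be declared as arrays in the host language:

   EXEC SQL
     INSERT INTO DSN8910.ACT (ACTNO, ACTKWD, ACTDESC)
       VALUES (:HVA_ACTNO, :HVA_ACTKWD, :HVA_ACTDESC)   -- host-variable arrays
       FOR :NUM_ROWS ROWS;
   -- Referencing a host-variable array in any other context, for example in
   -- the predicate of a SELECT statement, receives an error in Version 9.1.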
Users of Unified Debugger-enabled client platforms need this system privilege. Users of the Version 8 SQL Debugger-enabled client platforms do not need this system privilege.
On data sharing systems, SYSPROC.DSNWZP needs to be dropped and re-created as part of migrating the first member, but not for subsequent members. DSNTIJSG grants execute access on DSNWZP to PUBLIC. If necessary, change PUBLIC to a specific authorization ID.
DB2 returns all DSNWZP output in the same format as DB2 parameters
In previous releases, DSNWZP returned the current setting of several system parameters in a format other than the one used by the system parameter macros. For example, DSN6SPRM expected the setting for EDMPOOL in kilobytes, and DSNWZP returned it in bytes. In Version 9.1, DB2 returns all DSNWZP output in the same format as DB2 parameters. Modify programs that call DSNWZP if they compensate for the format differences.
DB2 enforces the restriction that row IDs are not compatible with character strings when they are used with a set operator
In previous releases, DB2 did not always enforce the restriction that row IDs are not compatible with character strings. In Version 9.1, DB2 enforces the restriction that row IDs are not compatible with string types when they are used with a set operator (UNION, INTERSECT, or EXCEPT).
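For example, a statement like the following sketch, in which T1 has a ROWID column and T2 has a CHAR column (both tables and columns are hypothetical), now receives an error:

   SELECT ROW_ID_COL FROM T1
   UNION
   SELECT CHAR_COL   FROM T2;
   -- Starting in Version 9.1, this statement fails because a row ID is not
   -- compatible with a character string when used with a set operator.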
Database privileges on the DSNDB04 database now give you those privileges on all implicitly created databases
Because database privileges on the DSNDB04 database now give you those privileges on all implicitly created databases, careful consideration is needed before you grant database privileges on DSNDB04. For example, in Version 9.1, if you have the STOPDB privilege on DSNDB04, you also have the STOPDB privilege on all implicitly created databases.
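For example, a grant like the following sketch (the authorization ID is hypothetical) now also covers every implicitly created database:

   GRANT STOPDB ON DATABASE DSNDB04 TO USER1;
   -- In Version 9.1, USER1 now holds the STOPDB privilege on all implicitly
   -- created databases as well, not only on DSNDB04.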
Implicitly created objects that are associated with LOB columns require additional privileges
In releases before Version 9.1, implicitly created objects that are associated with LOB columns do not require CREATETAB and CREATETS privileges on the database of the base table. Those implicitly created objects also do not require the USE privilege on the buffer pool and storage group that is used by the LOB objects. In Version 9.1, these privileges are required.
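The following sketch shows the kinds of grants that might now be needed before such objects can be created implicitly; the database, buffer pool, storage group, and authorization ID names are assumptions for illustration:

   GRANT CREATETAB, CREATETS ON DATABASE MYDB TO USER1;
   GRANT USE OF BUFFERPOOL BP1 TO USER1;
   GRANT USE OF STOGROUP MYSTOGRP TO USER1;
   -- In Version 9.1, these privileges are checked when DB2 implicitly creates
   -- the objects that are associated with a LOB column of a base table in MYDB.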
If you issue a DROP INDEX statement to drop an auxiliary index from a table space that was implicitly created, the DROP INDEX statement fails and DB2 issues SQLCODE -20355, SQLSTATE 429BW, and reason code 2.
DB2 returns an error when a LOB value is specified for an argument to a stored procedure and the argument value is longer than the target parameter and the excess is not trailing blanks
In releases before Version 9.1, DB2 did not return an error when a LOB value was specified for an argument to a stored procedure and the argument value was longer than the target parameter and the excess was not trailing blanks. DB2 truncated the data and the procedure executed. In Version 9.1, DB2 returns an error.
'W' is no longer recognized as a valid format element of the VARCHAR_FORMAT function format string
DB2 Version 9.1 no longer recognizes 'W' as a valid format element of the VARCHAR_FORMAT function format string. Version 8 never recognized 'W' as a valid format element. Use WW instead. Drop and re-create existing views and materialized queries that are defined with Version 9.1 and that use the 'W' format element with the VARCHAR_FORMAT function. Rebind existing bound statements that are bound with Version 9.1 and that use the 'W' format element with the VARCHAR_FORMAT function.
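A sketch of the replacement usage; the table and column names are hypothetical, and the format string shows only the WW element:

   SELECT VARCHAR_FORMAT(TS_COL, 'WW')
     FROM MYTABLE;
   -- Specifying 'W' as a format element in the format string now causes an
   -- error; WW is the supported element.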
Leading or trailing blanks from the VARCHAR_FORMAT function format string are no longer removed
Leading or trailing blanks from the format string for the VARCHAR_FORMAT function are no longer removed. Existing view definitions are recalculated as part of Version 9.1, so the new rules take effect. You can continue to use existing SQL statements that use a materialized query table that references the VARCHAR_FORMAT function, but they use the old rules and remove leading and trailing blanks. Existing references to the VARCHAR_FORMAT function in bound statements get the new behavior only after they are bound or rebound in Version 9.1.
Changes to the maximum size of the row that is used by sort to evaluate aggregate functions
The maximum limit of a row (data and key columns) that is used by sort to evaluate MULTIPLE DISTINCT column functions is decreased from 32686 to 32600. This change should not impact applications that work in Version 8.
DB2 enforces restriction on specifying a CAST FROM clause for some forms of CREATE FUNCTION statements
The CAST FROM clause is included only in the syntax diagram for the CREATE FUNCTION statement for an external scalar function. The CAST FROM clause is not included in the syntax diagrams for the other variations of CREATE FUNCTION (external table function, sourced function, or SQL function); the clause cannot be used for these other variations. In previous releases, if you specified a CAST FROM clause in an unsupported context, you received no errors. Starting in Version 9, if a CAST FROM clause is specified in an unsupported context, DB2 issues an error.
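As a sketch of the supported context, CAST FROM remains valid on CREATE FUNCTION for an external scalar function; all names here are hypothetical:

   CREATE FUNCTION MYSCHEMA.CTR_DIST (FLOAT, FLOAT)
     RETURNS DECIMAL(9,2) CAST FROM FLOAT
     EXTERNAL NAME 'CTRDIST'
     LANGUAGE C
     PARAMETER STYLE SQL
     NO SQL
     DETERMINISTIC;
   -- Starting in Version 9, specifying CAST FROM on a CREATE FUNCTION
   -- statement for an external table function, a sourced function, or an SQL
   -- function causes an error.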
DB2 enforces restrictions on specifying the AS LOCATOR clause and TABLE LIKE clause
The AS LOCATOR clause for LOBs is included in the syntax diagram for the CREATE FUNCTION statement for an SQL function. This clause is not supported in other contexts when identifying an existing SQL function such as in an ALTER, COMMENT, DROP, GRANT, or REVOKE statement. In previous releases, if you specified an AS LOCATOR clause for LOBs in an unsupported context, you might not have received an error. Starting in Version 9, if an AS LOCATOR clause for LOBs is specified in an unsupported context, DB2 issues an error.

The TABLE LIKE clause for a trigger transition table is included only in the syntax diagram for the CREATE FUNCTION statement for an external scalar function, external table function, or sourced function. This clause is not supported for SQL functions or in other contexts when identifying an existing function such as in an ALTER, COMMENT, DROP, GRANT, or REVOKE statement, or in the SOURCE clause of a CREATE FUNCTION statement. In previous releases, if you specified a TABLE LIKE clause for a trigger transition table in an unsupported context, you might not have received an error. Starting in Version 9, if a TABLE LIKE clause for a trigger transition table is specified in an unsupported context, DB2 issues an error.
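The following sketch shows a supported use of TABLE LIKE ... AS LOCATOR on a CREATE FUNCTION statement for an external scalar function; the function name and external name are hypothetical:

   CREATE FUNCTION MYSCHEMA.TALLY_CHANGES (TABLE LIKE DSN8910.EMP AS LOCATOR)
     RETURNS INTEGER
     EXTERNAL NAME 'TALLYCHG'
     LANGUAGE C
     PARAMETER STYLE SQL;
   -- Starting in Version 9, specifying TABLE LIKE or AS LOCATOR when
   -- identifying an existing function (for example, in a DROP, GRANT, or
   -- REVOKE statement, or in a SOURCE clause) causes an error.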
DB2 enforces restriction on the CCSID parameter for the DECRYPT_BIT and DECRYPT_BINARY functions
The CCSID parameter is not supported by the DECRYPT_BIT and DECRYPT_BINARY built-in functions. In previous releases, if you specified an argument for the CCSID parameter for these functions, you received no errors. Starting in Version 9, if an argument is specified for the CCSID parameter in an unsupported context, DB2 issues an error.
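A sketch of the supported form; the table, column, and password values are hypothetical:

   SELECT DECRYPT_BIT(SSN_ENC, 'Thomas4me')
     FROM MYSCHEMA.EMP_PRIVATE;
   -- Starting in Version 9, adding a CCSID argument to DECRYPT_BIT or
   -- DECRYPT_BINARY causes an error.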
Make any necessary program changes for possibly different values for RETURNED_SQLSTATE and DB2_RETURNED_SQLCODE
Starting in Version 9, when an SQL statement other than GET DIAGNOSTICS or compound-statement is processed, the current diagnostics area is cleared before DB2 processes the SQL statement. Clearing of the diagnostics area can result in different values being returned for RETURNED_SQLSTATE and DB2_RETURNED_SQLCODE for a GET DIAGNOSTICS statement than what would be returned if the GET DIAGNOSTICS statement were issued from within an external SQL procedure. Additionally, the values that are returned for the SQLSTATE and SQLCODE SQL variables might differ from the values that would have been returned from an external SQL procedure. (External SQL procedures were previously called SQL procedures in Version 8.)
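A minimal sketch of a native SQL procedure, with hypothetical names, that captures diagnostics immediately after the statement of interest, before the diagnostics area is cleared by the next SQL statement:

   CREATE PROCEDURE MYSCHEMA.RAISE_SALARY
     (IN P_EMPNO CHAR(6), IN P_RATE DECIMAL(5,2))
     LANGUAGE SQL
   BEGIN
     DECLARE V_SQLSTATE CHAR(5);
     DECLARE V_SQLCODE  INTEGER;
     UPDATE DSN8910.EMP
       SET SALARY = SALARY * P_RATE
       WHERE EMPNO = P_EMPNO;
     -- Issue GET DIAGNOSTICS immediately after the UPDATE; any later SQL
     -- statement clears the current diagnostics area first.
     GET DIAGNOSTICS CONDITION 1
         V_SQLSTATE = RETURNED_SQLSTATE,
         V_SQLCODE  = DB2_RETURNED_SQLCODE;
   END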
Unhandled warnings
Starting in Version 9, when a native SQL procedure completes processing with an unhandled warning, DB2 returns the unhandled warning to the calling application. The behavior of an external SQL procedure is unchanged from releases prior to Version 9. When such a procedure completes processing with an unhandled warning, DB2 does not return the unhandled warning to the calling application.
Change your programs to handle any changed messages from SQL procedures
Starting in Version 9, DB2 issues different messages for the new native SQL procedures than it does for external SQL procedures. (External SQL procedures were previously called SQL procedures in Version 8.) For external SQL procedures, DB2 continues to issue DSNHxxxx messages. For native SQL procedures, DB2 issues SQL return codes. The relationship between these messages is shown in the following table:
Table 1. Relationship between DSNHxxxx messages that are issued for external SQL procedures and SQLCODEs that are issued for native SQL procedures

DSNHxxxx message (1)     SQLCODE (2)
DSNH051I                 -051
DSNH385I                 +385
DSNH590I                 -590
DSNH4408I                -408
DSNH4777I                n/a
DSNH4778I                -778
DSNH4779I                -779
DSNH4780I                -780
DSNH4781I                -781
DSNH4782I                -782
DSNH4785I                -785
DSNH4787I                -787
Note:
1. These messages are used for external SQL procedures, which can be defined by specifying EXTERNAL or FENCED in Version 9.1.
2. These SQLCODEs are used for native SQL procedures in Version 9.1.
You cannot add a column and issue SELECT, INSERT, UPDATE, DELETE, or MERGE statements in the same commit scope
You cannot have a version-generating ALTER TABLE ADD COLUMN statement and SELECT, INSERT, UPDATE, DELETE, or MERGE statements in the same commit scope. If a version-generating ALTER TABLE ADD COLUMN statement follows SELECT, INSERT, UPDATE, DELETE, or MERGE statements in the same commit scope, SQLCODE -910 is issued. SQLCODE -910 is also issued if SELECT, INSERT, UPDATE, DELETE, or MERGE statements follow a version-generating ALTER TABLE ADD COLUMN statement in the same commit scope.
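A sketch with a hypothetical table, showing COMMIT statements that keep the version-generating ALTER in its own commit scope:

   SELECT COUNT(*) FROM MYSCHEMA.MYTAB;
   COMMIT;                              -- end the commit scope that contains the SELECT
   ALTER TABLE MYSCHEMA.MYTAB ADD BONUS DECIMAL(9,2);
   COMMIT;                              -- commit the ALTER before referencing the table again
   SELECT BONUS FROM MYSCHEMA.MYTAB;
   -- Without the intervening COMMIT statements, the ALTER or the following
   -- SELECT receives SQLCODE -910 if the ALTER generates a new table space version.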
CAST FROM clause of CREATE FUNCTION statement for SQL functions is no longer supported
The CAST FROM clause of the CREATE FUNCTION statement for SQL functions is no longer supported. Starting in Version 9, if you issue a CREATE FUNCTION statement for an SQL function with a CAST FROM clause, DB2 issues an error.
dependencies on these unsupported compilers. You can use this version of the precompiler with the following unsupported compilers:
v OS/VS COBOL V1.2.4
v OS PL/I 1.5 (PL/I Opt. V1.5.1)
v VS/COBOL II V1R4
v OS PL/I 2.3
The load module for this precompiler is DSNHPC7. This precompiler is meant only to ease the transition from unsupported compilers to supported compilers. This precompiler has the following restrictions:
v There is no corresponding DB2 coprocessor function to match this precompiler.
v The precompiler does not support SQL procedures.
v Only COBOL and PL/I are supported.
v The SQL flagger is not supported.
v The precompiler produces Version 7 DBRMs, and does not support any capability that is newer than Version 7.
SQLCODE changes
Some SQLCODE numbers and message text might have changed in DB2 Version 9.1. Also, the conditions under which some SQLCODEs are issued might have changed.
Determining the value of any SQL processing options that affect the design of your program
When you process SQL statements in an application program, you can specify options that describe the basic characteristics of the program. You can also indicate how you want the output listings to look. Although most of these options do not affect how you design or code the program, a few options do.
Procedure
To determine the value of any SQL processing options that affect the design of your program:
Review the list of SQL processing options and decide the values for any options that affect the way that you write your program. For example, you need to know if you are using NOFOR or STDSQL(YES) before you begin coding.
Related concepts:
   DB2 program preparation overview on page 1002
Related reference:
   Descriptions of SQL processing options on page 959
Procedure
Consider the advantages and disadvantages of each binding method, which are described in the following table.
Table 2. Advantages and disadvantages of each binding method

Binding method: Bind all of your DBRMs into a single application plan.
Advantages: This method has fewer steps and is appropriate in some cases. This method is suitable for small applications that are unlikely to change or that require all resources to be acquired when the plan is allocated, rather than when your program first uses them.
Disadvantages: Maintenance is difficult. This method has the disadvantage that a change to one DBRM requires rebinding the entire plan, even though most DBRMs are unchanged.
Binding method: Bind all of your DBRMs into separate packages. Then bind all those packages into a single application plan.
Advantages:
v Maintenance is easier. When you use packages, you do not need to bind the entire plan again when you change one SQL statement. You need to bind only the package that is associated with the changed SQL statement.
v You can incrementally develop your program without rebinding the plan. A collection is a group of associated packages. Binding packages into package collections enables you to add packages to an existing application plan without having to bind the plan again. If you include a collection name in the package list when you bind a plan, any package in the collection becomes available to the plan. The collection can be empty when you first bind the plan. Later, you can add packages to the collection and drop or replace existing packages without binding the plan again.
v You can maintain several versions of a package within the same plan. Maintaining several versions of a plan without using packages requires a separate plan for each version, and therefore, separate plan names and RUN commands. Isolating separate versions of a program into packages requires only one plan and helps to simplify program migration and fallback. For example, you can maintain separate development, test, and production levels of a program by binding each level of the program as a separate version of a package, all within a single plan. You cannot bind or rebind a package or a plan while it is running. However, you can bind a different version of a package that is running.
v You can use different bind options for different DBRMs. The options of the BIND PLAN command apply to all DBRMs that are bound directly to the plan. The options of the BIND PACKAGE command apply to only the single DBRM that is bound to that package. The package options need not all be the same as the plan options, and they need not be the same as the options for other packages that are used by the same plan.
v You can use different name qualifiers for different groups of SQL statements. You can use a bind option to name a qualifier for the unqualified object names in SQL statements in a plan or package. By using packages, you can use different qualifiers for SQL statements in different parts of your application. By rebinding, you can redirect your SQL statements, for example, from a test table to a production table.
v Unused packages are not locked. Packages and plans are locked when you bind or run them. Packages that run under a plan are not locked until the plan uses them. If you run a plan and some packages in the package list never run, those packages are never locked.
Disadvantages: Too many packages might be difficult to track. Input to binding a package is a single DBRM only. A one-to-one correspondence between programs and packages might easily enable you to keep track of each. However, your application might consist of too many packages to track easily.
Binding method: Bind some of your DBRMs into separate packages. Then bind those packages and any other DBRMs for that program into an application plan.
Advantages: This method helps you migrate to using packages. Binding DBRMs directly to the plan and specifying a package list is a suitable method for maintaining existing applications. You can add a package list when you rebind an existing plan. To migrate gradually to the use of packages, bind DBRMs as packages when you need to make changes.
Disadvantages: Depending on your application design, you might not gain some of the advantages of using packages.
Related concepts:
   DB2 program preparation overview on page 1002
Related tasks:
   Binding an application on page 969
Table 3. Changes that require plans or packages to be rebound

Change made: Change bind options
Required action: Rebind the package or plan by using the REBIND command and specifying the new value for the bind option. If the option that you want to change is not available for the REBIND command, issue the BIND command with ACTION(REPLACE) instead.

Change made: Change both statements in the host language and SQL statements
Required action: Precompile, compile, and link the application program. Issue the BIND command with ACTION(REPLACE) for the package or plan.
Change made: Drop a table, index, or other object, and recreate the object
Required action: If a table with a trigger is dropped, recreate the trigger if you recreate the table. Otherwise, no change is required. DB2 attempts to automatically rebind the plan or package the next time it is run. No action is required if the package or plan becomes invalid; DB2 automatically rebinds the plan or package the next time that it is allocated. The package might become invalid according to the following criteria:
v If the package is not appended to any running plan, the package becomes invalid.
v If the package is appended to a running plan, and the drop occurs within that plan, the package becomes invalid. However, if the package is appended to a running plan, and the drop occurs outside of that plan, the object is not dropped, and the package does not become invalid.
In all cases, the plan does not become invalid until it has a DBRM that references the dropped object.

Change made: Revoke an authorization to use an object
Required action: No action is required. DB2 attempts to automatically rebind the plan or package the next time it is run. Automatic rebind fails if authorization is still not available. In this case, you must rebind the package or plan by using the REBIND command.

No action is required. DB2 automatically rebinds invalidated plans and packages. If automatic rebind is unsuccessful, modify, recompile, and rebind the affected applications.

Trigger packages in the database are invalidated. Rebind all trigger packages in the database.

No action is required. DB2 automatically rebinds invalidated packages. If automatic rebind is unsuccessful, modify, recompile, and rebind the affected applications.
Note: 1. In the case of changing the bind options, the change is not actually made until you perform the required action.
Related concepts:
   Automatic rebinding on page 999
   Trigger packages on page 480
Related tasks:
   Checking for invalid plans and packages (DB2 Performance)
   Rebinding an application on page 991
Related reference:
   plans and (Managing Security)
Related information:
   00E30305 (DB2 Codes)
Determining the value of any bind options that affect the design of your program
Several options of the BIND PACKAGE and BIND PLAN commands can affect your program design. For example, you can use a bind option to ensure that a package or plan can run only from a particular CICS connection or IMS region. Your code does not need to enforce this situation.
Procedure
To determine the value of any bind options that affect the design of your program:
Review the list of bind options and decide the values for any options that affect the way that you write your program. For example, you should decide the values of the ACQUIRE and RELEASE options before you write your program. These options determine when your application acquires and releases locks on the objects it uses.
Related reference:
   BIND and REBIND options (DB2 Commands)
Procedure
To improve the performance of application programs that access data in DB2, use the following approaches when writing and preparing your programs:
v Program your applications for concurrency. The goal is to program and prepare applications in a way that:
   - Protects the integrity of the data that is being read or updated from being changed by other applications.
   - Minimizes the length of time that other access to the data is prevented.
   For more information about DB2 concurrency and recommendations for improving concurrency in your application programs, see the following topics:
   Concurrency recommendations for application designers (Introduction to DB2 for z/OS)
   Concurrency and locks (DB2 Performance)
   Improving concurrency (DB2 Performance)
   Improving concurrency in data sharing environments (DB2 Data Sharing Planning and Administration)
v Write SQL statements that access data efficiently. The predicates, subqueries, and other structures in SQL statements affect the access paths that DB2 uses to access the data. For information about how to write SQL statements that access data efficiently, see the following topics:
   Ways to improve query performance (Introduction to DB2 for z/OS)
   Writing efficient SQL queries (DB2 Performance)
v Use EXPLAIN or SQL optimization tools to analyze the access paths that DB2 chooses to process your SQL statements. By analyzing the access path that DB2 uses to access the data for an SQL statement, you can discover potential problems. You can use this information to modify your statement to perform better. For information about how you can use EXPLAIN tables, and SQL optimization tools, to analyze the access paths for your SQL statements, see the following topics:
   Investigating access path problems (DB2 Performance)
   Using EXPLAIN to understand the access path (Introduction to DB2 for z/OS)
   Investigating SQL performance by using EXPLAIN (DB2 Performance)
   Interpreting data access by using EXPLAIN (DB2 Performance)
   EXPLAIN tables (DB2 Performance)
   EXPLAIN (DB2 SQL)
   Tuning SQL with Optim Query Tuner, Part 1: Understanding access paths
   Generating visual representations of access plans
v Consider performance in the design of applications that access distributed data. The goal is to reduce the amount of network traffic that is required to access the distributed data, and to manage the use of system resources such as distributed database access threads and connections. For information about improving the performance of applications that access distributed data, see the following topics:
   Ways to reduce network traffic (Introduction to DB2 for z/OS)
   Managing DB2 threads (DB2 Performance)
   Improving performance for applications that access distributed data (DB2 Performance)
   Improving performance for SQL statements in distributed applications (DB2 Performance)
v Use stored procedures to improve performance, and consider performance when creating stored procedures. For information about stored procedures and DB2 performance, see the following topics:
   Implementing DB2 stored procedures (DB2 Administration Guide)
   Improving the performance of stored procedures and user-defined functions (DB2 Performance)
Related concepts:
   Query and application performance analysis (Introduction to DB2 for z/OS)
   Programming for the instrumentation facility interface (IFI) (DB2 Performance)
Related tasks:
   Chapter 1, Planning for and designing DB2 applications, on page 1
   Chapter 3, Coding SQL statements in application programs: General information, on page 123
Procedure
To design your application for recovery:
1. Put any changes that logically need to be made at the same time in the same unit of work. This action ensures that in case DB2 terminates abnormally or your application fails, the data is left in a consistent state.
   A unit of work is a logically distinct procedure that contains steps that change the data. If all the steps complete successfully, you want the data changes to become permanent. But, if any of the steps fail, you want all modified data to return to the original value before the procedure began.
   For example, suppose two employees in the sample table DSN8910.EMP exchange offices. You need to exchange their office phone numbers in the PHONENO column. You need to use two UPDATE statements to make each phone number current. Both statements, taken together, are a unit of work. You want both statements to complete successfully. For example, if only one statement is successful, you want both phone numbers rolled back to their original values before attempting another update. (A sketch of this unit of work appears after this procedure.)
2. Consider how often you should commit any changes to the data. If your program abends or the system fails, DB2 backs out all uncommitted data changes. Changed data returns to its original condition without interfering with other system activities.
   For IMS and CICS applications, if the system fails, DB2 data does not always return to a consistent state immediately. DB2 does not process indoubt data (data that is neither uncommitted nor committed) until you restart IMS or the CICS attachment facility. To ensure that DB2 and IMS are synchronized, restart both DB2 and IMS. To ensure that DB2 and CICS are synchronized, restart both DB2 and the CICS attachment facility.
3. Consider whether your application should intercept abends. If your application intercepts abends, DB2 commits work, because it is unaware that an abend has occurred. If you want DB2 to roll back work automatically when an abend occurs in your program, do not let the program or run time environment intercept the abend. If your program uses Language Environment, and you want DB2 to roll back work automatically when an abend occurs in the program, specify the run time options ABTERMENC(ABEND) and TRAP(ON).
4. For TSO applications only: Issue COMMIT statements before you connect to another DBMS.
   If the system fails at this point, DB2 cannot know whether your transaction is complete. In this case, as in the case of a failure during a one-phase commit operation for a single subsystem, you must make your own provision for maintaining data integrity.
5. For TSO applications only: Determine if you want to provide an abend exit routine in your program. If you provide this routine, it must use tracking indicators to determine if an abend occurs during DB2 processing. If an abend does occur when DB2 has control, you must allow task termination to complete. DB2 detects task termination and terminates the thread with the ABRT parameter. Do not re-run the program.
   Allowing task termination to complete is the only action that you can take for abends that are caused by the CANCEL command or by DETACH. You cannot use additional SQL statements at this point. If you attempt to execute another SQL statement from the application program or its recovery routine, unexpected errors can occur.
Related concepts:
   Unit of work (Introduction to DB2 for z/OS)
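A sketch of the office-swap unit of work from step 1; the employee numbers and phone numbers are assumptions chosen for illustration:

   UPDATE DSN8910.EMP SET PHONENO = '4502' WHERE EMPNO = '000010';
   UPDATE DSN8910.EMP SET PHONENO = '3978' WHERE EMPNO = '000020';
   COMMIT;     -- both changes become permanent together
   -- If either UPDATE fails, issue ROLLBACK instead so that neither phone
   -- number change persists.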
A unit of work is marked as complete by a commit or synchronization (sync) point, which is defined in one of the following ways:
v Implicitly at the end of a transaction, which is signaled by a CICS RETURN command at the highest logical level.
v Explicitly by CICS SYNCPOINT commands that the program issues at logically appropriate points in the transaction.
v Implicitly through a DL/I PSB termination (TERM) call or command.
v Implicitly when a batch DL/I program issues a DL/I checkpoint call. This call can occur when the batch DL/I program shares a database with CICS applications through the database sharing facility.
For example, consider a program that subtracts the quantity of items sold from an inventory file and then adds that quantity to a reorder file. When both transactions complete (and not before) and the data in the two files is consistent, the program can then issue a DL/I TERM call or a SYNCPOINT command. If one of the steps fails, you want the data to return to the value it had before the unit of work began. That is, you want it rolled back to a previous point of consistency. You can achieve this state by using the SYNCPOINT command with the ROLLBACK option.
By using a SYNCPOINT command with the ROLLBACK option, you can back out uncommitted data changes. For example, a program that updates a set of related rows sometimes encounters an error after updating several of them. The program can use the SYNCPOINT command with the ROLLBACK option to undo all of the updates without giving up control.
The SQL COMMIT and ROLLBACK statements are not valid in a CICS environment. You can coordinate DB2 with CICS functions that are used in programs, so that DB2 and non-DB2 data are consistent.
Procedure
To plan for program recovery in IMS programs:
1. For a program that processes messages as its input, decide whether to specify single-mode or multiple-mode transactions on the TRANSACT statement of the APPLCTN macro for the program.
   Single-mode
      Indicates that a commit point in DB2 occurs each time the program issues a call to retrieve a new message. Specifying single-mode can simplify recovery; if the program abends, you can restart the program from the most recent call for a new message. When IMS restarts the program, the program starts by processing the next message.
   Multiple-mode
      Indicates that a commit point occurs when the program issues a checkpoint call or when it terminates normally. Those two events are the only times during the program that IMS sends the program's output messages to their destinations. Because fewer commit points are processed in multiple-mode programs than in single-mode programs, multiple-mode programs could perform slightly better than single-mode programs. When a multiple-mode program abends, IMS can restart it only from a checkpoint call. Instead of having only the most recent message to reprocess, a program might have several messages to reprocess. The number of messages to process depends on when the program issued the last checkpoint call.
   DB2 does some processing with single- and multiple-mode programs. When a multiple-mode program issues a call to retrieve a new message, DB2 performs an authorization check and closes all open cursors in the program.
2. Decide whether to issue checkpoint calls (CHKP) and if so, how often to issue them. Each call indicates to IMS that the program has reached a sync point and establishes a place in the program from which you can restart the program.
   Consider the following factors when deciding when to use checkpoint calls:
   v How long it takes to back out and recover that unit of work. The program must issue checkpoints frequently enough to make the program easy to back out and recover.
   v How long database resources are locked in DB2 and IMS.
   v For multiple-mode programs: How you want the output messages grouped. Checkpoint calls establish how a multiple-mode program groups its output messages. Programs must issue checkpoints frequently enough to avoid building up too many output messages.
   Restriction: You cannot use SQL COMMIT and ROLLBACK statements in the DB2 DL/I batch support environment, because IMS coordinates the unit of work.
3. Issue CLOSE CURSOR statements before any checkpoint calls or GU calls to the message queue, not after.
4. After any checkpoint calls, set the value of any special registers that were reset if their values are needed after the checkpoint. A CHKP call causes IMS to sign on to DB2 again, which resets the special registers that are shown in the following table.
Table 4. Special registers that are reset by a checkpoint call

Special register        Value to which it is reset after a checkpoint call
CURRENT PACKAGESET      blanks
CURRENT SERVER          blanks
CURRENT SQLID           blanks
CURRENT DEGREE          1
5. After any commit points, reopen the cursors that you want and re-establish positioning. (The sketch that follows Table 5 illustrates steps 4 through 6.)
6. Decide whether to specify the WITH HOLD option for any cursors. This option determines whether the program retains the position of the cursor in the DB2 database after you issue IMS CHKP calls. You always lose the program database positioning in DL/I after an IMS CHKP call.
The program database positioning in DB2 is affected according to the following criteria:
v If you do not specify the WITH HOLD option for a cursor, you lose the position of that cursor.
v If you specify the WITH HOLD option for a cursor and the application is message-driven, you lose the position of that cursor.
v If you specify the WITH HOLD option for a cursor and the application is operating in DL/I batch or DL/I BMP, you retain the position of that cursor.
7. Use IMS rollback calls, ROLL and ROLB, to back out DB2 and DL/I changes to the last commit point. These calls have the following differences:
ROLL
Specifies that all changes since the last commit point are to be backed out and the program is to be terminated. IMS terminates the program with user abend code U0778 and without a storage dump. When you issue a ROLL call, the only option you supply is the call function, ROLL.
ROLB
Specifies that all changes since the last commit point are to be backed out and control is to be returned to the program so that it can continue processing. A ROLB call has the following options:
v The call function, ROLB
v The name of the I/O PCB
How ROLL and ROLB calls affect DL/I changes in a batch environment depends on the IMS system log and backout options that are specified, as shown in the following table.
Table 5. Effects of ROLL and ROLB calls on DL/I changes in a batch environment
ROLL call, with the system log on tape (any backout option) or on disk with BKO=NO:
DL/I does not back out updates, and abend U0778 occurs. DB2 backs out updates to the previous checkpoint.
ROLL call, with the system log on disk and BKO=YES:
DL/I backs out updates, and abend U0778 occurs. DB2 backs out updates to the previous checkpoint.
ROLB call, with the system log on tape (any backout option) or on disk with BKO=NO:
DL/I does not back out updates, and an AL status code is returned in the PCB. DB2 backs out updates to the previous checkpoint. The DB2 DL/I support causes the application program to abend when ROLB fails.
ROLB call, with the system log on disk and BKO=YES:
DL/I backs out database updates, and control is passed back to the application program. DB2 backs out updates to the previous checkpoint.
Restriction: You cannot specify the address of an I/O area as one of the options on the ROLB call; if you do, your program receives an AD status code. However, you must have an I/O PCB for your program. Specify CMPAT=YES on the CMPAT keyword in the PSBGEN statement for your program's PSB.
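The following C fragment sketches steps 4 through 6 of the preceding procedure for a batch-oriented BMP. The cursor, table, column, and host-variable names are hypothetical, the declarations are abbreviated, and the DL/I CHKP call is represented only by a comment because that call goes through the DL/I language interface rather than through SQL.

EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
  char lastItem[11];                    /* key of the last row that was processed */
  char pkgset[9];                       /* CURRENT PACKAGESET value to restore    */
EXEC SQL END DECLARE SECTION;

void after_chkp(void)
{
  /* Cursor declared without WITH HOLD: its position is lost at each commit      */
  /* point, so the program repositions it by key after the CHKP call.            */
  EXEC SQL DECLARE ITEMCUR CURSOR FOR
    SELECT ITEM_NO, QTY_ON_HAND
      FROM INVENTORY
     WHERE ITEM_NO > :lastItem
     ORDER BY ITEM_NO;

  /* ... the DL/I CHKP call was just issued through the DL/I language interface ... */

  /* Step 4: restore any special registers that the sign-on reset.               */
  EXEC SQL SET CURRENT PACKAGESET = :pkgset;
  EXEC SQL SET CURRENT DEGREE = 'ANY';

  /* Steps 5 and 6: reopen the cursor; its WHERE clause re-establishes position  */
  /* at the first row after the last one that was processed.                     */
  EXEC SQL OPEN ITEMCUR;
}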
v The program issues either a subsequent CHKP or SYNC call, or, for single-mode transactions, a GU call to the I/O PCB. At this point in the processing, the data is consistent. All data changes that were made since the previous commit point are made correctly.
v The program issues a subsequent ROLB or ROLL call. At this point in the processing, your program has determined that the data changes are not correct and, therefore, that the data changes should not become permanent.
v The program terminates.
Restriction: The SQL COMMIT and ROLLBACK statements are not valid in an IMS environment.
A commit point occurs in a program as the result of any one of the following events:
v The program terminates normally. Normal program termination is always a commit point.
v The program issues a checkpoint call. Checkpoint calls are a program's means of explicitly indicating to IMS that it has reached a commit point in its processing.
v The program issues a SYNC call. A SYNC call is a Fast Path system service call to request commit-point processing. You can use a SYNC call only in a non-message-driven Fast Path program.
v For a program that processes messages as its input, a commit point can occur when the program retrieves a new message. This behavior depends on the mode that you specify in the APPLCTN macro for the program: If you specify single-mode transactions, a commit point in DB2 occurs each time the program issues a call to retrieve a new message. If you specify multiple-mode transactions or you do not specify a mode, a commit point occurs when the program issues a checkpoint call or when it terminates normally.
At the time of a commit point, the following actions occur:
v IMS and DB2 can release locks that the program has held since the last commit point. Releasing these locks makes the data available to other application programs and users.
v DB2 closes any open cursors that the program has been using.
v IMS and DB2 make the program's changes to the database permanent.
v If the program processes messages, IMS sends the output messages that the application program produces to their final destinations. Until the program reaches a commit point, IMS holds the program's output messages at a temporary destination.
If the program abends before reaching the commit point, the following actions occur:
v Both IMS and DB2 back out all the changes the program has made to the database since the last commit point.
v IMS deletes any output messages that the program has produced since the last commit point (for nonexpress PCBs).
v If the program processes messages, people at terminals and other application programs receive information from the terminating application program.
If the system fails, a unit of work resolves automatically when DB2 and IMS batch programs reconnect. Any indoubt units of work are resolved at reconnect time.
Procedure
To specify checkpoint frequency in IMS programs:
1. Use a counter in your program to keep track of one of the following items:
v Elapsed time
v The number of root segments that your program accesses
v The number of updates that your program performs
2. Issue a checkpoint call after a certain time interval, number of root segments, or number of updates. (A sketch of a simple update counter follows this discussion of checkpoints.)
Checkpoints in IMS programs:
Issuing checkpoint calls releases locked resources and establishes a place in the program from which you can restart the program. The decision about whether your program should issue checkpoints (and if so, how often) depends on your program.
Generally, the following types of programs should issue checkpoint calls:
v Multiple-mode programs
v Batch-oriented BMPs
v Nonmessage-driven Fast Path programs. (These programs can use a special Fast Path call, but they can also use symbolic checkpoint calls.)
v Most batch programs
v Programs that run in a data sharing environment. (Data sharing makes it possible for online and batch application programs in separate IMS systems, in the same or separate processors, to access databases concurrently. Issuing checkpoint calls frequently in programs that run in a data sharing environment is important, because programs in several IMS systems access the database.)
You do not need to issue checkpoints in the following types of programs:
v Single-mode programs
v Database load programs
v Programs that access the database in read-only mode (defined with the processing option GO during a PSBGEN) and are short enough to restart from the beginning
v Programs that, by their nature, must have exclusive use of the database
A CHKP call causes IMS to perform the following actions:
v Inform DB2 that the changes that your program made to the database can become permanent. DB2 makes the changes to DB2 data permanent, and IMS makes the changes to IMS data permanent.
v Send a message that contains the checkpoint identification that is given in the call to the system console operator and to the IMS master terminal operator.
v Return the next input message to the program's I/O area if the program processes input messages. In MPPs and transaction-oriented BMPs, a checkpoint call acts like a call for a new message.
v Sign on to DB2 again.
Programs that issue symbolic checkpoint calls can specify as many as seven data areas in the program that are to be restored at restart. DB2 always recovers to the last checkpoint. You must restart the program from that point.
If you use symbolic checkpoint calls, you can use a restart call (XRST) to restart a program after an abend. This call restores the program's data areas to the way they were when the program terminated abnormally, and it restarts the program from the last checkpoint call that the program issued before terminating abnormally.
Restriction: For BMP programs that process DB2 databases, you can restart the program only from the latest checkpoint and not from any checkpoint, as in IMS.
Checkpoints in MPPs and transaction-oriented BMPs
In single-mode programs, checkpoint calls and message retrieval calls (called get-unique calls) both establish commit points. The checkpoint calls retrieve input messages and take the place of get-unique calls. BMPs that access non-DL/I databases and MPPs can issue both get-unique calls and checkpoint calls to establish commit points. However, message-driven BMPs must issue checkpoint calls rather than get-unique calls to establish commit points, because they can restart from a checkpoint only. If a program abends after issuing a get-unique call, IMS backs out the database updates to the most recent commit point, which is the get-unique call.
In multiple-mode BMPs and MPPs, the only commit points are the checkpoint calls that the program issues and normal program termination. If the program abends and it has not issued checkpoint calls, IMS backs out the program's database updates and cancels the messages that it has created since the beginning of the program. If the program has issued checkpoint calls, IMS backs out the program's changes and cancels the output messages that it has created since the most recent checkpoint call.
Checkpoints in batch-oriented BMPs
If a batch-oriented BMP does not issue checkpoints frequently enough, IMS can abend that BMP or another application program for one of the following reasons:
v Other programs cannot get to the data that they need within a specified amount of time. If a BMP retrieves and updates many database records between checkpoint calls, it can monopolize large portions of the databases and cause long waits for other programs that need those segments. (The exception to this situation is a BMP with a processing option of GO; IMS does not enqueue segments for programs with this processing option.) Issuing checkpoint calls releases the segments that the BMP has enqueued and makes them available to other programs.
v Not enough storage is available for the segments that the program has read and updated. If IMS is using program isolation enqueuing, the space that is needed to enqueue information about the segments that the program has read and updated must not exceed the amount of storage that is defined for the IMS system. (The amount of storage available is specified during IMS system definition.) If a BMP enqueues too many segments, the amount of storage that is needed for the enqueued segments can exceed the amount of available storage. In that case,
IMS terminates the program abnormally. You then need to increase the program's checkpoint frequency before rerunning the program. When you issue a DL/I CHKP call from an application program that uses DB2 databases, IMS processes the CHKP call for all DL/I databases, and DB2 commits all the DB2 database resources. No checkpoint information is recorded for DB2 databases in the IMS log or the DB2 log. The application program must record relevant information about DB2 databases for a checkpoint, if necessary. One way to record such information is to put it in a data area that is included in the DL/I CHKP call. Performance might be slowed by the commit processing that DB2 does during a DL/I CHKP call, because the program needs to re-establish position within a DB2 database. The fastest way to re-establish a position in a DB2 database is to use an index on the target table, with a key that matches one-to-one with every column in the SQL predicate.
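The following C sketch shows one simple way to drive checkpoint frequency from an update counter, as step 2 of the preceding procedure suggests. The interval, counter, and routine name are hypothetical, and the CHKP call itself is shown only as a comment because it is issued through the DL/I language interface.

#define CHKP_INTERVAL 500            /* hypothetical: checkpoint every 500 updates */

static long updateCount = 0;

void after_each_update(void)
{
  if (++updateCount >= CHKP_INTERVAL) {
    /* ... issue the DL/I CHKP call here through the DL/I language interface,     */
    /* including any program data areas that are needed to restart from this      */
    /* checkpoint ...                                                             */
    /* ... then restore special registers and reopen any cursors, as described    */
    /* in the procedure for planning program recovery in IMS programs ...         */
    updateCount = 0;
  }
}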
Procedure
To recover data in IMS programs: Take one or more of the following actions depending on the type of program:
Program type: DL/I batch applications
Recommended actions:
v Use the DL/I batch backout utility to back out DL/I changes. DB2 automatically backs out changes whenever the application program abends.
v Use a restart call (XRST) to restart a program after an abend. This call restores the program's data areas to the way they were when the program terminated abnormally, and it restarts the program from the last checkpoint call that the program issued before terminating abnormally.
v Restart the program from the latest checkpoint. Restriction: You can restart the program only from the latest checkpoint and not from any checkpoint, as in IMS.
Program type: Applications that use online IMS systems
Recommended action: No action needed. Recovery and restart are part of the IMS system.
Program type: Applications that reside in the batch region
Recommended action: Follow your location's operational procedures to control recovery and restart.
Procedure
To undo selected changes within a unit of work by using savepoints:
1. Set any savepoints by using SQL SAVEPOINT statements. Savepoints set a point to which you can undo changes within a unit of work. Consider the following abilities and restrictions when setting savepoints:
v You can set a savepoint with the same name multiple times within a unit of work. Each time that you set the savepoint, the new value of the savepoint replaces the old value.
v If you do not want a savepoint to have different values within a unit of work, use the UNIQUE option in the SAVEPOINT statement. If an application executes a SAVEPOINT statement with the same name as a savepoint that was previously defined as unique, an SQL error occurs.
v If you set a savepoint before you execute a CONNECT statement, the scope of that savepoint is the local site. If you set a savepoint after you execute the CONNECT statement, the scope of that savepoint is the site to which you are connected.
v When savepoints are active, which they are until the unit of work completes, you cannot access remote sites by using three-part names or aliases for three-part names. You can, however, use DRDA access with explicit CONNECT statements.
v You cannot use savepoints in global transactions, triggers, user-defined functions, or stored procedures that are nested within triggers or user-defined functions.
2. Specify the changes that you want to undo within a unit of work by using the SQL ROLLBACK TO SAVEPOINT statement. DB2 undoes all changes since the specified savepoint. If you do not specify a savepoint name, DB2 rolls back work to the most recently created savepoint.
3. Optional: If you no longer need a savepoint, delete it by using the SQL RELEASE SAVEPOINT statement.
Recommendation: If you no longer need a savepoint before the end of a transaction, release it. Otherwise, savepoints are automatically released at the end of a unit of work. Releasing savepoints is essential if you need to use three-part names to access remote locations, because you cannot perform this action while savepoints are active.
Examples
Rolling back to the most recently created savepoint: When the ROLLBACK TO SAVEPOINT statement is executed in the following code, DB2 rolls back work to savepoint B.
EXEC SQL SAVEPOINT A ON ROLLBACK RETAIN CURSORS;
...
EXEC SQL SAVEPOINT B ON ROLLBACK RETAIN CURSORS;
...
EXEC SQL ROLLBACK TO SAVEPOINT;
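Setting a unique savepoint and releasing it: The following fragment is a sketch only; the savepoint name is hypothetical, and the statements between the savepoint operations are assumed to be embedded SQL changes.

EXEC SQL SAVEPOINT START_BATCH UNIQUE ON ROLLBACK RETAIN CURSORS;
...
EXEC SQL ROLLBACK TO SAVEPOINT START_BATCH;
...
EXEC SQL RELEASE SAVEPOINT START_BATCH;
EXEC SQL COMMIT;

Because the savepoint is defined with the UNIQUE option, a second SAVEPOINT START_BATCH statement in the same unit of work would fail with an SQL error. Releasing the savepoint before the unit of work ends also makes three-part names usable again for the remainder of the unit of work.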
Setting savepoints during distributed processing: An application performs the following tasks:
1. Sets savepoint C1.
2. Does some local processing.
3. Executes a CONNECT statement to connect to a remote site.
4. Sets savepoint C2.
Because savepoint C1 is set before the application connects to a remote site, savepoint C1 is known only at the local site. However, because savepoint C2 is set after the application connects to the remote site, savepoint C2 is known only at the remote site.
Setting multiple savepoints with the same name: Suppose that the following actions occur within a unit of work:
1. Application A sets savepoint S.
2. Application A calls stored procedure P.
3. Stored procedure P sets savepoint S.
4. Stored procedure P executes the following statement: ROLLBACK TO SAVEPOINT S
When DB2 executes the ROLLBACK statement, DB2 rolls back work to the savepoint that was set in the stored procedure, because that value is the most recent value of savepoint S.
Related reference:
RELEASE SAVEPOINT (DB2 SQL)
ROLLBACK (DB2 SQL)
SAVEPOINT (DB2 SQL)
Procedure
To plan for recovery of table spaces that are not logged:
1. Ensure that you can recover lost data by performing one of the following actions:
v Ensure that you have a data recovery source that does not rely on a log record to recreate any lost data.
v Limit modifications that are not logged to changes that you can easily repeat.
2. Avoid placing a table space that is not logged in a RECOVER-pending status. The following actions place a table space in RECOVER-pending status:
v Issuing a ROLLBACK statement or ROLLBACK TO SAVEPOINT statement after modifying a table in a table space that is not logged.
v Causing duplicate keys or referential integrity violations when you modify a table space that is not logged.
If the table space is placed in RECOVER-pending status, it is unavailable until you manually fix it.
3. For table spaces that are not logged and have associated LOB or XML table spaces, take image copies as a recovery set. This action ensures that the base table space and all the associated LOB or XML table spaces are copied at the same point in time. A subsequent RECOVER TO LASTCOPY operation for the entire set results in consistent data across the base table space and all of the associated LOB and XML table spaces.
Related tasks:
Clearing the RECOVER-pending status (DB2 Administration Guide)
Related reference:
RECOVER (DB2 Utilities)
Procedure
To design your application to access distributed data:
1. Ensure that the appropriate authorization ID has been granted authorization at the remote server to connect to that server and use resources from it.
2. If your application contains SQL statements that run at the requester, include at the requester a database request module (DBRM) that is bound either directly to a plan or to a package that is included in the plan's package list.
3. Include a package at the remote server for any SQL statements that run at that server.
4. For TSO and batch applications that update data at a remote server, ensure that one of the following conditions is true:
v No other connections exist.
v All existing connections are to servers that are restricted to read-only operations.
Restriction: If neither of these conditions is met, the application is restricted to read-only operations.
If one of these conditions is met, and if the first connection in a logical unit of work is to a server that supports two-phase commit, that server and all servers that support two-phase commit can update data. However, if the first connection is to a server that does not support two-phase commit, only that server is allowed to update data.
5. For programs that access at least one restricted system, ensure that your program does not violate any of the limitations for accessing restricted systems. A restricted system is a DBMS that does not implement two-phase commit processing. Accessing restricted systems has the following limitations:
v For programs that access CICS or IMS, you cannot update data on restricted systems.
v Within a unit of work, you cannot update a restricted system after updating a non-restricted system.
v Within a unit of work, if you update a restricted system, you cannot update any other systems.
If you are accessing a mixture of systems, some of which might be restricted, you can perform the following actions:
v Read from any of the systems at any time.
v Update any one system many times in one unit of work.
v Update many systems, including CICS or IMS, in one unit of work, provided that none of them is a restricted system. If the first system you update in a unit of work is not restricted, any attempt to update a restricted system in that unit of work returns an error.
v Update one restricted system in a unit of work, provided that you do not try to update any other system in the same unit of work. If the first system you update in a unit of work is restricted, any attempt to update any other system in that unit of work returns an error.
Related concepts:
Phase 6: Accessing data at a remote site (DB2 Installation and Migration)
Related tasks:
Improving performance for applications that access distributed data (DB2 Performance)
Related reference:
The private to DRDA protocol REXX tool (DSNTP2DP) (DB2 Installation and Migration)
A DBMS, whether local or remote, is known to your DB2 system by its location name. The location name of a remote DBMS is recorded in the communications database. Related tasks: Choosing names for the local subsystem (DB2 Installation and Migration)
Procedure
To prepare for coordinated updates to two or more data sources: Ensure that all systems that your program accesses implement two-phase commit processing. This processing ensures that updates to two or more DBMSs are coordinated automatically. For example, DB2 and IMS, and DB2 and CICS, jointly implement a two-phase commit process. You can update an IMS database and a DB2 table in the same unit of work. If a system or communication failure occurs between committing the work on IMS and on DB2, the two programs restore the two systems to a consistent point when activity resumes. You cannot do true coordinated updates within a DBMS that does not implement two-phase commit processing, because DB2 prevents you from updating such a DBMS and any other system within the same unit of work. In this context, update includes the statements INSERT, UPDATE, MERGE, DELETE, CREATE, ALTER, DROP, GRANT, REVOKE, RENAME, COMMENT, and LABEL. However, if you cannot implement two-phase commit processing on all systems that your program accesses, you can simulate the effect of coordinated updates by performing the following actions: 1. Update one system and commit that work. 2. Update the second system and commit its work. 3. Ensure that your program has code to undo the first update if a failure occurs after the first update is committed and before the second update is committed. No automatic provision exists for bringing the two systems back to a consistent point. Related concepts: Two-phase commit process (DB2 Administration Guide)
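The following C fragment sketches the simulation that is described in steps 1 through 3. The location names (SITEA and SITEB), table names, and host variables are hypothetical, host-variable declarations are omitted, and the fragment assumes that the program uses type 2 CONNECT statements. It only illustrates the technique; it is not a substitute for two-phase commit.

EXEC SQL INCLUDE SQLCA;

/* SITEA supports two-phase commit; SITEB does not, so each update is     */
/* committed separately and the program compensates if the second fails.  */
EXEC SQL CONNECT TO SITEA;
EXEC SQL UPDATE PARTS SET ON_ORDER = ON_ORDER + :qty WHERE PARTNO = :part;
EXEC SQL COMMIT;                      /* the first update is now permanent */

EXEC SQL CONNECT TO SITEB;
EXEC SQL UPDATE ORDERS SET STATUS = 'SENT' WHERE ORDERNO = :order;
if (sqlca.sqlcode < 0) {
  EXEC SQL ROLLBACK;                  /* undo the failed SITEB work        */
  EXEC SQL CONNECT TO SITEA;          /* compensate for the first update   */
  EXEC SQL UPDATE PARTS SET ON_ORDER = ON_ORDER - :qty WHERE PARTNO = :part;
  EXEC SQL COMMIT;
} else {
  EXEC SQL COMMIT;                    /* the second update is now permanent */
}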
Procedure
To force restricted system rules in your program: When you prepare your program, specify the SQL processing option CONNECT(1). This option applies type 1 CONNECT statement rules. Restriction: Do not use packages that are precompiled with the CONNECT(1) option and packages that are precompiled with the CONNECT(2) option in the same package list. The first CONNECT statement that is executed by your program determines which rules are in effect for the entire execution: type 1 or type 2. If your program attempts to execute a later CONNECT statement that is precompiled with the other type, DB2 returns an error. Related concepts: Options for SQL statement processing on page 958
Creating a feed in IBM Mashup Center with data from a DB2 for z/OS server
You can create enterprise database feeds based on data from a DB2 for z/OS server. A feed is data that is provided in a format that facilitates frequent content updates.
Procedure
To create a feed based on DB2 for z/OS data: 1. In the MashupHub component of Mashup Center, click Create > New Feed and select the Enterprise Database (JDBC) feed generator. 2. In the SQL Query Builder window, create the SQL query for the feed. The SQL parameters become the feed parameters. Parameters in the format :arg are treated as string parameters. Parameters in the format :arg are treated as numeric parameters. Supported SQL statements are SELECT, INSERT, UPDATE, and DELETE. 3. Save the feed and click View Feed in Browser to execute the SQL statement.
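For example, a feed query might look like the following statement, which uses the DB2 9 sample employee table and a hypothetical parameter name. When the feed is generated, :dept becomes a feed parameter, and because it uses the :arg format it is treated as a string parameter.

SELECT EMPNO, LASTNAME, WORKDEPT, SALARY
  FROM DSN8910.EMP
 WHERE WORKDEPT = :dept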
Results
After you create the feed, you can do one or more of the following actions: v Create more feeds based on DB2 for z/OS data and then mix the results of the queries in a data mashup. A data mashup is a feed that you create by applying operators and functions to filter and restructure the source data. Use the data mashup builder in MashupHub to create a data mashup. v Add the feed to the Lotus mashup builder. When you add a feed to the Mashup builder, the feed is added as a widget. A widget is a small application or piece of dynamic content that can be easily placed on a web page. Mashable widgets pass events so that they can be wired together to create something new. v Add the feed to another application by using the "Add to" action.
v Share the feed in the Mashup Center community catalog so that other users can include it in their mashups. Users can tag and rate catalog objects to help others find the information that they need quickly. Related concepts: IBM Mashup Center (Introduction to DB2 for z/OS) Related reference: Lotus Greenhouse IBM Mashup Center developerWorks forum IBM Mashup Center v2.x Information Center IBM Mashup Center v3.x Information Center Configuring MashupHub for enterprise database feeds IBM Mashup Center wiki
v The language interface modules for CAF and RRSAF, DSNALI and DSNRLI, are shipped with the linkage attributes AMODE(31) and RMODE(ANY). If your applications load CAF or RRSAF below the 16-MB line, you must link-edit DSNALI or DSNRLI again. Related concepts: DB2 attachment facilities (Introduction to DB2 for z/OS) Distributed data facility (Introduction to DB2 for z/OS)
connection failures to the controller correctly. Running DSN applications with CAF is not advantageous, and the loss of DSN services can affect how well your program runs.
Procedure
To invoke CAF: Perform one of the following actions: v Explicitly invoke CAF by including in your program CALL DSNALI statements with the appropriate options. The first option is a CAF connection function, which describes the action that you want CAF to take. The effect of any function depends in part on what functions the program has already run. Requirement: For C and PL/I applications, you must also include in your program the compiler directives that are listed in the following table, because DSNALI is an assembler language program.
Table 6. Compiler directives to include in C and PL/I applications that contain CALL DSNALI statements
Language   Compiler directive to include
C          #pragma linkage(dsnali, OS)
C++        extern "OS" { int DSNALI( char * functn, ...); }
PL/I       DCL DSNALI ENTRY OPTIONS(ASM,INTER,RETCODE);
v Implicitly invoke CAF by including SQL statements or IFI calls in your program just as you would in any program. The CAF facility establishes the connections to DB2 with the default values for the subsystem name and plan name. Restriction: If your program can make its first SQL call from different modules with different DBRMs, you cannot use a default plan name and thus, you cannot implicitly invoke CAF. Instead, you must explicitly invoke CAF by using the OPEN function. Requirement: If your application includes both SQL and IFI calls, you must issue at least one SQL call before you issue any IFI calls. This action ensures that your application uses the correct plan. Although doing so is not recommended, you can run existing DSN applications with CAF by allowing them to make implicit connections to DB2. For DB2 to make an implicit connection successfully, the plan name for the application must be the same as the member name of the database request module (DBRM) that DB2 produced when you precompiled the source program that contains the first SQL call. You must also substitute the DSNALI language interface module for the TSO language interface module, DSNELI. If you do not specify the return code and reason code parameters in your CAF calls or you invoked CAF implicitly, CAF puts a return code in register 15 and a reason code in register 0. To determine if an implicit connection was successful, the application program
should examine the return and reason codes immediately after the first executable SQL statement in the application program by performing one of the following actions: v Examining registers 0 and 15 directly. v Examining the SQLCA, and if the SQLCODE is -991, obtain the return and reason code from the message text. The return code is the first token, and the reason code is the second token. If the implicit connection was successful, the application can examine the SQLCODE for the first, and subsequent, SQL statements.
Examples
Example of a CAF configuration: The following figure shows a conceptual example of invoking and using CAF. The application contains statements to load DSNALI, DSNHLI2, and DSNWLI2. The application accesses DB2 by using the CAF language interface. It calls DSNALI to handle CAF requests, DSNWLI to handle IFI calls, and DSNHLI to handle SQL calls.
Figure: Sample CAF configuration. The application loads DSNALI, DSNHLI2, and DSNWLI2 and calls DSNALI for the CAF functions (CONNECT, OPEN, CLOSE, and DISCONNECT). SQL statements are compiled into calls to DSNHLI, a dummy entry point in the application that transfers the calls to DSNHLI2, the real CAF SQL entry point. IFI calls go to DSNWLI, another dummy entry point, which transfers them to DSNWLI2, the real CAF IFI entry point. DSNHLI2 and DSNWLI2 pass the requests to DB2.
Sample programs that use CAF: You can find a sample assembler program (DSN8CA) and a sample COBOL program (DSN8CC) that use the CAF in library prefix.SDSNSAMP. A PL/I application (DSN8SPM) calls DSN8CA, and a COBOL application (DSN8SCM) calls DSN8CC. Related concepts: DB2 sample applications on page 1126 Related reference: CAF connection functions on page 53
A program that uses CAF can perform the following actions:
v Access DB2 from z/OS address spaces where TSO, IMS, or CICS do not exist.
v Access DB2 from multiple z/OS tasks in an address space.
v Access the DB2 IFI.
v Run when DB2 is down. Restriction: The application cannot run SQL when DB2 is down.
v Run with or without the TSO terminal monitor program (TMP).
v Run without being a subtask of the DSN command processor or of any DB2 code.
v Run above or below the 16-MB line. (The CAF code resides below the line.)
v Establish an explicit connection to DB2, through a CALL interface, with control over the exact state of the connection.
v Establish an implicit connection to DB2, by using SQL statements or IFI calls without first calling CAF, with a default plan name and subsystem identifier.
v Verify that the application is using the correct release of DB2.
v Supply event control blocks (ECBs), for DB2 to post, that signal startup or termination.
v Intercept return codes, reason codes, and abend codes from DB2 and translate them into messages.
Any task in an address space can establish a connection to DB2 through CAF. Only one connection can exist for each task control block (TCB). A DB2 service request that is issued by a program that is running under a given task is associated with that task's connection to DB2. The service request operates independently of any DB2 activity under any other task. Each connected task can run a plan. Multiple tasks in a single address space can specify the same plan, but each instance of a plan runs independently from the others. A task can terminate its plan and run a different plan without fully breaking its connection to DB2. CAF does not generate task structures.
When you design your application, consider that using multiple simultaneous connections can increase the possibility of deadlocks and DB2 resource contention.
A tracing facility provides diagnostic messages that aid in debugging programs and diagnosing errors in the CAF code. In particular, attempts to use CAF incorrectly cause error messages in the trace stream.
Restriction: CAF does not provide attention processing exits or functional recovery routines. You can provide whatever attention handling and functional recovery your application needs, but you must use ESTAE/ESTAI type recovery routines and not Enabled Unlocked Task (EUT) FRR routines.
Table 7. Properties of CAF connections
Connection name: DB2CALL. You can use the DISPLAY THREAD command to list CAF applications that have the connection name DB2CALL.
Connection type: BATCH. BATCH connections use a single-phase commit process that is coordinated by DB2. Application programs can also control when statements are committed by using the SQL COMMIT and ROLLBACK statements.
Authorization IDs: DB2 establishes authorization IDs for each task's connection when it processes that connection. For the BATCH connection type, DB2 creates a list of authorization IDs based on the authorization ID that is associated with the address space. This list is the same for every task. A location can provide a DB2 connection authorization exit routine to change the list of IDs.
Scope: None. CAF processes connections as if each task is entirely isolated. When a task requests a function, the CAF passes the functions to DB2 and is unaware of the connection status of other tasks in the address space. However, the application program and the DB2 subsystem are aware of the connection status of multiple tasks in an address space.
If a connected task terminates normally before the CLOSE function deallocates the plan, DB2 commits any database changes that the thread made since the last commit point. If a connected task abends before the CLOSE function deallocates the plan, DB2 rolls back any database changes since the last commit point. In either case, DB2 deallocates the plan, if necessary, and terminates the task's connection before it allows the task to terminate. If DB2 abnormally terminates while an application is running, the application is rolled back to the last commit point. If DB2 terminates while processing a commit request, DB2 either commits or rolls back any changes at the next restart. The action taken depends on the state of the commit request when DB2 terminates.
Procedure
To make DSNALI available:
1. Decide which of the following methods you want to use to make DSNALI available:
v Explicitly issuing LOAD requests when your program runs. By explicitly loading the DSNALI module, you isolate the maintenance of your application from future IBM maintenance to the language interface. If the language interface changes, the change will probably not affect your load module.
v Including the DSNALI module in your load module when you link-edit your program. If you do not need explicit calls to DSNALI for CAF functions, link-editing DSNALI into your load module has some advantages. When you include DSNALI during the link-edit, you do not need to code a dummy DSNHLI entry point in your program or specify the precompiler option ATTACH. Module DSNALI contains an entry point for DSNHLI, which is identical to DSNHLI2, and an entry point DSNWLI, which is identical to DSNWLI2. A disadvantage of link-editing DSNALI into your load module is that any IBM maintenance to DSNALI requires a new link-edit of your load module.
2. Depending on the method that you chose in step 1, perform one of the following actions:
v If you want to explicitly issue LOAD requests when your program runs: In your program, issue z/OS LOAD service requests for entry points DSNALI and DSNHLI2. If you use IFI services, you must also load DSNWLI2. The entry point addresses that LOAD returns are saved for later use with the CALL macro. Indicate to DB2 which entry point to use in one of the following two ways:
Specify the precompiler option ATTACH(CAF). This option causes DB2 to generate calls that specify entry point DSNHLI2. Restriction: You cannot use this option if your application is written in Fortran.
Code a dummy entry point named DSNHLI within your load module. If you do not specify the precompiler option ATTACH, the DB2 precompiler generates calls to entry point DSNHLI for each SQL request. The precompiler does not know about and is independent of the different DB2 attachment facilities. When the calls generated by the DB2 precompiler pass control to DSNHLI, your code that corresponds to the dummy entry point must preserve the option list that was passed in R1 and specify the same option list when it calls DSNHLI2. v If you want to include the DSNALI module in your load module when you link-edit your program: Include DSNALI in your load module during a link-edit step. The module must be in a load module library, which is included either in the SYSLIB concatenation or another INCLUDE library that is defined in the linkage editor JCL. Because all language interface modules contain an entry point declaration for DSNHLI, the linkage editor JCL must contain an INCLUDE linkage editor control statement for DSNALI; for example, INCLUDE DB2LIB(DSNALI). By coding these options, you avoid inadvertently picking up the wrong language interface module. Related concepts: LOB file reference variables on page 751 Examples of invoking CAF on page 66 Related tasks: Saving storage when manipulating LOBs by using LOB locators on page 747
Related tasks: Chapter 17, Preparing an application to run on DB2 for z/OS, on page 941
Your CAF program should respect these register conventions. CAF also supports high-level languages that cannot examine the contents of individual registers. Related concepts: CALL DSNALI statement parameter list on page 50
function only or the OPEN function only. Each of these calls implicitly connects your application to DB2. To terminate an implicit connection, you must use the proper calls. Related concepts: Summary of CAF behavior on page 52
v For C, code NULL for that parameter in the CALL DSNALI statement. For example, suppose that you are coding a CONNECT call in a C program, and you want to specify all parameters through eibptr except srdura. You can write a statement similar to the following statement:
fnret = dsnali(&functn[0], &ssid[0], &tecb, &secb, &ribptr, &retcode, &reascode, NULL, &eibptr);
v For other languages except assembler language, code zero for that parameter in the CALL DSNALI statement. For example, suppose that you are coding a CONNECT call in a COBOL program, and you want to specify all parameters except the return code parameter. You can write a statement similar to the following statement:
CALL DSNALI USING FUNCTN SSID TECB SECB RIBPTR BY CONTENT ZERO BY REFERENCE REASCODE SRDURA EIBPTR.
v For assembler language, code a comma for that parameter in the CALL DSNALI statement. For example, to specify all optional parameters except the return code parameter, write a statement similar to the following statement:
CALL DSNALI,(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR,,REASCODE,SRDURA,EIBPTR, GROUPOVERRIDE)
The following figure shows a sample parameter list structure for the CONNECT function.
The preceding figure illustrates how you can omit parameters for the CALL DSNALI statement to control the return code and reason code fields after a CONNECT call. You can terminate the parameter list at any of the following points. These termination points apply to all CALL DSNALI statement parameter lists.
1. Terminates the parameter list without specifying the parameters retcode, reascode, and srdura and places the return code in register 15 and the reason code in register 0. Terminating the parameter list at this point ensures compatibility with CAF programs that require a return code in register 15 and a reason code in register 0.
2. Terminates the parameter list after the parameter retcode and places the return code in the parameter list and the reason code in register 0. Terminating the parameter list at this point enables the application program to take action, based on the return code, without further examination of the associated reason code.
3. Terminates the parameter list after the parameter reascode and places the return code and the reason code in the parameter list. Terminating the parameter list at this point provides support to high-level languages that are unable to examine the contents of individual registers.
If you code your CAF application in assembler language, you can specify the reason code parameter and omit the return code parameter.
4. Terminates the parameter list after the parameter srdura. If you code your CAF application in assembler language, you can specify this parameter and omit the retcode and reascode parameters.
5. Terminates the parameter list after the parameter eibptr. If you code your CAF application in assembler language, you can specify this parameter and omit the retcode, reascode, or srdura parameters.
6. Terminates the parameter list after the parameter groupoverride. If you code your CAF application in assembler language, you can specify this parameter and omit the retcode, reascode, srdura, or eibptr parameters.
Even if you specify that the return code be placed in the parameter list, it is also placed in register 15 to accommodate high-level languages that support special return code processing. Related concepts: How CAF modifies the content of registers on page 49
Table 9. Effects of CAF calls, as dependent on connection history. For each previous function (for example CONNECT, CONNECT followed by OPEN, OPEN, or an SQL or IFI call), the table shows the outcome of the next CONNECT, OPEN, SQL or IFI call, CLOSE, DISCONNECT, or TRANSLATE request: the request either runs or produces an error such as Error 201 or Error 202 (see the notes that follow). The individual table cells are not reproduced here.
Notes: 1. An error is shown in this table as Error nnn. The corresponding reason code is X'00C10nnn'. The message number is DSNAnnnI or DSNAnnnE. 2. The task and address space connections remain active. If the CLOSE call fails because DB2 was down, the CAF control blocks are reset, the function produces return code 4 and reason code X'00C10824', and CAF is ready for more connection requests when DB2 is up. 3. A TRANSLATE request is accepted, but in this case it is redundant. CAF automatically issues a TRANSLATE request when an SQL or IFI request fails. Related reference: CAF return codes and reason codes on page 64
Recommendation: Because the effect of any CAF function depends on what functions the program has already run, carefully plan the calls that your program makes to these CAF connection functions. Read about the summary of CAF behavior and make these function calls accordingly. Related concepts: Summary of CAF behavior on page 52 CALL DSNALI statement parameter list on page 50
v The current release level of DB2. To find this information, access the RIBREL field in the release information block (RIB). Restriction: Do not issue CONNECT requests from a TCB that already has an active DB2 connection. Recommendation: Do not mix explicit CONNECT and OPEN requests with implicitly established connections in the same address space. Either explicitly specify which DB2 subsystem you want to use or allow all requests to use the default subsystem. The following diagram shows the syntax for the CONNECT function.
Parameters point to the following areas: function A 12-byte area that contains CONNECT followed by five blanks. ssnm A 4-byte DB2 subsystem name or group attachment name (if used in a data sharing group) to which the connection is made. If ssnm is less than four characters long, pad it on the right with blanks to a length of four characters. termecb A 4-byte integer representing the application's event control block (ECB) for DB2 termination. DB2 posts this ECB when the operator enters the STOP DB2 command or when DB2 is abnormally terminating. The ECB indicates the type of termination by a POST code, as shown in the following table:
Table 10. POST codes and related termination types
POST code   Termination type
8           QUIESCE
12          FORCE
16          ABTERM
Before you check termecb in your CAF application program, first check the return code and reason code from the CONNECT call to ensure that the call completed successfully. startecb A 4-byte integer representing the application's startup ECB. If DB2 has not yet
started when the application issues the call, DB2 posts the ECB when it successfully completes its startup processing. DB2 posts at most one startup ECB per address space. The ECB is the one associated with the most recent CONNECT call from that address space. Your application program must examine any nonzero CAF and DB2 reason codes before issuing a WAIT on this ECB. If ssnm is a group attachment name, the first DB2 subsystem that starts on the local z/OS system and matches the specified group attachment name posts the ECB. ribptr A 4-byte area in which CAF places the address of the release information block (RIB) after the call. You can determine what release level of DB2 you are currently running by examining the RIBREL field. You can determine the modification level within the release level by examining the RIBCNUMB and RIBCINFO fields. If the value in the RIBCNUMB field is greater than zero, check the RIBCINFO field for modification levels. If the RIB is not available (for example, if you name a subsystem that does not exist), DB2 sets the 4-byte area to zeros. The area to which ribptr points is below the 16-MB line. Your program does not have to use the release information block, but it cannot omit the ribptr parameter. Macro DSNDRIB maps the release information block (RIB). It can be found in prefix.SDSNMACS(DSNDRIB). retcode A 4-byte area in which CAF places the return code. This field is optional. If you do not specify retcode, CAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which CAF places a reason code. This field is optional. If you do not specify reascode, CAF places the reason code in register 0. If you specify reascode, you must also specify retcode. srdura A 10-byte area that contains the string 'SRDURA(CD)'. This field is optional. If you specify srdura, the value in the CURRENT DEGREE special register stays in effect from the time of the CONNECT call until the time of the DISCONNECT call. If you do not specify srdura, the value in the CURRENT DEGREE special register stays in effect from the time of the OPEN call until the time of the CLOSE call. If you specify this parameter in any language except assembler, you must also specify retcode and reascode. In assembler language, you can omit these parameters by specifying commas as placeholders. eibptr A 4-byte area in which CAF puts the address of the environment information block (EIB). The EIB contains information that you can use if you are connecting to a DB2 subsystem that is part of a data sharing group. For example, you can determine the name of the data sharing group, the member to which you are connecting, and whether the subsystem is in new-function mode. If the DB2 subsystem that you connect to is not part of a data sharing
group, the fields in the EIB that are related to data sharing are blank. If the EIB is not available (for example, if you name a subsystem that does not exist), DB2 sets the 4-byte area to zeros. The area to which eibptr points is above the 16-MB line. You can omit this parameter when you make a CONNECT call. If you specify this parameter in any language except assembler, you must also specify retcode, reascode, and srdura. In assembler language, you can omit retcode, reascode, and srdura by specifying commas as placeholders. Macro DSNDEIB maps the EIB. It can be found in prefix.SDSNMACS(DSNDEIB).
groupoverride
An 8-byte area that the application provides. This parameter is optional. If you do not want group attach to be attempted, specify 'NOGROUP'. This string indicates that the subsystem name that is specified by ssnm is to be used as a DB2 subsystem name, even if ssnm matches a group attachment name. If groupoverride is not provided, ssnm is used as the group attachment name if it matches a group attachment name. If you specify this parameter in any language except assembler, you must also specify retcode, reascode, srdura, and eibptr. In assembler language, you can omit retcode, reascode, srdura, and eibptr by specifying commas as placeholders. Recommendation: Avoid using the groupoverride parameter when possible, because it limits the ability to do dynamic workload routing in a Parallel Sysplex. However, you should use this parameter in a data sharing environment when you want to connect to a specific member of a data sharing group, and the subsystem name of that member is the same as the group attachment name.
Examples of CONNECT calls are provided for C, COBOL, Fortran, and PL/I; the language-specific call statements are not reproduced here.
Note: v For C and PL/I applications, you must include the appropriate compiler directives, because DSNALI is an assembler language program. These compiler directives are described in the instructions for invoking CAF.
Related concepts: Examples of invoking CAF on page 66 Related tasks: Invoking the call attachment facility on page 40 Related reference: Synchronizing Tasks (WAIT, POST, and EVENTS Macros) (MVS Programming: Assembler Services Guide)
Parameters point to the following areas: function A 12-byte area that contains the word OPEN followed by eight blanks. ssnm A 4-byte DB2 subsystem name or group attachment name (if used in a data sharing group). The OPEN function allocates the specified plan to this DB2 subsystem. Also, if the requesting task does not already have a connection to the named DB2 subsystem, the OPEN function establishes it. You must specify the ssnm parameter, even if the requesting task also issues a CONNECT call. If a task issues a CONNECT call followed by an OPEN call, the subsystem names for both calls must be the same. If ssnm is less than four characters long, pad it on the right with blanks to a length of four characters. plan An 8-byte DB2 plan name. retcode A 4-byte area in which CAF places the return code.
This field is optional. If you do not specify retcode, CAF places the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. This field is optional. If you do not specify reascode, CAF places the reason code in register 0. If you specify reascode, you must also specify retcode.
groupoverride
An 8-byte area that the application provides. This field is optional. If you do not want group attach to be attempted, specify 'NOGROUP'. This string indicates that the subsystem name that is specified by ssnm is to be used as a DB2 subsystem name, even if ssnm matches a group attachment name. If you do not specify groupoverride, ssnm is used as the group attachment name if it matches a group attachment name. If you specify this parameter in any language except assembler, you must also specify retcode and reascode. In assembler language, you can omit these parameters by specifying commas as placeholders. Recommendation: Avoid using the groupoverride parameter when possible, because it limits the ability to do dynamic workload routing in a Parallel Sysplex. However, you should use this parameter in a data sharing environment when you want to connect to a specific member of a data sharing group, and the subsystem name of that member is the same as the group attachment name.
C:        fnret=dsnali(&functn[0],&ssid[0],&planname[0],&retcode,&reascode,&grpover[0]);
COBOL:    CALL DSNALI USING FUNCTN SSID PLANNAME RETCODE REASCODE GRPOVER.
Fortran:  CALL DSNALI(FUNCTN,SSID,PLANNAME,RETCODE,REASCODE,GRPOVER)
PL/I:     CALL DSNALI(FUNCTN,SSID,PLANNAME,RETCODE,REASCODE,GRPOVER);
Note: v For C and PL/I applications, you must include the appropriate compiler directives, because DSNALI is an assembler language program. These compiler directives are described in the instructions for invoking CAF. Related concepts: Implicit connections to CAF on page 49 Related tasks: Invoking the call attachment facility on page 40
If you did not issue an explicit CONNECT call for the task, the CLOSE function deletes the task's connection to DB2. If no other task in the address space has an active connection to DB2, DB2 also deletes the control block structures that were created for the address space and removes the cross memory authorization. Using the CLOSE function is optional. Consider the following rules and recommendations about when to use and not use the CLOSE function: v Do not use the CLOSE function when your current task does not have a plan allocated. v If you want to use a new plan, you must issue an explicit CLOSE call, followed by an OPEN call with the new plan name. v When shutting down your application you can improve the performance of this shut down by explicitly calling the CLOSE function before the task terminates. If you omit the CLOSE call, DB2 performs an implicit CLOSE. In this case, DB2 performs the same actions when your task terminates, by using the SYNC parameter if termination is normal and the ABRT parameter if termination is abnormal. v If DB2 terminates, issue an explicit CLOSE call for any task that did not issue a CONNECT call. This action enables CAF to reset its control blocks to allow for future connections. This CLOSE call returns the reset accomplished return code (+004) and reason code X'00C10824'. If you omit the CLOSE call in this case, when DB2 is back on line, the task's next connection request fails. You get either the message YOUR TCB DOES NOT HAVE A CONNECTION, with X'00F30018' in register 0, or the CAF error message DSNA201I or DSNA202I, depending on what your application tried to do. The task must then issue a CLOSE call before it can reconnect to DB2. v A task that issued an explicit CONNECT call should issue a DISCONNECT call instead of a CLOSE call. This action causes CAF to reset its control blocks when DB2 terminates. The following diagram shows the syntax for the CLOSE function.
Parameters point to the following areas: function A 12-byte area that contains the word CLOSE followed by seven blanks. termop A 4-byte terminate option, with one of the following values: SYNC Specifies that DB2 is to commit any modified data. ABRT Specifies that DB2 is to roll back data to the previous commit point. retcode A 4-byte area in which CAF is to place the return code.
This field is optional. If you do not specify retcode, CAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which CAF places a reason code. This field is optional. If you do not specify reascode, CAF places the reason code in register 0. If you specify reascode, you must also specify retcode.
C:        fnret=dsnali(&functn[0], &termop[0], &retcode, &reascode);
COBOL:    CALL DSNALI USING FUNCTN TERMOP RETCODE REASCODE.
Fortran:  CALL DSNALI(FUNCTN,TERMOP,RETCODE,REASCODE)
PL/I:     CALL DSNALI(FUNCTN,TERMOP,RETCODE,REASCODE);
Note: v For C and PL/I applications, you must include the appropriate compiler directives, because DSNALI is an assembler language program. These compiler directives are described in the instructions for invoking CAF. Related tasks: Invoking the call attachment facility on page 40
returns the reset accomplished return codes and reason codes (+004 and X'00C10824'). This action ensures that future connection requests from the task work when DB2 is back on line. v A task that did not explicitly issue a CONNECT call must issue a CLOSE call instead of a DISCONNECT call. This action resets the CAF control blocks when DB2 terminates. The following diagram shows the syntax for the DISCONNECT function.
Parameters point to the following areas:
function
A 12-byte area that contains the word DISCONNECT followed by two blanks.
retcode
A 4-byte area in which CAF places the return code. This field is optional. If you do not specify retcode, CAF places the return code in register 15 and the reason code in register 0.
reascode
A 4-byte area in which CAF places a reason code. This field is optional. If you do not specify reascode, CAF places the reason code in register 0. If you specify reascode, you must also specify retcode.
C:        fnret=dsnali(&functn[0], &retcode, &reascode);
COBOL:    CALL DSNALI USING FUNCTN RETCODE REASCODE.
Fortran:  CALL DSNALI(FUNCTN,RETCODE,REASCODE)
PL/I:     CALL DSNALI(FUNCTN,RETCODE,REASCODE);
Note: v For C and PL/I applications, you must include the appropriate compiler directives, because DSNALI is an assembler language program. These compiler directives are described in the instructions for invoking CAF.
Parameters point to the following areas:
function
A 12-byte area that contains the word TRANSLATE followed by three blanks.
sqlca
The program's SQL communication area (SQLCA).
retcode
A 4-byte area in which CAF places the return code. This field is optional. If you do not specify retcode, CAF places the return code in register 15 and the reason code in register 0.
reascode
   A 4-byte area in which CAF places a reason code. This field is optional. If you do not specify reascode, CAF places the reason code in register 0. If you specify reascode, you must also specify retcode.
The following examples show a TRANSLATE call in each language:
C
   fnret=dsnali(&functn[0], &sqlca, &retcode, &reascode);
COBOL
   CALL DSNALI USING FUNCTN SQLCA RETCODE REASCODE.
PL/I
   CALL DSNALI(FUNCTN,SQLCA,RETCODE, REASCODE);
Note: v For C and PL/I applications, you must include the appropriate compiler directives, because DSNALI is an assembler language program. These compiler directives are described in the instructions for invoking CAF. Related tasks: Invoking the call attachment facility on page 40
Procedure
To turn on a CAF trace: Allocate a DSNTRACE data set either dynamically or by including a DSNTRACE DD statement in your JCL. CAF writes diagnostic trace messages to that data set. The trace message numbers contain the last three digits of the reason codes. Related concepts: Examples of invoking CAF on page 66
The following table lists the CAF return codes and reason codes.
Table 16. CAF return codes and reason codes
Return code   Explanation
0             Successful completion.
4             CAF reset complete. CAF is ready to make a new connection.
8             Release level mismatch between DB2 and the CAF code.
200 (note 1)  Received a second CONNECT request from the same TCB. The first CONNECT request could have been implicit or explicit.
200 (note 1)  Received a second OPEN request from the same TCB. The first OPEN request could have been implicit or explicit.
200 (note 1)  CLOSE request issued when no active OPEN request exists.
200 (note 1)  DISCONNECT request issued when no active CONNECT request exists, or the AXSET macro was issued between the CONNECT request and the DISCONNECT request.
200 (note 1)  TRANSLATE request issued when no connection to DB2 exists.
200 (note 1)  Incorrect number of parameters was specified or the end-of-list bit was off.
200 (note 1)  Unrecognized function parameter.
200 (note 1)  Received requests to access two different DB2 subsystems from the same TCB.
204 (note 2)  CAF system error. Probable error in the attach or DB2.
Notes:
1. A CAF error probably caused by errors in the parameter lists from the application programs. CAF errors do not change the current state of your connection to DB2; you can continue processing with a corrected request.
2. System errors cause abends. If tracing is on, a descriptive message is written to the DSNTRACE data set just before the abend.
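The following minimal C sketch (not part of the original example set) shows one way a program might act on these codes after a CAF request. The function name and its return values are assumptions; the X'00C10824' reset reason code and the 200 and 204 meanings come from the table and notes above.

   #define CAF_RESET_READY 0x00C10824

   /* Interpret a CAF return code and reason code pair (see Table 16).     */
   int check_caf_codes(int retcode, int reascode)
   {
       if (retcode == 0)
           return 0;                    /* successful completion            */
       if (retcode == 4 && reascode == CAF_RESET_READY)
           return 1;                    /* CAF reset complete; the task can */
                                        /* issue a new connection request   */
       if (retcode == 200 || retcode == 204)
           return -2;                   /* user or CAF system error; see    */
                                        /* the DSNTRACE data set            */
       return -1;                       /* 8, 12, or other connection failure */
   }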
The following scenario shows the order of CAF calls for a single task:
CONNECT
OPEN (allocate a plan)
SQL or IFI call
CLOSE (deallocate the current plan)
OPEN (allocate a new plan)
SQL or IFI call
CLOSE
DISCONNECT
A task can have a connection to only one DB2 subsystem at any point in time. A CAF error occurs if the subsystem name in the OPEN call does not match the subsystem name in the CONNECT call. To switch to a different subsystem, the application must first disconnect from the current subsystem and then issue a connect request with a new subsystem name.
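The following C sketch shows the same single-task sequence in the C call style used in this chapter. It is an illustration only: the subsystem name, plan name, and variable names are assumptions, the optional return-code parameters are omitted, and the function value is used as the return code, as the earlier C examples imply; verify the parameter list of each function against its description.

   #pragma linkage(dsnali, OS)          /* assumed directive; see invoking CAF */
   #include <string.h>

   int dsnali(char *functn, ...);

   int run_single_task(void)
   {
       char functn[13];
       char ssid[5]   = "DSN ";         /* 4-byte subsystem name (assumption)  */
       char plan[9]   = "MYPLAN  ";     /* 8-byte plan name (assumption)       */
       char termop[5] = "SYNC";         /* commit modified data at CLOSE       */
       int  tecb = 0, secb = 0, ribptr = 0, fnret;

       memcpy(functn, "CONNECT     ", 12);           /* padded to 12 bytes     */
       fnret = dsnali(&functn[0], &ssid[0], &tecb, &secb, &ribptr);
       if (fnret != 0) return fnret;                 /* return code (register 15) */

       memcpy(functn, "OPEN        ", 12);           /* allocate a plan        */
       fnret = dsnali(&functn[0], &ssid[0], &plan[0]);
       if (fnret != 0) return fnret;

       /* ... SQL or IFI calls ... */

       memcpy(functn, "CLOSE       ", 12);           /* deallocate the plan    */
       fnret = dsnali(&functn[0], &termop[0]);

       memcpy(functn, "DISCONNECT  ", 12);
       fnret = dsnali(&functn[0]);
       return fnret;
   }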
Multiple tasks
In the following scenario, multiple tasks within the address space use DB2 services. Each task must explicitly specify the same subsystem name on either the CONNECT function request or the OPEN function request. Task 1 makes no SQL or IFI calls. Its purpose is to monitor the DB2 termination and startup ECBs and to check the DB2 release level.
TASK 1: CONNECT ... DISCONNECT
TASK 2: OPEN, SQL ..., CLOSE, OPEN, SQL ..., CLOSE
TASK 3: OPEN, SQL ..., CLOSE, OPEN, SQL ..., CLOSE
TASK n: OPEN, SQL ..., CLOSE, OPEN, SQL ..., CLOSE
Example of connecting to DB2 with CAF: The following example code shows how to issue explicit requests for certain actions, such as CONNECT, OPEN, CLOSE, DISCONNECT, and TRANSLATE, and uses the CHEKCODE subroutine to check the return and reason codes from CAF.
****************************** CONNECT ******************************** L R15,LIALI Get the Language Interface address MVC FUNCTN,CONNECT Get the function to call CALL (15),(FUNCTN,SSID,TECB,SECB,RIBPTR),VL,MF=(E,CAFCALL) BAL R14,CHEKCODE Check the return and reason codes CLC CONTROL,CONTINUE Is everything still OK BNE EXIT If CONTROL not CONTINUE, stop loop USING R8,RIB Prepare to access the RIB L R8,RIBPTR Access RIB to get DB2 release level WRITE The current DB2 release level is RIBREL ****************************** OPEN *********************************** L R15,LIALI Get the Language Interface address MVC FUNCTN,OPEN Get the function to call CALL (15),(FUNCTN,SSID,PLAN),VL,MF=(E,CAFCALL) BAL R14,CHEKCODE Check the return and reason codes ****************************** SQL ************************************ * Insert your SQL calls here. The DB2 Precompiler * generates calls to entry point DSNHLI. You should * specify the precompiler option ATTACH(CAF), or code * a dummy entry point named DSNHLI to intercept * all SQL calls. A dummy DSNHLI is shown below. ****************************** CLOSE ********************************** CLC CONTROL,CONTINUE Is everything still OK? BNE EXIT If CONTROL not CONTINUE, shut down MVC TRMOP,ABRT Assume termination with ABRT parameter L R4,SQLCODE Put the SQLCODE into a register C R4,CODE0 Examine the SQLCODE BZ SYNCTERM If zero, then CLOSE with SYNC parameter
C R4,CODE100 See if SQLCODE was 100 BNE DISC If not 100, CLOSE with ABRT parameter SYNCTERM MVC TRMOP,SYNC Good code, terminate with SYNC parameter DISC DS 0H Now build the CAF parmlist L R15,LIALI Get the Language Interface address MVC FUNCTN,CLOSE Get the function to call CALL (15),(FUNCTN,TRMOP),VL,MF=(E,CAFCALL) BAL R14,CHEKCODE Check the return and reason codes ****************************** DISCONNECT ***************************** CLC CONTROL,CONTINUE Is everything still OK BNE EXIT If CONTROL not CONTINUE, stop loop L R15,LIALI Get the Language Interface address MVC FUNCTN,DISCON Get the function to call CALL (15),(FUNCTN),VL,MF=(E,CAFCALL) BAL R14,CHEKCODE Check the return and reason codes
This example code does not show a task that waits on the DB2 termination ECB. If you want such a task, you can code it by using the z/OS WAIT macro to monitor the ECB. You probably want this task to detach the sample code if the termination ECB is posted. That task can also wait on the DB2 startup ECB. This sample waits on the startup ECB at its own task level. This example code assumes that the variables in the following table are already set:
Table 17. Variables that the preceding example assembler code assumes are set
Variable   Usage
LIALI      The entry point that handles DB2 connection service requests.
LISQL      The entry point that handles SQL calls.
SSID       The DB2 subsystem identifier.
TECB       The address of the DB2 termination ECB.
SECB       The address of the DB2 startup ECB.
RIBPTR     A fullword that CAF sets to contain the RIB address.
PLAN       The plan name to use in the OPEN call.
CONTROL    This variable is used to shut down processing because of unsatisfactory return or reason codes. The CHEKCODE subroutine sets this value.
CAFCALL    List-form parameter area for the CALL macro.
Example of checking return codes and reason codes when using CAF: The following example code illustrates a way to check the return codes and the DB2 termination ECB after each connection service request and SQL call. The routine sets the variable CONTROL to control further processing within the module.
*********************************************************************** * CHEKCODE PSEUDOCODE * *********************************************************************** *IF TECB is POSTed with the ABTERM or FORCE codes * THEN * CONTROL = SHUTDOWN * WRITE DB2 found FORCE or ABTERM, shutting down * ELSE /* Termination ECB was not POSTed */ * SELECT (RETCODE) /* Look at the return code */ * WHEN (0) ; /* Do nothing; everything is OK */
* WHEN (4) ; /* Warning */ * SELECT (REASCODE) /* Look at the reason code */ * WHEN (00C10824X) /* Ready for another CAF call */ * CONTROL = RESTART /* Start over, from the top */ * OTHERWISE * WRITE Found unexpected R0 when R15 was 4 * CONTROL = SHUTDOWN * END INNER-SELECT * WHEN (8,12) /* Connection failure */ * SELECT (REASCODE) /* Look at the reason code */ * WHEN (00C10831X) /* DB2 / CAF release level mismatch*/ * WRITE Found a mismatch between DB2 and CAF release levels * WHEN (00F30002X, /* These mean that DB2 is down but */ * 00F30012X) /* will POST SECB when up again */ * DO * WRITE DB2 is unavailable. Ill tell you when it is up. * WAIT SECB /* Wait for DB2 to come up */ * WRITE DB2 is now available. * END * /**********************************************************/ * /* Insert tests for other DB2 connection failures here. */ * /* CAF Externals Specification lists other codes you can */ * /* receive. Handle them in whatever way is appropriate */ * /* for your application. */ * /**********************************************************/ * OTHERWISE /* Found a code were not ready for*/ * WRITE Warning: DB2 connection failure. Cause unknown * CALL DSNALI (TRANSLATE,SQLCA) /* Fill in SQLCA */ * WRITE SQLCODE and SQLERRM * END INNER-SELECT * WHEN (200) * WRITE CAF found user error. See DSNTRACE data set * WHEN (204) * WRITE CAF system error. See DSNTRACE data set * OTHERWISE * CONTROL = SHUTDOWN * WRITE Got an unrecognized return code * END MAIN SELECT * IF (RETCODE > 4) THEN /* Was there a connection problem?*/ * CONTROL = SHUTDOWN * END CHEKCODE *********************************************************************** * Subroutine CHEKCODE checks return codes from DB2 and Call Attach. * When CHEKCODE receives control, R13 should point to the callers * save area. *********************************************************************** CHEKCODE DS 0H STM R14,R12,12(R13) Prolog ST R15,RETCODE Save the return code ST R0,REASCODE Save the reason code LA R15,SAVEAREA Get save area address ST R13,4(,R15) Chain the save areas ST R15,8(,R13) Chain the save areas LR R13,R15 Put save area address in R13 * ********************* HUNT FOR FORCE OR ABTERM *************** TM TECB,POSTBIT See if TECB was POSTed BZ DOCHECKS Branch if TECB was not POSTed CLC TECBCODE(3),QUIESCE Is this "STOP DB2 MODE=FORCE" BE DOCHECKS If not QUIESCE, was FORCE or ABTERM MVC CONTROL,SHUTDOWN Shutdown WRITE Found found FORCE or ABTERM, shutting down B ENDCCODE Go to the end of CHEKCODE DOCHECKS DS 0H Examine RETCODE and REASCODE * ********************* HUNT FOR 0 ***************************** CLC RETCODE,ZERO Was it a zero? BE ENDCCODE Nothing to do in CHEKCODE for zero * ********************* HUNT FOR 4 *****************************
         CLC   RETCODE,FOUR         Was it a 4?
         BNE   HUNT8                If not a 4, hunt eights
         CLC   REASCODE,C10831      Was it a release level mismatch?
         BNE   HUNT824              Branch if not an 831
         WRITE Found a mismatch between DB2 and CAF release levels
         B     ENDCCODE             We are done. Go to end of CHEKCODE
HUNT824  DS    0H                   Now look for CAF reset reason code
         CLC   REASCODE,C10824      Was it 4? Are we ready to restart?
         BNE   UNRECOG              If not 824, got unknown code
         WRITE CAF is now ready for more input
         MVC   CONTROL,RESTART      Indicate that we should re-CONNECT
         B     ENDCCODE             We are done. Go to end of CHEKCODE
UNRECOG  DS    0H
         WRITE Got RETCODE = 4 and an unrecognized reason code
         MVC   CONTROL,SHUTDOWN     Shutdown, serious problem
         B     ENDCCODE             We are done. Go to end of CHEKCODE
*        ********************* HUNT FOR 8 *****************************
HUNT8    DS    0H
         CLC   RETCODE,EIGHT        Hunt return code of 8
         BE    GOT8OR12
         CLC   RETCODE,TWELVE       Hunt return code of 12
         BNE   HUNT200
GOT8OR12 DS    0H                   Found return code of 8 or 12
         WRITE Found RETCODE of 8 or 12
         CLC   REASCODE,F30002      Hunt for X'00F30002'
         BE    DB2DOWN
         CLC   REASCODE,F30012      Hunt for X'00F30012'
         BE    DB2DOWN
         WRITE DB2 connection failure with an unrecognized REASCODE
         CLC   SQLCODE,ZERO         See if we need TRANSLATE
         BNE   A4TRANS              If not blank, skip TRANSLATE
*        ********************* TRANSLATE unrecognized RETCODEs ********
         WRITE SQLCODE 0 but R15 not, so TRANSLATE to get SQLCODE
         L     R15,LIALI            Get the Language Interface address
         CALL  (15),(TRANSLAT,SQLCA),VL,MF=(E,CAFCALL)
         C     R0,C10205            Did the TRANSLATE work?
         BNE   A4TRANS              If not C10205, SQLERRM now filled in
         WRITE Not able to TRANSLATE the connection failure
         B     ENDCCODE             Go to end of CHEKCODE
A4TRANS  DS    0H                   SQLERRM must be filled in to get here
*        Note: your code should probably remove the X'FF' separators
*        and format the SQLERRM feedback area. Alternatively, use
*        DB2 Sample Application DSNTIAR to format a message.
         WRITE SQLERRM is: SQLERRM
         B     ENDCCODE             We are done. Go to end of CHEKCODE
DB2DOWN  DS    0H                   Hunt return code of 200
         WRITE DB2 is down and I will tell you when it comes up
         WAIT  ECB=SECB             Wait for DB2 to come up
         WRITE DB2 is now available
         MVC   CONTROL,RESTART      Indicate that we should re-CONNECT
         B     ENDCCODE
*        ********************* HUNT FOR 200 ***************************
HUNT200  DS    0H                   Hunt return code of 200
         CLC   RETCODE,NUM200       Hunt 200
         BNE   HUNT204
         WRITE CAF found user error, see DSNTRACE data set
         B     ENDCCODE             We are done. Go to end of CHEKCODE
*        ********************* HUNT FOR 204 ***************************
HUNT204  DS    0H                   Hunt return code of 204
         CLC   RETCODE,NUM204       Hunt 204
         BNE   WASSAT               If not 204, got strange code
         WRITE CAF found system error, see DSNTRACE data set
         B     ENDCCODE             We are done. Go to end of CHEKCODE
*        ********************* UNRECOGNIZED RETCODE *******************
WASSAT   DS    0H
         WRITE Got an unrecognized RETCODE
         MVC   CONTROL,SHUTDOWN     Shutdown
         BE    ENDCCODE             We are done. Go to end of CHEKCODE
ENDCCODE DS    0H                   Should we shut down?
         L     R4,RETCODE           Get a copy of the RETCODE
         C     R4,FOUR              Have a look at the RETCODE
         BNH   BYEBYE               If RETCODE <= 4 then leave CHEKCODE
         MVC   CONTROL,SHUTDOWN     Shutdown
BYEBYE   DS    0H                   Wrap up and leave CHEKCODE
         L     R13,4(,R13)          Point to callers save area
         RETURN (14,12)             Return to the caller
Example of invoking CAF when you do not specify the precompiler option ATTACH(CAF): Each of the four DB2 attachment facilities contains an entry point named DSNHLI. When you use CAF but do not specify the precompiler option ATTACH(CAF), SQL statements result in BALR instructions to DSNHLI in your program. To find the correct DSNHLI entry point without including DSNALI in your load module, code a subroutine with entry point DSNHLI that passes control to entry point DSNHLI2 in the DSNALI module. DSNHLI2 is unique to DSNALI and is at the same location in DSNALI as DSNHLI. DSNALI uses 31-bit addressing. If the application that calls this intermediate subroutine uses 24-bit addressing, this subroutine should account for the difference. In the following example, LISQL is addressable because the calling CSECT used the same register 12 as CSECT DSNHLI. Your application must also establish addressability to LISQL.
*********************************************************************** * Subroutine DSNHLI intercepts calls to LI EP=DSNHLI *********************************************************************** DS 0D DSNHLI CSECT Begin CSECT STM R14,R12,12(R13) Prologue LA R15,SAVEHLI Get save area address ST R13,4(,R15) Chain the save areas ST R15,8(,R13) Chain the save areas LR R13,R15 Put save area address in R13 L R15,LISQL Get the address of real DSNHLI BASSM R14,R15 Branch to DSNALI to do an SQL call * DSNALI is in 31-bit mode, so use * BASSM to assure that the addressing * mode is preserved. L R13,4(,R13) Restore R13 (callers save area addr) L R14,12(,R13) Restore R14 (return address) RETURN (1,12) Restore R1-12, NOT R0 and R15 (codes)
Example of variable declarations when using CAF: The following example code shows declarations for some of the variables that were used in the previous subroutines.
****************************** VARIABLES ****************************** SECB DS F DB2 Startup ECB TECB DS F DB2 Termination ECB LIALI DS F DSNALI Entry Point address LISQL DS F DSNHLI2 Entry Point address SSID DS CL4 DB2 Subsystem ID. CONNECT parameter PLAN DS CL8 DB2 Plan name. OPEN parameter TRMOP DS CL4 CLOSE termination option (SYNC|ABRT) FUNCTN DS CL12 CAF function to be called RIBPTR DS F DB2 puts Release Info Block addr here RETCODE DS F Chekcode saves R15 here REASCODE DS F Chekcode saves R0 here CONTROL DS CL8 GO, SHUTDOWN, or RESTART SAVEAREA DS 18F Save area for CHEKCODE ****************************** CONSTANTS ****************************** SHUTDOWN DC CL8SHUTDOWN CONTROL value: Shutdown execution
RESTART DC CL8RESTART CONTROL value: Restart execution CONTINUE DC CL8CONTINUE CONTROL value: Everything OK, cont CODE0 DC F0 SQLCODE of 0 CODE100 DC F100 SQLCODE of 100 QUIESCE DC XL3000008 TECB postcode: STOP DB2 MODE=QUIESCE CONNECT DC CL12CONNECT Name of a CAF service. Must be CL12! OPEN DC CL12OPEN Name of a CAF service. Must be CL12! CLOSE DC CL12CLOSE Name of a CAF service. Must be CL12! DISCON DC CL12DISCONNECT Name of a CAF service. Must be CL12! TRANSLAT DC CL12TRANSLATE Name of a CAF service. Must be CL12! SYNC DC CL4SYNC Termination option (COMMIT) ABRT DC CL4ABRT Termination option (ROLLBACK) ****************************** RETURN CODES (R15) FROM CALL ATTACH **** ZERO DC F0 0 FOUR DC F4 4 EIGHT DC F8 8 TWELVE DC F12 12 (Call Attach return code in R15) NUM200 DC F200 200 (User error) NUM204 DC F204 204 (Call Attach system error) ****************************** REASON CODES (R00) FROM CALL ATTACH **** C10205 DC XL400C10205 Call attach could not TRANSLATE C10831 DC XL400C10831 Call attach found a release mismatch C10824 DC XL400C10824 Call attach ready for more input F30002 DC XL400F30002 DB2 subsystem not up F30011 DC XL400F30011 DB2 subsystem not up F30012 DC XL400F30012 DB2 subsystem not up F30025 DC XL400F30025 DB2 is stopping (REASCODE) * * Insert more codes here as necessary for your application * ****************************** SQLCA and RIB ************************** EXEC SQL INCLUDE SQLCA DSNDRIB Get the DB2 Release Information Block ****************************** CALL macro parm list ******************* CAFCALL CALL ,(*,*,*,*,*,*,*,*,*),VL,MF=L
Procedure
To invoke RRSAF: 1. Perform one of the following actions: v Explicitly invoke RRSAF by including in your program CALL DSNRLI statements with the appropriate options. The first option is an RRSAF connection function, which describes the action that you want RRSAF to take. The effect of any function depends in part on what functions the program has already performed. To code RRSAF functions in C, COBOL, Fortran, or PL/I, follow the individual language's rules for making calls to assembler language routines. Specify the return code and reason code parameters in the parameter list for each RRSAF call. Requirement: For C, C++, and PL/I applications, you must also include in your program the compiler directives that are listed in the following table, because DSNRLI is an assembler language program.
Table 18. Compiler directives to include in C, C++, and PL/I applications that contain CALL DSNRLI statements
Language   Compiler directive to include
C          #pragma linkage(dsnrli, OS)
C++        extern "OS" { int DSNRLI( char * functn, ...); }
PL/I       DCL DSNRLI ENTRY OPTIONS(ASM,INTER,RETCODE);
v Implicitly invoke RRSAF by including SQL statements or IFI calls in your program just as you would in any program. The RRSAF facility establishes the connection to DB2 with the default values for the subsystem name, plan name and authorization ID. Restriction: If your program can make its first SQL call from different modules with different DBRMs, you cannot use a default plan name and thus, you cannot implicitly invoke RRSAF. Instead, you must explicitly invoke RRSAF by calling the CREATE THREAD function. Requirement: If your application includes both SQL and IFI calls, you must issue at least one SQL call before you issue any IFI calls. This action ensures that your application uses the correct plan. 2. If you implicitly invoked RRSAF, determine if the implicit connection was successful by examining the return code and reason code immediately after the
first executable SQL statement within the application program. Your program can check these codes by performing one of the following actions: v Examine registers 0 and 15 directly. v Examine the SQLCA, and if the SQLCODE is -981, obtain the return and reason code from the message text. The return code is the first token, and the reason code is the second token. If the implicit connection is successful, the application can examine the SQLCODE for the first, and subsequent, SQL statements.
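The following embedded SQL C sketch illustrates the second approach. The function and helper names are assumptions for illustration only; the -981 SQLCODE and the meaning of the first two message tokens are described above.

   /* Sketch: test for an implicit RRSAF connection failure after the first */
   /* executable SQL statement in the program.                              */
   EXEC SQL INCLUDE SQLCA;

   void check_implicit_connection(void)
   {
       /* ... the first executable SQL statement has just run ... */

       if (SQLCODE == -981) {
           /* The -981 message tokens carry the codes: the first token is   */
           /* the return code and the second token is the reason code.      */
           handle_rrsaf_failure(sqlca.sqlerrmc, sqlca.sqlerrml);  /* hypothetical helper */
       }
       /* Otherwise the implicit connection succeeded; examine SQLCODE for  */
       /* this and subsequent statements as usual.                          */
   }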
v Coordinate DB2 updates with updates made by all other resource managers that also use z/OS RRS in a z/OS system.
v Use the z/OS System Authorization Facility and an external security product, such as RACF, to sign on to DB2 with the authorization ID of an end user.
v Sign on to DB2 using a new authorization ID and an existing connection and plan.
v Access DB2 from multiple z/OS tasks in an address space.
v Switch a DB2 thread among z/OS tasks within a single address space.
v Access the DB2 IFI.
v Run with or without the TSO terminal monitor program (TMP).
v Run without being a subtask of the DSN command processor (or of any DB2 code).
v Run above or below the 16-MB line.
v Establish an explicit connection to DB2, through a call interface, with control over the exact state of the connection.
v Establish an implicit connection to DB2 (with a default subsystem identifier and a default plan name) by using SQL statements or IFI calls without first calling RRSAF.
v Supply event control blocks (ECBs), for DB2 to post, that signal start-up or termination.
v Intercept return codes, reason codes, and abend codes from DB2 and translate them into messages as required.
RRSAF uses z/OS Transaction Management and Recoverable Resource Manager Services (z/OS RRS). Any task in an address space can establish a connection to DB2 through RRSAF. Each task control block (TCB) can have only one connection to DB2. A DB2 service request that is issued by a program that runs under a given task is associated with that task's connection to DB2. The service request operates independently of any DB2 activity under any other task. Each connected task can run a plan. Tasks within a single address space can specify the same plan, but each instance of a plan runs independently from the others. A task can terminate its plan and run a different plan without completely breaking its connection to DB2. RRSAF does not generate task structures. When you design your application, consider that using multiple simultaneous connections can increase the possibility of deadlocks and DB2 resource contention. Restriction: RRSAF does not provide attention processing exits or functional recovery routines. You can provide whatever attention handling and functional recovery your application needs, but you must use ESTAE/ESTAI type recovery routines only. A tracing facility provides diagnostic messages that help you debug programs and diagnose errors in the RRSAF code. The trace information is available only in a SYSABEND or SYSUDUMP dump.
To commit work in RRSAF applications, use the CPIC SRRCMIT function or the DB2 COMMIT statement. To roll back work, use the CPIC SRRBACK function or the DB2 ROLLBACK statement.
Use the following guidelines to decide whether to use the DB2 statements or the CPIC functions for commit and rollback operations:
v Use DB2 COMMIT and ROLLBACK statements when all of the following conditions are true:
  - The only recoverable resource that is accessed by your application is DB2 data that is managed by a single DB2 instance. DB2 COMMIT and ROLLBACK statements fail if your RRSAF application accesses recoverable resources other than DB2 data that is managed by a single DB2 instance.
  - The address space from which syncpoint processing is initiated is the same as the address space that is connected to DB2.
v If your application accesses other recoverable resources, or syncpoint processing and DB2 access are initiated from different address spaces, use SRRCMIT and SRRBACK.
Related reference:
COMMIT (DB2 SQL)
ROLLBACK (DB2 SQL)
Related information:
Using Protected Resources (MVS Programming: Callable Services for High-Level Languages)
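The following embedded SQL C sketch illustrates the first guideline above; the function name is an assumption, and it applies only when DB2 data under a single DB2 instance is the only recoverable resource.

   /* Sketch: commit DB2-only work with the SQL COMMIT statement.           */
   EXEC SQL INCLUDE SQLCA;

   void commit_db2_only_work(void)
   {
       /* ... SQL statements that change DB2 data ... */

       EXEC SQL COMMIT;
       if (SQLCODE < 0) {
           EXEC SQL ROLLBACK;      /* back out the unit of recovery         */
       }
       /* If the application also updated other RRS-managed resources, or   */
       /* if syncpoint processing runs in a different address space, it     */
       /* would call SRRCMIT or SRRBACK instead of COMMIT or ROLLBACK.      */
   }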
Table 19. Properties of RRSAF connections
Property: Connection type
Value: RRSAF
Table 19. Properties of RRSAF connections (continued)
Property: Authorization ID
Value: Authorization IDs that are associated with each DB2 connection
Comments: A connection must have a primary ID and can have one or more secondary IDs. Those identifiers are used for the following purposes:
v Validating access to DB2
v Checking privileges on DB2 objects
v Assigning ownership of DB2 objects
v Identifying the user of a connection for audit, performance, and accounting traces.
RRSAF relies on the z/OS System Authorization Facility (SAF) and a security product, such as RACF, to verify and authorize the authorization IDs. An application that connects to DB2 through RRSAF must pass those identifiers to SAF for verification and authorization checking. RRSAF retrieves the identifiers from SAF.
A location can provide an authorization exit routine for a DB2 connection to change the authorization IDs and to indicate whether the connection is allowed. The actual values that are assigned to the primary and secondary authorization IDs can differ from the values that are provided by a SIGNON or AUTH SIGNON request. A site's DB2 signon exit routine can access the primary and secondary authorization IDs and can modify the IDs to satisfy the site's security requirements. The exit routine can also indicate whether the signon request should be accepted.
Table 19. Properties of RRSAF connections (continued)
Property: Scope
Value: RRSAF processes connections as if each task is entirely isolated. When a task requests a function, RRSAF passes the function to DB2, regardless of the connection status of other tasks in the address space. However, the application program and the DB2 subsystem have access to the connection status of multiple tasks in an address space.
Comments: None.
If an application that is connected to DB2 through RRSAF terminates normally before the TERMINATE THREAD or TERMINATE IDENTIFY functions deallocate the plan, RRS commits any changes made after the last commit point. If the application terminates abnormally before the TERMINATE THREAD or TERMINATE IDENTIFY functions deallocate the plan, z/OS RRS rolls back any changes made after the last commit point. In either case, DB2 deallocates the plan, if necessary, and terminates the application's connection. If DB2 abends while an application is running, DB2 rolls back changes to the last commit point. If DB2 terminates while processing a commit request, DB2 either commits or rolls back any changes at the next restart. The action taken depends on the state of the commit request when DB2 terminates.
Procedure
To make DSNRLI available: 1. Decide which of the following methods you want to use to make DSNRLI available: v Explicitly issuing LOAD requests when your program runs.
By explicitly loading the DSNRLI module, you can isolate the maintenance of your application from future IBM maintenance to the language interface. If the language interface changes, the change will probably not affect your load module. v Including the DSNRLI module in your load module when you link-edit your program. A disadvantage of link-editing DSNRLI into your load module is that if IBM makes a change to DSNRLI, you must link-edit your program again. 2. Depending on the method that you chose in step 1, perform one of the following actions: v If you want to explicitly issue LOAD requests when your program runs: In your program, issue z/OS LOAD service requests for entry points DSNRLI and DSNHLIR. If you use IFI services, you must also load DSNWLIR. Save the entry point address that LOAD returns and use it in the CALL macro. Indicate to DB2 which entry point to use in one of the following two ways: Specify the precompiler option ATTACH(RRSAF). This option causes DB2 to generate calls that specify entry point DSNHLIR. Restriction: You cannot use this option if your application is written in Fortran. Code a dummy entry point named DSNHLI within your load module. If you do not specify the precompiler option ATTACH, the DB2 precompiler generates calls to entry point DSNHLI for each SQL request. The precompiler does not know about and is independent of the different DB2 attachment facilities. When the calls that are generated by the DB2 precompiler pass control to DSNHLI, your code that corresponds to the dummy entry point must preserve the option list that is passed in register 1 and call DSNHLIR with the same option list. v If you want to include the DSNRLI module in your load module when you link-edit your program: Include DSNRLI in your load module during a link-edit step. For example, you can use a linkage editor control statement that is similar to the following statement in your JCL:
INCLUDE DB2LIB(DSNRLI).
By coding this statement, you avoid inadvertently picking up the wrong language interface module. When you include the DSNRLI module during the link-edit, do not include a dummy DSNHLI entry point in your program or specify the precompiler option ATTACH. Module DSNRLI contains an entry point for DSNHLI, which is identical to DSNHLIR, and an entry point for DSNWLI, which is identical to DSNWLIR.
Related concepts: Program examples for RRSAF on page 118 Related tasks: Making the CAF language interface (DSNALI) available on page 47
parameters for explicit connection requests. Defaults are provided only for implicit connections. All parameters starting with the return code parameter are optional. When you want to use the default value for a parameter but specify subsequent parameters, code the CALL DSNRLI statement as follows:
v For C, you need to specify the address of every parameter, using the "address of" operator (&), and not the parameter itself. For example, to pass the pklistptr parameter on the CREATE THREAD call, specify the address of the 4-byte pointer to the structure (&pklistptr):
fnret=dsnrli(&crthrdfn[0], &plan[0], &collid[0], &reuse[0], &retcode, &reascode, &pklistptr);
v For all languages except assembler language, code zero for that parameter in the CALL DSNRLI statement. For example, suppose that you are coding an IDENTIFY call in a COBOL program, and you want to specify all parameters except the return code parameter. You can write a statement similar to the following statement:
CALL DSNRLI USING IDFYFN SSNM RIBPTR EIBPTR TERMECB STARTECB BY CONTENT ZERO BY REFERENCE REASCODE.
v For assembler language, code a comma for that parameter in the CALL DSNRLI statement. For example, suppose that you are coding an IDENTIFY call, and you want to specify all parameters except the return code parameter. You can write a statement similar to the following statement:
CALL DSNRLI,(IDFYFN,SSNM,RIBPTR,EIBPTR,TERMECB,STARTECB,,REASCODE)
For assembler programs that invoke RRSAF, use a standard parameter list for a z/OS CALL. Register 1 must contain the address of a list of pointers to the parameters. Each pointer is a 4-byte address. The last address must contain the value 1 in the high-order bit.
Table 21. Effect of call order when next call is IDENTIFY, SWITCH TO, SIGNON, or CREATE THREAD Next function Previous function Empty: first call IDENTIFY SWITCH TO SIGNON, AUTH SIGNON, or CONTEXT SIGNON CREATE THREAD TERMINATE THREAD IFI SQL SRRCMIT or SRRBACK Notes: 1. Errors are identified by the DB2 reason code that RRSAF returns. 2. Signon means either the SIGNON function, the AUTH SIGNON function, or the CONTEXT SIGNON function. 3. The SIGNON, AUTH SIGNON, or CONTEXT SIGNON functions are not allowed if any SQL operations are requested after the CREATE THREAD function or after the last SRRCMIT or SRRBACK request. IDENTIFY IDENTIFY X'00F30049'1 IDENTIFY X'00F30049'
1
SWITCH TO X'00C12205'1 Switch to ssnm Switch to ssnm Switch to ssnm Switch to ssnm Switch to ssnm Switch to ssnm Switch to ssnm Switch to ssnm
SIGNON, AUTH SIGNON, or CONTEXT SIGNON CREATE THREAD X'00C12204'1 Signon Signon Signon Signon Signon Signon
2 2 2
X'00C12204'1 X'00C12217'1 CREATE THREAD CREATE THREAD X'00C12202'1 CREATE THREAD X'00C12202'1 X'00C12202'1 X'00C12202'1
2 2 2
X'00F30049'1 X'00F30049'1
X'00F30092'13 Signon
2
The following table summarizes RRSAF behavior when the next call is an SQL statement, an IFI call, or a call to the TERMINATE THREAD, TERMINATE IDENTIFY, or TRANSLATE function.
Table 22. Effect of call order when next call is SQL or IFI, TERMINATE THREAD, TERMINATE IDENTIFY, or TRANSLATE Next function Previous function Empty: first call IDENTIFY SWITCH TO SIGNON, AUTH SIGNON, or CONTEXT SIGNON CREATE THREAD TERMINATE THREAD IFI SQL SRRCMIT or SRRBACK SQL or IFI SQL or IFI call
4
TERMINATE THREAD TERMINATE IDENTIFY TRANSLATE X'00C12204'1 X'00C12203'1 TERMINATE THREAD TERMINATE THREAD TERMINATE THREAD X'00C12203'
1
X'00C12204'1 TERMINATE IDENTIFY TERMINATE IDENTIFY TERMINATE IDENTIFY TERMINATE IDENTIFY TERMINATE IDENTIFY TERMINATE IDENTIFY X'00F30093'
13
SQL or IFI call4 SQL or IFI call SQL or IFI call SQL or IFI call SQL or IFI call
4 4 4 4
TERMINATE THREAD
TERMINATE IDENTIFY
Table 22. Effect of call order when next call is SQL or IFI, TERMINATE THREAD, TERMINATE IDENTIFY, or TRANSLATE (continued) Next function Previous function Notes: 1. Errors are identified by the DB2 reason code that RRSAF returns. 2. TERMINATE THREAD is not allowed if any SQL operations are requested after the CREATE THREAD function or after the last SRRCMIT or SRRBACK request. 3. TERMINATE IDENTIFY is not allowed if any SQL operations are requested after the CREATE THREAD function or after the last SRRCMIT or SRRBACK request. 4. If you are using an implicit connection to RRSAF and issue SQL or IFI calls, RRSAF issues implicit IDENTIFY and CREATE THREAD requests. If you continue with explicit RRSAF statements, you must follow the standard order of explicit RRSAF calls. Implicitly connecting to RRSAF does not cause an implicit SIGNON request. Therefore, you might need to issue an explicit SIGNON request to satisfy the standard order requirement. For example, an SQL statement followed by an explicit TERMINATE THREAD request results in an error. You must issue an explicit SIGNON request before issuing the TERMINATE THREAD request. SQL or IFI TERMINATE THREAD TERMINATE IDENTIFY TRANSLATE
Related concepts: X'C1......' codes (DB2 Codes) X'F3......' codes (DB2 Codes)
The following shows the syntax for the IDENTIFY function:
DSNRLI ( function, ssnm, ribptr, eibptr, termecb, startecb, retcode, reascode, groupoverride, decpptr )
The parameters from retcode on are optional.
Parameters point to the following areas: function An 18-byte area that contains IDENTIFY followed by 10 blanks. ssnm A 4-byte DB2 subsystem name or group attachment name (if used in a data sharing group) to which the connection is made. If ssnm is less than four characters long, pad it on the right with blanks to a length of four characters. ribptr A 4-byte area in which RRSAF places the address of the release information block (RIB) after the call. You can use the RIB to determine the release level of the DB2 subsystem to which the application is connected. You can determine the modification level within the release level by examining the RIBCNUMB and RIBCINFO fields. If the value in the RIBCNUMB field is greater than zero, check the RIBCINFO field for modification levels. If the RIB is not available (for example, if ssnm names a subsystem that does not exist), DB2 sets the 4-byte area to zeros. The area to which ribptr points is below the 16-MB line. This parameter is required. However, the application does not need to refer to the returned information. eibptr A 4-byte area in which RRSAF places the address of the environment information block (EIB) after the call. The EIB contains environment information, such as the data sharing group, the name of the DB2 member to which the IDENTIFY request was issued, and whether the subsystem is in new-function mode. If the DB2 subsystem is not in a data sharing group, RRSAF sets the data sharing group and member names to blanks. If the EIB is not available (for example, if ssnm names a subsystem that does not exist), RRSAF sets the 4-byte area to zeros. The area to which eibptr points is above the 16-MB line. This parameter is required. However, the application does not need to refer to the returned information. termecb The address of the application's event control block (ECB) that is used for DB2 termination. DB2 posts this ECB when the system operator enters the STOP DB2 command or when DB2 is terminating abnormally. Specify a value of 0 if you do not want to use a termination ECB.
RRSAF puts a POST code in the ECB to indicate the type of termination as shown in the following table.
Table 23. Post codes for types of DB2 termination
POST code   Termination type
8           QUIESCE
12          FORCE
16          ABTERM
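As a sketch only, a C program that supplied termecb could interpret the post code as follows. The function name and return values are assumptions, and masking the low-order three bytes mirrors the 3-byte comparison used in the CAF assembler example earlier in this chapter.

   /* Sketch: interpret the DB2 termination ECB post code (see Table 23).   */
   int db2_termination_type(unsigned int termecb)
   {
       unsigned int postcode = termecb & 0x00FFFFFF;  /* low-order bytes of the ECB */

       switch (postcode) {
       case 8:  return 1;      /* QUIESCE */
       case 12: return 2;      /* FORCE   */
       case 16: return 3;      /* ABTERM  */
       default: return 0;      /* not a recognized termination post code    */
       }
   }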
startecb The address of the application's startup ECB. If DB2 has not started when the application issues the IDENTIFY call, DB2 posts the ECB when DB2 has started. Enter a value of zero if you do not want to use a startup ECB. DB2 posts no more than one startup ECB per address space. The ECB that is posted is associated with the most recent IDENTIFY call from that address space. The application program must examine any nonzero RRSAF or DB2 reason codes before issuing a WAIT request on this ECB. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places a reason code. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify reascode, you must also specify retcode or its default. You can specify a default for retcode by specifying a comma or zero, depending on the language. | | | | | | | | | | | | | | | | | | groupoverride An 8-byte area that the application provides. This parameter is optional. If you do not want group attach to be attempted, specify 'NOGROUP'. This string indicates that the subsystem name that is specified by ssnm is to be used as a DB2 subsystem name, even if ssnm matches a group attachment name. If groupoverride is not provided, ssnm is used as the group attachment name if it matches a group attachment name. If you specify this parameter in any language except assembler, you must also specify the retcode and reascode parameters. In assembler language, you can omit the retcode and reascode parameters by specifying commas as place-holders. Recommendation: Avoid using the groupoverride parameter when possible, because it limits the ability to do dynamic workload routing in a Parallel Sysplex. However, you should use this parameter in a data sharing environment when you want to connect to a specific member of a data sharing group, and the subsystem name of that member is the same as the group attachment name. decpptr
A 4-byte area in which RRSAF is to put the address of the DSNHDECP control block that was loaded by subsystem ssnm when that subsystem was started. This 4-byte area is a 31-bit pointer. If ssnm is not found, the 4-byte area is set to 0. The area to which decpptr points is above the 16-MB line. If you specify this parameter in any language except assembler, you must also specify the retcode, reascode, and groupoverride parameters. In assembler language, you can omit the retcode, reascode, and groupoverride parameters by specifying commas as placeholders.
The following examples show an IDENTIFY call in each language:
C
   fnret=dsnrli(&idfyfn[0],&ssnm[0], &ribptr, &eibptr, &termecb, &startecb, &retcode, &reascode,&grpover[0],&decpptr);
COBOL
   CALL DSNRLI USING IDFYFN SSNM RIBPTR EIBPTR TERMECB STARTECB RETCODE REASCODE GRPOVER DECPPTR.
Fortran
   CALL DSNRLI(IDFYFN,SSNM,RIBPTR,EIBPTR,TERMECB,STARTECB, RETCODE,REASCODE,GRPOVER,DECPPTR)
PL/I
   CALL DSNRLI(IDFYFN,SSNM,RIBPTR,EIBPTR,TERMECB,STARTECB, RETCODE,REASCODE,GRPOVER,DECPPTR);
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF.
LABEL with RACF by using the RACROUTE VERIFY request. This security label is used to verify multi-level security for SYSTEM AUTHID. If a matching trusted context is defined, DB2 establishes the connection as trusted. Otherwise, the connection is established without any additional privileges.
4. DB2 then sets the connection name to RRSAF and the connection type to RRSAF.
Related tasks: Invoking the Resource Recovery Services attachment facility on page 72
Parameters point to the following areas: function An 18-byte area that contains SWITCH TO followed by nine blanks. ssnm A 4-byte DB2 subsystem name or group attachment name (if used in a data sharing group) to which the connection is made. If ssnm is less than four characters long, pad it on the right with blanks to a length of four characters. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode. | | | | | | | | | | | | | | | | groupoverride An 8-byte area that the application provides. This parameter is optional. If you do not want group attach to be attempted, specify 'NOGROUP'. This string indicates that the subsystem name that is specified by ssnm is to be used as a DB2 subsystem name, even if ssnm matches a group attachment name. If groupoverride is not provided, ssnm is used as the group attachment name if it matches a group attachment name. If you specify this parameter in any language except assembler, you must also specify the retcode and reascode parameters. In assembler language, you can omit the retcode and reascode parameters by specifying commas as place-holders. Recommendation: Avoid using the groupoverride parameter when possible, because it limits the ability to do dynamic workload routing in a Parallel Sysplex. However, you should use this parameter in a data sharing environment when you want to connect to a specific member of a data sharing group, and the subsystem name of that member is the same as the group attachment name.
Examples
Examples of RRSAF SWITCH TO calls: The following table shows a SWITCH TO call in each language.
Table 25. Examples of RRSAF SWITCH TO calls
Language    Call example
Assembler   CALL DSNRLI,(SWITCHFN,SSNM,RETCODE,REASCODE,GRPOVER)
C (1)       fnret=dsnrli(&switchfn[0], &ssnm[0], &retcode, &reascode,&grpover[0]);
COBOL       CALL DSNRLI USING SWITCHFN RETCODE REASCODE GRPOVER.
Fortran     CALL DSNRLI(SWITCHFN,RETCODE,REASCODE,GRPOVER)
PL/I (1)    CALL DSNRLI(SWITCHFN,RETCODE,REASCODE,GRPOVER);
Note:
1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF.
Example of using the SWITCH TO function to interact with multiple DB2 subsystems: The following example shows how you can use the SWITCH TO function to interact with three DB2 subsystems.
RRSAF calls for subsystem db21:
IDENTIFY
SIGNON
CREATE THREAD
Execute SQL on subsystem db21
SWITCH TO db22
IF retcode = 4 AND reascode = 00C12205X THEN
DO;
RRSAF calls on subsystem db22:
IDENTIFY
SIGNON
CREATE THREAD
END;
Execute SQL on subsystem db22
SWITCH TO db23
IF retcode = 4 AND reascode = 00C12205X THEN
DO;
RRSAF calls on subsystem db23:
IDENTIFY
SIGNON
CREATE THREAD
END;
Execute SQL on subsystem 23
SWITCH TO db21
Execute SQL on subsystem 21
SWITCH TO db22
Execute SQL on subsystem 22
SWITCH TO db21
Execute SQL on subsystem 21
SRRCMIT (to commit the UR)
SWITCH TO db23
Execute SQL on subsystem 23
SWITCH TO db22
Execute SQL on subsystem 22
SWITCH TO db21
Execute SQL on subsystem 21
SRRCMIT (to commit the UR)
Related tasks: Invoking the Resource Recovery Services attachment facility on page 72
with RACF by using the RACROUTE VERIFY request. This security label is used to verify multi-level security for SYSTEM AUTHID. The following diagram shows the syntax for the SIGNON function.
Parameters point to the following areas: function An 18-byte area that contains SIGNON followed by twelve blanks. correlation-id A 12-byte area in which you can put a DB2 correlation ID. The correlation ID is displayed in DB2 accounting and statistics trace records. You can use the correlation ID to correlate work units. This token appears in the output from the DISPLAY THREAD command. If you do not want to specify a correlation ID, fill the 12-byte area with blanks. accounting-token A 22-byte area in which you can put a value for a DB2 accounting token. This value is displayed in DB2 accounting and statistics trace records in the QWHCTOKN field, which is mapped by DSNDQWHC DSECT. Setting the value of the accounting token sets the value of the CURRENT CLIENT_ACCTNG special register. If accounting-token is less than 22 characters long, you must pad it on the right with blanks to a length of 22 characters. If you do not want to specify an accounting token, fill the 22-byte area with blanks. Alternatively, you change the value of the DB2 accounting token with RRSAF functions AUTH SIGNON, CONTEXT SIGNON or SET_CLIENT_ID. You can retrieve the DB2 accounting token with the CURRENT CLIENT_ACCTNG special register only if the DDF accounting string is not set. accounting-interval A 6-byte area that specifies when DB2 writes an accounting record. | | | | | | | If you specify COMMIT in that area, DB2 writes an accounting record each time that the application issues SRRCMIT without open held cursors. If the accounting interval is COMMIT and an SRRCMIT is issued while a held cursor is open, the accounting interval spans that commit and ends at the next valid accounting interval end point (such as the next SRRCMIT that is issued without open held cursors, application termination, or SIGNON with a new authorization ID). If you specify any other value, DB2 writes an accounting record when the application terminates or when you call the SIGNON function with a new authorization ID.
retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify this parameter, you must also specify retcode. user A 16-byte area that contains the user ID of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays this user ID in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. Setting the user ID sets the value of the CURRENT CLIENT_USERID special register. If user is less than 16 characters long, you must pad it on the right with blanks to a length of 16 characters. This parameter is optional. If you specify user, you must also specify retcode and reascode. If you do not specify user, no user ID is associated with the connection. appl A 32-byte area that contains the application or transaction name of the end user's application. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the application name in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. Setting the application name sets the value of the CURRENT CLIENT_APPLNAME special register. If appl is less than 32 characters long, you must pad it on the right with blanks to a length of 32 characters. This parameter is optional. If you specify appl, you must also specify retcode, reascode, and user. If you do not specify appl, no application or transaction is associated with the connection. ws An 18-byte area that contains the workstation name of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the workstation name in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. Setting the workstation name sets the value of the CURRENT CLIENT_WRKSTNNAME special register. If ws is less than 18 characters long, you must pad it on the right with blanks to a length of 18 characters. This field is optional. If you specify ws, you must also specify retcode, reascode, user, and appl. If you do not specify ws, no workstation name is associated with the connection. xid A 4-byte area that indicates whether the thread is part of a global transaction. A DB2 thread that is part of a global transaction can share locks with other DB2 threads that are part of the same global transaction and can access and modify the same data. A global transaction exists until one of the threads that is part of the global transaction is committed or rolled back. You can specify one of the following values for xid:
0
   Indicates that the thread is not part of a global transaction. The value 0 must be specified as a binary integer.
1
   Indicates that the thread is part of a global transaction and that DB2 should retrieve the global transaction ID from RRS. If a global transaction ID already exists for the task, the thread becomes part of the associated global transaction. Otherwise, RRS generates a new global transaction ID. The value 1 must be specified as a binary integer. Alternatively, if you want DB2 to return the generated global transaction ID to the caller, specify an address instead of 1.
address The 4-byte address of an area in which you enter a global transaction ID for the thread. If the global transaction ID already exists, the thread becomes part of the associated global transaction. Otherwise, RRS creates a new global transaction with the ID that you specify. Alternatively, if you want DB2 to generate and return a global transaction ID, pass the address of a null global transaction ID by setting the format ID field of the global transaction ID to binary -1 ('FFFFFFF'X). DB2 then replaces the contents of the area with the generated transaction ID. The area at the specified address must be in writable storage and have a length of at least 140 bytes to accommodate the largest possible transaction ID value. The following table shows the format of a global transaction ID.
Table 26. Format of a user-created global transaction ID
Field description                        Length in bytes   Data type
Format ID                                4                 Integer
Global transaction ID length (1 - 64)    4                 Integer
Branch qualifier length (1 - 64)         4                 Integer
Global transaction ID                    1 to 64           Character
Branch qualifier                         0 to 64           Character
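As an illustration of this layout, the following C structure is a sketch; the type and field names are assumptions, not DB2-provided definitions. The three 4-byte integers plus the two 64-byte maximum character fields account for the 140-byte minimum area described above.

   /* Sketch of the global transaction ID layout shown in Table 26.         */
   struct global_xid {
       int  format_id;        /* set to binary -1 to have DB2 generate an ID */
       int  gtrid_length;     /* global transaction ID length, 1 - 64        */
       int  bqual_length;     /* branch qualifier length, 1 - 64             */
       char gtrid[64];        /* global transaction ID                       */
       char bqual[64];        /* branch qualifier                            */
   };                         /* 4 + 4 + 4 + 64 + 64 = 140 bytes             */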
accounting-string A one-byte length field and a 255-byte area in which you can put a value for a DB2 accounting string. This value is placed in the DDF accounting trace records in the QMDASQLI field, which is mapped by DSNDQMDA DSECT. If accounting-string is less than 255 characters, you must pad it on the right with zeros to a length of 255 bytes. The entire 256 bytes is mapped by DSNDQMDA DSECT. This parameter is optional. If you specify accounting-string, you must also specify retcode, reascode, user, appl and xid. If you do not specify accounting-string, no accounting string is associated with the connection. You can also change the value of the accounting string with RRSAF functions AUTH SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID. You can retrieve the DDF suffix portion of the accounting string with the CURRENT CLIENT_ACCTNG special register. The suffix portion of accounting-string can contain a maximum of 200 characters. The QMDASFLN field contains the accounting suffix length, and the QMDASUFX field contains the accounting suffix value. If the DDF accounting string is set, you cannot query the accounting token with the CURRENT CLIENT_ACCTNG special register.
The following examples show a SIGNON call in each language:
C
   fnret=dsnrli(&sgnonfn[0], &corrid[0], &accttkn[0], &acctint[0], &retcode, &reascode, &userid[0], &applname[0], &wsname[0], &xidptr);
COBOL
   CALL DSNRLI USING SGNONFN CORRID ACCTTKN ACCTINT RETCODE REASCODE USERID APPLNAME WSNAME XIDPTR.
Fortran
   CALL DSNRLI(SGNONFN,CORRID,ACCTTKN,ACCTINT, RETCODE,REASCODE,USERID,APPLNAME,WSNAME,XIDPTR)
PL/I
   CALL DSNRLI(SGNONFN,CORRID,ACCTTKN,ACCTINT, RETCODE,REASCODE,USERID,APPLNAME,WSNAME,XIDPTR);
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF. Related tasks: Invoking the Resource Recovery Services attachment facility on page 72 Related reference: RACROUTE REQUEST=VERIFY: Identify and verify a RACF-defined user (Security Server RACROUTE Macro Reference)
The following shows the syntax for the AUTH SIGNON function:
DSNRLI ( function, correlation-id, accounting-token, accounting-interval, primary-authid, ACEE-address, secondary-authid, retcode, reascode, user, appl, ws, xid, accounting-string )
The parameters from retcode on are optional.
Parameters point to the following areas: function An 18-byte area that contains AUTH SIGNON followed by seven blanks. correlation-id A 12-byte area in which you can put a DB2 correlation ID. The correlation ID is displayed in DB2 accounting and statistics trace records. You can use the correlation ID to correlate work units. This token appears in output from the DISPLAY THREAD command. If you do not want to specify a correlation ID, fill the 12-byte area with blanks. accounting-token A 22-byte area in which you can put a value for a DB2 accounting token. This value is displayed in DB2 accounting and statistics trace records in the QWHCTOKN field, which is mapped by DSNDQWHC DSECT. Setting the value of the accounting token sets the value of the CURRENT CLIENT_ACCTNG special register. If accounting-token is less than 22 characters long, you must pad it on the right with blanks to a length of 22 characters. If you do not want to specify an accounting token, fill the 22-byte area with blanks. You can also change the value of the DB2 accounting token with RRSAF functions SIGNON, CONTEXT SIGNON or SET_CLIENT_ID. You can retrieve the DB2 accounting token with the CURRENT CLIENT_ACCTNG special register only if the DDF accounting string is not set. accounting-interval A 6-byte area with that specifies when DB2 writes an accounting record. | | | | | | | If you specify COMMIT in that area, DB2 writes an accounting record each time that the application issues SRRCMIT without open held cursors. If the accounting interval is COMMIT and an SRRCMIT is issued while a held cursor is open, the accounting interval spans that commit and ends at the next valid accounting interval end point (such as the next SRRCMIT that is issued without open held cursors, application termination, or SIGNON with a new authorization ID). If you specify any other value, DB2 writes an accounting record when the application terminates or when you call the SIGNON function with a new authorization ID. primary-authid An 8-byte area in which you can put a primary authorization ID. If you are not passing the authorization ID to DB2 explicitly, put X'00' or a blank in the first byte of the area.
ACEE-address The 4-byte address of an ACEE that you pass to DB2. If you do not want to provide an ACEE, specify 0 in this field. secondary-authid An 8-byte area in which you can put a secondary authorization ID. If you do not pass the authorization ID to DB2 explicitly, put X'00' or a blank in the first byte of the area. If you enter a secondary authorization ID, you must also enter a primary authorization ID. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify reascoder, you must also specify retcode. user A 16-byte area that contains the user ID of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays this user ID in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. Setting the user ID sets the value of the CURRENT CLIENT_USERID special register. If user is less than 16 characters long, you must pad it on the right with blanks to a length of 16 characters. This parameter is optional. If you specify user, you must also specify retcode and reascode. If you do not specify this parameter, no user ID is associated with the connection. appl A 32-byte area that contains the application or transaction name of the end user's application. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the application name in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. Setting the application name sets the value of the CURRENT CLIENT_APPLNAME special register. If appl is less than 32 characters long, you must pad it on the right with blanks to a length of 32 characters. This parameter is optional. If you specify appl, you must also specify retcode, reascode, and user. If you do not specify this parameter, no application or transaction is associated with the connection. ws An 18-byte area that contains the workstation name of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the workstation name in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. Setting the workstation name sets the value of the CURRENT CLIENT_WRKSTNNAME special register. If ws is less than 18 characters long, you must pad it on the right with blanks to a length of 18 characters.
This parameter is optional. If you specify ws, you must also specify retcode, reascode, user, and appl. If you do not specify this parameter, no workstation name is associated with the connection. You can also change the value of the workstation name with RRSAF functions SIGNON, CONTEXT SIGNON or SET_CLIENT_ID. You can retrieve the workstation name with the CURRENT CLIENT_WRKSTNNAME special register. xid A 4-byte area that indicates whether the thread is part of a global transaction. A DB2 thread that is part of a global transaction can share locks with other DB2 threads that are part of the same global transaction and can access and modify the same data. A global transaction exists until one of the threads that is part of the global transaction is committed or rolled back. You can specify one of the following values for xid: 0 1 Indicates that the thread is not part of a global transaction. The value 0 must be specified as a binary integer. Indicates that the thread is part of a global transaction and that DB2 should retrieve the global transaction ID from RRS. If a global transaction ID already exists for the task, the thread becomes part of the associated global transaction. Otherwise, RRS generates a new global transaction ID. The value 1 must be specified as a binary integer. Alternatively, if you want DB2 to return the generated global transaction ID to the caller, specify an address instead of 1.
address
The 4-byte address of an area into which you enter a global transaction ID for the thread. If the global transaction ID already exists, the thread becomes part of the associated global transaction. Otherwise, RRS creates a new global transaction with the ID that you specify. Alternatively, if you want DB2 to generate and return a global transaction ID, pass the address of a null global transaction ID by setting the format ID field of the global transaction ID to binary -1 ('FFFFFFFF'X). DB2 then replaces the contents of the area with the generated transaction ID. The area at the specified address must be in writable storage and have a length of at least 140 bytes to accommodate the largest possible transaction ID value. The format of a global transaction ID is shown in the description of the RRSAF SIGNON function.

accounting-string
A one-byte length field and a 255-byte area in which you can put a value for a DB2 accounting string. This value is placed in the DDF accounting trace records in the QMDASQLI field, which is mapped by DSNDQMDA DSECT. If accounting-string is less than 255 characters, you must pad it on the right with zeros to a length of 255 bytes. The entire 256 bytes is mapped by DSNDQMDA DSECT. This parameter is optional. If you specify this accounting-string, you must also specify retcode, reascode, user, appl, and xid. If you do not specify this parameter, no accounting string is associated with the connection. You can also change the value of the accounting string with RRSAF functions AUTH SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID. You can retrieve the DDF suffix portion of the accounting string with the CURRENT CLIENT_ACCTNG special register. The suffix portion of
accounting-string can contain a maximum of 200 characters. The QMDASFLN field contains the accounting suffix length, and the QMDASUFX field contains the accounting suffix value. If the DDF accounting string is set, you cannot query the accounting token with the CURRENT CLIENT_ACCTNG special register.
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF. Related tasks: Invoking the Resource Recovery Services attachment facility on page 72 Related reference: SIGNON function for RRSAF on page 91
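The examples table for AUTH SIGNON is not reproduced in this excerpt. As a rough sketch only, the following COBOL fragment passes the required parameters in the order in which they are described above; the field names (AUTHSGN, CORRID, and so on) and the values shown are hypothetical, and PIC X fields that are shorter than their VALUE literals are padded on the right with blanks by the compiler:

*    Hypothetical data areas for an AUTH SIGNON call
 01  AUTHSGN    PIC X(18) VALUE 'AUTH SIGNON'.
 01  CORRID     PIC X(12) VALUE SPACES.
 01  ACCTTKN    PIC X(22) VALUE SPACES.
 01  ACCTINT    PIC X(6)  VALUE 'COMMIT'.
 01  PAUTHID    PIC X(8)  VALUE 'PAYROLL'.
 01  ACEEPTR    PIC S9(9) COMP VALUE 0.
 01  SAUTHID    PIC X(8)  VALUE SPACES.
 01  RETCODE    PIC S9(9) COMP.
 01  REASCODE   PIC S9(9) COMP.
     ...
*    Pass the parameters in the order that the descriptions list them
     CALL 'DSNRLI' USING AUTHSGN CORRID ACCTTKN ACCTINT
                         PAUTHID ACEEPTR SAUTHID RETCODE REASCODE.

After the call, check RETCODE and REASCODE (or registers 15 and 0 if you omit those parameters), as described for the other RRSAF functions.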
ALET
A 4-byte area that can contain an ALET value. DB2 does not reference this area.
ACEE address A 4-byte area that contains an ACEE address or 0 if an ACEE is not provided. DB2 requires that the ACEE is in the home address space of the task. If you pass an ACEE address, the CONTEXT SIGNON function uses the value in ACEEGRPN as the secondary authorization ID if the length of the group name (ACEEGRPL) is not 0. primary-authid An 8-byte area that contains the primary authorization ID to be used. If the authorization ID is less than 8 bytes in length, pad it on the right with blank characters to a length of 8 bytes. If the new primary authorization ID is not different than the current primary authorization ID (which was established when the IDENTIFY function was invoked or at a previous SIGNON invocation), DB2 invokes only the signon exit. If the value has changed, DB2 establishes a new primary authorization ID and new SQL authorization ID and then invokes the signon exit. Generally, you issue a CONTEXT SIGNON call after an IDENTIFY call and before a CREATE THREAD call. You can also issue a CONTEXT SIGNON call if the application is at a point of consistency, and one of the following conditions is true: v The value of reuse in the CREATE THREAD call was RESET. v The value of reuse in the CREATE THREAD call was INITIAL, no held cursors are open, the package or plan is bound with KEEPDYNAMIC(NO), and all special registers are at their initial state. If open held cursors exist or the package or plan is bound with KEEPDYNAMIC(YES), a SIGNON call is permitted only if the primary authorization ID has not changed. The following diagram shows the syntax for the CONTEXT SIGNON function.
Parameters point to the following areas: function An 18-byte area that contains CONTEXT SIGNON followed by four blanks. correlation-id A 12-byte area in which you can put a DB2 correlation ID. The correlation ID is displayed in DB2 accounting and statistics trace records. You can use the
correlation ID to correlate work units. This token appears in output from the DISPLAY THREAD command. If you do not want to specify a correlation ID, fill the 12-byte area with blanks.

accounting-token
A 22-byte area in which you can put a value for a DB2 accounting token. This value is displayed in DB2 accounting and statistics trace records in the QWHCTOKN field, which is mapped by DSNDQWHC DSECT. Setting the value of the accounting token sets the value of the CURRENT CLIENT_ACCTNG special register. If accounting-token is less than 22 characters long, you must pad it on the right with blanks to a length of 22 characters. If you do not want to specify an accounting token, fill the 22-byte area with blanks. You can also change the value of the DB2 accounting token with RRSAF functions SIGNON, AUTH SIGNON, or SET_CLIENT_ID. You can retrieve the DB2 accounting token with the CURRENT CLIENT_ACCTNG special register only if the DDF accounting string is not set.

accounting-interval
A 6-byte area that specifies when DB2 writes an accounting record. If you specify COMMIT in that area, DB2 writes an accounting record each time that the application issues SRRCMIT without open held cursors. If the accounting interval is COMMIT and an SRRCMIT is issued while a held cursor is open, the accounting interval spans that commit and ends at the next valid accounting interval end point (such as the next SRRCMIT that is issued without open held cursors, application termination, or SIGNON with a new authorization ID). If you specify any other value, DB2 writes an accounting record when the application terminates or when you call the SIGNON function with a new authorization ID.

context-key
A 32-byte area in which you put the context key that you specified when you called the RRS Set Context Data (CTXSDTA) service to save the primary authorization ID and an optional ACEE address.

retcode
A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0.

reascode
A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify reascode, you must also specify retcode.

user
A 16-byte area that contains the user ID of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays this user ID in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. Setting the user ID sets the value of the CURRENT CLIENT_USERID special register. If user is less than 16 characters long, you must pad it on the right with blanks to a length of 16 characters.
This parameter is optional. If you specify user, you must also specify retcode and reascode. If you do not specify user, no user ID is associated with the connection. appl A 32-byte area that contains the application or transaction name of the end user's application. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the application name in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. Setting the application name sets the value of the CURRENT CLIENT_APPLNAME special register. If appl is less than 32 characters long, you must pad it on the right with blanks to a length of 32 characters. This parameter is optional. If you specify appl, you must also specify retcode, reascode, and user. If you do not specify appl, no application or transaction is associated with the connection. ws An 18-byte area that contains the workstation name of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 displays the workstation name in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. Setting the workstation name sets the value of the CURRENT CLIENT_WRKSTNNAME special register. If ws is less than 18 characters long, you must pad it on the right with blanks to a length of 18 characters. This parameter is optional. If you specify ws, you must also specify retcode, reascode, user, and appl. If you do not specify ws, no workstation name is associated with the connection. You can also change the value of the workstation name with the RRSAF functions SIGNON, AUTH SIGNON, or SET_CLIENT_ID. You can retrieve the workstation name with the CLIENT_WRKSTNNAME special register. xid A 4-byte area that indicates whether the thread is part of a global transaction. A DB2 thread that is part of a global transaction can share locks with other DB2 threads that are part of the same global transaction and can access and modify the same data. A global transaction exists until one of the threads that is part of the global transaction is committed or rolled back. You can specify one of the following values for xid: 0 1 Indicates that the thread is not part of a global transaction. The value 0 must be specified as a binary integer. Indicates that the thread is part of a global transaction and that DB2 should retrieve the global transaction ID from RRS. If a global transaction ID already exists for the task, the thread becomes part of the associated global transaction. Otherwise, RRS generates a new global transaction ID. The value 1 must be specified as a binary integer. Alternatively, if you want DB2 to return the generated global transaction ID to the caller, specify an address instead of 1.
address The 4-byte address of an area into which you enter a global transaction ID for the thread. If the global transaction ID already exists, the thread becomes part of the associated global transaction. Otherwise, RRS creates a new global transaction with the ID that you specify.
Alternatively, if you want DB2 to generate and return a global transaction ID, pass the address of a null global transaction ID by setting the format ID field of the global transaction ID to binary -1 ('FFFFFFFF'X). DB2 then replaces the contents of the area with the generated transaction ID. The area at the specified address must be in writable storage and have a length of at least 140 bytes to accommodate the largest possible transaction ID value. The format of a global transaction ID is shown in the description of the RRSAF SIGNON function.

accounting-string
A one-byte length field and a 255-byte area in which you can put a value for a DB2 accounting string. This value is placed in the DDF accounting trace records in the QMDASQLI field, which is mapped by DSNDQMDA DSECT. If accounting-string is less than 255 characters, you must pad it on the right with zeros to a length of 255 bytes. The entire 256 bytes is mapped by DSNDQMDA DSECT. This parameter is optional. If you specify this accounting-string, you must also specify retcode, reascode, user, appl, and xid. If you do not specify this parameter, no accounting string is associated with the connection. You can also change the value of the accounting string with RRSAF functions AUTH SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID. You can retrieve the DDF suffix portion of the accounting string with the CURRENT CLIENT_ACCTNG special register. The suffix portion of accounting-string can contain a maximum of 200 characters. The QMDASFLN field contains the accounting suffix length, and the QMDASUFX field contains the accounting suffix value. If the DDF accounting string is set, you cannot query the accounting token with the CURRENT CLIENT_ACCTNG special register.
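The exact layout of the global transaction ID is defined in the description of the RRSAF SIGNON function, which is not part of this excerpt. Purely as an illustration, and on the assumption that the format ID is the first word of the 140-byte area (as in the standard XA XID layout), a COBOL program could request a DB2-generated ID like this; all names are hypothetical:

*    Hypothetical 140-byte global transaction ID area
 01  XID-AREA.
*        Format ID of -1 asks DB2 to generate and return the ID
     05  XID-FORMAT-ID  PIC S9(9) COMP VALUE -1.
*        Remaining 136 bytes; layout per the SIGNON description
     05  XID-REST       PIC X(136) VALUE LOW-VALUES.
 01  XID-PTR            USAGE POINTER.
     ...
*    Pass XID-PTR as the xid parameter instead of the integer 0 or 1
     SET XID-PTR TO ADDRESS OF XID-AREA.

This sketch assumes a compiler, such as Enterprise COBOL, that supports ADDRESS OF for working-storage items.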
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF.
Related tasks: Invoking the Resource Recovery Services attachment facility on page 72 Related reference: SIGNON function for RRSAF on page 91
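The examples table for CONTEXT SIGNON is likewise not reproduced here. As a sketch only, with hypothetical field names and values, a COBOL call that supplies the required parameters in the order described above might look like the following; CTXKEY must match the context key that was passed to the RRS CTXSDTA service:

*    Hypothetical data areas for a CONTEXT SIGNON call
 01  CTXSGN     PIC X(18) VALUE 'CONTEXT SIGNON'.
 01  CORRID     PIC X(12) VALUE SPACES.
 01  ACCTTKN    PIC X(22) VALUE SPACES.
 01  ACCTINT    PIC X(6)  VALUE 'COMMIT'.
 01  CTXKEY     PIC X(32) VALUE 'EXAMPLE-CONTEXT-KEY'.
 01  RETCODE    PIC S9(9) COMP.
 01  REASCODE   PIC S9(9) COMP.
     ...
     CALL 'DSNRLI' USING CTXSGN CORRID ACCTTKN ACCTINT CTXKEY
                         RETCODE REASCODE.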
Parameters point to the following areas:

function
An 18-byte area that contains SET_ID followed by 12 blanks.

program-id
An 80-byte area that contains the caller-provided string to be passed to DB2. If program-id is less than 80 characters, you must pad it with blanks on the right to a length of 80 characters. DB2 places the contents of program-id into IFCID 316 records, along with other statistics, so that you can identify which program is associated with a particular SQL statement.

retcode
A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0.

reascode
A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify reascode, you must also specify retcode.
Table 30. Examples of RRSAF SET_ID calls (continued)

Language   Call example
COBOL      CALL 'DSNRLI' USING SETIDFN PROGID RETCODE REASCODE.
Fortran    CALL DSNRLI(SETIDFN,PROGID,RETCODE,REASCODE)
PL/I (1)   CALL DSNRLI(SETIDFN,PROGID,RETCODE,REASCODE);
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF. Related tasks: Invoking the Resource Recovery Services attachment facility on page 72
Parameters point to the following areas: function An 18-byte area that contains SET_CLIENT_ID followed by 5 blanks. accounting-token A 22-byte area in which you can put a value for a DB2 accounting token. This value is placed in the DB2 accounting and statistics trace records in the QWHCTOKN field, which is mapped by DSNDQWHC DSECT. If accounting-token is less than 22 characters long, you must pad it on the right with blanks to a length of 22 characters. You can omit this parameter by specifying a value of 0 in the parameter list. Alternatively, you can change the value of the DB2 accounting token with the RRSAF functions SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can
retrieve the DB2 accounting token with the CURRENT CLIENT_ACCTNG special register only if the DDF accounting string is not set.

user
A 16-byte area that contains the user ID of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 places this user ID in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. If user is less than 16 characters long, you must pad it on the right with blanks to a length of 16 characters. You can omit this parameter by specifying a value of 0 in the parameter list. You can also change the value of the client user ID with the RRSAF functions SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can retrieve the client user ID with the CLIENT_USERID special register.

appl
A 32-byte area that contains the application or transaction name of the end user's application. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 places the application name in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. If appl is less than 32 characters, you must pad it on the right with blanks to a length of 32 characters. You can omit this parameter by specifying a value of 0 in the parameter list. You can also change the value of the application name with the RRSAF functions SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can retrieve the application name with the CLIENT_APPLNAME special register.

ws
An 18-byte area that contains the workstation name of the client end user. You can use this parameter to provide the identity of the client end user for accounting and monitoring purposes. DB2 places this workstation name in the output from the DISPLAY THREAD command and in DB2 accounting and statistics trace records. If ws is less than 18 characters, you must pad it on the right with blanks to a length of 18 characters. You can omit this parameter by specifying a value of 0 in the parameter list. You can also change the value of the workstation name with the RRSAF functions SIGNON, AUTH SIGNON, or CONTEXT SIGNON. You can retrieve the workstation name with the CLIENT_WRKSTNNAME special register.

retcode
A 4-byte area in which RRSAF places the return code. You can omit this parameter by specifying a value of 0 in the parameter list. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0.

reascode
A 4-byte area in which RRSAF places the reason code. You can omit this parameter by specifying a value of 0 in the parameter list. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify reascode, you must also specify retcode.

accounting-string
A one-byte length field and a 255-byte area in which you can put a value for a
DB2 accounting string. This value is placed in the DDF accounting trace records in the QMDASUFX field, which is mapped by DSNDQMDA DSECT. If accounting-string is less than 255 characters, you must pad it on the right with zeros to a length of 255 bytes. The entire 256 bytes is mapped by DSNDQMDA DSECT. You can omit this parameter by specifying a value of 0 in the parameter list. This parameter is optional. If you specify this accounting-string, you must also specify retcode, reascode, user, and appl. If you do not specify this parameter, no accounting string is associated with the connection. You can also change the value of the accounting string with RRSAF functions AUTH SIGNON, CONTEXT SIGNON, or SET_CLIENT_ID. You can retrieve the DDF suffix portion of the accounting string with the CURRENT CLIENT_ACCTNG special register. The suffix portion of accounting-string can contain a maximum of 200 characters. The QMDASFLN field contains the accounting suffix length, and the QMDASUFX field contains the accounting suffix value. If the DDF accounting string is set, you cannot query the accounting token with the CURRENT CLIENT_ACCTNG special register.
Language   Call example
C          fnret=dsnrli(&seclidfn[0], &acct[0], &user[0], &appl[0], &ws[0], &retcode, &reascode);
COBOL      CALL 'DSNRLI' USING SECLIDFN ACCT USER APPL WS RETCODE REASCODE.
Fortran    CALL DSNRLI(SECLIDFN,ACCT,USER,APPL,WS,RETCODE,REASCODE)
PL/I       CALL DSNRLI(SECLIDFN,ACCT,USER,APPL,WS,RETCODE,REASCODE);
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF. Related tasks: Invoking the Resource Recovery Services attachment facility on page 72
Parameters point to the following areas: function An 18-byte area that contains CREATE THREAD followed by five blanks. plan An 8-byte DB2 plan name. RRSAF allocates the named plan. If you provide a collection name instead of a plan name, specify the question mark character (?) in the first byte of this field. DB2 then allocates a special plan named ?RRSAF and uses the value that you specify for collection . When DB2 allocates a plan named ?RRSAF, DB2 checks authorization to execute the package in the same way as it checks authorization to execute a package from a requester other than DB2 for z/OS. If you do not provide a collection name in the collection field, you must enter a valid plan name in this field. collection An 18-byte area in which you enter a collection name. DB2 uses the collection names to locate a package that is associated with the first SQL statement in the program. When you provide a collection name and put the question mark character (?) in the plan field, DB2 allocates a plan named ?RRSAF and a package list that contains the following two entries: v The specified collection name. v An entry that contains * for the location, collection name, and package name. (This entry lets the application access remote locations and access packages in collections other than the default collection that is specified at create thread time.) The application can use the SET CURRENT PACKAGESET statement to change the collection ID that DB2 uses to locate a package. If you provide a plan name in the plan field, DB2 ignores the value in the collection field. reuse An 8-byte area that controls the action that DB2 takes if a SIGNON call is issued after a CREATE THREAD call. Specify one of the following values in this field: RESET Releases any held cursors and reinitializes the special registers INITIAL Does not allow the SIGNON call
This parameter is required. If the 8-byte area does not contain either RESET or INITIAL, the default value is INITIAL.

retcode
A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0.

reascode
A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify reascode, you must also specify retcode.

pklistptr
A 4-byte field that contains a pointer to a user-supplied data area that contains a list of collection IDs. A collection ID is an SQL identifier of 1 to 128 letters, digits, or the underscore character that identifies a collection of packages. The length of the data area is a maximum of 2050 bytes. The data area contains a 2-byte length field, followed by up to 2048 bytes of collection ID entries, separated by commas. When you specify pklistptr and the question mark character (?) in the plan field, DB2 allocates a special plan named ?RRSAF and a package list that contains the following entries:
v The collection names that you specify in the data area to which pklistptr points
v An entry that contains * for the location, collection ID, and package name
If you also specify collection, DB2 ignores that value. Each collection entry must be of the form collection-ID.*, *.collection-ID.*, or *.*.*. collection-ID must follow the naming conventions for a collection ID, as described in the description of the BIND and REBIND options. DB2 uses the collection names to locate a package that is associated with the first SQL statement in the program. The entry that contains *.*.* lets the application access remote locations and access packages in collections other than the default collection that is specified at create thread time. The application can use the SET CURRENT PACKAGESET statement to change the collection ID that DB2 uses to locate a package. This parameter is optional. If you specify this parameter, you must also specify retcode and reascode. If you provide a plan name in the plan field, DB2 ignores the pklistptr value.

Recommendation: Using a package list can have a negative impact on performance. For better performance, specify a short package list.
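As a sketch only (the field names and collection IDs are hypothetical), the data area that pklistptr points to could be defined in COBOL as a halfword length field followed by the comma-separated entries. The length field is assumed here to count only the entries that follow it, and the pointer technique assumes a compiler that supports ADDRESS OF for working-storage items:

*    Hypothetical package list area for pklistptr
 01  PKLIST-AREA.
     05  PKLIST-LEN      PIC S9(4) COMP VALUE 27.
     05  PKLIST-ENTRIES  PIC X(27)
         VALUE 'MYCOLL1.*,*.MYCOLL2.*,*.*.*'.
 01  PKLISTPTR           USAGE POINTER.
     ...
     SET PKLISTPTR TO ADDRESS OF PKLIST-AREA.

PKLISTPTR is then passed as the pklistptr parameter on the CREATE THREAD call, as in the call examples that follow.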
Language   Call example
C          fnret=dsnrli(&crthrdfn[0], &plan[0], &collid[0], &reuse[0], &retcode, &reascode, &pklistptr);
COBOL      CALL 'DSNRLI' USING CRTHRDFN PLAN COLLID REUSE RETCODE REASCODE PKLSTPTR.
Fortran    CALL DSNRLI(CRTHRDFN,PLAN,COLLID,REUSE,RETCODE,REASCODE,PKLSTPTR)
PL/I       CALL DSNRLI(CRTHRDFN,PLAN,COLLID,REUSE,RETCODE,REASCODE,PKLSTPTR);
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF. Related tasks: Invoking the Resource Recovery Services attachment facility on page 72 Authorizing plan or package access through applications (Managing Security) Related reference: BIND and REBIND options (DB2 Commands)
Parameters point to the following areas:

function
An 18-byte area that contains TERMINATE THREAD followed by two blanks.

retcode
A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0.

reascode
A 4-byte area in which RRSAF places the reason code.
This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify reascode, you must also specify retcode.
Language   Call example
C          fnret=dsnrli(&trmthdfn[0], &retcode, &reascode);
COBOL      CALL 'DSNRLI' USING TRMTHDFN RETCODE REASCODE.
Fortran    CALL DSNRLI(TRMTHDFN,RETCODE,REASCODE)
PL/I       CALL DSNRLI(TRMTHDFN,RETCODE,REASCODE);
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF. Related tasks: Invoking the Resource Recovery Services attachment facility on page 72
Parameters point to the following areas: function An 18-byte area that contains TERMINATE IDENTIFY. retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify reascode, you must also specify retcode.
Language   Call example
C          fnret=dsnrli(&tmidfyfn[0], &retcode, &reascode);
COBOL      CALL 'DSNRLI' USING TMIDFYFN RETCODE REASCODE.
Fortran    CALL DSNRLI(TMIDFYFN,RETCODE,REASCODE)
PL/I       CALL DSNRLI(TMIDFYFN,RETCODE,REASCODE);
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF. Related tasks: Invoking the Resource Recovery Services attachment facility on page 72
v Call the TRANSLATE function only after a successful IDENTIFY operation. For errors that occur during SQL or IFI requests, the TRANSLATE function performs automatically. v The TRANSLATE function translates codes that begin with X'00F3', but it does not translate RRSAF reason codes that begin with X'00C1'. If you receive error reason code X'00F30040' (resource unavailable) after an OPEN request, the TRANSLATE function returns the name of the unavailable database object in the last 44 characters of the SQLERRM field. If the TRANSLATE function does not recognize the error reason code, it returns SQLCODE -924 (SQLSTATE '58006') and places a printable copy of the original DB2 function code and the return and error reason codes in the SQLERRM field. The contents of registers 0 and 15 do not change, unless TRANSLATE fails. In this case, register 0 is set to X'00C12204', and register 15 is set to 200. The following diagram shows the syntax of the TRANSLATE function.
Parameters point to the following areas: function An 18-byte area that contains the word TRANSLATE followed by nine blanks. sqlca The program's SQL communication area (SQLCA). retcode A 4-byte area in which RRSAF places the return code. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0. reascode A 4-byte area in which RRSAF places the reason code. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0. If you specify reascode, you must also specify retcode.
Table 35. Examples of RRSAF TRANSLATE calls (continued)

Language   Call example
COBOL      CALL 'DSNRLI' USING XLATFN SQLCA RETCODE REASCODE.
PL/I (1)   CALL DSNRLI(XLATFN,SQLCA,RETCODE,REASCODE);
Note: 1. For C, C++, and PL/I applications, you must include the appropriate compiler directives, because DSNRLI is an assembler language program. These compiler directives are described in the instructions for invoking RRSAF.
Related tasks: Invoking the Resource Recovery Services attachment facility on page 72

DSNRLI FIND_DB2_SYSTEMS function:

CALL DSNRLI ( function, ssnma, activea, arraysz[, retcode[, reascode]] )

Parameters point to the following areas:

function
An 18-byte area that contains FIND_DB2_SYSTEMS followed by two blanks.

ssnma
A storage area for an array of 4-byte character strings into which RRSAF places the names of all the DB2 subsystems (SSIDs) that are defined for the current LPAR. You must provide the storage area. If the array is larger than the number of DB2 subsystems, RRSAF returns blanks (four blank characters) in all unused array members.

activea
A storage area for an array of 4-byte values into which RRSAF returns an indication of whether a defined subsystem is active. Each value is represented as a fixed 31-bit integer. The value 1 means that the subsystem is active. The value 0 means that the subsystem is not active. The size of this array must be the same as the size of the ssnma array. If the array is larger than the number of DB2 subsystems, RRSAF returns the value -1 in all unused array members. The information in the activea array is the information that is available at the point in time that you requested it and might change at any time.

arraysz
A 4-byte area, represented as a fixed 31-bit integer, that specifies the number of entries for the ssnma and activea arrays. If the number of array entries is insufficient to contain all of the subsystems defined on the current LPAR, RRSAF uses all available entries and returns return code 4.

retcode
A 4-byte area in which RRSAF is to place the return code for this call to the FIND_DB2_SYSTEMS function. This parameter is optional. If you do not specify retcode, RRSAF places the return code in register 15 and the reason code in register 0.

reascode
A 4-byte area in which RRSAF is to place the reason code for this call to the FIND_DB2_SYSTEMS function. This parameter is optional. If you do not specify reascode, RRSAF places the reason code in register 0.
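As a rough sketch, and assuming an array size of 8 (all names are hypothetical), the areas for a FIND_DB2_SYSTEMS call could be declared and passed from COBOL as follows, with the parameters in the order shown in the syntax above:

*    Hypothetical areas for a FIND_DB2_SYSTEMS call
 01  FINDFN       PIC X(18) VALUE 'FIND_DB2_SYSTEMS'.
 01  SSNMA.
     05  SSNM        PIC X(4)       OCCURS 8 TIMES.
 01  ACTIVEA.
     05  ACTIVE-FLAG PIC S9(9) COMP OCCURS 8 TIMES.
 01  ARRAYSZ      PIC S9(9) COMP VALUE 8.
 01  RETCODE      PIC S9(9) COMP.
 01  REASCODE     PIC S9(9) COMP.
     ...
     CALL 'DSNRLI' USING FINDFN SSNMA ACTIVEA ARRAYSZ
                         RETCODE REASCODE.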
Related tasks: Invoking the Resource Recovery Services attachment facility on page 72
Table 37. RRSAF return codes (continued)

Return code   Explanation
>4            The call failed. See the reason code for details.
A single task
The following example pseudocode illustrates a single task running in an address space that explicitly connects to DB2 through RRSAF. z/OS RRS controls commit processing when the task terminates normally.
IDENTIFY SIGNON CREATE THREAD SQL or IFI . . . TERMINATE IDENTIFY
Multiple tasks
In the following scenario, multiple tasks in an address space explicitly connect to DB2 through RRSAF. Task 1 executes no SQL statements and makes no IFI calls. Its purpose is to monitor DB2 termination and startup ECBs and to check the DB2 release level.
TASK 1        TASK 2           TASK 3           TASK n
IDENTIFY      IDENTIFY         IDENTIFY         IDENTIFY
              SIGNON           SIGNON           SIGNON
              CREATE THREAD    CREATE THREAD    CREATE THREAD
              SQL              SQL              SQL
              ...              ...              ...
              SRRCMIT          SRRCMIT          SRRCMIT
              SQL              SQL              SQL
              ...              ...              ...
              SRRCMIT          SRRCMIT          SRRCMIT
              ...              ...              ...
TERMINATE IDENTIFY
The applications perform the following steps: v Task 1 creates context a, switches contexts so that context a is active for task 1, and calls the IDENTIFY function to initialize a connection to a subsystem. A task must always call the IDENTIFY function before a context switch can occur. After the IDENTIFY operation is complete, task 1 allocates a thread for user A, and performs SQL operations. At the same time, task 2 creates context b, switches contexts so that context b is active for task 2, calls the IDENTIFY function to initialize a connection to the subsystem, allocates a thread for user B, and performs SQL operations. When the SQL operations complete, both tasks perform RRS context switch operations. Those operations disconnect each DB2 thread from the task under which it was running. v Task 1 then creates context c, calls the IDENTIFY function to initialize a connection to the subsystem, switches contexts so that context c is active for task 1, allocates a thread for user C, and performs SQL operations for user C. Task 2 does the same operations for user D. v When the SQL operations for user C complete, task 1 performs a context switch operation to perform the following actions: Switch the thread for user C away from task 1. Switch the thread for user B to task 1. For a context switch operation to associate a task with a DB2 thread, the DB2 thread must have previously performed an IDENTIFY operation. Therefore, before the thread for user B can be associated with task 1, task 1 must have performed an IDENTIFY operation. v Task 2 performs two context switch operations to perform the following actions:
Disassociate the thread for user D from task 2. Associate the thread for user A with task 2.
***********************************************************************
* Subroutine DSNHLI intercepts calls to LI EP=DSNHLI
***********************************************************************
         DS    0D
DSNHLI   CSECT                      Begin CSECT
         STM   R14,R12,12(R13)      Prologue
         LA    R15,SAVEHLI          Get save area address
         ST    R13,4(,R15)          Chain the save areas
         ST    R15,8(,R13)          Chain the save areas
         LR    R13,R15              Put save area address in R13
         L     R15,LISQL            Get the address of real DSNHLI
         BASSM R14,R15              Branch to DSNRLI to do an SQL call
*                                   DSNRLI is in 31-bit mode, so use
*                                   BASSM to assure that the addressing
*                                   mode is preserved.
         L     R13,4(,R13)          Restore R13 (callers save area addr)
         L     R14,12(,R13)         Restore R14 (return address)
         RETURN (1,12)              Restore R1-12, NOT R0 and R15 (codes)
The following example code shows how to issue requests for the RRSAF functions IDENTIFY, SIGNON, CREATE THREAD, TERMINATE THREAD, and TERMINATE IDENTIFY. This example does not show a task that waits on the DB2 termination ECB. You can code such a task and use the z/OS WAIT macro to monitor the ECB. The task that waits on the termination ECB should detach the sample code if the termination ECB is posted. That task can also wait on the DB2 startup ECB. This example waits on the startup ECB at its own task level.
***************************** IDENTIFY ********************************
         L     R15,LIRLI            Get the Language Interface address
         CALL  (15),(IDFYFN,SSNM,RIBPTR,EIBPTR,TERMECB,STARTECB),VL,MF=X
               (E,RRSAFCLL)
         BAL   R14,CHEKCODE         Call a routine (not shown) to check
*                                   return and reason codes
         CLC   CONTROL,CONTINUE     Is everything still OK
         BNE   EXIT                 If CONTROL not CONTINUE, stop loop
         USING R8,RIB               Prepare to access the RIB
         L     R8,RIBPTR            Access RIB to get DB2 release level
         WRITE The current DB2 release level is RIBREL
***************************** SIGNON **********************************
         L     R15,LIRLI            Get the Language Interface address
         CALL  (15),(SGNONFN,CORRID,ACCTTKN,ACCTINT),VL,MF=(E,RRSAFCLL)
         BAL   R14,CHEKCODE         Check the return and reason codes
*************************** CREATE THREAD *****************************
         L     R15,LIRLI            Get the Language Interface address
         CALL  (15),(CRTHRDFN,PLAN,COLLID,REUSE),VL,MF=(E,RRSAFCLL)
         BAL   R14,CHEKCODE         Check the return and reason codes
****************************** SQL ************************************
*        Insert your SQL calls here. The DB2 Precompiler
*        generates calls to entry point DSNHLI. You should
*        code a dummy entry point of that name to intercept
*        all SQL calls. A dummy DSNHLI is shown in the following
*        section.
************************ TERMINATE THREAD *****************************
         CLC   CONTROL,CONTINUE     Is everything still OK?
         BNE   EXIT                 If CONTROL not CONTINUE, shut down
         L     R15,LIRLI            Get the Language Interface address
         CALL  (15),(TRMTHDFN),VL,MF=(E,RRSAFCLL)
         BAL   R14,CHEKCODE         Check the return and reason codes
************************ TERMINATE IDENTIFY ***************************
         CLC   CONTROL,CONTINUE     Is everything still OK
         BNE   EXIT                 If CONTROL not CONTINUE, stop loop
         L     R15,LIRLI            Get the Language Interface address
         CALL  (15),(TMIDFYFN),VL,MF=(E,RRSAFCLL)
         BAL   R14,CHEKCODE         Check the return and reason codes
Procedure
To control the CICS attachment facility: 1. To start the CICS attachment facility, perform one of the following actions: v Include the following statement in your application:
EXEC CICS LINK PROGRAM(DSN2COM0)
v Use the system programming interface SET DB2CONN for the CICS Transaction Server. 2. To stop the CICS attachment facility, perform one of the following actions: v Include the following statement in your application:
EXEC CICS LINK PROGRAM(DSN2COM2)
v Use the system programming interface SET DB2CONN for the CICS Transaction Server.
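As an alternative to linking to the DSN2COM0 and DSN2COM2 programs, the SET DB2CONN system programming interface can be driven directly from an application. The following sketch assumes command-level COBOL; the exact options that are available depend on your CICS Transaction Server level:

*    Start the CICS attachment facility
     EXEC CICS SET DB2CONN CONNECTED END-EXEC.
*    Stop the CICS attachment facility
     EXEC CICS SET DB2CONN NOTCONNECTED END-EXEC.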
Procedure
To detect whether the CICS attachment facility is operational: Use the INQUIRE EXITPROGRAM command for the CICS Transaction Server in your application. The following example shows how to use this command. In this example, the INQUIRE EXITPROGRAM command tests whether the resource manager for SQL, DSNCSQL, is up and running. CICS returns the results in the EIBRESP field of the EXEC interface block (EIB) and in the field whose name is the argument of the CONNECTST parameter (in this case, STST). If the EIBRESP value indicates that the command completed normally and the STST value indicates that the resource manager is available, you can then execute SQL statements.
STST     DS    F
ENTNAME  DS    CL8
EXITPROG DS    CL8
         . . .
         MVC   ENTNAME,=CL8'DSNCSQL'
         MVC   EXITPROG,=CL8'DSN2EXT1'
         EXEC  CICS INQUIRE EXITPROGRAM(EXITPROG)
               ENTRYNAME(ENTNAME) CONNECTST(STST) NOHANDLE
         CLC   EIBRESP,DFHRESP(NORMAL)
         BNE   NOTREADY
         CLC   STST,DFHVALUE(CONNECTED)
         BNE   NOTREADY
UPNREADY DS    0H                   attach is up
NOTREADY DS    0H                   attach is not up yet
If you use the INQUIRE EXITPROGRAM command to avoid AEY9 abends and the CICS attachment facility is down, the storm drain effect can occur. The storm drain effect is a condition that occurs when a system continues to receive work, even though that system is down.
Related concepts: Storm-drain effect (DB2 Installation and Migration) Related information: INQUIRE EXITPROGRAM (CICS Transaction Server for z/OS) -923 (DB2 Codes)
Procedure
To improve thread reuse in CICS applications: Close all cursors that are declared with the WITH HOLD option before each sync point. DB2 does not automatically close them. A thread for an application that contains an open cursor cannot be reused. You should close all cursors immediately after you finish using them. Related concepts: Held and non-held cursors on page 708
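For example, in a COBOL CICS program, a cursor that was declared WITH HOLD (HOLDCUR here is a hypothetical cursor name) should be closed before the sync point is requested:

*    Close held cursors so that the DB2 thread can be reused
     EXEC SQL CLOSE HOLDCUR END-EXEC.
     EXEC CICS SYNCPOINT END-EXEC.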
Procedure
To include DB2 queries in an application program:
1. Choose one of the following methods for communicating with DB2:
v Static SQL
v Embedded dynamic SQL
v Open Database Connectivity (ODBC)
v JDBC application support
v SQLJ application support
ODBC lets you access data through ODBC function calls in your application. You execute SQL statements by passing them to DB2 through an ODBC function call. ODBC eliminates the need for precompiling and binding your application and increases the portability of your application by using the ODBC interface.
If you are writing your applications in Java, you can use JDBC application support to access DB2. JDBC is similar to ODBC but is designed specifically for use with Java. In addition to using JDBC, you can use SQLJ application support to access DB2. SQLJ is designed to simplify the coding of DB2 calls for Java applications.
2. Optional: Declare the tables and views that you use. You can use DCLGEN to generate these declarations.
3. Define the items that your program can use to check whether an SQL statement executed successfully. You can either define an SQL communications area (SQLCA) or declare SQLSTATE and SQLCODE host variables.
4. Define at least one SQL descriptor area (SQLDA).
5. Declare any of the following data items for passing data between DB2 and a host language:
v host variables
v host variable arrays
v host structures
Ensure that you use the appropriate data types.
6. Code SQL statements to access DB2 data. Ensure that you delimit these statements properly. Consider using cursors to select a set of rows and then process the set either one row at a time or one rowset at a time.
7. Check the execution of the SQL statements.
8. Handle any SQL error codes.
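The following COBOL fragment is a minimal sketch of these steps for static SQL, using the sample DSN8910.DEPT table that is declared later in this information; the host variable name, the department number, and the error paragraph are hypothetical:

 WORKING-STORAGE SECTION.
     EXEC SQL INCLUDE SQLCA END-EXEC.
 01  HV-DEPTNAME    PIC X(36).
     ...
 PROCEDURE DIVISION.
*    Retrieve one row and check the result
     EXEC SQL
       SELECT DEPTNAME INTO :HV-DEPTNAME
         FROM DSN8910.DEPT
        WHERE DEPTNO = 'A00'
     END-EXEC.
     IF SQLCODE NOT = 0
        PERFORM SQL-ERROR-ROUTINE
     END-IF.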
Related concepts: Dynamic SQL on page 158 Introduction to DB2 ODBC (DB2 Programming for ODBC) JDBC application programming (DB2 Application Programming for Java) SQLJ application programming (DB2 Application Programming for Java) Related tasks: Delimiting an SQL statement on page 147 Retrieving a set of rows by using a cursor on page 704 Programming applications for performance (DB2 Performance)
Procedure
To declare table and view definitions: Perform one of the following actions: v Include an SQL DECLARE TABLE statement in your program. Specify the name of the table or view and list each column and its data type. When you declare a table or view that contains a column with a distinct type, declare that column with the source type of the distinct type rather than with the distinct type itself. When you declare the column with the source type, DB2 can check embedded SQL statements that reference that column at precompile time. In a COBOL program, code the DECLARE TABLE statement in the WORKING-STORAGE SECTION or LINKAGE SECTION within the DATA DIVISION. Example DECLARE statement in a COBOL program: The following DECLARE TABLE statement in a COBOL program defines the DSN8910.DEPT table:
EXEC SQL DECLARE DSN8910.DEPT TABLE
  (DEPTNO   CHAR(3)     NOT NULL,
   DEPTNAME VARCHAR(36) NOT NULL,
   MGRNO    CHAR(6)             ,
   ADMRDEPT CHAR(3)     NOT NULL,
   LOCATION CHAR(16)            )
END-EXEC.
v Use DCLGEN, the declarations generator that is supplied with DB2, to create these declarations for you and then include them in your program. Restriction: You can use DCLGEN for only C, COBOL, and PL/I programs. Related reference: DECLARE TABLE (DB2 SQL)
Procedure
To generate table and view declarations by using DCLGEN: 1. Invoke DCLGEN by performing one of the following actions:
v To start DCLGEN from ISPF through DB2I: Select the DCLGEN option on the DB2I Primary Option Menu panel. Then follow the detailed instructions for generating table and view declarations by using DCLGEN from DB2I. v To start DCLGEN directly from TSO: Sign on to TSO, issue the TSO command DSN, and then issue the subcommand DCLGEN. v To start DCLGEN directly from a CLIST: From a CLIST, running in TSO foreground or background, issue DSN and then DCLGEN. v To start DCLGEN with JCL: Supply the required information in JCL and run DCLGEN in batch. Use the sample jobs DSNTEJ2C and DSNTEJ2P in the prefix.SDSNSAMP library as models. Requirement: If you want to start DCLGEN in the foreground and your table names include DBCS characters, you must provide and display double-byte characters. If you do not have a terminal that displays DBCS characters, you can enter DBCS characters by using the hex mode of ISPF edit. DCLGEN creates the declarations in the specified data set. DCLGEN generates a table or column name in the DECLARE statement as a non-delimited identifier unless at least one of the following conditions is true: v The name contains special characters and is not a DBCS string. v The name is a DBCS string, and you have requested delimited DBCS names. 2. If you use an SQL reserved word as an identifier, edit the DCLGEN output to add the appropriate SQL delimiters. 3. Make any other necessary edits to the DCLGEN output. DCLGEN produces output that is intended to meet the needs of most users, but occasionally, you need to edit the DCLGEN output to work in your specific case. For example, DCLGEN is unable to determine whether a column that is defined as NOT NULL also contains the DEFAULT clause, so you must edit the DCLGEN output to add the DEFAULT clause to the appropriate column definitions. DCLGEN produces declarations based on the encoding scheme of the source table. Therefore, if your application uses a different encoding scheme, you might need to manually adjust the declarations. For example, if your source table is in EBCDIC with CHAR columns and your application is in COBOL, DCLGEN produces declarations of type PIC X. However, suppose your host variables in your COBOL application are UTF-16. In this case, you will need to manually change the declarations to be type PIC N USAGE NATIONAL. Related reference: DCLGEN (DECLARATIONS GENERATOR) (DSN) (DB2 Commands) DSN (TSO) (DB2 Commands) Reserved words (DB2 SQL)
Procedure
To generate table and view declarations by using DCLGEN from DB2I:
1. From the DB2I Primary Option Menu panel, select the DCLGEN option. The following DCLGEN panel is displayed:

DSNEDP01                         DCLGEN                          SSID: DSN
 ===>
 Enter table name for which declarations are required:
  1 SOURCE TABLE NAME ===>
  2 TABLE OWNER ..... ===>                  (Optional)
  3 AT LOCATION ..... ===>                  (Optional)
 Enter destination data set:                (Can be sequential or partitioned)
  4 DATA SET NAME ... ===>
  5 DATA SET PASSWORD ===>                  (If password protected)
 Enter options as desired:
  6 ACTION .......... ===> ADD              (ADD new or REPLACE old declaration)
  7 COLUMN LABEL .... ===> NO               (Enter YES for column label)
  8 STRUCTURE NAME .. ===>                  (Optional)
  9 FIELD NAME PREFIX ===>                  (Optional)
 10 DELIMIT DBCS .... ===> YES              (Enter YES to delimit DBCS identifiers)
 11 COLUMN SUFFIX ... ===> NO               (Enter YES to append column name)
 12 INDICATOR VARS .. ===> NO               (Enter YES for indicator variables)
 13 ADDITIONAL OPTIONS===> YES              (Enter YES to change additional options)
 PRESS:  ENTER to process     END to exit     HELP for more information
2. Fill in the following fields on the DCLGEN panel:

1 SOURCE TABLE NAME
Is the unqualified name of the table, view, or created temporary table for which you want DCLGEN to produce SQL data declarations. The table can be stored at your DB2 location or at another DB2 location. To specify a table name at another DB2 location, enter the table qualifier in the TABLE OWNER field and the location name in the AT LOCATION field. DCLGEN generates a three-part table name from the SOURCE TABLE NAME, TABLE OWNER, and AT LOCATION fields. You can also use an alias for a table name. To specify a table name that contains special characters or blanks, enclose the name in apostrophes. If the name contains apostrophes, you must double each one (''). For example, to specify a table named DON'S TABLE, enter the following text:
'DON''S TABLE'
The underscore is not handled as a special character in DCLGEN. For example, the table name JUNE_PROFITS does not need to be enclosed in apostrophes. Because COBOL field names cannot contain underscores, DCLGEN substitutes hyphens (-) for single-byte underscores in COBOL field names that are built from the table name. You do not need to enclose DBCS table names in apostrophes. If you do not enclose the table name in apostrophes, DB2 converts lowercase characters to uppercase.

2 TABLE OWNER
Is the schema qualifier of the source table. If you do not specify this value and the table is a local table, DB2 assumes that the table qualifier is your TSO logon ID. If the table is at a remote location, you must specify this value.
3 AT LOCATION
Is the location of a table or view at another DB2 subsystem. The value of the AT LOCATION field becomes a prefix for the table name on the SQL DECLARE statement, as follows: location_name.schema_name.table_name
For example, if the location name is PLAINS_GA, the schema name is CARTER, and the table name is CROP_YIELD_89, the following table name is included in the SQL DECLARE statement: PLAINS_GA.CARTER.CROP_YIELD_89
The default is the local location name. This field applies to DB2 private protocol access only. The location must be another DB2 for z/OS subsystem.

4 DATA SET NAME
Is the name of the data set that you allocated to contain the declarations that DCLGEN produces. You must supply a name; no default exists. The data set must already exist and be accessible to DCLGEN. The data set can be either sequential or partitioned. If you do not enclose the data set name in apostrophes, DCLGEN adds a standard TSO prefix (user ID) and suffix (language). DCLGEN determines the host language from the DB2I defaults panel. For example, for library name LIBNAME(MEMBNAME), the name becomes userid.libname.language(membname). For library name LIBNAME, the name becomes userid.libname.language. If this data set is password protected, you must supply the password in the DATA SET PASSWORD field.

5 DATA SET PASSWORD
Is the password for the data set that is specified in the DATA SET NAME field, if the data set is password protected. The password is not displayed on your terminal, and it is not recognized if you issued it from a previous session.

6 ACTION
Specifies what DCLGEN is to do with the output when it is sent to a partitioned data set. (The option is ignored if the data set you specify in the DATA SET NAME field is sequential.) You can specify one of the following values:
ADD Indicates that an old version of the output does not exist and creates a new member with the specified data set name. ADD is the default.
REPLACE Replaces an old version, if it already exists. If the member does not exist, this option creates a new member.

7 COLUMN LABEL
Specifies whether DCLGEN is to include labels that are declared on any columns of the table or view as comments in the data declarations. (The SQL LABEL statement creates column labels to use as supplements to column names.) You can specify one of the following values:
YES Include column labels.
NO Ignore column labels. NO is the default.

8 STRUCTURE NAME
Is the name of the generated data structure. The name can be up to 31 characters. If the name is not a DBCS string, and the first character is not alphabetic, enclose the name in apostrophes. If you use special characters, be careful to avoid name conflicts. If you leave this field blank, DCLGEN generates a name that contains the table or view name with a prefix of DCL. If the language is COBOL or PL/I and the table or view name consists of a DBCS string, the prefix consists of DBCS characters. For C, lowercase characters that you enter in this field are not converted to uppercase.

9 FIELD NAME PREFIX
Specifies a prefix that DCLGEN uses to form field names in the output. For example, if you choose ABCDE, the field names generated are ABCDE1, ABCDE2, and so on. You can specify a field name prefix of up to 28 bytes that can include special and double-byte characters. If you specify a single-byte or mixed-string prefix and the first character is not alphabetic, enclose the prefix in apostrophes. If you use special characters, be careful to avoid name conflicts. For COBOL and PL/I, if the name is a DBCS string, DCLGEN generates DBCS equivalents of the suffix numbers. For C, lowercase characters that you enter in this field are not converted to uppercase. If you leave this field blank, the field names are the same as the column names in the table or view.

10 DELIMIT DBCS
Specifies whether DCLGEN is to delimit DBCS table names and column names in the table declaration. You can specify one of the following values:
YES Specifies that DCLGEN is to enclose the DBCS table and column names with SQL delimiters.
NO Specifies that DCLGEN is not to delimit the DBCS table and column names.

11 COLUMN SUFFIX
Specifies whether DCLGEN is to form field names by attaching the column name as a suffix to the value that you specify in FIELD NAME PREFIX. You can specify one of the following values:
YES Specifies that DCLGEN is to use the column name as a suffix. For example, if you specify YES, the field name prefix is NEW, and the column name is EMPNO, the field name is NEWEMPNO. If you specify YES, you must also enter a value in FIELD NAME PREFIX. If you do not enter a field name prefix, DCLGEN issues a warning message and uses the column names as the field names.
NO Specifies that DCLGEN is not to use the column name as a suffix. The default is NO. 12 INDICATOR VARS Specifies whether DCLGEN is to generate an array of indicator variables for the host variable structure. You can specify one of the following values: YES Specifies that DCLGEN is to generate an array of indicator variables for the host variable structure. If you specify YES, the array name is the table name with a prefix of I (or DBCS letter <I> if the table name consists solely of double-byte characters). The form of the data declaration depends on the language, as shown in the following table. n is the number of columns in the table.
Table 38. Declarations for indicator variable arrays from DCLGEN

Language   Declaration form
C          short int Itable-name[n];
COBOL      01 Itable-name PIC S9(4) USAGE COMP OCCURS n TIMES.
PL/I       DCL Itable-name(n) BIN FIXED(15);
If you request an array of indicator variables for a COBOL program, DCLGEN might generate the following host variable declaration:
01 DCLHASNULLS.
   10 CHARCOL1 PIC X(1).
   10 CHARCOL2 PIC X(1).
01 IHASNULLS PIC S9(4) USAGE COMP OCCURS 2 TIMES.
NO Specifies that DCLGEN is not to generate an array of indicator variables. The default is NO.

13 ADDITIONAL OPTIONS
Indicates whether to display the panel for additional DCLGEN options, including the break point for statement tokens and whether to generate DECLARE VARIABLE statements for FOR BIT DATA columns. You can specify YES or NO. The default is YES. If you specified YES in the ADDITIONAL OPTIONS field, the following ADDITIONAL DCLGEN OPTIONS panel is displayed:
DSNEDP02                ADDITIONAL DCLGEN OPTIONS              SSID: DSN
 ===>
 Enter options as desired:
  1 RIGHT MARGIN .... ===> 72     (Enter 72 or 80)
  2 FOR BIT DATA .... ===> NO     (Enter YES to declare SQL variables for
                                   FOR BIT DATA columns)
 PRESS:  ENTER to process     END to exit     HELP for more information
Otherwise, DCLGEN creates the declarations in the specified data set. 3. If the ADDITIONAL DCLGEN OPTIONS panel is displayed, fill in the following fields on that panel: 1 RIGHT MARGIN Specifies the break point for statement tokens that must be wrapped to one or more subsequent records. You can specify column 72 or column 80. The default is 72. 2 FOR BIT DATA Specifies whether DCLGEN is to generate a DECLARE VARIABLE statement for SQL variables for columns that are declared as FOR BIT DATA. This statement is required in DB2 applications that meet all of the following criteria: v are written in COBOL v have host variables for FOR BIT DATA columns v are prepared with the SQLCCSID option of the DB2 coprocessor. You can specify YES or NO. The default is NO. If the table or view does not have FOR BIT DATA columns, DCLGEN does not generate this statement. DCLGEN creates the declarations in the specified data set. Related reference: DB2I primary option menu on page 1010 LABEL (DB2 SQL)
Table 39. Type declarations that DCLGEN generates (continued) SQL data type1 DECIMAL(p,s) or NUMERIC(p,s) C decimal(p,s)
2
PL/I DEC FIXED(p,s) If p>15, the PL/I compiler must support this precision, or a warning is generated.
REAL or FLOAT(n) 1 <= n <= 21 DOUBLE PRECISION, DOUBLE, or FLOAT(n) CHAR(1) CHAR(n) VARCHAR(n)
float double char char var [n+1] struct {short int var_len; char var_data[n]; } var; SQL TYPE IS CLOB_LOCATOR sqldbchar sqldbchar var[n+1];
USAGE COMP-1 USAGE COMP-2 PIC X(1) PIC X(n) 10 var. 49 var_LEN PIC 9(4) USAGE COMP. 49 var_TEXT PIC X(n). USAGE SQL TYPE IS CLOB-LOCATOR PIC G(1) PIC G(n) USAGE DISPLAY-1.4 or PIC N(n).4 10 var. 49 var_LEN PIC 9(4) USAGE COMP. 49 var_TEXT PIC G(n) USAGE DISPLAY-1.4 or 10 var. 49 var_LEN PIC 9(4) USAGE COMP. 49 var_TEXT PIC N(n).4 USAGE SQL TYPE IS DBCLOB-LOCATOR USAGE SQL TYPE IS BINARY(n) USAGE SQL TYPE IS VARBINARY(n) USAGE SQL TYPE IS BLOB-LOCATOR PIC X(10)5 PIC X(8)6 PIC X(26) USAGE SQL TYPE IS ROWID PIC S9(18) USAGE COMP SQL TYPE IS XML AS CLOB(1M)
GRAPHIC(n) VAR
DBCLOB(n)3
SQL TYPE IS DBCLOB_LOCATOR SQL TYPE IS BINARY(n) SQL TYPE IS VARBINARY(n) SQL TYPE IS BLOB_LOCATOR char var[11]5 char var[9]6 char var[27] SQL TYPE IS ROWID long long int SQL TYPE IS XML AS CLOB(1M)
SQL TYPE IS DBCLOB_LOCATOR SQL TYPE IS BINARY(n) SQL TYPE IS VARBINARY(n) SQL TYPE IS BLOB_LOCATOR CHAR(10)5 CHAR(8)6 CHAR(26) SQL TYPE IS ROWID FIXED BIN(63) SQL TYPE IS XML AS CLOB(1M)
| BINARY(n) | | VARBINARY(n) |
BLOB(n)3 DATE TIME TIMESTAMP ROWID
| BIGINT | XML |
7
Table 39. Type declarations that DCLGEN generates (continued) SQL data type1 C COBOL PL/I
Notes:
1. For a distinct type, DCLGEN generates the host language equivalent of the source data type.
2. If your C compiler does not support the decimal data type, edit your DCLGEN output and replace the decimal data declarations with declarations of type double.
3. For a BLOB, CLOB, or DBCLOB data type, DCLGEN generates a LOB locator.
4. DCLGEN chooses the format based on the character that you specify as the DBCS symbol on the COBOL Defaults panel.
5. This declaration is used unless a date installation exit routine exists for formatting dates, in which case the length is that specified for the LOCAL DATE LENGTH installation option.
6. This declaration is used unless a time installation exit routine exists for formatting times, in which case the length is that specified for the LOCAL TIME LENGTH installation option.
7. The default setting for XML is 1M; however, you might need to adjust it.
Procedure
To include declarations from DCLGEN in your program: Code the following SQL INCLUDE statement in your program:
EXEC SQL INCLUDE member-name END-EXEC.
member-name is the name of the data set member where the DCLGEN output is stored. Example: Suppose that you used DCLGEN to generate a table declaration and corresponding COBOL record description for the table DSN8910.EMP, and those declarations were stored in the data set member DECEMP. (A COBOL record description is a two-level host structure that corresponds to the columns of a table's row.) To include those declarations in your program, include the following statement in your COBOL program:
EXEC SQL INCLUDE DECEMP END-EXEC.
133
The DB2I DEFAULTS PANEL 2 panel for COBOL is then displayed. c. Fill in the DB2I DEFAULTS PANEL 2 panel, shown in the following figure, as needed and press Enter to save the new defaults, if any.
DSNEOP02                        DB2I DEFAULTS PANEL 2
COMMAND ===>_

Change defaults as desired:

 1  DB2I JOB STATEMENT:             (Optional if your site has a SUBMIT exit)
     ===> //ADMF001A JOB (ACCOUNT),NAME
     ===> //*
     ===> //*
     ===> //*
    COBOL DEFAULTS:
 2  COBOL STRING DELIMITER ===>
 3  DEFAULT DBCS SYMBOL FOR DCLGEN ===> G

Figure 7. The COBOL defaults panel. Shown only if the field APPLICATION LANGUAGE on the DB2I DEFAULTS PANEL 1 panel is IBMCOB.
The DB2I Primary Option menu is displayed.
2. Generate the table and host structure declarations by completing the following actions:
   a. On the DB2I Primary Option menu, select the DCLGEN option and press Enter to display the DCLGEN panel.
   b. Fill in the fields as shown in the following figure and press Enter.
DSNEDP01                             DCLGEN                     SSID: DSN
===>

Enter table name for which declarations are required:
 1 SOURCE TABLE NAME ===> DSN8910.VPHONE
 2 TABLE OWNER ..... ===>
 3 AT LOCATION ..... ===>                 (Optional)
Enter destination data set:               (Can be sequential or partitioned)
 4 DATA SET NAME ... ===> TEMP(VPHONEC)
 5 DATA SET PASSWORD ===>                 (If password protected)
Enter options as desired:
 6 ACTION .......... ===> ADD             (ADD new or REPLACE old declaration)
 7 COLUMN LABEL .... ===> NO              (Enter YES for column label)
 8 STRUCTURE NAME .. ===>                 (Optional)
 9 FIELD NAME PREFIX ===>                 (Optional)
10 DELIMIT DBCS .... ===> YES             (Enter YES to delimit DBCS identifiers)
11 COLUMN SUFFIX ... ===> NO              (Enter YES to append column name)
12 INDICATOR VARS .. ===> NO              (Enter YES for indicator variables)
13 ADDITIONAL OPTIONS===> NO              (Enter YES to change additional options)

PRESS: ENTER to process      END to exit      HELP for more information
A successful completion message, such as the one in the following figure, is displayed at the top of your screen.
DSNE905I EXECUTION COMPLETE, MEMBER VPHONEC ADDED ***
DB2 again displays the DCLGEN screen, as shown in the following figure.
DSNEDP01                             DCLGEN                     SSID: DSN
===>

Enter table name for which declarations are required:
 1 SOURCE TABLE NAME ===> DSN8910.VPHONE
 2 TABLE OWNER ..... ===>
 3 AT LOCATION ..... ===>                 (Optional)
Enter destination data set:               (Can be sequential or partitioned)
 4 DATA SET NAME ... ===> TEMP(VPHONEC)
 5 DATA SET PASSWORD ===>                 (If password protected)
Enter options as desired:
 6 ACTION .......... ===> ADD             (ADD new or REPLACE old declaration)
 7 COLUMN LABEL .... ===> NO              (Enter YES for column label)
 8 STRUCTURE NAME .. ===>                 (Optional)
 9 FIELD NAME PREFIX ===>                 (Optional)
10 DELIMIT DBCS .... ===> YES             (Enter YES to delimit DBCS identifiers)
11 COLUMN SUFFIX ... ===> NO              (Enter YES to append column name)
12 INDICATOR VARS .. ===> NO              (Enter YES for indicator variables)
13 ADDITIONAL OPTIONS===> NO              (Enter YES to change additional options)

PRESS: ENTER to process      END to exit      HELP for more information

Figure 10. DCLGEN panel displaying system and user return codes

   c. Press Enter to return to the DB2I Primary Option menu.
3. Exit from DB2I.
4. Examine the DCLGEN output by selecting either the browse or the edit option from the ISPF/PDF menu to view the results in the specified data set member. For this example, the data set to edit is prefix.TEMP.COBOL(VPHONEC). This data set member contains the following information.
***** DCLGEN TABLE(DSN8910.VPHONE)                                  ***
*****        LIBRARY(SYSADM.TEMP.COBOL(VPHONEC))                    ***
*****        QUOTE                                                  ***
***** ... IS THE DCLGEN COMMAND THAT MADE THE FOLLOWING STATEMENTS  ***
     EXEC SQL DECLARE DSN8910.VPHONE TABLE
     ( LASTNAME                       VARCHAR(15) NOT NULL,
       FIRSTNAME                      VARCHAR(12) NOT NULL,
       MIDDLEINITIAL                  CHAR(1) NOT NULL,
       PHONENUMBER                    VARCHAR(4) NOT NULL,
       EMPLOYEENUMBER                 CHAR(6) NOT NULL,
       DEPTNUMBER                     CHAR(3) NOT NULL,
       DEPTNAME                       VARCHAR(36) NOT NULL
     ) END-EXEC.
***** COBOL DECLARATION FOR TABLE DSN8910.VPHONE                  ******
 01  DCLVPHONE.
     10 LASTNAME.
        49 LASTNAME-LEN        PIC S9(4) USAGE COMP.
        49 LASTNAME-TEXT       PIC X(15).
     10 FIRSTNAME.
        49 FIRSTNAME-LEN       PIC S9(4) USAGE COMP.
        49 FIRSTNAME-TEXT      PIC X(12).
     10 MIDDLEINITIAL          PIC X(1).
     10 PHONENUMBER.
        49 PHONENUMBER-LEN     PIC S9(4) USAGE COMP.
        49 PHONENUMBER-TEXT    PIC X(4).
     10 EMPLOYEENUMBER         PIC X(6).
     10 DEPTNUMBER             PIC X(3).
     10 DEPTNAME.
        49 DEPTNAME-LEN        PIC S9(4) USAGE COMP.
        49 DEPTNAME-TEXT       PIC X(36).
***** THE NUMBER OF COLUMNS DESCRIBED BY THIS DECLARATION IS 7    ******
You can now pull these declarations into your program by using an SQL INCLUDE statement.
Defining the items that your program can use to check whether an SQL statement executed successfully
If your program contains SQL statements, the program should define some infrastructure so that it can check whether the statements executed successfully. You can either include an SQL communications area (SQLCA), which contains SQLCODE and SQLSTATE variables, or declare individual SQLCODE and SQLSTATE host variables.
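The following COBOL fragment is a minimal sketch of the two approaches. It assumes the standard SQLCA copy member; the stand-alone declarations in the second option apply only when the program is prepared with a precompiler option, such as STDSQL(YES), that expects individual SQLCODE and SQLSTATE declarations instead of an SQLCA.

 WORKING-STORAGE SECTION.
* Option 1: include an SQL communications area (SQLCA), which
* contains the SQLCODE and SQLSTATE fields.
     EXEC SQL INCLUDE SQLCA END-EXEC.
* Option 2: declare individual SQLCODE and SQLSTATE host variables
* (do not also include an SQLCA).
*01  SQLCODE        PIC S9(9) USAGE COMP.
*01  SQLSTATE       PIC X(5).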
v DESCRIBE CURSOR host-variable INTO descriptor-name
v DESCRIBE INPUT statement-name INTO descriptor-name
v DESCRIBE PROCEDURE host-variable INTO descriptor-name
v DESCRIBE TABLE host-variable INTO descriptor-name
v EXECUTE ... USING DESCRIPTOR descriptor-name
v FETCH ... INTO DESCRIPTOR descriptor-name
v OPEN ... USING DESCRIPTOR descriptor-name
v PREPARE ... INTO descriptor-name
Unlike the SQLCA, a program can have more than one SQLDA, and an SQLDA can have any valid name.
Procedure
To define SQL descriptor areas:
Take the actions that are appropriate for the programming language that you use.
Related tasks:
Defining SQL descriptor areas in assembler on page 231
Defining SQL descriptor areas in C on page 250
Defining SQL descriptor areas in COBOL on page 301
Defining SQL descriptor areas in Fortran on page 372
Defining SQL descriptor areas in PL/I on page 384
Defining SQL descriptor areas in REXX on page 413
Related reference: Descriptions of SQL processing options on page 959 Description of SQLCA fields (DB2 SQL) SQL descriptor area (SQLDA) (DB2 SQL) The REXX SQLCA (DB2 SQL)
Procedure
To declare host variables, host variable arrays, and host structures: Use the techniques that are appropriate for the programming language that you use.
Related tasks: Accessing data by using a rowset-positioned cursor on page 714 Determining whether a retrieved value in a host variable is null or truncated on page 150 Related reference: Descriptions of SQL processing options on page 959
Host variables
Use host variables to pass a single data item between DB2 and your application.
A host variable is a single data item that is declared in the host language to be used within an SQL statement. You can use host variables in application programs that are written in the following languages: assembler, C, C++, COBOL, Fortran, and PL/I to perform the following actions:
v Retrieve data into the host variable for your application program's use
v Place data into the host variable to insert into a table or to change the contents of a row
v Use the data in the host variable when evaluating a WHERE or HAVING clause
v Assign the value that is in the host variable to a special register, such as CURRENT SQLID and CURRENT DEGREE (a short sketch follows the reference list below)
v Insert null values into columns by using a host indicator variable that contains a negative value
v Use the data in the host variable in statements that process dynamic SQL, such as EXECUTE, PREPARE, and OPEN
Related concepts:
Rules for host variables in an SQL statement on page 147
Related reference:
Host variables in assembler on page 232
Host variables in C on page 251
Host variables in COBOL on page 302
Host variables in Fortran on page 373
Host variables in PL/I on page 385
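As a brief sketch of the special-register item in the preceding list, the following COBOL fragment assigns a host variable value to the CURRENT DEGREE special register. The variable name DEGREE-HV is hypothetical.

 01  DEGREE-HV    PIC X(3).
     ...
*    Enable parallelism for subsequent queries in this thread
     MOVE 'ANY' TO DEGREE-HV.
     EXEC SQL SET CURRENT DEGREE = :DEGREE-HV END-EXEC.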
Related concepts: Host variable arrays in an SQL statement on page 155 Related tasks: Inserting multiple rows of data from host variable arrays on page 157 Retrieving multiple rows of data into host variable arrays on page 156 Related reference: Host variable arrays in C on page 263 Host variable arrays in COBOL on page 312 Host variable arrays in PL/I on page 391
Host structures
Use host structures to pass a group of host variables between DB2 and your application. A host structure is a group of host variables that can be referenced with a single name. You can use host structures in all host languages except REXX. You define host structures with statements in the host language. You can refer to a host structure in any context where you want to refer to the list of host variables in the structure. A host structure reference is equivalent to a reference to each of the host variables within the structure in the order in which they are defined in the structure declaration. You can also use indicator variables (or indicator structures) with host structures. Related tasks: Retrieving a single row of data into a host structure on page 157 Related reference: Host structures in C on page 271 Host structures in COBOL on page 321 Host structures in PL/I on page 396
X. Your program should check the indicator variable before using X. If the indicator variable is negative, you know that X is null and any value that you find in X is irrelevant.
When your program uses variable X to assign a null value to a column, the program should set the indicator variable to a negative number. DB2 then assigns a null value to the column and ignores any value in X.
An indicator variable array contains a series of small integers to help you determine the associated information for the corresponding item in a host data array. When you retrieve data into a host variable array, you can check the values in the associated indicator array to determine how to handle each data item. If a value in the associated indicator array is negative, you can disregard the contents of the corresponding element in the host variable array. Values in indicator arrays have the following meanings:
-1  The corresponding row in the column that is being retrieved is null.
-2  DB2 returns a null value, because an error occurred in numeric conversion or in an arithmetic expression in the corresponding row.
-3  DB2 returns a null value, because a hole was detected for the corresponding row during a multiple-row FETCH operation.
An indicator structure is an array of halfword integer variables that supports a specified host structure. If the column values that your program retrieves into a host structure can be null, you can attach an indicator structure name to the host structure name. This name enables DB2 to notify your program about each null value it returns to a host variable in the host structure. Related concepts: Holes in the result table of a scrollable cursor on page 724 Related tasks: Executing SQL statements by using a rowset cursor on page 716 Related reference: Indicator variables in assembler on page 237 Indicator variables, indicator arrays, and host structure indicator arrays in C on page 273 Indicator variables, indicator arrays, and host structure indicator arrays in COBOL on page 326 Indicator variables in Fortran on page 376 Indicator variables in PL/I on page 398
Procedure
To set the CCSID for host variables: Specify the DECLARE VARIABLE statement after the corresponding host variable declaration and before your first reference to that host variable. This statement associates an encoding scheme and a CCSID with individual host variables. You can use this statement in static or dynamic SQL applications.
Restriction: You cannot use the DECLARE VARIABLE statement to control the CCSID and encoding scheme of data that you retrieve or update by using an SQLDA. The DECLARE VARIABLE statement has the following effects on a host variable: v When you use the host variable to update a table, the local subsystem or the remote server assumes that the data in the host variable is encoded with the CCSID and encoding scheme that the DECLARE VARIABLE statement assigns. v When you retrieve data from a local or remote table into the host variable, the retrieved data is converted to the CCSID and encoding scheme that are assigned by the DECLARE VARIABLE statement.
Example
Suppose that you are writing a C program that runs on a DB2 for z/OS subsystem. The subsystem has an EBCDIC application encoding scheme. The C program retrieves data from the following columns of a local table that is defined with the CCSID UNICODE option:
PARTNUM   CHAR(10)
JPNNAME   GRAPHIC(10)
ENGNAME   VARCHAR(30)
Because the application encoding scheme for the subsystem is EBCDIC, the retrieved data is EBCDIC. To make the retrieved data Unicode, use DECLARE VARIABLE statements to specify that the data that is retrieved from these columns is encoded in the default Unicode CCSIDs for the subsystem. Suppose that you want to retrieve the character data in Unicode CCSID 1208 and the graphic data in Unicode CCSID 1200. Use the following DECLARE VARIABLE statements:
EXEC SQL BEGIN DECLARE SECTION;
char hvpartnum[11];
EXEC SQL DECLARE :hvpartnum VARIABLE CCSID 1208;
sqldbchar hvjpnname[11];
EXEC SQL DECLARE :hvjpnname VARIABLE CCSID 1200;
struct {
  short len;
  char d[30];
} hvengname;
EXEC SQL DECLARE :hvengname VARIABLE CCSID 1208;
EXEC SQL END DECLARE SECTION;
Determining what caused an error when retrieving data into a host variable
Errors that occur when DB2 passes data to host variables in an application are usually caused by a problem in converting from one data type to another. These errors do not affect the position of the cursor.
The variable to which DB2 assigns the data is called the output host variable. If you provide an indicator variable for the output host variable or if data type conversion is not required, DB2 returns a positive SQLCODE for the row in most cases. In other cases where data conversion problems occur, DB2 returns a negative SQLCODE for that row. Regardless of the SQLCODE for the row, no new values are assigned to the host variable or to subsequent variables for that row. Any values that are already assigned to variables remain assigned. Even when a negative SQLCODE is returned for a row, statement processing continues and DB2 returns a positive SQLCODE for the statement (SQLSTATE 01668, SQLCODE +354).
Procedure
To determine what caused an error when retrieving data into a host variable: 1. When DB2 returns SQLCODE = +354, use the GET DIAGNOSTICS statement with the NUMBER option to determine the number of errors and warnings. Example: Suppose that no indicator variables are provided for the values that are returned by the following statement:
FETCH FIRST ROWSET FROM C1 FOR 10 ROWS INTO :hva_col1, :hva_col2;
For each row with an error, DB2 records a negative SQLCODE and continues processing until the 10 rows are fetched. When SQLCODE = +354 is returned for the statement, you can use the GET DIAGNOSTICS statement to determine which errors occurred for which rows. The following statement returns num_rows = 10 and num_cond = 3:
GET DIAGNOSTICS :num_rows = ROW_COUNT, :num_cond = NUMBER;
2. To investigate the errors and warnings, use additional GET DIAGNOSTIC statements with the CONDITION option. Example: To investigate the three conditions that were reported in the example in the previous step, use the following statements:
Table 40. GET DIAGNOSTICS statements to investigate conditions
Statement                                         Output
GET DIAGNOSTICS CONDITION 3                       sqlstate = 22003
  :sqlstate = RETURNED_SQLSTATE,                  sqlcode = -304
  :sqlcode = DB2_RETURNED_SQLCODE,                row_num = 5
  :row_num = DB2_ROW_NUMBER;
GET DIAGNOSTICS CONDITION 2                       sqlstate = 22003
  :sqlstate = RETURNED_SQLSTATE,                  sqlcode = -802
  :sqlcode = DB2_RETURNED_SQLCODE,                row_num = 7
  :row_num = DB2_ROW_NUMBER;
GET DIAGNOSTICS CONDITION 1                       sqlstate = 01668
  :sqlstate = RETURNED_SQLSTATE,                  sqlcode = +354
  :sqlcode = DB2_RETURNED_SQLCODE,                row_num = 0
  :row_num = DB2_ROW_NUMBER;
This output shows that the fifth row has a data mapping error (-304) for column 1 and that the seventh row has a data mapping error (-802) for column 2. These rows do not contain valid data, and they should not be used.
Related concepts: Indicator variables, arrays, and structures on page 140 Related reference: GET DIAGNOSTICS (DB2 SQL) Related information: +354 (DB2 Codes)
v Graphic data types are compatible with each other:
  Assembler: A GRAPHIC, VARGRAPHIC, or DBCLOB column is compatible with a fixed-length or varying-length assembler graphic character host variable.
  C/C++: A GRAPHIC, VARGRAPHIC, or DBCLOB column is compatible with a single character, NUL-terminated, or VARGRAPHIC structured form of a C graphic host variable.
  COBOL: A GRAPHIC, VARGRAPHIC, or DBCLOB column is compatible with a fixed-length or varying-length COBOL graphic string host variable.
  PL/I: A GRAPHIC, VARGRAPHIC, or DBCLOB column is compatible with a fixed-length or varying-length PL/I graphic character host variable.
v Graphic data types are partially compatible with DBCLOB locators. You can perform the following assignments (a short sketch appears after this topic's related references):
  - Assign a value in a DBCLOB locator to a GRAPHIC or VARGRAPHIC column.
  - Use a SELECT INTO statement to assign a GRAPHIC or VARGRAPHIC column to a DBCLOB locator host variable.
  - Assign a GRAPHIC or VARGRAPHIC output parameter from a user-defined function or stored procedure to a DBCLOB locator host variable.
  - Use a SET assignment statement to assign a GRAPHIC or VARGRAPHIC transition variable to a DBCLOB locator host variable.
  - Use a VALUES INTO statement to assign a GRAPHIC or VARGRAPHIC function parameter to a DBCLOB locator host variable.
  However, you cannot use a FETCH statement to assign a value in a GRAPHIC or VARGRAPHIC column to a DBCLOB locator host variable.
v Binary data types are compatible with each other.
v Binary data types are partially compatible with BLOB locators. You can perform the following assignments:
  - Assign a value in a BLOB locator to a BINARY or VARBINARY column.
  - Use a SELECT INTO statement to assign a BINARY or VARBINARY column to a BLOB locator host variable.
  - Assign a BINARY or VARBINARY output parameter from a user-defined function or stored procedure to a BLOB locator host variable.
  - Use a SET assignment statement to assign a BINARY or VARBINARY transition variable to a BLOB locator host variable.
  - Use a VALUES INTO statement to assign a BINARY or VARBINARY function parameter to a BLOB locator host variable.
  However, you cannot use a FETCH statement to assign a value in a BINARY or VARBINARY column to a BLOB locator host variable.
v Fortran: A BINARY, VARBINARY, or BLOB column or BLOB locator is compatible only with a BLOB host variable.
v C: For varying-length BIT data, use BINARY. Some C string manipulation functions process NUL-terminated strings and other functions process strings that are not NUL-terminated. The C string manipulation functions that process NUL-terminated strings cannot handle bit data because these functions might misinterpret a NUL character to be a NUL-terminator.
v Datetime data types are compatible with character host variables.
  Assembler: A DATE, TIME, or TIMESTAMP column is compatible with a fixed-length or varying-length assembler character host variable.
  C/C++: A DATE, TIME, or TIMESTAMP column is compatible with a single-character, NUL-terminated, or VARCHAR structured form of a C character host variable.
  COBOL: A DATE, TIME, or TIMESTAMP column is compatible with a fixed-length or varying-length COBOL character host variable.
  Fortran: A DATE, TIME, or TIMESTAMP column is compatible with a Fortran character host variable.
  PL/I: A DATE, TIME, or TIMESTAMP column is compatible with a fixed-length or varying-length PL/I character host variable.
v The ROWID column is compatible only with a ROWID host variable.
v A host variable is compatible with a distinct type if the host variable type is compatible with the source type of the distinct type.
v XML columns are compatible with the XML host variable types, character types, and binary string types. Recommendation: Use the XML host variable types for data from XML columns.
v Assembler: You can assign LOB data to a file reference variable (BLOB_FILE, CLOB_FILE, and DBCLOB_FILE).
When necessary, DB2 automatically converts a fixed-length string to a varying-length string, or a varying-length string to a fixed-length string.
Related concepts:
Distinct types on page 489
Host variable data types for XML data in embedded SQL applications on page 216
Related reference:
Equivalent SQL and assembler data types on page 238
Equivalent SQL and C data types on page 278
Equivalent SQL and COBOL data types on page 329
Equivalent SQL and Fortran data types on page 377
Equivalent SQL and PL/I data types on page 399
Equivalent SQL and REXX data types on page 414
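As a sketch of one of the permitted graphic-to-DBCLOB-locator assignments listed earlier, the following COBOL fragment uses SELECT INTO to assign a VARGRAPHIC column to a DBCLOB locator host variable. The table, column, and variable names are hypothetical.

 01  JPN-DESC-LOC    USAGE IS SQL TYPE IS DBCLOB-LOCATOR.
 01  HV-PRODNO       PIC X(10).
     ...
*    Retrieve the graphic column value through a DBCLOB locator
     EXEC SQL
       SELECT JPNDESC INTO :JPN-DESC-LOC
         FROM MYSCHEMA.PRODUCTS
         WHERE PRODNO = :HV-PRODNO
     END-EXEC.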
Procedure
To embed SQL statements in your application: Take action based on the program language that you use.
Related concepts:
SQL statements in assembler programs on page 243
SQL statements in C programs on page 283
SQL statements in COBOL programs on page 334
SQL statements in Fortran programs on page 379
SQL statements in PL/I programs on page 403
SQL statements in REXX programs on page 415
Procedure
To delimit an SQL statement:
Take action based on the programming language that you use.
Related concepts:
Delimiters in SQL statements in assembler programs on page 248
Delimiters in SQL statements in C programs on page 287
Delimiters in SQL statements in COBOL programs on page 339
Delimiters in SQL statements in Fortran programs on page 381
Delimiters in SQL statements in PL/I programs on page 407
Delimiters in SQL statements in REXX programs on page 418
v To optimize performance, make sure that the host language declaration maps as closely as possible to the data type of the associated data in the database (a short sketch follows the related concepts below).
v For assignments and comparisons between a DB2 column and a host variable of a different data type or length, expect conversions to occur.
Related concepts:
Dynamic SQL on page 158
Assignment and comparison (DB2 SQL)
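As an illustration of matching declarations, the following COBOL sketch pairs three columns of the sample DSN8910.EMP table with host variable declarations that map closely to the column data types; the data item names are hypothetical.

 01  EMP-HOST-VARS.
*    Matches EMPNO CHAR(6)
     05 HV-EMPNO      PIC X(6).
*    Matches EDLEVEL SMALLINT
     05 HV-EDLEVEL    PIC S9(4) USAGE COMP.
*    Matches SALARY DECIMAL(9,2)
     05 HV-SALARY     PIC S9(7)V9(2) USAGE COMP-3.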
Procedure
To retrieve a single row of data into host variables: In the SELECT statement specify the INTO clause with the name of one or more host variables to contain the retrieved values. Specify one variable for each value that is to be retrieved. The retrieved value can be a column value, a value of a host variable, the result of an expression, or the result of an aggregate function. Recommendation: If you want to ensure that only one row is returned, specify the FETCH FIRST 1 ROW ONLY clause. Consider using the ORDER BY clause to control which row is returned. If you specify both the ORDER BY clause and the FETCH FIRST clause, ordering is performed on the entire result set before the first row is returned. DB2 assigns the first value in the result row to the first variable in the list, the second value to the second variable, and so on. If the SELECT statement returns more than one row, DB2 returns an error, and any data that is returned is undefined and unpredictable.
Examples
Example of retrieving a single row into a host variable: Suppose that you are retrieving the LASTNAME and WORKDEPT column values from the DSN8910.EMP table for a particular employee. You can define a host variable in your program to hold each column value and then name the host variables in the INTO clause of the SELECT statement, as shown in the following COBOL example.
MOVE 000110 TO CBLEMPNO. EXEC SQL SELECT LASTNAME, WORKDEPT INTO :CBLNAME, :CBLDEPT FROM DSN8910.EMP WHERE EMPNO = :CBLEMPNO END-EXEC.
In this example, the host variable CBLEMPNO is preceded by a colon (:) in the SQL statement, but it is not preceded by a colon in the COBOL MOVE statement.
This example also uses a host variable to specify a value in a search condition. The host variable CBLEMPNO is defined for the employee number, so that you can retrieve the name and the work department of the employee whose number is the same as the value of the host variable, CBLEMPNO; in this case, 000110. In the DATA DIVISION section of a COBOL program, you must declare the host variables CBLEMPNO, CBLNAME, and CBLDEPT to be compatible with the data types in the columns EMPNO, LASTNAME, and WORKDEPT of the DSN8910.EMP table. Example of ensuring that a query returns only a single row: You can use the FETCH FIRST 1 ROW ONLY clause in a SELECT statement to ensure that only one row is returned. This action prevents undefined and unpredictable data from being returned when you specify the INTO clause of the SELECT statement. The following example SELECT statement ensures that only one row of the DSN8910.EMP table is returned.
EXEC SQL SELECT LASTNAME, WORKDEPT INTO :CBLNAME, :CBLDEPT FROM DSN8910.EMP FETCH FIRST 1 ROW ONLY END-EXEC.
You can include an ORDER BY clause in the preceding example to control which row is returned. The following example SELECT statement ensures that the only row returned is the one with a last name that is first alphabetically.
EXEC SQL
  SELECT LASTNAME, WORKDEPT
    INTO :CBLNAME, :CBLDEPT
    FROM DSN8910.EMP
    ORDER BY LASTNAME
    FETCH FIRST 1 ROW ONLY
END-EXEC.
Example of retrieving the results of host variable values and expressions into host variables: When you specify a list of items in the SELECT clause, that list can include more than the column names of tables and views. You can request a set of column values mixed with host variable values and constants. For example, the following query requests the values of several columns (EMPNO, LASTNAME, and SALARY), the value of a host variable (RAISE), and the value of the sum of a column and a host variable (SALARY and RAISE). For each of these five items in the SELECT list, a host variable is listed in the INTO clause.
MOVE 4476 TO RAISE.
MOVE 000220 TO PERSON.
EXEC SQL
  SELECT EMPNO, LASTNAME, SALARY, :RAISE, SALARY + :RAISE
    INTO :EMP-NUM, :PERSON-NAME, :EMP-SAL, :EMP-RAISE, :EMP-TTL
    FROM DSN8910.EMP
    WHERE EMPNO = :PERSON
END-EXEC.
The preceding SELECT statement returns the following results. The column headings represent the names of the host variables.
EMP-NUM   PERSON-NAME   EMP-SAL   EMP-RAISE   EMP-TTL
=======   ===========   =======   =========   =======
000220    LUTZ          29840     4476        34316
Example of retrieving the result of an aggregate function into a host variable: A query can request summary values to be returned from aggregate functions and store those values in host variables. For example, the following query requests that the result of the AVG function be stored in the AVG-SALARY host variable.
MOVE D11 TO DEPTID. EXEC SQL SELECT WORKDEPT, AVG(SALARY) INTO :WORK-DEPT, :AVG-SALARY FROM DSN8910.EMP WHERE WORKDEPT = :DEPTID END-EXEC.
Related tasks: Retrieving a set of rows by using a cursor on page 704 Related reference: SELECT INTO (DB2 SQL)
Procedure
To determine whether a retrieved value in a host variable is null or truncated: Determine the value of the indicator variable, array, or structure that is associated with the host variable, array, or structure. Those values have the following meanings:
Table 41. Meanings of values in indicator variables
Value of indicator variable   Meaning
Less than zero                The column value is null. The value of the host variable does not change from its previous value. If the indicator variable value is -2, the column value is null because of a numeric or character conversion error.
Zero                          The column value is nonnull. If the column value is a character string, the retrieved value is not truncated.
Positive integer              The retrieved value is truncated. The integer is the original length of the string.
Examples
Example of testing an indicator variable: Assume that you have defined the following indicator variable INDNULL for the host variable CBLPHONE.
EXEC SQL SELECT PHONENO INTO :CBLPHONE:INDNULL FROM DSN8910.EMP WHERE EMPNO = :EMPID END-EXEC.
You can then test INDNULL for a negative value. If the value is negative, the corresponding value of PHONENO is null, and you can disregard the contents of CBLPHONE. Example of testing an indicator variable array: Suppose that you declare the following indicator array INDNULL for the host variable array CBLPHONE.
EXEC SQL FETCH NEXT ROWSET CURS1 FOR 10 ROWS INTO :CBLPHONE :INDNULL END-EXEC.
After the multiple-row FETCH statement, you can test each element of the INDNULL array for a negative value. If an element is negative, you can disregard the contents of the corresponding element in the CBLPHONE host variable array. Example of testing an indicator structure in COBOL: The following example defines the indicator structure EMP-IND as an array that contains six values and corresponds to the PEMP-ROW host structure.
01 PEMP-ROW.
   10 EMPNO            PIC X(6).
   10 FIRSTNME.
      49 FIRSTNME-LEN  PIC S9(4) USAGE COMP.
      49 FIRSTNME-TEXT PIC X(12).
   10 MIDINIT          PIC X(1).
   10 LASTNAME.
      49 LASTNAME-LEN  PIC S9(4) USAGE COMP.
      49 LASTNAME-TEXT PIC X(15).
   10 WORKDEPT         PIC X(3).
   10 EMP-BIRTHDATE    PIC X(10).
01 INDICATOR-TABLE.
   02 EMP-IND          PIC S9(4) COMP OCCURS 6 TIMES.
   .
   .
   .
MOVE 000230 TO EMPNO.
   .
   .
   .
EXEC SQL
  SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, BIRTHDATE
    INTO :PEMP-ROW:EMP-IND
    FROM DSN8910.EMP
    WHERE EMPNO = :EMPNO
END-EXEC.
You can test the indicator structure EMP-IND for negative values. If, for example, EMP-IND(6) contains a negative value, the corresponding host variable in the host structure (EMP-BIRTHDATE) contains a null value. Related concepts: Arithmetic and conversion errors on page 214 Related tasks: Declaring host variables and indicator variables on page 138
Procedure
To determine whether a column value is null: Use the IS NULL predicate or the IS DISTINCT FROM predicate. Restriction: You cannot determine whether a column value is null by comparing it to a host variable with an indicator variable that is set to -1.
Example
The following code, which uses an indicator variable, does not select the employees who have no phone number:
MOVE -1 TO PHONE-IND. EXEC SQL SELECT LASTNAME INTO :PGM-LASTNAME FROM DSN8910.EMP WHERE PHONENO = :PHONE-HV:PHONE-IND END-EXEC.
Instead, use the following statement with the IS NULL predicate to select employees who have no phone number:
EXEC SQL SELECT LASTNAME INTO :PGM-LASTNAME FROM DSN8910.EMP WHERE PHONENO IS NULL END-EXEC.
To select employees whose phone numbers are equal to the value of :PHONE-HV and employees who have no phone number (as in the second example), code two predicates, one to handle the non-null values and another to handle the null values, as in the following statement:
EXEC SQL
  SELECT LASTNAME
    INTO :PGM-LASTNAME
    FROM DSN8910.EMP
    WHERE (PHONENO = :PHONE-HV AND PHONENO IS NOT NULL AND :PHONE-HV IS NOT NULL)
       OR (PHONENO IS NULL AND :PHONE-HV:PHONE-IND IS NULL)
END-EXEC.
You can simplify the preceding example by coding the following statement with the NOT form of the IS DISTINCT FROM predicate:
EXEC SQL SELECT LASTNAME INTO :PGM-LASTNAME FROM DSN8910.EMP WHERE PHONENO IS NOT DISTINCT FROM :PHONE-HV:PHONE-IND END-EXEC.
Related tasks: Declaring host variables and indicator variables on page 138 Related reference: DISTINCT predicate (DB2 SQL) NULL predicate (DB2 SQL)
Procedure
To update data by using host variables: 1. Declare the necessary host variables. 2. Specify an UPDATE statement with the appropriate host variable names in the SET clause.
Examples
Example of updating a single row by using a host variable: The following COBOL example changes an employee's phone number to the value in the NEWPHONE host variable. The employee ID value is passed through the EMPID host variable.
MOVE 4246 TO NEWPHONE. MOVE 000110 TO EMPID. EXEC SQL UPDATE DSN8910.EMP SET PHONENO = :NEWPHONE WHERE EMPNO = :EMPID END-EXEC.
Example of updating multiple rows by using a host variable value in the search condition: The following example gives the employees in a particular department a salary increase of 10%. The department value is passed through the DEPTID host variable.
MOVE D11 TO DEPTID. EXEC SQL UPDATE DSN8910.EMP SET SALARY = 1.10 * SALARY WHERE WORKDEPT = :DEPTID END-EXEC.
Procedure
To insert a single row by using host variables: Specify an INSERT statement with column values in the VALUES clause. Specify host variables or a combination of host variables and constants as the column values. DB2 inserts the first value into the first column in the list, the second value into the second column, and so on.
Example
The following example uses host variables to insert a single row into the activity table.
EXEC SQL INSERT INTO DSN8910.ACT VALUES (:HV-ACTNO, :HV-ACTKWD, :HV-ACTDESC) END-EXEC.
Related tasks: Inserting multiple rows of data from host variable arrays on page 157 Related reference: INSERT (DB2 SQL)
Procedure
To insert null values into columns by using indicator variables or arrays:
1. Define an indicator variable or array for a particular host variable or array.
2. Assign a negative value to the indicator variable or array.
3. Issue the appropriate INSERT, UPDATE, or MERGE statement with the host variable or array and its indicator variable or array.
When DB2 processes INSERT, UPDATE, and MERGE statements, it checks the indicator variable if one exists. If the indicator variable is negative, the column
value is null. If the indicator variable is greater than -1, the associated host variable contains a value for the column.
Examples
Example of setting a column value to null by using an indicator variable: Suppose your program reads an employee ID and a new phone number and must update the employee table with the new number. The new number could be missing if the old number is incorrect, but a new number is not yet available. If the new value for column PHONENO might be null, you can use an indicator variable, as shown in the following UPDATE statement.
EXEC SQL UPDATE DSN8910.EMP SET PHONENO = :NEWPHONE:PHONEIND WHERE EMPNO = :EMPID END-EXEC.
When NEWPHONE contains a non-null value, set the indicator variable PHONEIND to zero by preceding the UPDATE statement with the following line:
MOVE 0 TO PHONEIND.
When NEWPHONE contains a null value, set PHONEIND to a negative value by preceding the UPDATE statement with the following line:
MOVE -1 TO PHONEIND.
Example of setting a column value to null by using an indicator variable array: Assume that host variable arrays hva1 and hva2 have been populated with values that are to be inserted into the ACTNO and ACTKWD columns. Assume the ACTDESC column allows nulls. To set the ACTDESC column to null, assign -1 to the elements in its indicator array, ind3, as shown in the following example:
/* Initialize each indicator array */
for (i=0; i<10; i++) {
  ind1[i] = 0;
  ind2[i] = 0;
  ind3[i] = -1;
}
EXEC SQL
  INSERT INTO DSN8910.ACT
    (ACTNO, ACTKWD, ACTDESC)
    VALUES (:hva1:ind1, :hva2:ind2, :hva3:ind3)
    FOR 10 ROWS;
DB2 ignores the values in the hva3 array and assigns null values to the ACTDESC column for the 10 rows that are inserted. Related tasks: Declaring host variables and indicator variables on page 138
To use a host variable array in an SQL statement, specify any valid host variable array that is declared according to the host language rules. You can specify host variable arrays in C or C++, COBOL, and PL/I. You must declare the array in the host program before you use it.
Restrictions: Use of host variable arrays in assembler programs is limited in the following ways:
v The DB2 precompiler does not recognize declarations of host variable arrays for assembler; it recognizes these declarations only in C, COBOL, and PL/I.
v Assembler does not support multiple-row MERGE. You cannot specify MERGE statements that reference host variable arrays.
v Assembler support for multiple-row FETCH is limited to the FETCH statement with the INTO DESCRIPTOR clause. For example:
EXEC SQL FETCH NEXT ROWSET FROM C1 FOR 10 ROWS INTO DESCRIPTOR :SQLDA X
v Assembler support for multiple-row INSERT is limited to the following cases: Static multiple-row INSERT statement with scalar values (scalar host variables or scalar expressions) in the VALUES clause. For example:
EXEC SQL INSERT INTO T1 VALUES (1, CURRENT DATE, TEST) FOR 10 ROWS X
Dynamic multiple-row INSERT executed with the USING DESCRIPTOR clause on the EXECUTE statement. For example:
ATR      DS    CL20                   ATTRIBUTES FOR PREPARE
S1       DS    H,CL30                 VARCHAR STATEMENT STRING
         MVC   ATR(20),=C'FOR MULTIPLE ROWS'
         MVC   S1(2),=H'25'
         MVC   S1+2(30),=C'INSERT INTO T1 VALUES (?)'
         EXEC SQL PREPARE STMT ATTRIBUTES :ATR FROM :S1
         EXEC SQL EXECUTE STMT USING DESCRIPTOR :SQLDA FOR 10 ROWS

where the descriptor is set up correctly in advance according to the specifications for dynamic execution of a multiple-row INSERT statement with a descriptor.
Related concepts: Host variable arrays on page 139 Related tasks: Embedding SQL statements in your application on page 146 Inserting multiple rows of data from host variable arrays on page 157 Retrieving multiple rows of data into host variable arrays
Related concepts: Host variable arrays in an SQL statement on page 155 Host variable arrays on page 139 Related tasks: Accessing data by using a rowset-positioned cursor on page 714 Inserting multiple rows of data from host variable arrays
Example
You can insert the number of rows that are specified in the host variable NUM-ROWS by using the following INSERT statement:
EXEC SQL INSERT INTO DSN8910.ACT (ACTNO, ACTKWD, ACTDESC) VALUES (:HVA1, :HVA2, :HVA3) FOR :NUM-ROWS ROWS END-EXEC.
Assume that the host variable arrays HVA1, HVA2, and HVA3 have been declared and populated with the values that are to be inserted into the ACTNO, ACTKWD, and ACTDESC columns. The NUM-ROWS host variable specifies the number of rows that are to be inserted, which must be less than or equal to the dimension of each host variable array. Related tasks: Retrieving multiple rows of data into host variable arrays on page 156
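The preceding example assumes that the arrays are already declared. The following COBOL sketch is one possible declaration, assuming ten-element arrays and the sample DSN8910.ACT column definitions (ACTNO SMALLINT, ACTKWD CHAR(6), ACTDESC VARCHAR(20)); the group names and level numbers are illustrative only.

 01  ACT-ARRAYS.
*    ACTNO values (SMALLINT)
     05 HVA1  PIC S9(4) USAGE COMP OCCURS 10 TIMES.
*    ACTKWD values (CHAR(6))
     05 HVA2  PIC X(6) OCCURS 10 TIMES.
*    ACTDESC values (VARCHAR(20))
     05 HVA3  OCCURS 10 TIMES.
        49 HVA3-LEN   PIC S9(4) USAGE COMP.
        49 HVA3-TEXT  PIC X(20).
 01  NUM-ROWS  PIC S9(9) USAGE COMP.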
If you want to avoid listing host variables, you can substitute the name of a structure, say :PEMP, that contains :EMPNO, :FIRSTNME, :MIDINIT, :LASTNAME, and :WORKDEPT. The example then reads:
EXEC SQL SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT INTO :PEMP FROM DSN8910.VEMP WHERE EMPNO = :EMPID END-EXEC.
You can declare a host structure yourself, or you can use DCLGEN to generate a COBOL record description, PL/I structure declaration, or C structure declaration that corresponds to the columns of a table. Related concepts: DCLGEN (declarations generator) on page 125 Host structures on page 140 Example: Adding DCLGEN declarations to a library on page 134
Dynamic SQL
Dynamic SQL statements are prepared and executed while the program is running. Use dynamic SQL when you do not know what SQL statements your application needs to execute before run time. Before you decide to use dynamic SQL, you should consider whether using static SQL or dynamic SQL is the best technique for your application. For most DB2 users, static SQL, which is embedded in a host language program and bound before the program runs, provides a straightforward, efficient path to DB2 data. You can use static SQL when you know before run time what SQL statements your application needs to execute. Dynamic SQL prepares and executes the SQL statements within a program, while the program is running. Four types of dynamic SQL are: v Interactive SQL A user enters SQL statements through SPUFI or the command line processor. DB2 prepares and executes those statements as dynamic SQL statements. v Embedded dynamic SQL Your application puts the SQL source in host variables and includes PREPARE and EXECUTE statements that tell DB2 to prepare and run the contents of those host variables at run time. You must precompile and bind programs that include embedded dynamic SQL. v Deferred embedded SQL Deferred embedded SQL statements are neither fully static nor fully dynamic. Like static statements, deferred embedded SQL statements are embedded within applications, but like dynamic statements, they are prepared at run time. DB2 processes deferred embedded SQL statements with bind-time rules. For example,
DB2 uses the authorization ID and qualifier determined at bind time as the plan or package owner. Deferred embedded SQL statements are used for DB2 private protocol access to remote data. v Dynamic SQL executed through ODBC functions Your application contains ODBC function calls that pass dynamic SQL statements as arguments. You do not need to precompile and bind programs that use ODBC function calls. Differences between static and dynamic SQL: Static and dynamic SQL are each appropriate for different circumstances. You should consider the differences between the two when determining whether static SQL or dynamic SQL is best for your application. Flexibility of static SQL with host variables When you use static SQL, you cannot change the form of SQL statements unless you make changes to the program. However, you can increase the flexibility of static statements by using host variables. Example: In the following example, the UPDATE statement can update the salary of any employee. At bind time, you know that salaries must be updated, but you do not know until run time whose salaries should be updated, and by how much.
01 IOAREA.
   02 EMPID        PIC X(06).
   02 NEW-SALARY   PIC S9(7)V9(2) COMP-3.
   .
   .   (Other declarations)
   .
READ CARDIN
  RECORD INTO IOAREA
  AT END MOVE 'N' TO INPUT-SWITCH.
   .
   .   (Other COBOL statements)
   .
EXEC SQL
  UPDATE DSN8910.EMP
    SET SALARY = :NEW-SALARY
    WHERE EMPNO = :EMPID
END-EXEC.
The statement (UPDATE) does not change, nor does its basic structure, but the input can change the results of the UPDATE statement. Flexibility of dynamic SQL What if a program must use different types and structures of SQL statements? If there are so many types and structures that it cannot contain a model of each one, your program might need dynamic SQL. | | | | | | | | You can use one of the following programs to execute dynamic SQL: DB2 Query Management Facility (DB2 QMF) Provides an alternative interface to DB2 that accepts almost any SQL statement SPUFI Accepts SQL statements from an input data set, and then processes and executes them dynamically command line processor Accepts SQL statements from a UNIX System Services environment.
Limitations of dynamic SQL You cannot use some of the SQL statements dynamically. | | | For reactive governing cases, the ASUTIME limit specified for the top-level calling package is applied for the entire thread, regardless of any value specified for the routines that are called. Dynamic SQL processing A program that provides for dynamic SQL accepts as input, or generates, an SQL statement in the form of a character string. You can simplify the programming if you can plan the program not to use SELECT statements, or to use only those that return a known number of values of known types. In the most general case, in which you do not know in advance about the SQL statements that will execute, the program typically takes these steps: 1. Translates the input data, including any parameter markers, into an SQL statement 2. Prepares the SQL statement to execute and acquires a description of the result table 3. Obtains, for SELECT statements, enough main storage to contain retrieved data 4. Executes the statement or fetches the rows of data 5. Processes the information returned 6. Handles SQL return codes. Performance of static and dynamic SQL To access DB2 data, an SQL statement requires an access path. Two big factors in the performance of an SQL statement are the amount of time that DB2 uses to determine the access path at run time and whether the access path is efficient. DB2 determines the access path for a statement at either of these times: v When you bind the plan or package that contains the SQL statement v When the SQL statement executes The time at which DB2 determines the access path depends on these factors: v Whether the statement is executed statically or dynamically v Whether the statement contains input host variables v Whether the statement contains a declared global temporary table. Static SQL statements with no input host variables For static SQL statements that do not contain input host variables, DB2 determines the access path when you bind the plan or package. This combination yields the best performance because the access path is already determined when the program executes. Static SQL statements with input host variables | | | | | | For static SQL statements that have input host variables, the time at which DB2 determines the access path depends on which bind option you specify: REOPT(NONE) or REOPT(ALWAYS). REOPT(NONE) is the default. Do not specify REOPT(AUTO) or REOPT(ONCE); these options are applicable only to dynamic statements. DB2 ignores REOPT(ONCE) and REOPT(AUTO) for static SQL statements, because DB2 only caches dynamic statements.
If you specify REOPT(NONE), DB2 determines the access path at bind time, just as it does when there are no input variables. DB2 ignores REOPT(ONCE) for static SQL statements, because DB2 can cache only dynamic SQL statements.
If you specify REOPT(ALWAYS), DB2 determines the access path at bind time and again at run time, using the values in these types of input variables:
v Host variables
v Parameter markers
v Special registers
This means that DB2 must spend extra time determining the access path for statements at run time, but if DB2 determines a significantly better access path using the variable values, you might see an overall performance improvement. With REOPT(ALWAYS), DB2 optimizes statements using known literal values. Knowing the literal values can help DB2 to choose a more efficient access path when the columns contain skewed data. DB2 can also recognize which partitions qualify if there are search conditions with host variables on the limit keys of partitioned table spaces. With REOPT(ALWAYS), DB2 does not start the optimization over from the beginning. For example, DB2 does not perform query transformations based on the literal values. Consequently, static SQL statements that use host variables optimized with REOPT(ALWAYS) and similar SQL statements that use explicit literal values might result in different access paths.
Dynamic SQL statements
For dynamic SQL statements, DB2 determines the access path at run time, when the statement is prepared. The repeating cost of preparing a dynamic statement can make the performance worse than that of static SQL statements. However, if you execute the same SQL statement often, you can use the dynamic statement cache to decrease the number of times that those dynamic statements must be prepared.
Dynamic SQL statements with input host variables: When you bind applications that contain dynamic SQL statements with input host variables, consider using the REOPT(ALWAYS), REOPT(ONCE), or REOPT(AUTO) options, instead of the REOPT(NONE) option.
Use REOPT(ALWAYS) when you are not using the dynamic statement cache. DB2 determines the access path for statements at each EXECUTE or OPEN of the statement. This ensures the best access path for a statement, but using REOPT(ALWAYS) can increase the cost of frequently used dynamic SQL statements. Consequently, the REOPT(ALWAYS) option is not a good choice for high-volume, sub-second queries. For high-volume, fast-running queries, the repeating cost of prepare can exceed the execution cost of the statement. Statements that are processed under the REOPT(ALWAYS) option are excluded from the dynamic statement cache even if dynamic statement caching is enabled, because DB2 cannot reuse access paths when REOPT(ALWAYS) is specified.
Use REOPT(ONCE) or REOPT(AUTO) when you are using the dynamic statement cache:
v If you specify REOPT(ONCE), DB2 determines the access path for statements only at the first EXECUTE or OPEN of the statement. It saves that access path in the dynamic statement cache and uses it until the statement is invalidated or removed from the cache. This reuse of the access path reduces the prepare cost of frequently used dynamic SQL statements that contain input host variables; however, it does not account for changes to parameter marker values for dynamic statements. The REOPT(ONCE) option is ideal for ad hoc query applications such as SPUFI, DSNTEP2, DSNTEP4, DSNTIAUL, and QMF. DB2 can better optimize statements knowing the literal values for special registers such as CURRENT DATE and CURRENT TIMESTAMP, rather than using default filter factor estimates.
v If you specify REOPT(AUTO), DB2 determines the access path at run time. For each execution of a statement with parameter markers, DB2 generates a new access path if it determines that a new access path is likely to improve performance.
You should code your PREPARE statements to minimize overhead. With REOPT(AUTO), REOPT(ALWAYS), and REOPT(ONCE), DB2 prepares an SQL statement at the same time as it processes OPEN or EXECUTE for the statement. That is, DB2 processes the statement as if you specify DEFER(PREPARE). However, in the following cases, DB2 prepares the statement twice:
v If you execute the DESCRIBE statement before the PREPARE statement in your program
v If you use the PREPARE statement with the INTO parameter
For the first prepare, DB2 determines the access path without using input variable values. For the second prepare, DB2 uses the input variable values to determine the access path. This extra prepare can decrease performance. If you specify REOPT(ALWAYS), DB2 prepares the statement twice each time it is run. If you specify REOPT(ONCE), DB2 prepares the statement twice only when the statement has never been saved in the cache. If the statement has been prepared and saved in the cache, DB2 will use the saved version of the statement to complete the DESCRIBE statement. If you specify REOPT(AUTO), DB2 initially prepares the statement without using input variable values. If the statement has been saved in the cache, for the subsequent OPEN or EXECUTE, DB2 determines if a new access path is needed according to the input variable values. For a statement that uses a cursor, you can avoid the double prepare by placing the DESCRIBE statement after the OPEN statement in your program.
If you use predictive governing, and a dynamic SQL statement that is bound with either REOPT(ALWAYS) or REOPT(ONCE) exceeds a predictive governing warning threshold, your application does not receive a warning SQLCODE. However, it will receive an error SQLCODE from the OPEN or EXECUTE statement.
Procedure
Your program must take the following steps:
1. Include an SQLCA. The requirements for an SQL communications area (SQLCA) are the same as for static SQL statements. For REXX, DB2 includes the SQLCA automatically.
2. Load the input SQL statement into a data area. The procedure for building or reading the input SQL statement is not discussed here; the statement depends on your environment and sources of information. You can read in complete SQL statements, or you can get information to build the statement from data sets, a user at a terminal, previously set program variables, or tables in the database. If you attempt to execute an SQL statement dynamically that DB2 does not allow, you get an SQL error.
3. Execute the statement. You can use either of these methods (a short sketch follows the related links below):
   v EXECUTE IMMEDIATE
   v PREPARE and EXECUTE
4. Handle any errors that might result. The requirements are the same as those for static SQL statements. The return code from the most recently executed SQL statement appears in the host variables SQLCODE and SQLSTATE or corresponding fields of the SQLCA.
Related concepts:
Sample dynamic and static SQL in a C program on page 287
SQL statements in assembler programs on page 243
SQL statements in C programs on page 283
SQL statements in COBOL programs on page 334
SQL statements in Fortran programs on page 379
SQL statements in PL/I programs on page 403
SQL statements in REXX programs on page 415
Related tasks: Checking the execution of SQL statements on page 201 Dynamically executing an SQL statement by using EXECUTE IMMEDIATE on page 184 Dynamically executing an SQL statement by using PREPARE and EXECUTE on page 185
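A minimal COBOL sketch of these steps, assuming the EXECUTE IMMEDIATE method and a hypothetical DELETE statement that is built into a varying-length host variable; the statement text, the variable DSTRING, and the paragraph ERROR-ROUTINE are illustrative only.

     EXEC SQL INCLUDE SQLCA END-EXEC.
 01  DSTRING.
     49 DSTRING-LEN   PIC S9(4) USAGE COMP.
     49 DSTRING-TEXT  PIC X(100).
     ...
*    Load the statement text and its length, then execute it
     MOVE 'DELETE FROM DSN8910.ACT WHERE ACTNO > 180'
       TO DSTRING-TEXT.
     MOVE 41 TO DSTRING-LEN.
     EXEC SQL EXECUTE IMMEDIATE :DSTRING END-EXEC.
     IF SQLCODE < 0 PERFORM ERROR-ROUTINE.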
Procedure
To execute a fixed-list SELECT statement dynamically, your program must:
1. Include an SQLCA.
2. Load the input SQL statement into a data area. The preceding two steps are exactly the same as for including dynamic SQL for non-SELECT statements in your program.
3. Declare a cursor for the statement name.
4. Prepare the statement.
5. Open the cursor.
6. Fetch rows from the result table.
7. Close the cursor. 8. Handle any resulting errors. This step is the same as for static SQL, except for the number and types of errors that can result.
Results
Example: Suppose that your program retrieves last names and phone numbers by dynamically executing SELECT statements of this form:
SELECT LASTNAME, PHONENO FROM DSN8910.EMP WHERE ... ;
The program reads the statements from a terminal, and the user determines the WHERE clause. As with non-SELECT statements, your program puts the statements into a varying-length character variable; call it DSTRING. Eventually you prepare a statement from DSTRING, but first you must declare a cursor for the statement and give it a name. Declaring a cursor for the statement name: Dynamic SELECT statements cannot use INTO. Therefore, you must use a cursor to put the results into host variables. Example: When you declare the cursor, use the statement name (call it STMT), and give the cursor itself a name (for example, C1):
EXEC SQL DECLARE C1 CURSOR FOR STMT;
Preparing the statement: Prepare a statement (STMT) from DSTRING. Example: This is one possible PREPARE statement:
EXEC SQL PREPARE STMT FROM :DSTRING ATTRIBUTES :ATTRVAR;
ATTRVAR contains attributes that you want to add to the SELECT statement, such as FETCH FIRST 10 ROWS ONLY or OPTIMIZE FOR 1 ROW. In general, if the SELECT statement has attributes that conflict with the attributes in the PREPARE statement, the attributes on the SELECT statement take precedence over the attributes on the PREPARE statement. However, in this example, the SELECT statement in DSTRING has no attributes specified, so DB2 uses the attributes in ATTRVAR for the SELECT statement. As with non-SELECT statements, the fixed-list SELECT could contain parameter markers. However, this example does not need them. To execute STMT, your program must open the cursor, fetch rows from the result table, and close the cursor. Opening the cursor: The OPEN statement evaluates the SELECT statement named STMT. Example: Without parameter markers, use this statement:
EXEC SQL OPEN C1;
If STMT contains parameter markers, you must use the USING clause of OPEN to provide values for all of the parameter markers in STMT. Example: If four parameter markers are in STMT, you need the following statement:
EXEC SQL OPEN C1 USING :PARM1, :PARM2, :PARM3, :PARM4;
Fetching rows from the result table: Example: Your program could repeatedly execute a statement such as this:
EXEC SQL FETCH C1 INTO :NAME, :PHONE;
The key feature of this statement is the use of a list of host variables to receive the values returned by FETCH. The list has a known number of items (in this case, two items, :NAME and :PHONE) of known data types (both are character strings, of lengths 15 and 4, respectively).

You can use this list in the FETCH statement only because you planned the program to use only fixed-list SELECTs. Every row that cursor C1 points to must contain exactly two character values of appropriate length. If the program is to handle anything else, it must use the techniques for including dynamic SQL for varying-list SELECT statements in your program.

Closing the cursor: This step is the same as for static SQL. Example: A WHENEVER NOT FOUND statement in your program can name a routine that contains this statement:
EXEC SQL CLOSE C1;
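For reference, the pieces above can be assembled into one flow. The following C fragment is a minimal sketch, not taken verbatim from this information; the host variable declarations (DSTRING, NAME, PHONE) and the loop structure are illustrative assumptions that follow the lengths discussed above.

EXEC SQL INCLUDE SQLCA;                            /* communications area             */

EXEC SQL BEGIN DECLARE SECTION;
  struct { short len; char data[200]; } DSTRING;   /* statement text                  */
  char NAME[16];                                   /* LASTNAME, CHAR(15)              */
  char PHONE[5];                                   /* PHONENO, CHAR(4)                */
EXEC SQL END DECLARE SECTION;

EXEC SQL DECLARE C1 CURSOR FOR STMT;               /* cursor for the statement name   */
...
/* DSTRING now holds a fixed-list SELECT such as the one shown above */
EXEC SQL PREPARE STMT FROM :DSTRING;
EXEC SQL OPEN C1;                                  /* no parameter markers            */
EXEC SQL FETCH C1 INTO :NAME, :PHONE;
while (sqlca.sqlcode == 0)                         /* +100 means no more rows         */
{
  /* ... use NAME and PHONE ... */
  EXEC SQL FETCH C1 INTO :NAME, :PHONE;
}
EXEC SQL CLOSE C1;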
Related concepts:
Sample dynamic and static SQL in a C program on page 287
SQL statements in assembler programs on page 243
SQL statements in C programs on page 283
SQL statements in COBOL programs on page 334
SQL statements in Fortran programs on page 379
SQL statements in PL/I programs on page 403
SQL statements in REXX programs on page 415
Related tasks:
Including dynamic SQL for non-SELECT statements in your program on page 163
Including dynamic SQL for varying-list SELECT statements in your program
You cannot include an SQLDA in a Fortran or REXX program.

Obtaining information about the SQL statement: An SQLDA can contain a variable number of occurrences of SQLVAR, each of which is a set of five fields that describe one column in the result table of a SELECT statement. The number of occurrences of SQLVAR depends on the following factors:
v The number of columns in the result table you want to describe.
v Whether you want the PREPARE or DESCRIBE to put both column names and labels in your SQLDA. This is the option USING BOTH in the PREPARE or DESCRIBE statement.
v Whether any columns in the result table are LOB types or distinct types.
The following table shows the minimum number of SQLVAR instances you need for a result table that contains n columns.
Table 42. Minimum number of SQLVARs for a result table with n columns

Type of DESCRIBE and contents of result table    Not USING BOTH    USING BOTH
No distinct types or LOBs                        n                 2*n
Distinct types but no LOBs                       2*n               3*n
LOBs but no distinct types                       2*n               2*n
LOBs and distinct types                          2*n               3*n
An SQLDA with n occurrences of SQLVAR is referred to as a single SQLDA, an SQLDA with 2*n occurrences of SQLVAR a double SQLDA, an SQLDA with 3*n occurrences of SQLVAR a triple SQLDA.

A program that admits SQL statements of every kind for dynamic execution has two choices:
v Provide the largest SQLDA that it could ever need. The maximum number of columns in a result table is 750, so an SQLDA for 750 columns occupies 33 016 bytes for a single SQLDA, 66 016 bytes for a double SQLDA, or 99 016 bytes for a triple SQLDA. Most SELECT statements do not retrieve 750 columns, so the program does not usually use most of that space.
v Provide a smaller SQLDA, with fewer occurrences of SQLVAR. From this the program can find out whether the statement was a SELECT and, if it was, how many columns are in its result table. If more columns are in the result than the SQLDA can hold, DB2 returns no descriptions. When this happens, the program must acquire storage for a second SQLDA that is long enough to hold the column descriptions, and ask DB2 for the descriptions again. Although this technique is more complicated to program than the first, it is more general.

How many columns should you allow? You must choose a number that is large enough for most of your SELECT statements, but not too wasteful of space; 40 is
a good compromise. To illustrate what you must do for statements that return more columns than allowed, the example in this discussion uses an SQLDA that is allocated for at least 100 columns.

Declaring a cursor for the statement: As before, you need a cursor for the dynamic SELECT. For example, write:
EXEC SQL DECLARE C1 CURSOR FOR STMT;
Preparing the statement using the minimum SQLDA: Suppose that your program declares an SQLDA structure with the name MINSQLDA, having 100 occurrences of SQLVAR and SQLN set to 100. To prepare a statement from the character string in DSTRING and also enter its description into MINSQLDA, write this:
EXEC SQL PREPARE STMT FROM :DSTRING;
EXEC SQL DESCRIBE STMT INTO :MINSQLDA;
Equivalently, you can use the INTO clause in the PREPARE statement:
EXEC SQL PREPARE STMT INTO :MINSQLDA FROM :DSTRING;
Do not use the USING clause in either of these examples. At the moment, only the minimum SQLDA is in use. The following figure shows the contents of the minimum SQLDA in use.
(Figure not reproduced: the minimum SQLDA in use, showing the header fields SQLDAID, SQLDABC, SQLN=100, and SQLD.)
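As a rough C sketch of that setup (not taken from this information; the malloc-based allocation and the variable names are illustrative assumptions, and the 16-byte header and 44-byte SQLVAR sizes are the ones given in the text):

#include <stdlib.h>

EXEC SQL INCLUDE SQLDA;                      /* declaration of struct sqlda       */

struct sqlda *minsqlda;

/* 16-byte header plus 100 SQLVAR occurrences of 44 bytes each */
minsqlda = (struct sqlda *) malloc(16 + 100 * 44);
minsqlda->sqln = 100;                        /* SQLN: SQLVARs allocated           */

EXEC SQL PREPARE STMT INTO :*minsqlda FROM :DSTRING;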
SQLN determines what SQLVAR gets: The SQLN field, which you must set before using DESCRIBE (or PREPARE INTO), tells how many occurrences of SQLVAR the SQLDA is allocated for. If DESCRIBE needs more than that, the results of the DESCRIBE depend on the contents of the result table. Let n indicate the number of columns in the result table. Then:
v If the result table contains at least one distinct type column but no LOB columns, you do not specify USING BOTH, and n<=SQLN<2*n, then DB2 returns base SQLVAR information in the first n SQLVAR occurrences, but no distinct type information. Base SQLVAR information includes:
  Data type code
  Length attribute (except for LOBs)
  Column name or label
  Host variable address
  Indicator variable address
v Otherwise, if SQLN is less than the minimum number of SQLVARs specified in the table above, then DB2 returns no information in the SQLVARs.
Regardless of whether your SQLDA is big enough, whenever you execute DESCRIBE, DB2 returns the following values, which you can use to build an SQLDA of the correct size:
v SQLD is 0 if the SQL statement is not a SELECT. Otherwise, SQLD is the number of columns in the result table. The number of SQLVAR occurrences you need for the SELECT depends on the value in the seventh byte of SQLDAID.
v The seventh byte of SQLDAID is 2 if each column in the result table requires two SQLVAR entries. The seventh byte of SQLDAID is 3 if each column in the result table requires three SQLVAR entries.

If the statement is not a SELECT: To find out if the statement is a SELECT, your program can query the SQLD field in MINSQLDA. If the field contains 0, the statement is not a SELECT, the statement is already prepared, and your program can execute it. If no parameter markers are in the statement, you can use:
EXEC SQL EXECUTE STMT;
(If the statement does contain parameter markers, you must use an SQL descriptor area.)

Acquiring storage for a second SQLDA if needed: Now you can allocate storage for a second, full-size SQLDA; call it FULSQLDA. The following figure shows its structure.
FULSQLDA has a fixed-length 16-byte header, followed by a varying-length section that consists of structures with the SQLVAR format. If the result table contains LOB columns or distinct type columns, a varying-length section that consists of structures with the SQLVAR2 format follows the structures with SQLVAR format. All SQLVAR structures and SQLVAR2 structures are 44 bytes long. The number of SQLVAR and SQLVAR2 elements you need is in the SQLD field of MINSQLDA, and the total length you need for FULSQLDA (16 + SQLD * 44) is in the SQLDABC field of MINSQLDA. Allocate that amount of storage.
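The allocation might be coded as in the following C sketch (illustrative only; it continues the minsqlda pointer from the earlier sketch and covers the simple case in which the result table contains no LOB or distinct type columns):

struct sqlda *fulsqlda;

if (minsqlda->sqld > 0)                         /* the statement is a SELECT      */
{
  /* MINSQLDA now reports the size that a full description needs:       */
  /* SQLD SQLVAR elements and, per the text, 16 + SQLD * 44 bytes.      */
  fulsqlda = (struct sqlda *) malloc(16 + minsqlda->sqld * 44);
  fulsqlda->sqln = minsqlda->sqld;              /* SQLVARs now available          */
  EXEC SQL DESCRIBE STMT INTO :*fulsqlda;       /* describe again, full size      */
}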
Describing the SELECT statement again: After allocating sufficient space for FULSQLDA, your program must take these steps:
1. Put the total number of SQLVAR and SQLVAR2 occurrences in FULSQLDA into the SQLN field of FULSQLDA. This number appears in the SQLD field of MINSQLDA.
2. Describe the statement again into the new SQLDA:
EXEC SQL DESCRIBE STMT INTO :FULSQLDA;
After the DESCRIBE statement executes, each occurrence of SQLVAR in the full-size SQLDA (FULSQLDA in our example) contains a description of one column of the result table in five fields. If an SQLVAR occurrence describes a LOB column or distinct type column, the corresponding SQLVAR2 occurrence contains additional information specific to the LOB or distinct type. The following figure shows an SQLDA that describes two columns that are not LOB columns or distinct type columns.
(Figure 13, not reproduced: an SQLDA header followed by two 44-byte SQLVAR elements that contain data type codes 452 and 453.)
Acquiring storage to hold a row: Before fetching rows of the result table, your program must:
1. Analyze each SQLVAR description to determine how much space you need for the column value.
2. Derive the address of some storage area of the required size.
3. Put this address in the SQLDATA field. If the SQLTYPE field indicates that the value can be null, the program must also put the address of an indicator variable in the SQLIND field.

The following figures show the SQL descriptor area after you take certain actions. In the previous figure, the DESCRIBE statement inserted all the values except the first occurrence of the number 200. The program inserted the number 200 before it executed DESCRIBE to tell how many occurrences of SQLVAR to allow. If the result table of the SELECT has more columns than this, the SQLVAR fields describe nothing.

The first SQLVAR pertains to the first column of the result table (the WORKDEPT column). SQLVAR element 1 describes a fixed-length character string column that does not allow null values (SQLTYPE=452); the length attribute is 3.

The following figure shows the SQLDA after your program acquires storage for the column values and their indicators, and puts the addresses in the SQLDATA fields of the SQLDA.
Figure 14. SQL descriptor area after analyzing descriptions and acquiring storage
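A storage-acquisition loop along the following lines accomplishes the step just illustrated. This C sketch is illustrative only; it handles just the fixed-length character columns shown in Figure 14, and a production program would map each SQLTYPE to an appropriate buffer size.

#include <stdlib.h>

int i;
struct sqlvar *var;

for (i = 0; i < fulsqlda->sqld; i++)
{
  var = &fulsqlda->sqlvar[i];
  var->sqldata = (char *) malloc(var->sqllen);      /* area for the column value     */
  if (var->sqltype % 2 == 1)                        /* odd type code: nulls allowed  */
    var->sqlind = (short *) malloc(sizeof(short));  /* halfword indicator variable   */
}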
The following figure shows the SQLDA after your program executes a FETCH statement.
(Figure 15, not reproduced: the SQL descriptor area after executing FETCH; the acquired storage areas now contain the values from one row of the result table.)
After analyzing the description of each column, your program must replace the content of each SQLDATA field with the address of a storage area large enough to hold values from that column. Similarly, for every column that allows nulls, the program must replace the content of the SQLIND field. The content must be the address of a halfword that you can use as an indicator variable for the column. The program can acquire storage for this purpose, of course, but the storage areas used do not have to be contiguous.

Figure 14 on page 172 shows the content of the descriptor area before the program obtains any rows of the result table. Addresses of fields and indicator variables are already in the SQLVAR.

Changing the CCSID for retrieved data: All DB2 string data has an encoding scheme and CCSID associated with it. When you select string data from a table, the selected data generally has the same encoding scheme and CCSID as the table. If the application uses some method, such as issuing the DECLARE VARIABLE statement, to change the CCSID of the selected data, the data is converted from the CCSID of the table to the CCSID that is specified by the application.

You can set the default application encoding scheme for a plan or package by specifying the value in the APPLICATION ENCODING field of the panel DEFAULTS FOR BIND PACKAGE or DEFAULTS FOR BIND PLAN. The default application encoding scheme for the DB2 subsystem is the value that was specified in the APPLICATION ENCODING field of installation panel DSNTIPF.

If you want to retrieve the data in an encoding scheme and CCSID other than the default values, you can use one of the following techniques:
v For dynamic SQL, set the CURRENT APPLICATION ENCODING SCHEME special register before you execute the SELECT statements. For example, to set the CCSID and encoding scheme for retrieved data to the default CCSID for Unicode, execute this SQL statement:
EXEC SQL SET CURRENT APPLICATION ENCODING SCHEME = 'UNICODE';
The initial value of this special register is the application encoding scheme that is determined by the BIND option.
v For static and dynamic SQL statements that use host variables and host variable arrays, use the DECLARE VARIABLE statement to associate CCSIDs with the host variables into which you retrieve the data. See Setting the CCSID for host variables on page 141 for information about this technique.
v For static and dynamic SQL statements that use a descriptor, set the CCSID for the retrieved data in the SQLDA. The following text describes that technique.

To change the encoding scheme for SQL statements that use a descriptor, set up the SQLDA, and then make these additional changes to the SQLDA:
1. Put the character + in the sixth byte of field SQLDAID.
2. For each SQLVAR entry:
   a. Set the length field of SQLNAME to 8.
   b. Set the first two bytes of the data field of SQLNAME to X'0000'.
   c. Set the third and fourth bytes of the data field of SQLNAME to the CCSID, in hexadecimal, in which you want the results to display, or to X'0000'. X'0000' indicates that DB2 should use the default CCSID.
If you specify a nonzero CCSID, it must meet one of the following conditions:
v A row in catalog table SYSSTRINGS has a matching value for OUTCCSID.
v The Unicode conversion services support conversion to that CCSID. See z/OS C/C++ Programming Guide for information about the conversions supported.

If you are modifying the CCSID to retrieve the contents of an ASCII, EBCDIC, or Unicode table on a DB2 for z/OS system, and you previously executed a DESCRIBE statement on the SELECT statement that you are using to retrieve the data, the SQLDATA fields in the SQLDA that you used for the DESCRIBE contain the ASCII or Unicode CCSID for that table. To set the data portion of the SQLNAME fields for the SELECT, move the contents of each SQLDATA field in the SQLDA from the DESCRIBE to each SQLNAME field in the SQLDA for the SELECT. If you are using the same SQLDA for the DESCRIBE and the SELECT, be sure to move the contents of the SQLDATA field to SQLNAME before you modify the SQLDATA field for the SELECT.

For REXX, you set the CCSID in the stem.n.SQLUSECCSID field instead of setting the SQLDAID and SQLNAME fields.

For example, suppose that the table that contains WORKDEPT and PHONENO is defined with CCSID ASCII. To retrieve data for columns WORKDEPT and PHONENO in ASCII CCSID 437 (X'01B5'), change the SQLDA as shown in the following figure.
Figure 16. SQL descriptor area for retrieving data in ASCII CCSID 437
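Those SQLDA changes might be coded as follows in C. This is a sketch only (not taken from this information); it assumes the fulsqlda pointer from the earlier sketches and relies on the big-endian byte order of z/OS when it copies the two-byte CCSID into the SQLNAME data field.

#include <string.h>

short ccsid = 437;                            /* X'01B5', the target CCSID        */
int   i;
struct sqlvar *var;

fulsqlda->sqldaid[5] = '+';                   /* sixth byte of SQLDAID            */

for (i = 0; i < fulsqlda->sqld; i++)
{
  var = &fulsqlda->sqlvar[i];
  var->sqlname.length = 8;                    /* length field of SQLNAME          */
  memset(var->sqlname.data, 0, 8);            /* bytes 1-2 X'0000', rest zero     */
  memcpy(&var->sqlname.data[2], &ccsid, 2);   /* bytes 3-4: the CCSID             */
}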
Specifying that DESCRIBE use column labels in the SQLNAME field: By default, DESCRIBE describes each column in the SQLNAME field by the column name. You can tell it to use column labels instead.

Restriction: You cannot use column labels with set operators (UNION, INTERSECT, and EXCEPT).

To specify that DESCRIBE use column labels in the SQLNAME field, specify one of the following options when you issue the DESCRIBE statement:
USING LABELS
   Specifies that SQLNAME is to contain labels. If a column has no label, SQLNAME contains nothing.
USING ANY
   Specifies that SQLNAME is to contain labels wherever they exist. If a column has no label, SQLNAME contains the column name.
USING BOTH
   Specifies that SQLNAME is to contain both labels and column names, when both exist. In this case, FULSQLDA must contain a second set of occurrences of SQLVAR. The first set contains descriptions of all the columns with column names; the second set contains descriptions with column labels. If you choose this option, perform the following actions:
   v Allocate a longer SQLDA for the second DESCRIBE statement ((16 + SQLD * 88 bytes) instead of (16 + SQLD * 44)).
   v Put double the number of columns (SQLD * 2) in the SQLN field of the second SQLDA.
   These actions ensure that enough space is available. Otherwise, if not enough space is available, DESCRIBE does not enter descriptions of any of the columns.
EXEC SQL DESCRIBE STMT INTO :FULSQLDA USING LABELS;
Some columns, such as those derived from functions or expressions, have neither name nor label; SQLNAME contains nothing for those columns. For example, if you use a UNION to combine two columns that do not have the same name and do not use a label, SQLNAME contains a string of length zero.

Describing tables with LOB and distinct type columns: In general, the steps that you perform when you prepare an SQLDA to select rows from a table with LOB and distinct type columns are similar to the steps that you perform if the table has no columns of this type. The only difference is that you need to analyze some additional fields in the SQLDA for LOB or distinct type columns.

Example: Suppose that you want to execute this SELECT statement:
SELECT USER, A_DOC FROM DOCUMENTS;
The USER column cannot contain nulls and is of distinct type ID, defined like this:
CREATE DISTINCT TYPE SCHEMA1.ID AS CHAR(20);
The A_DOC column can contain nulls and is of type CLOB(1M). The result table for this statement has two columns, but you need four SQLVAR occurrences in your SQLDA because the result table contains a LOB type and a distinct type. Suppose that you prepare and describe this statement into FULSQLDA, which is large enough to hold four SQLVAR occurrences. FULSQLDA looks like the following figure.
Figure 17. SQL descriptor area after describing a CLOB and distinct type
The next steps are the same as for result tables without LOBs or distinct types:
1. Analyze each SQLVAR description to determine the maximum amount of space you need for the column value. For a LOB type, retrieve the length from the SQLLONGL field instead of the SQLLEN field.
2. Derive the address of some storage area of the required size. For a LOB data type, you also need a 4-byte storage area for the length of the LOB data. You can allocate this 4-byte area at the beginning of the LOB data or in a different location.
3. Put this address in the SQLDATA field. For a LOB data type, if you allocated a separate area to hold the length of the LOB data, put the address of the length field in SQLDATAL. If the length field is at the beginning of the LOB data area, put 0 in SQLDATAL. When you use a file reference variable for a LOB column, the indicator variable indicates whether the data in the file is null, not whether the data to which SQLDATA points is null.
4. If the SQLTYPE field indicates that the value can be null, the program must also put the address of an indicator variable in the SQLIND field.
The following figure shows the contents of FULSQLDA after you fill in pointers to the storage locations.
Figure 18. SQL descriptor area after analyzing CLOB and distinct type descriptions and acquiring storage
The following figure shows the contents of FULSQLDA after you execute a FETCH statement.
Figure 19. SQL descriptor area after executing FETCH on a table with CLOB and distinct type columns
Setting an XML host variable in an SQLDA: Instead of specifying host variables to store XML values from a table, you can create an SQLDA to point to the data areas where DB2 puts the retrieved data. The SQLDA needs to describe the data type for each data area.

To set an XML host variable in an SQLDA:
1. Allocate an appropriate SQLDA.
2. Issue a DESCRIBE statement for the SQL statement whose result set you want to store. The DESCRIBE statement populates the SQLDA based on the column definitions. In the SQLDA, an SQLVAR entry is populated for each column in the result set. (Multiple SQLVAR entries are populated for LOB columns and columns with distinct types.) For columns of type XML, the associated SQLVAR entry is populated as follows:
Table 44. SQLVAR field values for XML columns

SQLVAR field          Value for an XML column
sqltype (SQLTYPE)     988 for a column that is not nullable or 989 for a nullable column
sqllen (SQLLEN)       0
sqldata (SQLDATA)     0
sqlind (SQLIND)       0
sqlname (SQLNAME)     The unqualified name or label of the column
3. Check the SQLTYPE field of each SQLVAR entry. If the SQLTYPE field is 988 or 989, the column in the result set is an XML column.
4. For each XML column, make the following changes to the associated SQLVAR entry:
   a. Change the SQLTYPE field to indicate the data type of the host variable to receive the XML data. You can retrieve the XML data into a host variable of type XML AS BLOB, XML AS CLOB, or XML AS DBCLOB, or a compatible string data type. If the target host variable type is XML AS BLOB, XML AS CLOB, or XML AS DBCLOB, set the SQLTYPE field to one of the following values:
      404   XML AS BLOB
      405   nullable XML AS BLOB
      408   XML AS CLOB
      409   nullable XML AS CLOB
      412   XML AS DBCLOB
      413   nullable XML AS DBCLOB
      If the target host variable type is a string data type, set the SQLTYPE field to a valid string value.
      Restriction: You cannot use the XML type (988/989) as a target host variable type.
   b. If the target host variable type is XML AS BLOB, XML AS CLOB, or XML AS DBCLOB, change the first two bytes in the SQLNAME field to X'0000' and the fifth and sixth bytes to X'0100'. These bytes indicate that the value to be received is an XML value.
5. Populate the extended SQLVAR fields for each XML column as you would for a LOB column, as indicated in the following table.
Table 45. Fields for an extended SQLVAR entry for an XML host variable

SQLVAR field                                       Value for an XML host variable
len.sqllonglen (SQLLONGL or SQLLONGLEN)            length attribute for the XML host variable
* (reserved)                                       not used
sqldatalen (SQLDATAL or SQLDATALEN)                pointer to the length of the XML host variable
sqldatatype_name (SQLTNAME or SQLDATATYPENAME)     Reserved
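Steps 3 and 4 might be coded as in the following C sketch. It is illustrative only: it retrieves every XML column as a nullable XML AS CLOB, the SQLNAME length of 8 is an assumption that follows the LOB convention used elsewhere in this information, and the extended SQLVAR fields of Table 45 are left to the same logic that you use for LOB columns.

#include <string.h>

int i;
struct sqlvar *var;

for (i = 0; i < fulsqlda->sqld; i++)
{
  var = &fulsqlda->sqlvar[i];
  if (var->sqltype == 988 || var->sqltype == 989)    /* XML column                 */
  {
    var->sqltype = 409;                     /* receive as nullable XML AS CLOB     */
    var->sqlname.length = 8;                /* assumption: LOB-style SQLNAME       */
    memset(var->sqlname.data, 0, 8);        /* first two bytes X'0000'             */
    var->sqlname.data[4] = 0x01;            /* fifth and sixth bytes X'0100':      */
    var->sqlname.data[5] = 0x00;            /* the value to be received is XML     */
  }
}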
You can now use the SQLDA to retrieve the XML data into a host variable of type XML AS BLOB, XML AS CLOB, or XML AS DBCLOB, or a compatible string data type.

Executing a varying-list SELECT statement dynamically: You can easily retrieve rows of the result table using a varying-list SELECT statement. The statements differ only a little from those for the fixed-list example.

Open the cursor: If the SELECT statement contains no parameter marker, this step is simple enough. For example:
EXEC SQL OPEN C1;
Fetch rows from the result table: This statement differs from the corresponding one for the case of a fixed-list select. Write:
EXEC SQL FETCH C1 USING DESCRIPTOR :FULSQLDA;
The key feature of this statement is the clause USING DESCRIPTOR :FULSQLDA. That clause names an SQL descriptor area in which the occurrences of SQLVAR point to other areas. Those other areas receive the values that FETCH returns. It is possible to use that clause only because you previously set up FULSQLDA to look like Figure 13 on page 171.

Figure 15 on page 172 shows the result of the FETCH. The data areas identified in the SQLVAR fields receive the values from a single row of the result table. Successive executions of the same FETCH statement put values from successive rows of the result table into these same areas.

Close the cursor: This step is the same as for the fixed-list case. When no more rows need to be processed, execute the following statement:
EXEC SQL CLOSE C1;
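Put together, the retrieval loop for the varying-list case might look like this C sketch (illustrative only, not taken from this information; fulsqlda is the full-size SQLDA pointer set up as described earlier, and an SQLCODE of +100 signals the end of the result table):

EXEC SQL INCLUDE SQLCA;                             /* communications area        */

EXEC SQL OPEN C1;                                   /* no parameter markers       */
EXEC SQL FETCH C1 USING DESCRIPTOR :*fulsqlda;
while (sqlca.sqlcode == 0)
{
  /* The areas addressed by the SQLDATA fields now hold one row of values; */
  /* process them, then fetch the next row.                                */
  EXEC SQL FETCH C1 USING DESCRIPTOR :*fulsqlda;
}
EXEC SQL CLOSE C1;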
When COMMIT ends the unit of work containing OPEN, the statement in STMT reverts to the unprepared state. Unless you defined the cursor using the WITH HOLD option, you must prepare the statement again before you can reopen the cursor.

Executing arbitrary statements with parameter markers: Consider, as an example, a program that executes dynamic SQL statements of several kinds, including varying-list SELECT statements, any of which might contain a variable number of parameter markers. This program might present your users with lists of choices: choices of operation (update, select, delete); choices of table names; choices of columns to select or update. The program also enables the users to enter lists of employee numbers to apply to the chosen operation. From this, the program constructs SQL statements of several forms, one of which looks like this:
SELECT .... FROM DSN8910.EMP WHERE EMPNO IN (?,?,?,...?);
The program then executes these statements dynamically. When the number and types of parameters are known: In the preceding example, you do not know in advance the number of parameter markers, and perhaps the
kinds of parameter they represent. You can use techniques described previously if you know the number and types of parameters, as in the following examples:
v If the SQL statement is not SELECT, name a list of host variables in the EXECUTE statement:
WRONG: EXEC SQL EXECUTE STMT;
RIGHT:  EXEC SQL EXECUTE STMT USING :VAR1, :VAR2, :VAR3;
v If the SQL statement is SELECT, name a list of host variables in the OPEN statement:
WRONG: EXEC SQL OPEN C1;
RIGHT:  EXEC SQL OPEN C1 USING :VAR1, :VAR2, :VAR3;
In both cases, the number and types of host variables named must agree with the number of parameter markers in STMT and the types of parameter they represent. The first variable (VAR1 in the examples) must have the type expected for the first parameter marker in the statement, the second variable must have the type expected for the second marker, and so on. There must be at least as many variables as parameter markers.

When the number and types of parameters are not known: When you do not know the number and types of parameters, you can adapt the SQL descriptor area. Your program can include an unlimited number of SQLDAs, and you can use them for different purposes. Suppose that an SQLDA, arbitrarily named DPARM, describes a set of parameters.

The structure of DPARM is the same as that of any other SQLDA. The number of occurrences of SQLVAR can vary, as in previous examples. In this case, every parameter marker must have one SQLVAR. Each occurrence of SQLVAR describes one host variable that replaces one parameter marker at run time. DB2 replaces the parameter markers when a non-SELECT statement executes or when a cursor is opened for a SELECT statement.

You must fill in certain fields in DPARM before using EXECUTE or OPEN; you can ignore the other fields.

Field    Use when describing host variables for parameter markers
SQLDAID
   The seventh byte indicates whether more than one SQLVAR entry is used for each parameter marker. If this byte is not blank, at least one parameter marker represents a distinct type or LOB value, so the SQLDA has more than one set of SQLVAR entries. You do not set this field for a REXX SQLDA.
SQLDABC
   The length of the SQLDA, which is equal to SQLN * 44 + 16. You do not set this field for a REXX SQLDA.
SQLN
   The number of occurrences of SQLVAR allocated for DPARM. You do not set this field for a REXX SQLDA.
SQLD
   The number of occurrences of SQLVAR actually used. This number must not be less than the number of parameter markers.
In each occurrence of SQLVAR, put information in the following fields: SQLTYPE, SQLLEN, SQLDATA, SQLIND.
SQLTYPE
   The code for the type of variable, and whether it allows nulls.
SQLLEN
   The length of the host variable.
SQLDATA
   The address of the host variable. For REXX, this field contains the value of the host variable.
SQLIND
   The address of an indicator variable, if needed. For REXX, this field contains a negative number if the value in SQLDATA is null.
SQLNAME
   Ignore.

Using the SQLDA with EXECUTE or OPEN: To indicate that the SQLDA called DPARM describes the host variables substituted for the parameter markers at run time, use a USING DESCRIPTOR clause with EXECUTE or OPEN.
v For a non-SELECT statement, write:
EXEC SQL EXECUTE STMT USING DESCRIPTOR :DPARM;
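As an illustration only (the variable names and values are assumptions, and STMT is presumed to have been prepared from a string with one parameter marker, such as the earlier DELETE example), the DPARM fields might be filled in as follows in C. For a SELECT statement, the same descriptor would be named in the USING DESCRIPTOR clause of the OPEN statement instead.

#include <stdlib.h>
#include <string.h>

struct sqlda *dparm;
char  empno[7] = "000010";                    /* value for the parameter marker   */
short empnoInd = 0;                           /* indicator variable               */

dparm = (struct sqlda *) malloc(16 + 1 * 44);
memcpy(dparm->sqldaid, "SQLDA   ", 8);        /* eye-catcher; 7th byte blank      */
dparm->sqldabc = 16 + 1 * 44;                 /* SQLN * 44 + 16                   */
dparm->sqln = 1;                              /* SQLVARs allocated                */
dparm->sqld = 1;                              /* SQLVARs used                     */

dparm->sqlvar[0].sqltype = 453;               /* CHAR, nulls allowed              */
dparm->sqlvar[0].sqllen  = 6;                 /* length of the host variable      */
dparm->sqlvar[0].sqldata = empno;             /* address of the host variable     */
dparm->sqlvar[0].sqlind  = &empnoInd;         /* address of the indicator         */

EXEC SQL EXECUTE STMT USING DESCRIPTOR :*dparm;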
How bind options REOPT(ALWAYS), REOPT(AUTO) and REOPT(ONCE) affect dynamic SQL: When you specify the bind option REOPT(ALWAYS), DB2 reoptimizes the access path at run time for SQL statements that contain host variables, parameter markers, or special registers.

The option REOPT(ALWAYS) has the following effects on dynamic SQL statements:
v When you specify the option REOPT(ALWAYS), DB2 automatically uses DEFER(PREPARE), which means that DB2 waits to prepare a statement until it encounters an OPEN or EXECUTE statement.
v When you execute a DESCRIBE statement and then an EXECUTE statement on a non-SELECT statement, DB2 prepares the statement twice: once for the DESCRIBE statement and once for the EXECUTE statement. DB2 uses the values in the input variables only during the second PREPARE. These multiple PREPAREs can cause performance to degrade if your program contains many dynamic non-SELECT statements. To improve performance, consider putting the code that contains those statements in a separate package and then binding that package with the option REOPT(NONE).
v If you execute a DESCRIBE statement before you open a cursor for that statement, DB2 prepares the statement twice. If, however, you execute a DESCRIBE statement after you open the cursor, DB2 prepares the statement only once. To improve the performance of a program bound with the option REOPT(ALWAYS), execute the DESCRIBE statement after you open the cursor. To prevent an automatic DESCRIBE before a cursor is opened, do not use a PREPARE statement with the INTO clause.
v If you use predictive governing for applications bound with REOPT(ALWAYS), DB2 does not return a warning SQLCODE when dynamic SQL statements exceed the predictive governing warning threshold. DB2 does return an error
SQLCODE when dynamic SQL statements exceed the predictive governing error threshold. DB2 returns the error SQLCODE for an EXECUTE or OPEN statement.

When you specify the bind option REOPT(AUTO), DB2 optimizes the access path for SQL statements at the first EXECUTE or OPEN. Each time a statement is executed, DB2 determines if a new access path is needed to improve the performance of the statement. If a new access path will improve the performance, DB2 generates one.

The option REOPT(AUTO) has the following effects on dynamic SQL statements:
v When you specify the bind option REOPT(AUTO), DB2 optimizes the access path for SQL statements at the first EXECUTE or OPEN. Each time a statement is executed, DB2 determines if a new access path is needed to improve the performance of the statement. If a new access path will improve the performance, DB2 generates one.
v When you specify the option REOPT(AUTO), DB2 automatically uses DEFER(PREPARE), which means that DB2 waits to prepare a statement until it encounters an OPEN or EXECUTE statement.
v When DB2 prepares a statement using REOPT(AUTO), it saves the access path in the dynamic statement cache. This access path is used each time the statement is run, until DB2 determines that a new access path is needed to improve the performance or the statement that is in the cache is invalidated (or removed from the cache) and needs to be rebound.
v The DESCRIBE statement has the following effects on dynamic statements that are bound with REOPT(AUTO):
  - When you execute a DESCRIBE statement before an EXECUTE statement on a non-SELECT statement, DB2 prepares the statement an extra time if it is not already saved in the cache: once for the DESCRIBE statement and once for the EXECUTE statement. DB2 uses the values of the input variables only during the second time the statement is prepared. It then saves the statement in the cache.
  - If you execute a DESCRIBE statement before an EXECUTE statement on a non-SELECT statement that has already been saved in the cache, DB2 always prepares the non-SELECT statement for the DESCRIBE statement, and prepares the statement again on EXECUTE only if DB2 determines that a new access path different from the one already saved in the cache can improve the performance.
  - If you execute DESCRIBE on a statement before you open a cursor for that statement, DB2 always prepares the statement on DESCRIBE. However, DB2 will not prepare the statement again on OPEN if the statement has already been saved in the cache and DB2 does not think that a new access path is needed at OPEN time.
  - If you execute DESCRIBE on a statement after you open a cursor for that statement, DB2 prepares the statement only once if it is not already saved in the cache. If the statement is already saved in the cache and you execute DESCRIBE after you open a cursor for that statement, DB2 does not prepare the statement; it uses the statement that is saved in the cache.
v If you use predictive governing for applications that are bound with REOPT(AUTO), DB2 does not return a warning SQLCODE when dynamic SQL statements exceed the predictive governing warning threshold. DB2 does return an error SQLCODE when dynamic SQL statements exceed the predictive governing error threshold. DB2 returns the error SQLCODE for an EXECUTE or OPEN statement.
When you specify the bind option REOPT(ONCE), DB2 optimizes the access path only once, at the first EXECUTE or OPEN, for SQL statements that contain host variables, parameter markers, or special registers.

The option REOPT(ONCE) has the following effects on dynamic SQL statements:
v When you specify the option REOPT(ONCE), DB2 automatically uses DEFER(PREPARE), which means that DB2 waits to prepare a statement until it encounters an OPEN or EXECUTE statement.
v When DB2 prepares a statement using REOPT(ONCE), it saves the access path in the dynamic statement cache. This access path is used each time the statement is run, until the statement that is in the cache is invalidated (or removed from the cache) and needs to be rebound.
v The DESCRIBE statement has the following effects on dynamic statements that are bound with REOPT(ONCE):
  - When you execute a DESCRIBE statement before an EXECUTE statement on a non-SELECT statement, DB2 prepares the statement twice if it is not already saved in the cache: once for the DESCRIBE statement and once for the EXECUTE statement. DB2 uses the values of the input variables only during the second time the statement is prepared. It then saves the statement in the cache.
  - If you execute a DESCRIBE statement before an EXECUTE statement on a non-SELECT statement that has already been saved in the cache, DB2 prepares the non-SELECT statement only for the DESCRIBE statement.
  - If you execute DESCRIBE on a statement before you open a cursor for that statement, DB2 always prepares the statement on DESCRIBE. However, DB2 will not prepare the statement again on OPEN if the statement has already been saved in the cache.
  - If you execute DESCRIBE on a statement after you open a cursor for that statement, DB2 prepares the statement only once if it is not already saved in the cache. If the statement is already saved in the cache and you execute DESCRIBE after you open a cursor for that statement, DB2 does not prepare the statement; it uses the statement that is saved in the cache.
  To improve the performance of a program that is bound with REOPT(ONCE), execute the DESCRIBE statement after you open a cursor. To prevent an automatic DESCRIBE before a cursor is opened, do not use a PREPARE statement with the INTO clause.
v If you use predictive governing for applications that are bound with REOPT(ONCE), DB2 does not return a warning SQLCODE when dynamic SQL statements exceed the predictive governing warning threshold. DB2 does return an error SQLCODE when dynamic SQL statements exceed the predictive governing error threshold. DB2 returns the error SQLCODE for an EXECUTE or OPEN statement.
Related concepts:
SQL statements in assembler programs on page 243
SQL statements in C programs on page 283
SQL statements in COBOL programs on page 334
SQL statements in Fortran programs on page 379
SQL statements in PL/I programs on page 403
SQL statements in REXX programs on page 415
Related reference:
DESCRIBE OUTPUT (DB2 SQL)
SQL descriptor area (SQLDA) (DB2 SQL)
SQLTYPE and SQLLEN (DB2 SQL)
The SQLDA Header (DB2 SQL)
After reading a statement, the program is to run it immediately. Recall that you must prepare (precompile and bind) static SQL statements before you can use them. You cannot prepare dynamic SQL statements in advance. The SQL statement EXECUTE IMMEDIATE causes an SQL statement to prepare and execute, dynamically, at run time.

Declaring the host variable: Before you prepare and execute an SQL statement, you can read it into a host variable. If the maximum length of the SQL statement is 32 KB, declare the host variable as a character or graphic host variable according to the following rules for the host languages:
v In assembler, PL/I, COBOL and C, you must declare a string host variable as a varying-length string.
v In Fortran, it must be a fixed-length string variable.
If the length is greater than 32 KB, you must declare the host variable as a CLOB or DBCLOB, and the maximum is 2 MB.

Example: Using a varying-length character host variable: This excerpt is from a C program that reads a DELETE statement into the host variable dstring and executes the statement:
EXEC SQL BEGIN DECLARE SECTION;
...
struct VARCHAR {
  short len;
  char s[40];
} dstring;
EXEC SQL END DECLARE SECTION;
...
/* Read a DELETE statement into the host variable dstring. */
gets(dstring);
EXEC SQL EXECUTE IMMEDIATE :dstring;
...
EXECUTE IMMEDIATE causes the DELETE statement to be prepared and executed immediately.

Declaring a CLOB or DBCLOB host variable: You declare CLOB and DBCLOB host variables according to certain rules. The precompiler generates a structure that contains two elements, a 4-byte length field and a data field of the specified length. The names of these fields vary depending on the host language:
v In PL/I, assembler, and Fortran, the names are variable_LENGTH and variable_DATA.
v In COBOL, the names are variableLENGTH and variableDATA.
v In C, the names are variable.LENGTH and variable.DATA.

Example: Using a CLOB host variable: This excerpt is from a C program that copies an UPDATE statement into the host variable string1 and executes the statement:
EXEC SQL BEGIN DECLARE SECTION;
...
SQL TYPE IS CLOB(4k) string1;
EXEC SQL END DECLARE SECTION;
...
/* Copy a statement into the host variable string1. */
strcpy(string1.data, "UPDATE DSN8610.EMP SET SALARY = SALARY * 1.1");
string1.length = 44;
EXEC SQL EXECUTE IMMEDIATE :string1;
...
EXECUTE IMMEDIATE causes the UPDATE statement to be prepared and executed immediately. Related concepts: LOB host variable, LOB locator, and LOB file reference variable declarations on page 741 SQL statements in assembler programs on page 243 SQL statements in C programs on page 283 SQL statements in COBOL programs on page 334 SQL statements in Fortran programs on page 379 SQL statements in PL/I programs on page 403 SQL statements in REXX programs on page 415
The loop repeats until it reads an EMP value of 0. If you know in advance that you will use only the DELETE statement and only the table DSN8910.EMP, you can use the more efficient static SQL.

Suppose further that several different tables have rows that are identified by employee numbers, and that users enter a table name as well as a list of employee numbers to delete. Although variables can represent the employee numbers, they cannot represent the table name, so you must construct and execute the entire statement dynamically. Your program must now do these things differently:
v Use parameter markers instead of host variables
v Use the PREPARE statement
v Use EXECUTE instead of EXECUTE IMMEDIATE

Parameter markers with PREPARE and EXECUTE: Dynamic SQL statements cannot use host variables. Therefore, you cannot dynamically execute an SQL statement that contains host variables. Instead, substitute a parameter marker, indicated by a question mark (?), for each host variable in the statement.

You can indicate to DB2 that a parameter marker represents a host variable of a certain data type by specifying the parameter marker as the argument of a CAST specification. When the statement executes, DB2 converts the host variable to the data type in the CAST specification. A parameter marker that you include in a CAST specification is called a typed parameter marker. A parameter marker without a CAST specification is called an untyped parameter marker.

Recommendation: Because DB2 can evaluate an SQL statement with typed parameter markers more efficiently than a statement with untyped parameter markers, use typed parameter markers whenever possible. Under certain circumstances you must use typed parameter markers.

Example using parameter markers: Suppose that you want to prepare this statement:
DELETE FROM DSN8910.EMP WHERE EMPNO = :EMP;
You associate host variable :EMP with the parameter marker when you execute the prepared statement. Suppose that S1 is the prepared statement. Then the EXECUTE statement looks like this:
EXECUTE S1 USING :EMP;
Using the PREPARE statement: Before you prepare an SQL statement, you can assign it to a host variable. If the length of the statement is greater than 32 KB, you must declare the host variable as a CLOB or DBCLOB.

You can think of PREPARE and EXECUTE as an EXECUTE IMMEDIATE done in two steps. The first step, PREPARE, turns a character string into an SQL statement, and then assigns it a name of your choosing.

Example using the PREPARE statement: Assume that the character host variable :DSTRING has the value DELETE FROM DSN8910.EMP WHERE EMPNO = ?. To prepare an SQL statement from that string and assign it the name S1, write:
EXEC SQL PREPARE S1 FROM :DSTRING;
The prepared statement still contains a parameter marker, for which you must supply a value when the statement executes. After the statement is prepared, the table name is fixed, but the parameter marker enables you to execute the same statement many times with different values of the employee number.

Using the EXECUTE statement: The EXECUTE statement executes a prepared SQL statement by naming a list of one or more host variables, one or more host variable arrays, or a host structure. This list supplies values for all of the parameter markers.

After you prepare a statement, you can execute it many times within the same unit of work. In most cases, COMMIT or ROLLBACK destroys statements prepared in a unit of work. Then, you must prepare them again before you can execute them again. However, if you declare a cursor for a dynamic statement and use the option WITH HOLD, a commit operation does not destroy the prepared statement if the cursor is still open. You can execute the statement in the next unit of work without preparing it again.

Example using the EXECUTE statement: To execute the prepared statement S1 just once, using a parameter value contained in the host variable :EMP, write:
EXEC SQL EXECUTE S1 USING :EMP;
Preparing and executing the example DELETE statement: The example in this topic began with a DO loop that executed a static SQL statement repeatedly:
< Read a value for EMP from the list. >
DO UNTIL (EMP = 0);
  EXEC SQL
    DELETE FROM DSN8910.EMP WHERE EMPNO = :EMP ;
  < Read a value for EMP from the list. >
END;
You can now write an equivalent example for a dynamic SQL statement:
< Read a statement containing parameter markers into DSTRING.>
EXEC SQL PREPARE S1 FROM :DSTRING;
< Read a value for EMP from the list. >
DO UNTIL (EMP = 0);
  EXEC SQL EXECUTE S1 USING :EMP;
  < Read a value for EMP from the list. >
END;
The PREPARE statement prepares the SQL statement and calls it S1. The EXECUTE statement executes S1 repeatedly, using different values for EMP.
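In C, the dynamic version of the loop might look like the following sketch. It is illustrative only and not taken from this information; the host variable declarations and the read_next_emp helper that supplies employee numbers are assumptions.

#include <string.h>

EXEC SQL BEGIN DECLARE SECTION;
  struct { short len; char data[80]; } DSTRING;    /* statement text             */
  char EMP[7];                                     /* employee number, CHAR(6)   */
EXEC SQL END DECLARE SECTION;

/* DSTRING holds, for example:                                                   */
/*   DELETE FROM DSN8910.EMP WHERE EMPNO = ?                                     */
EXEC SQL PREPARE S1 FROM :DSTRING;

read_next_emp(EMP);                                /* illustrative helper        */
while (strcmp(EMP, "000000") != 0)                 /* a value of 0 ends the list */
{
  EXEC SQL EXECUTE S1 USING :EMP;
  read_next_emp(EMP);
}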
Using more than one parameter marker: The prepared statement (S1 in the example) can contain more than one parameter marker. If it does, the USING clause of EXECUTE specifies a list of variables or a host structure. The variables must contain values that match the number and data types of parameters in S1 in the proper order. You must know the number and types of parameters in advance and declare the variables in your program, or you can use an SQLDA (SQL descriptor area).

Related concepts:
SQL statements in assembler programs on page 243
SQL statements in C programs on page 283
SQL statements in COBOL programs on page 334
SQL statements in Fortran programs on page 379
SQL statements in PL/I programs on page 403
SQL statements in REXX programs on page 415
Related tasks:
Dynamically executing an SQL statement by using EXECUTE IMMEDIATE on page 184
Related reference:
PREPARE (DB2 SQL)
However, if you want to enter the rows of data into different tables or enter different numbers of rows, you can construct the INSERT statement dynamically. This topic describes the following methods that you can use to execute a data change statement dynamically:
v By using host variable arrays that contain the data to be inserted
v By using a descriptor to describe the host variable arrays that contain the data

Dynamically executing a data change statement by using host variable arrays: To dynamically execute a data change statement by using host variable arrays, perform the following actions in your program:
1. Assign the appropriate INSERT or MERGE statement to a host variable. If needed, use the CAST specification to explicitly assign types to parameter markers that represent host variable arrays.
Example: For the activity table, the following string contains an INSERT statement that is to be prepared:
INSERT INTO DSN8910.ACT VALUES (CAST(? AS SMALLINT), CAST(? AS CHAR(6)), CAST(? AS VARCHAR(20)))
2. Assign any attributes for the SQL statement to a host variable.
3. Include a PREPARE statement for the SQL statement.
4. Include an EXECUTE statement with the FOR n ROWS clause. Each host variable in the USING clause of the EXECUTE statement represents an array of values for the corresponding column of the target of the SQL statement. You can vary the number of rows without needing to prepare the SQL statement again.

Example: The following code prepares and executes an INSERT statement:
/* Copy the INSERT string into the host variable sqlstmt */
strcpy(sqlstmt, "INSERT INTO DSN8910.ACT VALUES (CAST(? AS SMALLINT),");
strcat(sqlstmt, " CAST(? AS CHAR(6)), CAST(? AS VARCHAR(20)))");
/* Copy the INSERT attributes into the host variable attrvar */
strcpy(attrvar, "FOR MULTIPLE ROWS");
/* Prepare and execute my_insert using the host variable arrays */
EXEC SQL PREPARE my_insert ATTRIBUTES :attrvar FROM :sqlstmt;
EXEC SQL EXECUTE my_insert USING :hva1, :hva2, :hva3 FOR :num_rows ROWS;
Dynamically executing a data change statement by using descriptors: You can use an SQLDA structure to specify data types and other information about the host variable arrays that contain the values to insert.

To dynamically execute a data change statement by using descriptors, perform the following actions in your program:
1. Set the following fields in the SQLDA structure for your INSERT statement.
   v SQLN
   v SQLDABC
   v SQLD
   v SQLVAR
   v SQLNAME
   Example: Assume that your program includes the standard SQLDA structure declaration and declarations for the program variables that point to the SQLDA structure. For C application programs, the following example code sets the SQLDA fields:
strcpy(sqldaptr->sqldaid,"SQLDA");
sqldaptr->sqldabc = 192;   /* number of bytes of storage allocated for the SQLDA */
sqldaptr->sqln = 4;        /* number of SQLVAR occurrences */
sqldaptr->sqld = 4;
varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]));       /* Point to first SQLVAR */
varptr->sqltype = 500;     /* data type SMALLINT */
varptr->sqllen = 2;
varptr->sqldata = (char *) hva1;
varptr->sqlname.length = 8;
memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14",varptr->sqlname.length);
varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 1);   /* Point to next SQLVAR */
varptr->sqltype = 452;     /* data type CHAR(6) */
varptr->sqllen = 6;
varptr->sqldata = (char *) hva2;
varptr->sqlname.length = 8;
memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14",varptr->sqlname.length);
varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 2);   /* Point to next SQLVAR */
varptr->sqltype = 448;     /* data type VARCHAR(20) */
varptr->sqllen = 20;
varptr->sqldata = (char *) hva3;
varptr->sqlname.length = 8;
memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14",varptr->sqlname.length);
The SQLDA structure has the following fields:
v SQLDABC indicates the number of bytes of storage that are allocated for the SQLDA. The storage includes a 16-byte header and 44 bytes for each SQLVAR field. The value is SQLN x 44 + 16, or 192 for this example.
v SQLN is the number of SQLVAR occurrences, plus one for use by DB2 for the host variable that contains the number n in the FOR n ROWS clause.
v SQLD is the number of variables in the SQLDA that are used by DB2 when processing the INSERT statement.
v An SQLVAR occurrence specifies the attributes of an element of a host variable array that corresponds to a value provided for a target column of the INSERT. Within each SQLVAR:
  - SQLTYPE indicates the data type of the elements of the host variable array.
  - SQLLEN indicates the length of a single element of the host variable array.
  - SQLDATA points to the corresponding host variable array. Assume that your program allocates the dynamic variable arrays hva1, hva2, and hva3.
  - SQLNAME has two parts: the LENGTH and the DATA. The LENGTH is 8. The first two bytes of the DATA field are X'0000'. Bytes 5 and 6 of the DATA field are a flag indicating whether the variable is an array or a FOR n ROWS value. Bytes 7 and 8 are a two-byte binary integer representation of the dimension of the array.
2. Assign the appropriate INSERT or MERGE statement to a host variable.
Example: The following string contains an INSERT statement that is to be prepared:
INSERT INTO DSN8910.ACT VALUES (?, ?, ?)
3. Assign any attributes for the SQL statement to a host variable.
4. Include a PREPARE statement for the SQL statement.
5. Include an EXECUTE statement with the FOR n ROWS clause. The host variable in the USING clause of the EXECUTE statement names the SQLDA that describes the parameter markers in the INSERT statement.

Example: The following code prepares and executes an INSERT statement:
/* Copy the INSERT string into the host variable sqlstmt */
strcpy(sqlstmt, "INSERT INTO DSN8910.ACT VALUES (?, ?, ?)");
/* Copy the INSERT attributes into the host variable attrvar */
strcpy(attrvar, "FOR MULTIPLE ROWS");
/* Prepare and execute my_insert using the descriptor */
EXEC SQL PREPARE my_insert ATTRIBUTES :attrvar FROM :sqlstmt;
EXEC SQL EXECUTE my_insert USING DESCRIPTOR :*sqldaptr FOR :num_rows ROWS;
Related concepts:
SQL statements in assembler programs on page 243
SQL statements in C programs on page 283
SQL statements in COBOL programs on page 334
SQL statements in Fortran programs on page 379
SQL statements in PL/I programs on page 403
Related tasks:
Including dynamic SQL for varying-list SELECT statements in your program on page 166
Related reference:
SQLTYPE and SQLLEN (DB2 SQL)
Procedure
To dynamically execute a statement with parameter markers by using the SQLDA: 1. Include in your program a DESCRIBE INPUT statement that specifies the prepared SQL statement and the name of an appropriate SQLDA. DB2 puts the requested parameter marker information in the SQLDA. 2. Code the application in the same way as any other application in which you execute a prepared statement by using an SQLDA. First, obtain the addresses of the input host variables and their indicator variables and insert those addresses into the SQLDATA and SQLIND fields. Then, execute the prepared SQL statement.
Example
Suppose that you want to execute the following statement dynamically:
DELETE FROM DSN8910.EMP WHERE EMPNO = ?
You can use the following code to set up an SQLDA, obtain parameter information by using the DESCRIBE INPUT statement, and execute the statement:
SQLDAPTR=ADDR(INSQLDA);        /* Get pointer to SQLDA          */
SQLDAID='SQLDA';               /* Fill in SQLDA eye-catcher     */
SQLDABC=LENGTH(INSQLDA);       /* Fill in SQLDA length          */
SQLN=1;                        /* Fill in number of SQLVARs     */
SQLD=0;                        /* Initialize # of SQLVARs used  */
DO IX=1 TO SQLN;               /* Initialize the SQLVAR         */
  SQLTYPE(IX)=0;
  SQLLEN(IX)=0;
  SQLNAME(IX)='';
END;
SQLSTMT='DELETE FROM DSN8910.EMP WHERE EMPNO = ?';
EXEC SQL PREPARE SQLOBJ FROM SQLSTMT;
EXEC SQL DESCRIBE INPUT SQLOBJ INTO :INSQLDA;
SQLDATA(1)=ADDR(HVEMP);        /* Get input data address        */
SQLIND(1)=ADDR(HVEMPIND);      /* Get indicator address         */
EXEC SQL EXECUTE SQLOBJ USING DESCRIPTOR :INSQLDA;
Related concepts:
SQL statements in assembler programs on page 243
SQL statements in C programs on page 283
SQL statements in COBOL programs on page 334
SQL statements in Fortran programs on page 379
SQL statements in PL/I programs on page 403
SQL statements in REXX programs on page 415
Related tasks:
Defining SQL descriptor areas on page 137
Related reference:
DESCRIBE INPUT (DB2 SQL)
Procedure
To enable the dynamic statement cache to save prepared statements: Specify YES for the value of the CACHEDYN subsystem parameter.

Related concepts:
SQL statements in assembler programs on page 243
SQL statements in C programs on page 283
SQL statements in COBOL programs on page 334
SQL statements in Fortran programs on page 379
SQL statements in PL/I programs on page 403
SQL statements in REXX programs on page 415
Related reference:
CACHE DYNAMIC SQL field (CACHEDYN subsystem parameter) (DB2 Installation and Migration)

Dynamic SQL statements that DB2 can cache: The dynamic statement cache is a pool in which DB2 saves prepared SQL statements that can be shared among different threads, plans, and packages to improve performance. Only certain dynamic SQL statements can be saved in this cache.
As the ability of DB2 to optimize SQL has improved, the cost of preparing a dynamic SQL statement has grown. Applications that use dynamic SQL might be forced to pay this cost more than once. When an application performs a commit operation, it must issue another PREPARE statement if that SQL statement is to be executed again. For a SELECT statement, the ability to declare a cursor WITH HOLD provides some relief but requires that the cursor be open at the commit point. WITH HOLD also causes some locks to be held for any objects that the prepared statement is dependent on. Also, WITH HOLD offers no relief for SQL statements that are not SELECT statements.

DB2 can save prepared dynamic statements in a cache. The cache is a dynamic statement cache pool that all application processes can use to save and retrieve prepared dynamic statements. After an SQL statement has been prepared and is automatically saved in the cache, subsequent prepare requests for that same SQL statement can avoid the costly preparation process by using the statement that is in the cache. Statements that are saved in the cache can be shared among different threads, plans, or packages.

Example: Assume that your application program contains a dynamic SQL statement, STMT1, which is prepared and executed multiple times. If you are using the dynamic statement cache when STMT1 is prepared for the first time, it is placed in the cache. When your application program encounters the identical PREPARE statement for STMT1, DB2 uses the already prepared STMT1 that is saved in the dynamic statement cache. The following example shows the identical STMT1 that might appear in your application program:
PREPARE STMT1 FROM ...    Statement is prepared and the prepared
EXECUTE STMT1             statement is put in the cache.
COMMIT
  .
  .
  .
PREPARE STMT1 FROM ...    Identical statement. DB2 uses the prepared
EXECUTE STMT1             statement from the cache.
COMMIT
  .
  .
  .
Eligible statements: The following SQL statements can be saved in the cache:
SELECT
UPDATE
INSERT
DELETE
MERGE

Distributed and local SQL statements are eligible to be saved. Prepared, dynamic statements that use DB2 private protocol access are also eligible to be saved.

Restriction: Even though static statements that use DB2 private protocol access are dynamic at the remote site, those statements cannot be saved in the cache.
SQL statement text that is preceded by any characters is not eligible to be saved in the dynamic statement cache. Those characters include SQL simple comments (--) and SQL bracketed comments (/* */). Statements in plans or packages that are bound with REOPT(ALWAYS) cannot be saved in the cache. Statements in plans and packages that are bound with REOPT(ONCE) or REOPT(AUTO) can be saved in the cache. Statements that are sent to an accelerator server cannot be saved in the cache.
Prepared statements cannot be shared among data sharing members. Because each member has its own EDM pool, a cached statement on one member is not available to an application that runs on another member. Related tasks: Including dynamic SQL for varying-list SELECT statements in your program on page 166 Conditions for statement sharing: If a prepared version of an identical SQL statement already exists in the dynamic statement cache, certain conditions must still be met before DB2 can reuse that prepared statement. Suppose that S1 and S2 are source statements, and P1 is the prepared version of S1. P1 is in the dynamic statement cache. The following conditions must be met before DB2 can use statement P1 instead of preparing statement S2: v S1 and S2 must be identical. The statements must pass a character by character comparison and must be the same length. If the PREPARE statement for either statement contains an ATTRIBUTES clause, DB2 concatenates the values in the ATTRIBUTES clause to the statement string before comparing the strings. That is, if A1 is the set of attributes for S1 and A2 is the set of attributes for S2, DB2 compares S1||A1 to S2||A2. If the statement strings are not identical, DB2 cannot use the statement in the cache. For example, assume that S1 and S2 are specified as follows:
UPDATE EMP SET SALARY=SALARY+50
In this case, DB2 can use P1 instead of preparing S2. However, assume that S1 is specified as follows:

UPDATE EMP SET SALARY=SALARY+50

and that S2 is specified as follows:

UPDATE EMP SET SALARY = SALARY + 50
In this case, DB2 cannot use P1 for S2. DB2 prepares S2 and saves the prepared version of S2 in the cache. v The authorization ID or role that was used to prepare S1 must be used to prepare S2: When a plan or package has run behavior, the authorization ID is the current SQLID value. For secondary authorization IDs: - The application process that searches the cache must have the same secondary authorization ID list as the process that inserted the entry into the cache or must have a superset of that list. - If the process that originally prepared the statement and inserted it into the cache used one of the privileges held by the primary authorization ID to accomplish the prepare, that ID must either be part of the secondary authorization ID list of the process searching the cache, or it must be the primary authorization ID of that process.
When a plan or package has bind behavior, the authorization ID is the plan owner's ID. For a DDF server thread, the authorization ID is the package owner's ID.
When a package has define behavior, the authorization ID is the user-defined function or stored procedure owner.
When a package has invoke behavior, the authorization ID is the authorization ID under which the statement that invoked the user-defined function or stored procedure executed.
If the application process has a role associated with it, DB2 uses the role to search the cache instead of the authorization IDs. If the trusted context that associated the role with the application process is defined with the WITH ROLE AS OBJECT OWNER clause, the role value is used as the default for the CURRENT SCHEMA special register and the SQL path.
v When the plan or package that contains S2 is bound, the values of these bind options must be the same as when the plan or package that contains S1 was bound:
  CURRENTDATA
  DYNAMICRULES
  ISOLATION
  SQLRULES
  QUALIFIER
v When S2 is prepared, the values of the following special registers must be the same as when S1 was prepared:
  CURRENT DECFLOAT ROUNDING MODE
  CURRENT DEGREE
  CURRENT RULES
  CURRENT PRECISION
  CURRENT REFRESH AGE
  CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION
  CURRENT LOCALE LC_CTYPE
Exception: If you set the CACHEDYN_FREELOCAL subsystem parameter to 1 and a storage shortage occurs, DB2 frees the cached dynamic statements. In this case, DB2 cannot use P1 instead of preparing statement S2, because P1 no longer exists in the statement cache.
Related concepts:
DYNAMICRULES bind option on page 987
Related reference:
PREPARE (DB2 SQL)
Subsystem parameters that are not on installation panels (DB2 Installation and Migration)
Capturing performance information for dynamic SQL statements: DB2 maintains statement caching performance statistics records when dynamic statements are cached. The statistics include the cache hit ratio and other useful data points that you can use to evaluate the overall performance of your statement caches and statement executions.
Before you begin
v Set the value of the CACHEDYN subsystem parameter to YES.
v Create DSN_STATEMENT_CACHE_TABLE, and the associated LOB and auxiliary tables and indexes. You can find the sample statements for creating these objects in member DSNTESC of the SDSNSAMP library.
About this task
When DB2 prepares a dynamic SQL statement, it creates control structures that are used when the statement is executed. When dynamic statement caching is in effect, DB2 stores the control structure associated with each prepared dynamic SQL statement in a storage pool. If that same statement or a matching statement is issued again, DB2 can use the cached control structure, avoiding the expense of preparing the statement again.
Procedure
To externalize the statement cache statistics for performance analysis:
1. Start the following performance traces:
START TRACE(P) CLASS(30) IFCID(316,317,318)
IFCID 0316 contains the first 60 bytes of SQL text and statement execution statistics. IFCID 0317 captures the full text of the SQL statement. IFCID 0318 enables the collection of statistics. DB2 begins to collect statistics and accumulates them for the length of time when the trace is on. Stopping the trace resets all statistics. 2. Run the SQL workload that you want to analyze. 3. Issue the following SQL statement in a DSNTEP2 utility job:
EXPLAIN STMTCACHE ALL
Important: Run the workload and issue the EXPLAIN statement while the traces are still running. If you stop the trace for IFCID 318, all statistics in the dynamic statement cache are reset. DB2 extracts all statements from the global cache and writes the statistics information for all statements in the cache that qualify, based on the user's SQLID, into the DSN_STATEMENT_CACHE_TABLE. If the SQLID has SYSADM authority, statistics for all statements in the cache are written into the table.
4. Begin your evaluation of the statement cache performance by selecting from the rows that were inserted into the DSN_STATEMENT_CACHE_TABLE table. For example, you can use the following clauses in your query to identify the n queries with the highest total accumulated CPU time for all the executions of the query during the trace interval:
ORDER BY STAT_CPU DESC FETCH FIRST n ROWS ONLY;
Similarly, you might use the following clauses in your query to identify the top n queries with the highest average CPU time per query execution during the trace interval:
ORDER BY STAT_CPU / STAT_EXEC DESC FETCH FIRST n ROWS ONLY;
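For example, a complete query along these lines might look like the following sketch. STAT_CPU and STAT_EXEC are the statistics columns that are named above; the other column names (STMT_ID and STMT_TEXT) are assumptions about the DSN_STATEMENT_CACHE_TABLE that you created from DSNTESC, so adjust them to match your table. The query lists the ten statements with the highest average CPU time per execution:

SELECT STMT_ID, STAT_EXEC, STAT_CPU, STMT_TEXT
  FROM DSN_STATEMENT_CACHE_TABLE
  WHERE STAT_EXEC > 0
  ORDER BY STAT_CPU / STAT_EXEC DESC
  FETCH FIRST 10 ROWS ONLY;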
What to do next You can also use optimization tools such as IBM Data Studio and InfoSphere Optim Query Workload Tuner to capture and analyze statements from the dynamic statement cache.
Related tasks: Creating EXPLAIN tables (DB2 Performance) Monitoring the dynamic statement cache with READS calls (DB2 Performance) Related reference: DSN_STATEMENT_CACHE_TABLE (DB2 Performance) CACHE DYNAMIC SQL field (CACHEDYN subsystem parameter) (DB2 Installation and Migration) EXPLAIN (DB2 SQL) DSNTEP2 and DSNTEP4 (DB2 Application programming and SQL) IBM Data Studio product overview InfoSphere Optim Query Workload Tuner
To understand how the KEEPDYNAMIC bind option works, you need to differentiate between the executable form of a dynamic SQL statement, which is the prepared statement, and the character string form of the statement, which is the statement string. Relationship between KEEPDYNAMIC(YES) and statement caching: When the dynamic statement cache is not active, and you run an application bound with KEEPDYNAMIC(YES), DB2 saves only the statement string for a prepared statement after a commit operation. On a subsequent OPEN, EXECUTE, or DESCRIBE, DB2 must prepare the statement again before performing the requested operation. The following example illustrates this concept.
PREPARE STMT1 FROM ...     Statement is prepared and put in memory.
EXECUTE STMT1
COMMIT
  .
  .
  .
EXECUTE STMT1              Application does not issue PREPARE. DB2 prepares the statement again.
COMMIT
  .
  .
  .
EXECUTE STMT1              Again, no PREPARE needed.
COMMIT
When the dynamic statement cache is active, and you run an application bound with KEEPDYNAMIC(YES), DB2 retains a copy of both the prepared statement and the statement string. The prepared statement is cached locally for the application process. In general, the statement is globally cached in the EDM pool, to benefit other application processes. If the application issues an OPEN, EXECUTE, or DESCRIBE after a commit operation, the application process uses its local copy of the prepared statement to avoid a prepare and a search of the cache. The following example illustrates this process.
PREPARE STMT1 FROM ...     Statement is prepared and put in memory.
EXECUTE STMT1
COMMIT
  .
  .
  .
EXECUTE STMT1              Application does not issue PREPARE. DB2 uses the prepared statement in memory.
COMMIT
  .
  .
  .
EXECUTE STMT1              Again, no PREPARE needed. DB2 uses the prepared statement in memory.
COMMIT
  .
  .
  .
PREPARE STMT1 FROM ...     Again, no PREPARE needed. DB2 uses the prepared statement in memory.
COMMIT
The local instance of the prepared SQL statement is kept in ssnmDBM1 storage until one of the following occurs: v The application process ends. v A rollback operation occurs. v The application issues an explicit PREPARE statement with the same statement name. If the application does issue a PREPARE for the same SQL statement name that has a kept dynamic statement associated with it, the kept statement is discarded and DB2 prepares the new statement. v The statement is removed from memory because the statement has not been used recently, and the number of kept dynamic SQL statements reaches the subsystem default as set during installation. Handling implicit prepare errors: If a statement is needed during the lifetime of an application process, and the statement has been removed from the local cache, DB2 might be able to retrieve it from the global cache. If the statement is not in the global cache, DB2 must implicitly prepare the statement again. The application does not need to issue a PREPARE statement. However, if the application issues an OPEN, EXECUTE, or DESCRIBE for the statement, the application must be able to handle the possibility that DB2 is doing the prepare implicitly. Any error that occurs during this prepare is returned on the OPEN, EXECUTE, or DESCRIBE. How KEEPDYNAMIC affects applications that use distributed data: If a requester does not issue a PREPARE after a COMMIT, the package at the DB2 for z/OS server must be bound with KEEPDYNAMIC(YES). If both requester and server are DB2 for z/OS subsystems, the DB2 requester assumes that the KEEPDYNAMIC value for the package at the server is the same as the value for the plan at the requester.
The KEEPDYNAMIC option has performance implications for DRDA clients that specify WITH HOLD on their cursors: v If KEEPDYNAMIC(NO) is specified, a separate network message is required when the DRDA client issues the SQL CLOSE for the cursor. v If KEEPDYNAMIC(YES) is specified, the DB2 for z/OS server automatically closes the cursor when SQLCODE +100 is detected, which means that the client does not have to send a separate message to close the held cursor. This reduces network traffic for DRDA applications that use held cursors. It also reduces the duration of locks that are associated with the held cursor. Note: If one member of a data sharing group has enabled the cache but another has not, and an application is bound with KEEPDYNAMIC(YES), DB2 must implicitly prepare the statement again if the statement is assigned to a member without the cache. This can mean a slight reduction in performance.
Limiting CPU time for dynamic SQL statements by using the resource limit facility
The resource limit facility (or governor) limits the amount of CPU time that an SQL statement can take, which prevents SQL statements from making excessive requests.
Related concepts: Predictive governing SQL statements in assembler programs on page 243 SQL statements in C programs on page 283 SQL statements in COBOL programs on page 334 SQL statements in Fortran programs on page 379 SQL statements in PL/I programs on page 403 SQL statements in REXX programs on page 415 Related tasks: Managing resource limit tables (DB2 Performance)
Reactive governing
The reactive governing function of the resource limit facility stops any dynamic SQL statements that overuse system resources. When a statement exceeds a reactive governing threshold, the application program receives SQLCODE -905. The application must include code that performs the appropriate action in this situation. If the failed statement involves an SQL cursor, the cursor's position remains unchanged. The application can then close that cursor. Any other operation with that cursor does not run, and the same SQL error code is returned. If the failed SQL statement does not involve a cursor, then all changes that the statement made are undone before the error code is returned to the application. The application can either issue another SQL statement or commit all work done so far.
Predictive governing
The predictive governing function of the resource limit facility provides an estimate of the processing cost of SQL statements before they run. If your installation uses predictive governing, you need to modify your applications to check for the +495 and -495 SQLCODEs that predictive governing can generate after a PREPARE statement executes. The +495 SQLCODE in combination with deferred prepare requires that DB2 do some special processing to ensure that existing applications are not affected by this new warning SQLCODE.
Handling the +495 SQLCODE: If your requester uses deferred prepare, the presence of parameter markers determines when the application receives the +495 SQLCODE. When parameter markers are present, DB2 cannot do PREPARE, OPEN, and FETCH processing in one message. If SQLCODE +495 is returned, no OPEN or FETCH processing occurs until your application requests it.
v If there are parameter markers, the +495 is returned on the OPEN (not the PREPARE).
v If there are no parameter markers, the +495 is returned on the PREPARE.
Normally with deferred prepare, the PREPARE, OPEN, and first FETCH of the data are returned to the requester. For a predictive governor warning of +495, you would ideally like to have the option to choose beforehand whether you want the OPEN and FETCH of the data to occur. For down-level requesters, you do not have this option.
Related concepts:
Arithmetic and conversion errors on page 214
Related tasks:
Defining the SQL communications area, SQLSTATE, and SQLCODE in assembler on page 229
Defining the SQL communications area, SQLSTATE, and SQLCODE in C on page 249
Defining the SQL communications area, SQLSTATE, and SQLCODE in COBOL on page 299
Defining the SQL communications area, SQLSTATE, and SQLCODE in Fortran on page 371
Defining the SQL communications area, SQLSTATE, and SQLCODE in PL/I on page 383
Defining the SQL communications area, SQLSTATE, and SQLCODE in REXX on page 413
Displaying SQLCA fields by calling DSNTIAR on page 203
SQLWARN5 contains a character value of 1 (read only), 2 (read and delete), or 4 (read, delete, and update) to indicate the operation that is allowed on the result table of the cursor.
Related tasks:
Accessing data by using a rowset-positioned cursor on page 714
Checking the execution of SQL statements by using SQLCODE and SQLSTATE on page 206
Defining the SQL communications area, SQLSTATE, and SQLCODE in assembler on page 229
Defining the SQL communications area, SQLSTATE, and SQLCODE in C on page 249
Defining the SQL communications area, SQLSTATE, and SQLCODE in COBOL on page 299
Defining the SQL communications area, SQLSTATE, and SQLCODE in Fortran on page 371
Defining the SQL communications area, SQLSTATE, and SQLCODE in PL/I on page 383
Defining the SQL communications area, SQLSTATE, and SQLCODE in REXX on page 413
Related reference:
Description of SQLCA fields (DB2 SQL)
DSNTIAR can run either above or below the 16-MB line of virtual storage. The DSNTIAR object module that comes with DB2 has the attributes AMODE(31) and RMODE(ANY). At install time, DSNTIAR links as AMODE(31) and RMODE(ANY). DSNTIAR runs in 31-bit mode if any of the following conditions is true:
v DSNTIAR is linked with other modules that also have the attributes AMODE(31) and RMODE(ANY).
v DSNTIAR is linked into an application that specifies the attributes AMODE(31) and RMODE(ANY) in its link-edit JCL.
v An application loads DSNTIAR.
When loading DSNTIAR from another program, be careful how you branch to DSNTIAR. For example, if the calling program is in 24-bit addressing mode and DSNTIAR is loaded above the 16-MB line, you cannot use the assembler BALR instruction or CALL macro to call DSNTIAR, because they assume that DSNTIAR is in 24-bit mode. Instead, you must use an instruction that is capable of branching into 31-bit mode, such as BASSM.
You can dynamically link (load) and call DSNTIAR directly from a language that does not handle 31-bit addressing. To do this, link a second version of DSNTIAR with the attributes AMODE(24) and RMODE(24) into another load module library. Alternatively, you can write an intermediate assembler language program that calls DSNTIAR in 31-bit mode and then call that intermediate program in 24-bit mode from your application.
For more information on the allowed and default AMODE and RMODE settings for a particular language, see the application programming guide for that language. For details on how the attributes AMODE and RMODE of an application are determined, see the linkage editor and loader user's guide for the language in which you have written the application.
Defining a message output area: If a program calls DSNTIAR, the program must allocate enough storage in the message output area to hold all of the message text that DSNTIAR returns.
About this task
You will probably need no more than 10 lines, 80 bytes each, for your message output area. An application program can have only one message output area. You must define the message output area in VARCHAR format. In this varying character format, a 2-byte length field precedes the data. The length field indicates to DSNTIAR how many total bytes are in the output message area; the minimum length of the output area is 240 bytes. The following figure shows the format of the message output area, where length is the 2-byte total length field, and the length of each line matches the logical record length (lrecl) you specify to DSNTIAR.
Figure: Format of the message output area. A 2-byte length field is followed by message lines 1, 2, ... n-1, n; each line is lrecl bytes long.
When you call DSNTIAR, you must name an SQLCA and an output message area in the DSNTIAR parameters. You must also provide the logical record length (lrecl) as a value between 72 and 240 bytes. DSNTIAR assumes the message area contains fixed-length records of length lrecl. DSNTIAR places up to 10 lines in the message area. If the text of a message is longer than the record length you specify on DSNTIAR, the output message splits into several records, on word boundaries if possible. The split records are indented. All records begin with a blank character for carriage control. If you have more lines than the message output area can contain, DSNTIAR issues a return code of 4. A completely blank record marks the end of the message output area.
Possible return codes from DSNTIAR: The assembler subroutine DSNTIAR helps your program read the information in the SQLCA. The subroutine also returns its own return code.
Code  Meaning
0     Successful execution.
4     More data available than could fit into the provided message area.
8     Logical record length not between 72 and 240, inclusive.
12    Message area not large enough. The message length was 240 or greater.
16    Error in TSO message routine.
20    Module DSNTIA1 could not be loaded.
24    SQLCA data error.
A scenario for using DSNTIAR: You can use the assembler subroutine DSNTIAR to generate the error message text in the SQLCA. Suppose you want your DB2 COBOL application to check for deadlocks and timeouts, and you want to make sure your cursors are closed before continuing. You use the statement WHENEVER SQLERROR to transfer control to an error routine when your application receives a negative SQLCODE.
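For example, the error routine might be established with a statement like the following one; the paragraph name DBERROR is an assumption for this sketch.

EXEC SQL WHENEVER SQLERROR GO TO DBERROR END-EXEC.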
In your error routine, you write a section that checks for SQLCODE -911 or -913. You can receive either of these SQLCODEs when a deadlock or timeout occurs. When one of these errors occurs, the error routine closes your cursors by issuing the statement:
EXEC SQL CLOSE cursor-name
An SQLCODE of 0 or -501 resulting from that statement indicates that the close was successful. To use DSNTIAR to generate the error message text, first follow these steps: 1. Choose a logical record length (lrecl) of the output lines. For this example, assume lrecl is 72 (to fit on a terminal screen) and is stored in the variable named ERROR-TEXT-LEN. 2. Define a message area in your COBOL application. Assuming you want an area for up to 10 lines of length 72, you should define an area of 720 bytes, plus a 2-byte area that specifies the total length of the message output area.
01  ERROR-MESSAGE.
    02  ERROR-LEN   PIC S9(4)  COMP VALUE +720.
    02  ERROR-TEXT  PIC X(72)  OCCURS 10 TIMES
                               INDEXED BY ERROR-INDEX.
77  ERROR-TEXT-LEN  PIC S9(9)  COMP VALUE +72.
For this example, the name of the message area is ERROR-MESSAGE. 3. Make sure you have an SQLCA. For this example, assume the name of the SQLCA is SQLCA. To display the contents of the SQLCA when SQLCODE is 0 or -501, call DSNTIAR after the SQL statement that produces SQLCODE 0 or -501:
CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
You can then print the message output area just as you would any other variable. Your message might look like this:
DSNT408I SQLCODE = -501, ERROR: THE CURSOR IDENTIFIED IN A FETCH OR CLOSE STATEMENT IS NOT OPEN DSNT418I SQLSTATE = 24501 SQLSTATE RETURN CODE DSNT415I SQLERRP = DSNXERT SQL PROCEDURE DETECTING ERROR DSNT416I SQLERRD = -315 0 0 -1 0 0 SQL DIAGNOSTIC INFORMATION DSNT416I SQLERRD = XFFFFFEC5 X00000000 X00000000 XFFFFFFFF X00000000 X00000000 SQL DIAGNOSTIC INFORMATION
SQLCODE 100 indicates that no data was found. The meaning of SQLCODEs other than 0 and 100 varies with the particular product implementing SQL.
SQLSTATE: SQLSTATE enables an application program to check for errors in the same way for different IBM database management systems.
Using SQLCODE and SQLSTATE: An advantage to using the SQLCODE field is that it can provide more specific information than the SQLSTATE. Many of the SQLCODEs have associated tokens in the SQLCA that indicate, for example, which object incurred an SQL error. However, an SQL standard application uses only SQLSTATE. You can declare SQLCODE and SQLSTATE (SQLCOD and SQLSTA in Fortran) as stand-alone host variables. If you specify the STDSQL(YES) precompiler option, these host variables receive the return codes, and you should not include an SQLCA in your program.
Related tasks:
Defining the SQL communications area, SQLSTATE, and SQLCODE in assembler on page 229
Defining the SQL communications area, SQLSTATE, and SQLCODE in C on page 249
Defining the SQL communications area, SQLSTATE, and SQLCODE in COBOL on page 299
Defining the SQL communications area, SQLSTATE, and SQLCODE in Fortran on page 371
Defining the SQL communications area, SQLSTATE, and SQLCODE in PL/I on page 383
Defining the SQL communications area, SQLSTATE, and SQLCODE in REXX on page 413
Related reference: SQLSTATE values and common error codes (DB2 Codes)
The condition of the WHENEVER statement is one of these three values:
SQLWARNING
    Indicates what to do when SQLWARN0 = W or SQLCODE contains a positive value other than 100. DB2 can set SQLWARN0 for several reasons; for example, if a column value is truncated when moved into a host variable. Your program might not regard this as an error.
SQLERROR
    Indicates what to do when DB2 returns an error code as the result of an SQL statement (SQLCODE < 0).
NOT FOUND
    Indicates what to do when DB2 cannot find a row to satisfy your SQL statement or when there are no more rows to fetch (SQLCODE = 100).
The action of the WHENEVER statement is one of these two values:
CONTINUE
    Specifies the next sequential statement of the source program.
GOTO or GO TO host-label
    Specifies the statement identified by host-label. For host-label, substitute a single token, preceded by an optional colon. The form of the token depends on the host language. In COBOL, for example, it can be section-name or an unqualified paragraph-name.
The WHENEVER statement must precede the first SQL statement it is to affect. However, if your program checks SQLCODE directly, you must check SQLCODE after each SQL statement.
Related concepts:
Chapter 9, Coding SQL statements in REXX application programs, on page 413
Related reference:
WHENEVER (DB2 SQL)
Checking the execution of SQL statements by using the GET DIAGNOSTICS statement
One way to check whether an SQL statement executed successfully is to ask DB2 to return the diagnostic information about the last executed SQL statement.
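For example, after an SQL statement runs, the program can ask for a single diagnostic item, such as the number of rows that the statement affected. In this sketch the host variable name and the use of the sample employee table are assumptions:

EXEC SQL UPDATE DSN8910.EMP
           SET SALARY = SALARY * 1.03
           WHERE WORKDEPT = 'E11';
EXEC SQL GET DIAGNOSTICS :rcount = ROW_COUNT;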
In addition to requesting individual items, you can request that GET DIAGNOSTICS return ALL diagnostic items that are set during the execution of the last SQL statement as a single string.
In SQL procedures, you can also retrieve diagnostic information by using handlers. Handlers tell the procedure what to do if a particular error occurs.
Use the GET DIAGNOSTICS statement to handle multiple SQL errors that might result from the execution of a single SQL statement. First, check SQLSTATE (or SQLCODE) to determine whether diagnostic information should be retrieved by using GET DIAGNOSTICS. This method is especially useful for diagnosing problems that result from a multiple-row INSERT that is specified as NOT ATOMIC CONTINUE ON SQLEXCEPTION and multiple-row MERGE statements.
Even if you use only the GET DIAGNOSTICS statement in your application program to check for conditions, you must either include the instructions required to use the SQLCA or you must declare SQLSTATE (or SQLCODE) separately in your program.
Restriction: If you issue a GET DIAGNOSTICS statement immediately following an SQL statement that uses private protocol access, DB2 returns an error.
When you use the GET DIAGNOSTICS statement, you assign the requested diagnostic information to host variables. Declare each target host variable with a data type that is compatible with the data type of the requested item. To retrieve condition information, you must first retrieve the number of condition items (that is, the number of errors and warnings that DB2 detected during the execution of the last SQL statement). The number of condition items is at least one. If the last SQL statement returned SQLSTATE '00000' (or SQLCODE 0), the number of condition items is one.
Example: Using GET DIAGNOSTICS with multiple-row INSERT: You want to display diagnostic information for each condition that might occur during the execution of a multiple-row INSERT statement in your application program. You specify the INSERT statement as NOT ATOMIC CONTINUE ON SQLEXCEPTION, which means that execution continues regardless of the failure of any single-row insertion. DB2 does not insert the row that was processed at the time of the error.
In the following example, the first GET DIAGNOSTICS statement returns the number of rows inserted and the number of conditions returned. The second GET DIAGNOSTICS statement returns the following items for each condition: SQLCODE, SQLSTATE, and the number of the row (in the rowset that was being inserted) for which the condition occurred.
EXEC SQL BEGIN DECLARE SECTION; long row_count, num_condns, i; long ret_sqlcode, row_num; char ret_sqlstate[6]; ... EXEC SQL END DECLARE SECTION; ... EXEC SQL INSERT INTO DSN8910.ACT (ACTNO, ACTKWD, ACTDESC) VALUES (:hva1, :hva2, :hva3) FOR 10 ROWS
NOT ATOMIC CONTINUE ON SQLEXCEPTION; EXEC SQL GET DIAGNOSTICS :row_count = ROW_COUNT, :num_condns = NUMBER; printf("Number of rows inserted = %d\n", row_count); for (i=1; i<=num_condns; i++) { EXEC SQL GET DIAGNOSTICS CONDITION :i :ret_sqlcode = DB2_RETURNED_SQLCODE, :ret_sqlstate = RETURNED_SQLSTATE, :row_num = DB2_ROW_NUMBER; printf("SQLCODE = %d, SQLSTATE = %s, ROW NUMBER = %d\n", ret_sqlcode, ret_sqlstate, row_num); }
In the activity table, the ACTNO column is defined as SMALLINT. Suppose that you declare the host variable array hva1 as an array with data type long, and you populate the array so that the value for the fourth element is 32768. If you check the SQLCA values after the INSERT statement, the value of SQLCODE is equal to 0, the value of SQLSTATE is '00000', and the value of SQLERRD(3) is 9 for the number of rows that were inserted. However, the INSERT statement specified that 10 rows were to be inserted. The GET DIAGNOSTICS statement provides you with the information that you need to correct the data for the row that was not inserted. The printed output from your program looks like this:
Number of rows inserted = 9 SQLCODE = -302, SQLSTATE = 22003, ROW NUMBER = 4
The value 32768 for the input variable is too large for the target column ACTNO. You can print the MESSAGE_TEXT condition item. Retrieving statement and condition items: When you use the GET DIAGNOSTICS statement, you assign the requested diagnostic information to host variables. Declare each target host variable with a data type that is compatible with the data type of the requested item. To retrieve condition information, you must first retrieve the number of condition items (that is, the number of errors and warnings that DB2 detected during the execution of the last SQL statement). The number of condition items is at least one. If the last SQL statement returned SQLSTATE '00000' (or SQLCODE 0), the number of condition items is one.
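For example, the following sketch first retrieves the number of conditions and then retrieves two items for the first condition. The host variable names are assumptions; the items NUMBER, RETURNED_SQLSTATE, and MESSAGE_TEXT are described in the tables that follow.

EXEC SQL GET DIAGNOSTICS :num_condns = NUMBER;
EXEC SQL GET DIAGNOSTICS CONDITION 1
           :ret_sqlstate = RETURNED_SQLSTATE,
           :msg_text = MESSAGE_TEXT;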
Related concepts: Handlers in an SQL procedure on page 557 Related reference: Data types for GET DIAGNOSTICS items GET DIAGNOSTICS (DB2 SQL) Related information: -302 (DB2 Codes)
Table 46. Data types for GET DIAGNOSTICS items that return statement information
DB2_GET_DIAGNOSTICS_DIAGNOSTICS (VARCHAR(32672)): After a GET DIAGNOSTICS statement, if any error or warning occurred, this item contains all of the diagnostics as a single string.
DB2_LAST_ROW (INTEGER): After a multiple-row FETCH statement, this item contains a value of +100 if the last row in the table is in the rowset that was returned.
DB2_NUMBER_PARAMETER_MARKERS (INTEGER): After a PREPARE statement, this item contains the number of parameter markers in the prepared statement.
DB2_NUMBER_RESULT_SETS (INTEGER): After a CALL statement that invokes a stored procedure, this item contains the number of result sets that are returned by the procedure.
DB2_NUMBER_ROWS (DECIMAL(31,0)): After an OPEN or FETCH statement for which the size of the result table is known, this item contains the number of rows in the result table. After a PREPARE statement, this item contains the estimated number of rows in the result table for the prepared statement. For SENSITIVE DYNAMIC cursors, this item contains the approximate number of rows. Otherwise, or if the server only returns an SQLCA, the value zero is returned.
DB2_RETURN_STATUS (INTEGER): After a CALL statement that invokes an SQL procedure, this item contains the return status if the procedure contains a RETURN statement.
DB2_SQL_ATTR_CURSOR_HOLD (CHAR(1)): After an ALLOCATE or OPEN statement, this item indicates whether the cursor can be held open across multiple units of work (Y or N).
DB2_SQL_ATTR_CURSOR_ROWSET (CHAR(1)): After an ALLOCATE or OPEN statement, this item indicates whether the cursor can use rowset positioning (Y or N).
DB2_SQL_ATTR_CURSOR_SCROLLABLE (CHAR(1)): After an ALLOCATE or OPEN statement, this item indicates whether the cursor is scrollable (Y or N).
DB2_SQL_ATTR_CURSOR_SENSITIVITY (CHAR(1)): After an ALLOCATE or OPEN statement, this item indicates whether the cursor shows updates made by other processes (sensitivity I or S).
DB2_SQL_ATTR_CURSOR_TYPE (CHAR(1)): After an ALLOCATE or OPEN statement, this item indicates whether the cursor is forward (F), declared static (S for INSENSITIVE or SENSITIVE STATIC), or dynamic (D for SENSITIVE DYNAMIC).
MORE (CHAR(1)): After any SQL statement, this item indicates whether some condition items were discarded because of insufficient storage (Y or N).
NUMBER (INTEGER): After any SQL statement, this item contains the number of condition items. If no warning or error occurred, or if no previous SQL statement has been executed, the number that is returned is 1.
ROW_COUNT (DECIMAL(31,0)): After an insert, update, delete, or fetch, this item contains the number of rows that are deleted, inserted, updated, or fetched. After PREPARE, this item contains the estimated number of result rows in the prepared statement. After TRUNCATE, it contains -1.
Table 47. Data types for GET DIAGNOSTICS items that return condition information
CATALOG_NAME (VARCHAR(128)): This item contains the server name of the table that owns a constraint that caused an error, or that caused an access rule or check violation.
CONDITION_NUMBER (INTEGER): This item contains the number of the condition.
CURSOR_NAME (VARCHAR(128)): This item contains the name of a cursor in an invalid cursor state.
DB2_ERROR_CODE1 (INTEGER): This item contains an internal error code.
DB2_ERROR_CODE2 (INTEGER): This item contains an internal error code.
DB2_ERROR_CODE3 (INTEGER): This item contains an internal error code.
DB2_ERROR_CODE4 (INTEGER): This item contains an internal error code.
DB2_INTERNAL_ERROR_POINTER (INTEGER): For some errors, this item contains a negative value that is an internal error pointer.
DB2_MESSAGE_ID (CHAR(10)): This item contains the message ID that corresponds to the message that is contained in the MESSAGE_TEXT diagnostic item.
DB2_MODULE_DETECTING_ERROR (CHAR(8)): After any SQL statement, this item indicates which module detected the error.
DB2_ORDINAL_TOKEN_n (VARCHAR(515)): After any SQL statement, this item contains the nth token, where n is a value from 1 to 100.
DB2_REASON_CODE (INTEGER): After any SQL statement, this item contains the reason code for errors that have a reason code token in the message text.
DB2_RETURNED_SQLCODE (INTEGER): After any SQL statement, this item contains the SQLCODE for the condition.
DB2_ROW_NUMBER (DECIMAL(31,0)): After any SQL statement that involves multiple rows, this item contains the row number on which DB2 detected the condition.
DB2_TOKEN_COUNT (INTEGER): After any SQL statement, this item contains the number of tokens available for the condition.
MESSAGE_TEXT (VARCHAR(32672)): After any SQL statement, this item contains the message text associated with the SQLCODE.
RETURNED_SQLSTATE (CHAR(5)): After any SQL statement, this item contains the SQLSTATE for the condition.
SERVER_NAME (VARCHAR(128)): After a CONNECT, DISCONNECT, or SET CONNECTION statement, this item contains the name of the server specified in the statement.
Table 48. Data types for GET DIAGNOSTICS items that return connection information
DB2_AUTHENTICATION_TYPE (CHAR(1)): This item contains the authentication type (S, C, D, E, or blank).
DB2_AUTHORIZATION_ID (VARCHAR(128)): This item contains the authorization ID that is used by the connected server.
DB2_CONNECTION_STATE (INTEGER): This item indicates whether the connection is unconnected (-1), local (0), or remote (1).
DB2_CONNECTION_STATUS (INTEGER): This item indicates whether updates can be committed for the current unit of work (1 for Yes, 2 for No).
DB2_ENCRYPTION_TYPE (CHAR(1)): This item contains one of the following values that indicates the level of encryption for the connection:
  A   Only the authentication tokens (authid and password) are encrypted
  D   All of the data for the connection is encrypted
DB2_SERVER_CLASS_NAME (VARCHAR(128)): After a CONNECT or SET CONNECTION statement, this item contains the DB2 server class name.
DB2_PRODUCT_ID (VARCHAR(8)): This item contains the DB2 product signature.
Procedure
To handle SQL error codes: Take action based on the programming language that you use.
Related concepts:
SQL statements in assembler programs on page 243
SQL statements in C programs on page 283
SQL statements in COBOL programs on page 334
SQL statements in Fortran programs on page 379
SQL statements in PL/I programs on page 403
SQL statements in REXX programs on page 415
Procedure
To create new tables:
v Use the CREATE TABLE statement.
To add columns or increase the length of columns:
v Use the ALTER TABLE statement with the ADD COLUMN clause or the ALTER COLUMN clause. Added columns initially contain either the null value or a default value. Both CREATE TABLE and ALTER TABLE, like any data definition statement, are relatively expensive to execute. Also consider the effects of locks.
To rearrange or delete columns:
v Drop the table and create the table again, with the columns you want, in the order you want. Consider creating a view on the table, which includes only the columns that you want, in the order that you want, as an alternative to redefining the table.
Related concepts:
Dynamic SQL on page 158
Related reference:
ALTER TABLE (DB2 SQL)
CREATE TABLE (DB2 SQL)
CREATE VIEW (DB2 SQL)
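For example, the CREATE TABLE and ALTER TABLE tasks in the preceding procedure might look like the following sketch; the table, column, and schema names are illustrative only.

CREATE TABLE MYAPP.AUDIT_LOG
  (LOG_ID   INTEGER NOT NULL,
   LOG_TEXT VARCHAR(200));

ALTER TABLE MYAPP.AUDIT_LOG
  ADD COLUMN LOG_TS TIMESTAMP;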
Saving SQL statements that are translated from end user requests
If your program translates requests from end users into SQL statements and allows users to save their requests, your program can improve performance by saving those translated statements.
Procedure
To save the corresponding SQL statement: Save the corresponding SQL statements in a table with a column having a data type of VARCHAR(n), where n is the maximum length of any SQL statement. You must save the source SQL statements, not the prepared versions. That means that you must retrieve and then prepare each statement before executing the version stored in the table. In essence, your program prepares an SQL statement from a character string and executes it dynamically.
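For example, a minimal sketch of this approach follows. The table name, column names, and host variables are assumptions; the key point is that the program retrieves the saved source text, prepares it, and then executes it dynamically (a SELECT statement would instead be associated with a cursor).

CREATE TABLE MYAPP.SAVED_REQUESTS
  (REQUEST_ID INTEGER       NOT NULL,
   STMT_TEXT  VARCHAR(4000) NOT NULL);

EXEC SQL SELECT STMT_TEXT INTO :stmtbuf
           FROM MYAPP.SAVED_REQUESTS
           WHERE REQUEST_ID = :reqid;
EXEC SQL PREPARE USERSTMT FROM :stmtbuf;
EXEC SQL EXECUTE USERSTMT;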
Host variable data types for XML data in embedded SQL applications
DB2 provides XML host variable types for assembler, C, C++, COBOL, and PL/I. Those types are: v XML AS BLOB v XML AS CLOB v XML AS DBCLOB v XML AS BLOB_FILE (C, C++, or PL/I) or XML AS BLOB-FILE (COBOL) v XML AS CLOB_FILE (C, C++, or PL/I) or XML AS CLOB-FILE (COBOL) v XML AS DBCLOB_FILE (C, C++, or PL/I) or XML AS DBCLOB-FILE (COBOL) The XML host variable types are compatible only with the XML column data type. You can use BLOB, CLOB, DBCLOB, CHAR, VARCHAR, GRAPHIC, VARGRAPHIC, BINARY, or VARBINARY host variables to update XML columns. You can convert the host variable data types to the XML type using the XMLPARSE function, or you can let the DB2 database server perform the conversion implicitly. You can use BLOB, CLOB, DBCLOB, CHAR, VARCHAR, GRAPHIC, VARGRAPHIC, BINARY, or VARBINARY host variables to retrieve data from XML
columns. You can convert the XML data to the host variable type using the XMLSERIALIZE function, or you can let the DB2 database server perform the conversion implicitly. The following examples show you how to declare XML host variables in each supported language. In each table, the left column contains the declaration that you code in your application program. The right column contains the declaration that DB2 generates.
Examples of assembler language declarations for XML file reference variables:
BLOB_XML_FILE    SQL TYPE IS XML AS BLOB_FILE
CLOB_XML_FILE    SQL TYPE IS XML AS CLOB_FILE
DBCLOB_XML_FILE  SQL TYPE IS XML AS DBCLOB_FILE
Notes:
1. Because assembler language allows character declarations of no more than 65535 bytes, DB2 separates the host language declarations for XML AS BLOB and XML AS CLOB host variables that are longer than 65535 bytes into two parts. 2. Because assembler language allows graphic declarations of no more than 65534 bytes, DB2 separates the host language declarations for XML AS DBCLOB host variables that are longer than 65534 bytes into two parts.
Table 50. Examples of C language variable declarations

You declare this variable:
  SQL TYPE IS XML AS BLOB (1M) blob_xml;
DB2 generates this variable:
  struct {
      unsigned long length;
      char data??(1048576??);
  } blob_xml;

You declare this variable:
  SQL TYPE IS XML AS CLOB (40000K) clob_xml;
DB2 generates this variable:
  struct {
      unsigned long length;
      char data??(40960000??);
  } clob_xml;

You declare this variable:
  SQL TYPE IS XML AS DBCLOB (4000K) dbclob_xml;
DB2 generates this variable:
  struct {
      unsigned long length;
      unsigned short data??(4096000??);
  } dbclob_xml;

You declare this variable:
  SQL TYPE IS XML AS BLOB_FILE blob_xml_file;
DB2 generates this variable:
  struct {
      unsigned long name_length;
      unsigned long data_length;
      unsigned long file_options;
      char name??(255??);
  } blob_xml_file;

You declare this variable:
  SQL TYPE IS XML AS CLOB_FILE clob_xml_file;
DB2 generates this variable:
  struct {
      unsigned long name_length;
      unsigned long data_length;
      unsigned long file_options;
      char name??(255??);
  } clob_xml_file;

You declare this variable:
  SQL TYPE IS XML AS DBCLOB_FILE dbclob_xml_file;
DB2 generates this variable:
  struct {
      unsigned long name_length;
      unsigned long data_length;
      unsigned long file_options;
      char name??(255??);
  } dbclob_xml_file;
Table 51. Examples of COBOL variable declarations by the DB2 precompiler (continued) You declare this variable 01 CLOB-XML USAGE IS SQL TYPE IS XML AS CLOB(40000K). DB2 precompiler generates this variable CLOB-XML. CLOB-XML-LENGTH PIC 9(9) COMP. 02 CLOB-XML-DATA. 49 FILLER PIC X(32767).1 49 FILLER PIC X(32767). Repeat 1248 times . . . 49 FILLER PIC X(40960000-1250*32767). 02 01 DBCLOB-XML. DBCLOB-XML-LENGTH PIC 9(9) COMP. 02 DBCLOB-XML-DATA. 49 FILLER PIC G(32767) USAGE DISPLAY-1.2 49 FILLER PIC G(32767) USAGE DISPLAY-1. Repeat 123 times . . . 49 FILLER PIC G(4096000-125*32767) USAGE DISPLAY-1. 02 01 49 49 49 49 01 49 49 49 49 01 49 49 49 49 BLOB-XML-FILE. BLOB-XML-FILE-NAME-LENGTH PIC S9(9) COMP-5 SYNC. BLOB-XML-FILE-DATA-LENGTH PIC S9(9) COMP-5. BLOB-XML-FILE-FILE-OPTION PIC S9(9) COMP-5. BLOB-XML-FILE-NAME PIC X(255). CLOB-XML-FILE. CLOB-XML-FILE-NAME-LENGTH PIC S9(9) COMP-5 SYNC. CLOB-XML-FILE-DATA-LENGTH PIC S9(9) COMP-5. CLOB-XML-FILE-FILE-OPTION PIC S9(9) COMP-5. CLOB-XML-FILE-NAME PIC X(255). DBCLOB-XML-FILE. DBCLOB-XML-FILE-NAME-LENGTH PIC S9(9) COMP-5 SYNC. DBCLOB-XML-FILE-DATA-LENGTH PIC S9(9) COMP-5. DBCLOB-XML-FILE-FILE-OPTION PIC S9(9) COMP-5. DBCLOB-XML-FILE-NAME PIC X(255). 01
01 BLOB-XML-FILE USAGE IS SQL TYPE IS XML AS BLOB-FILE.
01 CLOB-XML-FILE USAGE IS SQL TYPE IS XML AS CLOB-FILE.
01 DBCLOB-XML-FILE USAGE IS SQL TYPE IS XML AS DBCLOB-FILE.
Notes:
1. For XML AS BLOB or XML AS CLOB host variables that are greater than 32767 bytes in length, DB2 creates multiple host language declarations of 32767 or fewer bytes. 2. For XML AS DBCLOB host variables that are greater than 32767 double-byte characters in length, DB2 creates multiple host language declarations of 32767 or fewer double-byte characters.
Table 52. Examples of PL/I variable declarations

You declare this variable:
  DCL BLOB_XML SQL TYPE IS XML AS BLOB (1M);
DB2 precompiler generates this variable:
  DCL 1 BLOB_XML,
        2 BLOB_XML_LENGTH BIN FIXED(31),
        2 BLOB_XML_DATA,1
          3 BLOB_XML_DATA1 (32) CHAR(32767),
          3 BLOB_XML_DATA2 CHAR(32);

You declare this variable:
  DCL CLOB_XML SQL TYPE IS XML AS CLOB (40000K);
DB2 precompiler generates this variable:
  DCL 1 CLOB_XML,
        2 CLOB_XML_LENGTH BIN FIXED(31),
        2 CLOB_XML_DATA,1
          3 CLOB_XML_DATA1 (1250) CHAR(32767),
          3 CLOB_XML_DATA2 CHAR(1250);

You declare this variable:
  DCL DBCLOB_XML SQL TYPE IS XML AS DBCLOB (4000K);
DB2 precompiler generates this variable:
  DCL 1 DBCLOB_XML,
        2 DBCLOB_XML_LENGTH BIN FIXED(31),
        2 DBCLOB_XML_DATA,2
          3 DBCLOB_XML_DATA1 (250) GRAPHIC(16383),
          3 DBCLOB_XML_DATA2 GRAPHIC(250);

You declare this variable:
  DCL BLOB_XML_FILE SQL TYPE IS XML AS BLOB_FILE;
DB2 precompiler generates this variable:
  DCL 1 BLOB_XML_FILE,
        2 BLOB_XML_FILE_NAME_LENGTH BIN FIXED(31) ALIGNED,
        2 BLOB_XML_FILE_DATA_LENGTH BIN FIXED(31),
        2 BLOB_XML_FILE_FILE_OPTIONS BIN FIXED(31),
        2 BLOB_XML_FILE_NAME CHAR(255);

You declare this variable:
  DCL CLOB_XML_FILE SQL TYPE IS XML AS CLOB_FILE;
DB2 precompiler generates this variable:
  DCL 1 CLOB_XML_FILE,
        2 CLOB_XML_FILE_NAME_LENGTH BIN FIXED(31) ALIGNED,
        2 CLOB_XML_FILE_DATA_LENGTH BIN FIXED(31),
        2 CLOB_XML_FILE_FILE_OPTIONS BIN FIXED(31),
        2 CLOB_XML_FILE_NAME CHAR(255);

You declare this variable:
  DCL DBCLOB_XML_FILE SQL TYPE IS XML AS DBCLOB_FILE;
DB2 precompiler generates this variable:
  DCL 1 DBCLOB_XML_FILE,
        2 DBCLOB_XML_FILE_NAME_LENGTH BIN FIXED(31) ALIGNED,
        2 DBCLOB_XML_FILE_DATA_LENGTH BIN FIXED(31),
        2 DBCLOB_XML_FILE_FILE_OPTIONS BIN FIXED(31),
        2 DBCLOB_XML_FILE_NAME CHAR(255);
Notes:
1. For XML AS BLOB or XML AS CLOB host variables that are greater than 32767 bytes in length, DB2 creates host language declarations in the following way:
   v If the length of the XML is greater than 32767 bytes and evenly divisible by 32767, DB2 creates an array of 32767-byte strings. The dimension of the array is length/32767.
   v If the length of the XML is greater than 32767 bytes but not evenly divisible by 32767, DB2 creates two declarations: The first is an array of 32767-byte strings, where the dimension of the array, n, is length/32767. The second is a character string of length length-n*32767.
2. For XML AS DBCLOB host variables that are greater than 16383 double-byte characters in length, DB2 creates host language declarations in the following way:
   v If the length of the XML is greater than 16383 characters and evenly divisible by 16383, DB2 creates an array of 16383-character strings. The dimension of the array is length/16383.
   v If the length of the XML is greater than 16383 characters but not evenly divisible by 16383, DB2 creates two declarations: The first is an array of 16383-character strings, where the dimension of the array, m, is length/16383. The second is a character string of length length-m*16383.
Related concepts: Insertion of rows with XML column values (DB2 Programming for XML) Retrieving XML data (DB2 Programming for XML) Updates of XML columns (DB2 Programming for XML)
WHERE CID = 1000 ... LTORG ****************************** * HOST VARIABLE DECLARATIONS * ****************************** XMLBUF SQL TYPE IS XML AS CLOB 10K XMLBLOB SQL TYPE IS XML AS BLOB 10K CLOBBUF SQL TYPE IS CLOB 10K
Example: The following example shows a C language program that inserts data from XML AS BLOB, XML AS CLOB, and CLOB host variables into an XML column. The XML AS BLOB data is inserted as binary data, so the database server honors the internal encoding. The XML AS CLOB and CLOB data is inserted as character data, so the database server honors the external encoding.
/******************************/ /* Host variable declarations */ /******************************/ EXEC SQL BEGIN DECLARE SECTION; SQL TYPE IS XML AS CLOB( 10K ) xmlBuf; SQL TYPE IS XML AS BLOB( 10K ) xmlblob; SQL TYPE IS CLOB( 10K ) clobBuf; EXEC SQL END DECLARE SECTION; /******************************************************************/ /* Update an XML column with data in an XML AS CLOB host variable */ /******************************************************************/ EXEC SQL UPDATE MYCUSTOMER SET INFO = :xmlBuf where CID = 1000; /******************************************************************/ /* Update an XML column with data in an XML AS BLOB host variable */ /******************************************************************/ EXEC SQL UPDATE MYCUSTOMER SET INFO = :xmlblob where CID = 1000; /******************************************************************/ /* Update an XML column with data in a CLOB host variable. Use */ /* the XMLPARSE function to convert the data to the XML type. */ /******************************************************************/ EXEC SQL UPDATE MYCUSTOMER SET INFO = XMLPARSE(DOCUMENT :clobBuf) where CID = 1000;
Example: The following example shows a COBOL program that inserts data from XML AS BLOB, XML AS CLOB, and CLOB host variables into an XML column. The XML AS BLOB data is inserted as binary data, so the database server honors the internal encoding. The XML AS CLOB and CLOB data is inserted as character data, so the database server honors the external encoding.
****************************** * Host variable declarations * ****************************** 01 XMLBUF USAGE IS SQL TYPE IS XML as CLOB(10K). 01 XMLBLOB USAGE IS SQL TYPE IS XML AS BLOB(10K). 01 CLOBBUF USAGE IS SQL TYPE IS CLOB(10K). ******************************************************************* * Update an XML column with data in an XML AS CLOB host variable * ******************************************************************* EXEC SQL UPDATE MYCUSTOMER SET INFO = :XMLBUF where CID = 1000. ******************************************************************* * Update an XML column with data in an XML AS BLOB host variable * ******************************************************************* EXEC SQL UPDATE MYCUSTOMER SET INFO = :XMLBLOB where CID = 1000. ******************************************************************* * Update an XML column with data in a CLOB host variable. Use * * the XMLPARSE function to convert the data to the XML type. * ******************************************************************* EXEC SQL UPDATE MYCUSTOMER SET INFO = XMLPARSE(DOCUMENT :CLOBBUF) where CID = 1000.
Example: The following example shows a PL/I program that inserts data from XML AS BLOB, XML AS CLOB, and CLOB host variables into an XML column. The XML AS BLOB data is inserted as binary data, so the database server honors the internal encoding. The XML AS CLOB and CLOB data is inserted as character data, so the database server honors the external encoding.
/******************************/ /* Host variable declarations */ /******************************/ DCL XMLBUF SQL TYPE IS XML AS CLOB(10K), XMLBLOB SQL TYPE IS XML AS BLOB(10K), CLOBBUF SQL TYPE IS CLOB(10K); /*******************************************************************/ /* Update an XML column with data in an XML AS CLOB host variable */ /*******************************************************************/ EXEC SQL UPDATE MYCUSTOMER SET INFO = :XMLBUF where CID = 1000; /*******************************************************************/ /* Update an XML column with data in an XML AS BLOB host variable */ /*******************************************************************/ EXEC SQL UPDATE MYCUSTOMER SET INFO = :XMLBLOB where CID = 1000; /*******************************************************************/ /* Update an XML column with data in a CLOB host variable. Use */ /* the XMLPARSE function to convert the data to the XML type. */ /*******************************************************************/ EXEC SQL UPDATE MYCUSTOMER SET INFO = XMLPARSE(DOCUMENT :CLOBBUF) where CID = 1000;
Insertion of rows with XML column values (DB2 Programming for XML) Updates of XML columns (DB2 Programming for XML)
so the database server generates an XML declaration with an internal encoding declaration. That declaration might not be consistent with the external encoding.
********************************************************************** * RETRIEVE XML COLUMN DATA INTO AN XML AS CLOB HOST VARIABLE * ********************************************************************** EXEC SQL SELECT INFO INTO :XMLBUF FROM MYCUSTOMER WHERE CID = 1000 ********************************************************************** * RETRIEVE XML COLUMN DATA INTO AN XML AS BLOB HOST VARIABLE * ********************************************************************** EXEC SQL SELECT INFO INTO :XMLBLOB FROM MYCUSTOMER WHERE CID = 1000 ********************************************************************** * RETRIEVE DATA FROM AN XML COLUMN INTO A CLOB HOST VARIABLE. * * BEFORE SENDING THE DATA TO THE APPLICATION, INVOKE THE * * XMLSERIALIZE FUNCTION TO CONVERT THE DATA FROM THE XML * * TYPE TO THE CLOB TYPE. * ********************************************************************** EXEC SQL SELECT XMLSERIALIZE(INFO AS CLOB(10K)) INTO :CLOBBUF FROM MYCUSTOMER WHERE CID = 1000 ... LTORG ****************************** * HOST VARIABLE DECLARATIONS * ****************************** XMLBUF SQL TYPE IS XML AS CLOB 10K XMLBLOB SQL TYPE IS XML AS BLOB 10K CLOBBUF SQL TYPE IS CLOB 10K
Example: The following example shows a C language program that retrieves data from an XML column into XML AS BLOB, XML AS CLOB, and CLOB host variables. The data that is retrieved into an XML AS BLOB host variable is retrieved as binary data, so the database server generates an XML declaration with UTF-8 encoding. The data that is retrieved into an XML AS CLOB host variable is retrieved as character data, so the database server generates an XML declaration with an internal encoding declaration that is consistent with the external encoding. The data that is retrieved into a CLOB host variable is retrieved as character data, so the database server generates an XML declaration with an internal encoding declaration. That declaration might not be consistent with the external encoding.
/******************************/ /* Host variable declarations */ /******************************/ EXEC SQL BEGIN DECLARE SECTION; SQL TYPE IS XML AS CLOB( 10K ) xmlBuf; SQL TYPE IS XML AS BLOB( 10K ) xmlBlob; SQL TYPE IS CLOB( 10K ) clobBuf; EXEC SQL END DECLARE SECTION; /**********************************************************************/ /* Retrieve data from an XML column into an XML AS CLOB host variable */ /**********************************************************************/ EXEC SQL SELECT INFO INTO :xmlBuf from myTable where CID = 1000; /**********************************************************************/ /* Retrieve data from an XML column into an XML AS BLOB host variable */ /**********************************************************************/ EXEC SQL SELECT INFO INTO :xmlBlob from myTable where CID = 1000;
/**********************************************************************/ /* RETRIEVE DATA FROM AN XML COLUMN INTO A CLOB HOST VARIABLE. */ /* BEFORE SENDING THE DATA TO THE APPLICATION, INVOKE THE */ /* XMLSERIALIZE FUNCTION TO CONVERT THE DATA FROM THE XML */ /* TYPE TO THE CLOB TYPE. */ /**********************************************************************/ EXEC SQL SELECT XMLSERIALIZE(INFO AS CLOB(10K)) INTO :clobBuf from myTable where CID = 1000;
Example: The following example shows a COBOL program that retrieves data from an XML column into XML AS BLOB, XML AS CLOB, and CLOB host variables. The data that is retrieved into an XML AS BLOB host variable is retrieved as binary data, so the database server generates an XML declaration with UTF-8 encoding. The data that is retrieved into an XML AS CLOB host variable is retrieved as character data, so the database server generates an XML declaration with an internal encoding declaration that is consistent with the external encoding. The data that is retrieved into a CLOB host variable is retrieved as character data, so the database server generates an XML declaration with an internal encoding declaration. That declaration might not be consistent with the external encoding.
****************************** * Host variable declarations * ****************************** 01 XMLBUF USAGE IS SQL TYPE IS XML AS CLOB(10K). 01 XMLBLOB USAGE IS SQL TYPE IS XML AS BLOB(10K). 01 CLOBBUF USAGE IS SQL TYPE IS CLOB(10K). ********************************************************************** * Retrieve data from an XML column into an XML AS CLOB host variable * ********************************************************************** EXEC SQL SELECT INFO INTO :XMLBUF FROM MYTABLE WHERE CID = 1000 END-EXEC. ********************************************************************** * Retrieve data from an XML column into an XML AS BLOB host variable * ********************************************************************** EXEC SQL SELECT INFO INTO :XMLBLOB FROM MYTABLE WHERE CID = 1000 END-EXEC. ********************************************************************** * RETRIEVE DATA FROM AN XML COLUMN INTO A CLOB HOST VARIABLE. * * BEFORE SENDING THE DATA TO THE APPLICATION, INVOKE THE * * XMLSERIALIZE FUNCTION TO CONVERT THE DATA FROM THE XML * * TYPE TO THE CLOB TYPE. * ********************************************************************** EXEC SQL SELECT XMLSERIALIZE(INFO AS CLOB(10K)) INTO :CLOBBUF FROM MYTABLE WHERE CID = 1000 END-EXEC.
Example: The following example shows a PL/I program that retrieves data from an XML column into XML AS BLOB, XML AS CLOB, and CLOB host variables. The data that is retrieved into an XML AS BLOB host variable is retrieved as binary data, so the database server generates an XML declaration with UTF-8 encoding. The data that is retrieved into an XML AS CLOB host variable is retrieved as character data, so the database server generates an XML declaration with an internal encoding declaration that is consistent with the external encoding. The data that is retrieved into a CLOB host variable is retrieved as character data, so the database server generates an XML declaration with an internal encoding declaration. That declaration might not be consistent with the external encoding.
/******************************/
/* Host variable declarations */
/******************************/
DCL XMLBUF  SQL TYPE IS XML AS CLOB(10K),
    XMLBLOB SQL TYPE IS XML AS BLOB(10K),
    CLOBBUF SQL TYPE IS CLOB(10K);
/**********************************************************************/
/* Retrieve data from an XML column into an XML AS CLOB host variable */
/**********************************************************************/
EXEC SQL SELECT INFO INTO :XMLBUF
  FROM MYTABLE WHERE CID = 1000;
/**********************************************************************/
/* Retrieve data from an XML column into an XML AS BLOB host variable */
/**********************************************************************/
EXEC SQL SELECT INFO INTO :XMLBLOB
  FROM MYTABLE WHERE CID = 1000;
/**********************************************************************/
/* RETRIEVE DATA FROM AN XML COLUMN INTO A CLOB HOST VARIABLE.        */
/* BEFORE SENDING THE DATA TO THE APPLICATION, INVOKE THE             */
/* XMLSERIALIZE FUNCTION TO CONVERT THE DATA FROM THE XML             */
/* TYPE TO THE CLOB TYPE.                                             */
/**********************************************************************/
EXEC SQL SELECT XMLSERIALIZE(INFO AS CLOB(10K)) INTO :CLOBBUF
  FROM MYTABLE WHERE CID = 1000;
Programming examples
You can write DB2 programs in assembler language, C, C++, COBOL, Fortran, PL/I, or REXX. These programs can access a local or remote DB2 subsystem and can execute static or dynamic SQL statements. This information contains several such programming examples. To prepare and run these applications, use the JCL in DSN910.SDSNSAMP as a model for your JCL.
v An SQL example does not necessarily show the complete syntax of an SQL statement.
v Examples do not take referential constraints into account.
Related concepts:
DB2 sample applications on page 1126
Programming examples in assembler on page 248
Programming examples in C on page 287
Programming examples in COBOL on page 340
Programming examples in PL/I on page 407
Programming examples in REXX on page 424
C and C++ language options to use with the installation verification procedures (DB2 Installation and Migration)
COBOL options to use with the installation verification procedures (DB2 Installation and Migration)
PL/I options to use with the installation verification procedures (DB2 Installation and Migration)
Related reference:
DB2 sample tables (Introduction to DB2 for z/OS)
Procedure
To define the SQL communications area, SQLSTATE, and SQLCODE: Choose one of the following actions:
Option: To define the SQL communications area
Description:
1. Code the SQLCA directly in the program or use the following SQL INCLUDE statement to request a standard SQLCA declaration:
     EXEC SQL INCLUDE SQLCA
   If your program is reentrant, you must include the SQLCA within a unique data area that is acquired for your task (a DSECT). For example, at the beginning of your program, specify the following code:
     PROGAREA DSECT
              EXEC SQL INCLUDE SQLCA
   As an alternative, you can create a separate storage area for the SQLCA and provide addressability to that area.
DB2 sets the SQLCODE and SQLSTATE values in the SQLCA after each SQL statement executes. Your application should check these values to determine whether the last SQL statement was successful.
Option: To declare SQLCODE and SQLSTATE host variables
Description:
1. Declare the SQLCODE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as a fullword integer.
2. Declare the SQLSTATE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as a character string of length 5 (CL5).
Restriction: Do not declare an SQLSTATE variable as an element of a structure.
Requirement: After you declare the SQLCODE and SQLSTATE variables, ensure that all SQL statements in the program are within the scope of the declaration of these variables.
Related tasks: Checking the execution of SQL statements on page 201 Checking the execution of SQL statements by using the SQLCA on page 202 Checking the execution of SQL statements by using SQLCODE and SQLSTATE on page 206 Defining the items that your program can use to check whether an SQL statement executed successfully on page 137
Procedure
To define SQL descriptor areas: Code the SQLDA directly in the program, or use the following SQL INCLUDE statement to request a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA
Restriction: You must place SQLDA declarations before the first SQL statement that references the data descriptor, unless you use the TWOPASS SQL processing option. Related tasks: Defining SQL descriptor areas on page 137
Procedure
To declare host variables, host variable arrays, and host structures:
1. Declare the variables according to the following rules and guidelines:
v You can declare host variables in normal assembler style (DC or DS), depending on the data type and the limitations on that data type. You can specify a value on DC or DS declarations (for example, DC H'5'). The DB2 precompiler examines only packed decimal declarations.
v If you specify the ONEPASS SQL processing option, you must explicitly declare each host variable and each host variable array before using them in an SQL statement. If you specify the TWOPASS precompiler option, you must declare each host variable before using it in the DECLARE CURSOR statement.
v If you specify the STDSQL(YES) SQL processing option, you must precede the host language statements that define the host variables and host variable arrays with the BEGIN DECLARE SECTION statement and follow the host language statements with the END DECLARE SECTION statement. Otherwise, these statements are optional.
v Ensure that any SQL statement that uses a host variable or host variable array is within the scope of the statement that declares that variable or array.
v If you are using the DB2 precompiler, ensure that the names of host variables and host variable arrays are unique within the program, even if the variables and variable arrays are in different blocks, classes, procedures, functions, or subroutines. You can qualify the names with a structure name to make them unique.
2. Optional: Define any associated indicator variables, arrays, and structures.
Related tasks:
Declaring host variables and indicator variables on page 138
The following shows the form for declaring packed decimal host variables:
  variable-name DC | DS P'value' | PLn'value' | PLn
Notes:
1. value is a numeric value that specifies the scale of the packed decimal variable. If value does not include a decimal point, the scale is 0.
For floating-point data types (E, EH, EB, D, DH, and DB), use the FLOAT SQL processing option to specify whether the host variable is in IEEE binary floating-point or z/Architecture hexadecimal floating-point format. If you specify FLOAT(S390), you need to define your floating-point host variables as E, EH, D, or DH. If you specify FLOAT(IEEE), you need to define your floating-point host variables as EB or DB. DB2 does not check if the host variable declarations or format of the host variable contents match the format that you specified with the FLOAT SQL processing option. Therefore, you need to ensure that your floating-point host variable types and contents match the format that you specified with the FLOAT SQL processing option. DB2 converts all floating-point input data to z/Architecture hexadecimal floating-point format before storing it.
Restriction: The FLOAT SQL processing options do not apply to the decimal floating-point host variable types ED, DD, or LD.
For the decimal floating-point host variable types ED, DD, and LD, you can specify the following special values: MIN, MAX, NAN, SNAN, and INFINITY.
v CLOBs
The following diagrams show the syntax for forms other than CLOBs.
The following shows the form for declaring fixed-length character strings:
  variable-name DC | DS CLn
Notes:
1. If you declare a character string host variable without a length (for example, DC C'ABCD'), DB2 interprets the length as 1. To get the correct length, specify a length attribute (for example, DC CL4'ABCD').
The following shows the form for declaring varying-length character strings:
  variable-name DC | DS HL2,CLn
The following shows the form for declaring fixed-length graphic strings:
  variable-name DC | DS GLn'<value>'
The following diagram shows the syntax for declaring varying-length graphic strings:
  variable-name DS | DC HL2'm',GLn'<value>'
The following shows the form for declaring BINARY host variables:
  variable-name DS XLn
Note: 1<=n<=255
The following shows the form for declaring VARBINARY host variables:
  variable-name DS HL2,XLn
Note: 1<=n<=32704
The following shows the form for declaring result set locators:
  variable-name SQL TYPE IS RESULT_SET_LOCATOR VARYING
Note: To be compatible with previous releases, result set locator host variables may be declared as fullword integers (FL4), but the form shown here is the preferred syntax.
Table Locators
The following diagram shows the syntax for declaring table locators.
  variable-name SQL TYPE IS TABLE LIKE table-name AS LOCATOR
LOB host variables, LOB locators, and LOB file reference variables use the following forms:
  variable-name SQL TYPE IS BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB | DBCLOB ( length K|M|G )
  variable-name SQL TYPE IS BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR | BLOB_FILE | CLOB_FILE | DBCLOB_FILE
The following diagram shows the syntax for declaring BLOB, CLOB, and DBCLOB host variables and file reference variables for XML data types.
Note: If you specify the length of the LOB in terms of KB, MB, or GB, do not leave spaces between the length and K, M, or G.
ROWIDs
The following diagram shows the syntax for declaring ROWID host variables.
Related concepts:
Host variables on page 139
Rules for host variables in an SQL statement on page 147
Large objects (LOBs) on page 440
Related tasks:
Determining whether a retrieved value in a host variable is null or truncated on page 150
Inserting a single row by using a host variable on page 154
Inserting null values into columns by using indicator variables or arrays on page 154
Retrieving a single row of data into host variables on page 148
Updating data by using host variables on page 153
Related reference:
Descriptions of SQL processing options on page 959
High Level Assembler (HLASM) and Toolkit Feature Library
Indicator variables in assembler programs are declared as halfwords:
  variable-name DC | DS HL2
Example
The following example shows a FETCH statement with the declarations of the host variables that are needed for the FETCH statement and their associated indicator variables.
         EXEC SQL FETCH CLS_CURSOR INTO :CLSCD,                        X
                        :DAY :DAYIND,                                  X
                        :BGN :BGNIND,                                  X
                        :END :ENDIND

:DAYIND, :BGNIND, and :ENDIND are the indicator variables for DAY, BGN, and END.
Related concepts: Indicator variables, arrays, and structures on page 140 Related tasks: Inserting null values into columns by using indicator variables or arrays on page 154
Table 53. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in assembler programs
Assembler host variable data type | SQLTYPE of host variable | SQLLEN of host variable | SQL data type
DS EL4, DS EHL4, or DS EBL4 | 480 | 4 | REAL or FLOAT(n) 1<=n<=21
DS DL8, DS DHL8, or DS DBL8 | 480 | 8 | DOUBLE PRECISION, or FLOAT(n) 22<=n<=53
short decimal FLOAT: SDFP DC ED, SDFP DC EDL4, or SDFP DC EDL4'11.11' | 996 | 4 | DECFLOAT(16)
long decimal FLOAT: LDFP DC DD, LDFP DC DDL8, or LDFP DC DDL8'22.22' | 996 | 8 | DECFLOAT(16)
extended decimal FLOAT: EDFP DC LD, EDFP DC LDL16, or EDFP DC LDL16'33.33' | 996 | 16 | DECFLOAT(34)
DS FDL8 or DS FD | | 8 | BIGINT
SQL TYPE IS BINARY(n) 1<=n<=255 | | n | BINARY(n)
SQL TYPE IS VARBINARY(n) or SQL TYPE IS BINARY(n) VARYING 1<=n<=32704 | | n | VARBINARY(n)
Table 53. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in assembler programs (continued)
Assembler host variable data type | SQLTYPE of host variable (1) | SQLLEN of host variable | SQL data type
DS CLn 1<=n<=255 | | n | CHAR(n)
DS HL2,CLn 1<=n<=255 | | n | VARCHAR(n)
DS HL2,CLn n>255 | | n | VARCHAR(n)
DS GLm 2<=m<=254 (2) | | n | GRAPHIC(n) (3)
DS HL2,GLm 2<=m<=254 (2) | 464 | n | VARGRAPHIC(n) (3)
DS HL2,GLm m>254 (2) | 472 | n | VARGRAPHIC(n) (3)
SQL TYPE IS RESULT_SET_LOCATOR | 972 | 4 | Result set locator (4)
SQL TYPE IS TABLE LIKE table-name AS LOCATOR | 976 | 4 | Table locator (4)
SQL TYPE IS BLOB_LOCATOR | | 4 | BLOB locator (4)
SQL TYPE IS CLOB_LOCATOR | | 4 | CLOB locator (4)
SQL TYPE IS DBCLOB_LOCATOR | | 4 | DBCLOB locator (4)
SQL TYPE IS BLOB(n) 1<=n<=2147483647 | | n | BLOB(n)
SQL TYPE IS CLOB(n) 1<=n<=2147483647 | 408 | n | CLOB(n)
SQL TYPE IS DBCLOB(n) 1<=n<=1073741823 | 412 | n | DBCLOB(n) (3)
SQL TYPE IS XML AS BLOB(n) | | 0 | XML
SQL TYPE IS XML AS CLOB(n) | | 0 | XML
SQL TYPE IS XML AS DBCLOB(n) | | 0 | XML
SQL TYPE IS BLOB_FILE | | 267 | BLOB file reference (4)
SQL TYPE IS CLOB_FILE | | 267 | CLOB file reference (4)
SQL TYPE IS DBCLOB_FILE | 924/925 | 267 | DBCLOB file reference (4)
SQL TYPE IS XML AS BLOB_FILE | 916/917 | 267 | XML BLOB file reference (4)
SQL TYPE IS XML AS CLOB_FILE | 920/921 | 267 | XML CLOB file reference (4)
SQL TYPE IS XML AS DBCLOB_FILE | 924/925 | 267 | XML DBCLOB file reference (4)
SQL TYPE IS ROWID | 904 | 40 | ROWID
Notes:
1. If a host variable includes an indicator variable, the SQLTYPE value is the base SQLTYPE value plus 1.
2. m is the number of bytes.
3. n is the number of double-byte characters.
4. This data type cannot be used as a column type.
5. To be compatible with previous releases, result set locator host variables may be declared as fullword integers (FL4), but the method shown is the preferred syntax.
The following table shows equivalent assembler host variables for each SQL data type. Use this table to determine the assembler data type for host variables that you define to receive output from the database. For example, if you retrieve TIMESTAMP data, you can define a variable of the form DS CLn. This table shows direct conversions between SQL data types and assembler data types. However, a number of SQL data types are compatible. When you do assignments or comparisons of data that have compatible data types, DB2 converts those compatible data types.
Table 54. Assembler host variable equivalents that you can use when retrieving data of a particular SQL data type
SQL data type | Assembler host variable equivalent | Notes
SMALLINT | DS HL2 |
INTEGER | DS F |
BIGINT | DS FD or DS FDL8 | DS FDL8 requires High Level Assembler (HLASM), Release 4 or later.
Table 54. Assembler host variable equivalents that you can use when retrieving data of a particular SQL data type (continued)
SQL data type | Assembler host variable equivalent | Notes
DECIMAL(p,s) or NUMERIC(p,s) | DS P'value', DS PLn'value', or DS PLn | p is precision; s is scale. 1<=p<=31 and 0<=s<=p. 1<=n<=16. value is a literal value that includes a decimal point. You must use Ln, value, or both. Using only value is recommended. Precision: If you use Ln, it is 2n-1; otherwise, it is the number of digits in value. Scale: If you use value, it is the number of digits to the right of the decimal point; otherwise, it is 0. For efficient use of indexes: Use value. If p is even, do not use Ln and be sure the precision of value is p and the scale of value is s. If p is odd, you can use Ln (although it is not advised), but you must choose n so that 2n-1=p, and value so that the scale is s. Include a decimal point in value, even when the scale of value is 0.
REAL or FLOAT(n) | DS EL4, DS EHL4, or DS EBL4 | 1<=n<=21
DOUBLE PRECISION or FLOAT(n) | DS DL8, DS DHL8, or DS DBL8 | 22<=n<=53
DECFLOAT(16) | DC EDL4 or DC DDL8 |
DECFLOAT(34) | DC LDL16 |
CHAR(n) | DS CLn | 1<=n<=255
VARCHAR(n) | DS HL2,CLn |
GRAPHIC(n) | DS GLm | m is expressed in bytes. n is the number of double-byte characters. 1<=n<=127
VARGRAPHIC(n) | DS HL2,GLx or DS HL2'm',GLx'<value>' | x and m are expressed in bytes. n is the number of double-byte characters. < and > represent shift-out and shift-in characters.
BINARY(n) | | 1<=n<=255
Table 54. Assembler host variable equivalents that you can use when retrieving data of a particular SQL data type (continued)
SQL data type | Assembler host variable equivalent | Notes
VARBINARY(n) | Format 1: variable-name DS HL2,XLn. Format 2: SQL TYPE IS VARBINARY(n) or SQL TYPE IS BINARY(n) VARYING | 1<=n<=32704
DATE | DS CLn | If you are using a date exit routine, n is determined by that routine; otherwise, n must be at least 10.
TIME | DS CLn | If you are using a time exit routine, n is determined by that routine. Otherwise, n must be at least 6; to include seconds, n must be at least 8.
TIMESTAMP | DS CLn | n must be at least 19. To include microseconds, n must be 26; if n is less than 26, truncation occurs on the microseconds part.
Result set locator | DS F | Use this data type only to receive result sets. Do not use this data type as a column type.
Table locator | SQL TYPE IS TABLE LIKE table-name AS LOCATOR | Use this data type only in a user-defined function or stored procedure to receive rows of a transition table. Do not use this data type as a column type.
BLOB locator | SQL TYPE IS BLOB_LOCATOR | Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.
CLOB locator | SQL TYPE IS CLOB_LOCATOR | Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.
DBCLOB locator | SQL TYPE IS DBCLOB_LOCATOR | Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.
BLOB(n) | SQL TYPE IS BLOB(n) | 1<=n<=2147483647
CLOB(n) | SQL TYPE IS CLOB(n) | 1<=n<=2147483647
DBCLOB(n) | SQL TYPE IS DBCLOB(n) | n is the number of double-byte characters. 1<=n<=1073741823
XML | SQL TYPE IS XML AS BLOB(n) | 1<=n<=2147483647
XML | SQL TYPE IS XML AS CLOB(n) | 1<=n<=2147483647
XML | SQL TYPE IS XML AS DBCLOB(n) | n is the number of double-byte characters. 1<=n<=1073741823
Table 54. Assembler host variable equivalents that you can use when retrieving data of a particular SQL data type (continued)
SQL data type | Assembler host variable equivalent | Notes
BLOB file reference | SQL TYPE IS BLOB_FILE | Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.
CLOB file reference | SQL TYPE IS CLOB_FILE | Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.
DBCLOB file reference | SQL TYPE IS DBCLOB_FILE | Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.
XML BLOB file reference | SQL TYPE IS XML AS BLOB_FILE | Use this data type only to manipulate XML data as BLOB files. Do not use this data type as a column type.
XML CLOB file reference | SQL TYPE IS XML AS CLOB_FILE | Use this data type only to manipulate XML data as CLOB files. Do not use this data type as a column type.
XML DBCLOB file reference | SQL TYPE IS XML AS DBCLOB_FILE | Use this data type only to manipulate XML data as DBCLOB files. Do not use this data type as a column type.
ROWID | SQL TYPE IS ROWID |
Notes:
1. Although stored procedures and user-defined functions can use IEEE floating-point host variables, you cannot declare a user-defined function or stored procedure parameter as IEEE.
Related concepts: Compatibility of SQL and language data types on page 144 LOB host variable, LOB locator, and LOB file reference variable declarations on page 741 Host variable data types for XML data in embedded SQL applications on page 216
Comments: You cannot include assembler comments in SQL statements. However, you can include SQL comments in any embedded SQL statement.
Continuation for SQL statements: The line continuation rules for SQL statements are the same as those for assembler statements, except that you must specify EXEC SQL within one line. Any part of the statement that does not fit on one line can appear on subsequent lines, beginning at the continuation margin (column 16, the default). Every line of the statement, except the last, must have a continuation character (a non-blank character) immediately after the right margin in column 72.
Declaring tables and views: Your assembler program should include a DECLARE statement to describe each table and view the program accesses.
Including code: To include SQL statements or assembler host variable declaration statements from a member of a partitioned data set, place the following SQL statement in the source code where you want to include the statements:
EXEC SQL INCLUDE member-name
You cannot nest SQL INCLUDE statements.
Margins: Use the precompiler option MARGINS to set a left margin, a right margin, and a continuation margin. The default values for these margins are columns 1, 71, and 16, respectively. If EXEC SQL starts before the specified left margin, the DB2 precompiler does not recognize the SQL statement. If you use the default margins, you can place an SQL statement anywhere between columns 2 and 71.
Multiple-row FETCH statements: You can use only the FETCH ... USING DESCRIPTOR form of the multiple-row FETCH statement in an assembler program. The DB2 precompiler does not recognize declarations of host variable arrays for an assembler program.
Names: You can use any valid assembler name for a host variable. However, do not use external entry names or access plan names that begin with 'DSN' or host variable names that begin with 'SQL'. These names are reserved for DB2. The first character of a host variable that is used in embedded SQL cannot be an underscore. However, you can use an underscore as the first character in a symbol that is not used in embedded SQL.
Statement labels: You can prefix an SQL statement with a label. The first line of an SQL statement can use a label beginning in the left margin (column 1). If you do not use a label, leave column 1 blank.
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER statement must be a label in the assembler source code and must be within the scope of the SQL statements that WHENEVER affects.
Special assembler considerations: The following considerations apply to programs written in assembler:
v To allow for reentrant programs, the precompiler puts all the variables and structures it generates within a DSECT called SQLDSECT, and it generates an assembler symbol called SQLDLEN. SQLDLEN contains the length of the DSECT. Your program must allocate an area of the size indicated by SQLDLEN, initialize it, and provide addressability to it as the DSECT SQLDSECT. The precompiler does not generate code to allocate the storage for SQLDSECT; the application program must allocate the storage.
CICS: An example of code to support reentrant programs, running under CICS, follows:
DFHEISTG DSECT
DFHEISTG
         EXEC SQL INCLUDE SQLCA
*
         DS    0F
SQDWSREG EQU   R7
SQDWSTOR DS    (SQLDLEN)C          RESERVE STORAGE TO BE USED FOR SQLDSECT
         ...
XXPROGRM DFHEIENT CODEREG=R12,EIBREG=R11,DATAREG=R13
*
*                                  SQL WORKING STORAGE
         LA    SQDWSREG,SQDWSTOR   GET ADDRESS OF SQLDSECT
         USING SQLDSECT,SQDWSREG   AND TELL ASSEMBLER ABOUT IT
*
In this example, the actual storage allocation is done by the DFHEIENT macro.
TSO: The sample program in prefix.SDSNSAMP(DSNTIAD) contains an example of how to acquire storage for the SQLDSECT in a program that runs in a TSO environment. The following example code contains pieces from prefix.SDSNSAMP(DSNTIAD) with explanations in the comments.
DSNTIAD  CSECT                     CONTROL SECTION NAME
         SAVE  (14,12)             ANY SAVE SEQUENCE
         LR    R12,R15             CODE ADDRESSABILITY
         USING DSNTIAD,R12         TELL THE ASSEMBLER
         LR    R7,R1               SAVE THE PARM POINTER
*
*        Allocate storage of size PRGSIZ1+SQLDSIZ, where:
*        - PRGSIZ1 is the size of the DSNTIAD program area
*        - SQLDSIZ is the size of the SQLDSECT, and declared
*          when the DB2 precompiler includes the SQLDSECT
*
         L     R6,PRGSIZ1          GET SPACE FOR USER PROGRAM
         A     R6,SQLDSIZ          GET SPACE FOR SQLDSECT
         GETMAIN R,LV=(6)          GET STORAGE FOR PROGRAM VARIABLES
         LR    R10,R1              POINT TO IT
*
*        Initialize the storage
*
         LR    R2,R10              POINT TO THE FIELD
         LR    R3,R6               GET ITS LENGTH
         SR    R4,R4               CLEAR THE INPUT ADDRESS
         SR    R5,R5               CLEAR THE INPUT LENGTH
         MVCL  R2,R4               CLEAR OUT THE FIELD
*
*        Map the storage for DSNTIAD program area
*
         ST    R13,FOUR(R10)       CHAIN THE SAVEAREA PTRS
         ST    R10,EIGHT(R13)      CHAIN SAVEAREA FORWARD
         LR    R13,R10             POINT TO THE SAVEAREA
         USING PRGAREA1,R13        SET ADDRESSABILITY
*
*        Map the storage for the SQLDSECT
*
         LR    R9,R13              POINT TO THE PROGAREA
         A     R9,PRGSIZ1          THEN PAST TO THE SQLDSECT
         USING SQLDSECT,R9         SET ADDRESSABILITY
         ...
         LTORG
**********************************************************************
*                                                                    *
*        DECLARE VARIABLES, WORK AREAS                               *
*                                                                    *
**********************************************************************
PRGAREA1 DSECT                     WORKING STORAGE FOR THE PROGRAM
         ...
         DS    0D
PRGSIZE1 EQU   *-PRGAREA1          DYNAMIC WORKAREA SIZE
         ...
DSNTIAD  CSECT                     RETURN TO CSECT FOR CONSTANT
PRGSIZ1  DC    A(PRGSIZE1)         SIZE OF PROGRAM WORKING STORAGE
CA       DSECT
         EXEC SQL INCLUDE SQLCA
         ...
v DB2 does not process set symbols in SQL statements.
v Generated code can include more than two continuations per comment.
v Generated code uses literal constants (for example, =F'-84'), so an LTORG statement might be necessary.
v Generated code uses registers 0, 1, 14, and 15. Register 13 points to a save area that the called program uses. Register 15 does not contain a return code after a call that is generated by an SQL statement.
  CICS: A CICS application program uses the DFHEIENT macro to generate the entry point code. When using this macro, consider the following: If you use the default DATAREG in the DFHEIENT macro, register 13 points to the save area. If you use any other DATAREG in the DFHEIENT macro, you must provide addressability to a save area. For example, to use SAVED, you can code instructions to save, load, and restore register 13 around each SQL statement as in the following example.
         ST    13,SAVER13          SAVE REGISTER 13
         LA    13,SAVED            POINT TO SAVE AREA
         EXEC  SQL . . .
         L     13,SAVER13          RESTORE REGISTER 13
v If you have an addressability error in precompiler-generated code because of input or output host variables in an SQL statement, check to make sure that you have enough base registers.
v Do not put CICS translator options in the assembly source code. Instead, pass the options to the translator by using the PARM field.
Handling SQL error return codes in assembler
You can use the subroutine DSNTIAR to convert an SQL return code into a text message. DSNTIAR takes data from the SQLCA, formats it into a message, and places the result in a message output area that you provide in your application program. For concepts and more information about the behavior of DSNTIAR, see Displaying SQLCA fields by calling DSNTIAR on page 203.
You can also use the MESSAGE_TEXT condition item field of the GET DIAGNOSTICS statement to convert an SQL return code into a text message. Programs that require long token message support should code the GET DIAGNOSTICS statement instead of DSNTIAR. For more information about GET DIAGNOSTICS, see Checking the execution of SQL statements by using the GET DIAGNOSTICS statement on page 208.
DSNTIAR syntax:
CALL DSNTIAR,(sqlca, message, lrecl),MF=(E,PARM)
The DSNTIAR parameters have the following meanings:
sqlca
  An SQL communication area.
message
  An output area, defined as a varying-length string, in which DSNTIAR places the message text. The first halfword contains the length of the remaining area; its minimum value is 240. The output lines of text, each line being the length specified in lrecl, are put into this area. For example, you could specify the format of the output area as:
LINES    EQU   10
LRECL    EQU   132
         ...
MSGLRECL DC    AL4(LRECL)
MESSAGE  DS    H,CL(LINES*LRECL)
         DC    AL2(LINES*LRECL)
         DS    CL(LRECL)           text line 1
         DS    CL(LRECL)           text line 2
         ...
         DS    CL(LRECL)           text line n
CALL DSNTIAR,(SQLCA,MESSAGE,MSGLRECL),MF=(E,PARM)
where MESSAGE is the name of the message output area, LINES is the number of lines in the message output area, and LRECL is the length of each line.
lrecl
  A fullword containing the logical record length of output messages, between 72 and 240.
The expression MF=(E,PARM) is a z/OS macro parameter that indicates dynamic execution. PARM is the name of a data area that contains a list of pointers to the call parameters of DSNTIAR.
See DB2 sample applications on page 1126 for instructions on how to access and print the source code for the sample program.
CICS: If your CICS application requires CICS storage handling, you must use the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC,(eib,commarea,sqlca,msg,lrecl),MF=(E,PARM)
DSNTIAC has extra parameters, which you must use for calls to routines that use CICS commands.
eib
  EXEC interface block
For more information on these parameters, see the appropriate application programming guide for CICS. The remaining parameter descriptions are the same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA in the same way. You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you must also define them in the CSD. For an example of CSD entry generation statements for use with DSNTIAC, see member DSN8FRDO in the data set prefix.SDSNSAMP. The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and link-edits DSNTIAC, are also in the data set prefix.SDSNSAMP. Related tasks: Including dynamic SQL in your program on page 158 Embedding SQL statements in your application on page 146 Handling SQL error codes on page 214 Limiting CPU time for dynamic SQL statements by using the resource limit facility on page 199
Procedure
To define the SQL communications area, SQLSTATE, and SQLCODE: Choose one of the following actions:
Option: To define the SQL communications area
Description:
1. Code the SQLCA directly in the program or use the following SQL INCLUDE statement to request a standard SQLCA declaration:
     EXEC SQL INCLUDE SQLCA
   The standard declaration includes both a structure definition and a static data area named 'sqlca'.
DB2 sets the SQLCODE and SQLSTATE values in the SQLCA after each SQL statement executes. Your application should check these values to determine whether the last SQL statement was successful.
Option: To declare SQLCODE and SQLSTATE host variables
Description:
1. Declare the SQLCODE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as a long integer:
     long SQLCODE;
2. Declare the SQLSTATE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as a character array of length 6:
     char SQLSTATE[6];
Restriction: Do not declare an SQLSTATE variable as an element of a structure.
Requirement: After you declare the SQLCODE and SQLSTATE variables, ensure that all SQL statements in the program are within the scope of the declaration of these variables.
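A minimal sketch of these declarations, together with a simple check after an SQL statement, might look like the following fragment. The table MYTABLE, its columns, and the lastname host variable are hypothetical; only the SQLCODE and SQLSTATE declarations follow the rules above.

#include <stdio.h>

EXEC SQL BEGIN DECLARE SECTION;
  long SQLCODE;                 /* set by DB2 after each SQL statement      */
  char SQLSTATE[6];             /* 5-character SQLSTATE plus NUL terminator */
  char lastname[16];            /* hypothetical output host variable        */
EXEC SQL END DECLARE SECTION;

  ...
  EXEC SQL SELECT LASTNAME INTO :lastname
           FROM MYTABLE WHERE ID = 10;

  if (SQLCODE != 0)             /* nonzero SQLCODE: statement not successful */
    printf("SQLCODE=%ld SQLSTATE=%s\n", SQLCODE, SQLSTATE);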
Related tasks: Checking the execution of SQL statements on page 201 Checking the execution of SQL statements by using the SQLCA on page 202 Checking the execution of SQL statements by using SQLCODE and SQLSTATE on page 206 Defining the items that your program can use to check whether an SQL statement executed successfully on page 137
Procedure
To define SQL descriptor areas: Code the SQLDA directly in the program, or use the following SQL INCLUDE statement to request a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA
You can place an SQLDA declaration wherever C allows a structure definition. Normal C scoping rules apply. The standard declaration includes only a structure definition with the name sqlda. Restriction: You must place SQLDA declarations before the first SQL statement that references the data descriptor, unless you use the TWOPASS SQL processing option.
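As a sketch of how such a declaration might be used with dynamic SQL, the following fragment allocates an SQLDA and describes a statement that is assumed to have been prepared under the name STMT. The statement name and the column count of 3 are assumptions; the 16-byte header and 44-byte SQLVAR entry sizes are the standard SQLDA element sizes.

#include <stdlib.h>

EXEC SQL INCLUDE SQLDA;

  struct sqlda *daptr;
  short col_count;

  /* Allocate an SQLDA big enough to describe up to 3 columns:      */
  /* 16 bytes for the header plus 44 bytes per SQLVAR entry.        */
  daptr = (struct sqlda *) malloc(16 + 44 * 3);
  daptr->sqln = 3;                      /* number of SQLVAR entries */

  EXEC SQL DESCRIBE STMT INTO :*daptr;  /* STMT: prepared statement */

  col_count = daptr->sqld;              /* columns in the result    */
  free(daptr);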
Procedure
To declare host variables, host variable arrays, and host structures:
1. Declare the variables according to the following rules and guidelines:
v You can have more than one host variable declaration section in your program.
v You can use class members as host variables. Class members that are used as host variables are accessible to any SQL statement within the class. However, you cannot use class objects as host variables.
v If you specify the ONEPASS SQL processing option, you must explicitly declare each host variable and each host variable array before using them in an SQL statement. If you specify the TWOPASS precompiler option, you must declare each host variable before using it in the DECLARE CURSOR statement.
  Restriction: The DB2 coprocessor for C/C++ supports only the ONEPASS option.
v If you specify the STDSQL(YES) SQL processing option, you must precede the host language statements that define the host variables and host variable arrays with the BEGIN DECLARE SECTION statement and follow the host language statements with the END DECLARE SECTION statement. Otherwise, these statements are optional.
v Ensure that any SQL statement that uses a host variable or host variable array is within the scope of the statement that declares that variable or array.
v If you are using the DB2 precompiler, ensure that the names of host variables and host variable arrays are unique within the program, even if the variables and variable arrays are in different blocks, classes, procedures, functions, or subroutines. You can qualify the names with a structure name to make them unique.
2. Optional: Define any associated indicator variables, arrays, and structures.
Related tasks:
Declaring host variables and indicator variables on page 138
Host variables in C
In C and C++ programs, you can specify numeric, character, graphic, binary, LOB, XML, and ROWID host variables. You can also specify result set, table, and LOB locators and LOB and XML file reference variables.
Restrictions:
v Only some of the valid C declarations are valid host variable declarations. If the declaration for a variable is not valid, any SQL statement that references the variable might result in the message UNDECLARED HOST VARIABLE.
v C supports some data types and storage classes with no SQL equivalents, such as register storage class, typedef, and long long.
v The following locator data types are special SQL data types that do not have C equivalents:
  Result set locator
  Table locator
  LOB locators
  You cannot use them to define column types.
v Although DB2 allows you to use properly formed L-literals in C application programs, DB2 does not check for all the restrictions that the C compiler imposes on the L-literal.
v Do not use L-literals in SQL statements. Use DB2 graphic string constants in SQL statements to work with the L-literal.
Recommendations:
v Be careful of overflow. For example, suppose that you retrieve an INTEGER column value into a short integer host variable, and the column value is larger than 32767. You get an overflow warning or an error, depending on whether you provide an indicator variable.
v Be careful of truncation. Ensure that the host variable that you declare can contain the data and a NUL terminator, if needed. Retrieving a floating-point or decimal column value into a long integer host variable removes any fractional part of the value.
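The following sketch illustrates the overflow case described above. The table MYTABLE and column INTCOL are hypothetical; whether the condition surfaces as a warning or an error depends on whether the indicator variable is supplied, as the recommendation notes.

EXEC SQL BEGIN DECLARE SECTION;
  short smallval;               /* too small for some INTEGER values */
  short smallval_ind;           /* indicator variable for smallval   */
EXEC SQL END DECLARE SECTION;

  /* If the INTCOL value is larger than 32767, DB2 reports an        */
  /* overflow condition for this retrieval.                          */
  EXEC SQL SELECT INTCOL INTO :smallval :smallval_ind
           FROM MYTABLE WHERE ID = 1;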
The following storage class specifiers, type qualifiers, and data types can be used to declare numeric host variables:
  auto | extern | static    const | volatile
  float, double, int, short, sqlint32, long, long long,
  decimal(precision, scale), _Decimal32, _Decimal64, _Decimal128
Note: If you use the pointer notation of the host variable, you must use the DB2 coprocessor.
Restrictions:
v If your C compiler does not have a decimal data type, no exact equivalent exists for the SQL data type DECIMAL. In this case, you can use one of the following variables or techniques to handle decimal values:
  - An integer or floating-point variable, which converts the value. If you use an integer variable, you lose the fractional part of the number. If the decimal number can exceed the maximum value for an integer or if you want to preserve a fractional value, use floating-point variables. Floating-point numbers are approximations of real numbers. Therefore, when you assign a decimal number to a floating-point variable, the result might be different from the original number.
  - A character-string host variable. Use the CHAR function to get a string representation of a decimal number.
  - The DECIMAL function to explicitly convert a value to a decimal data type, as shown in the following example:
long duration=10100;       /* 1 year and 1 month */
char result_dt[11];
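A sketch of how these variables might be used with the DECIMAL function follows. The table TABLE1 and the date column START_DATE are hypothetical; the DECIMAL function converts the integer into a DECIMAL(8,0) value so that it is treated as a date duration.

  EXEC SQL SELECT START_DATE + DECIMAL(:duration,8,0)
           INTO :result_dt
           FROM TABLE1;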
v z/OS 1.10 or above (z/OS V1R10 XL C/C++) is required to use the decimal floating-point host data type.
v The C-only 'complex floating-point' data type is not supported as a host variable type.
v The FLOAT precompiler option does not apply to the decimal floating-point host variable types.
v To use decimal floating-point host variables, you must use the DB2 coprocessor.
For floating-point data types, use the FLOAT SQL processing option to specify whether the host variable is in IEEE binary floating-point or z/Architecture hexadecimal floating-point format. DB2 does not check if the format of the host variable contents matches the format that you specified with the FLOAT SQL processing option. Therefore, you need to ensure that your floating-point host variable contents match the format that you specified with the FLOAT SQL processing option. DB2 converts all floating-point input data to z/Architecture hexadecimal floating-point format before storing it.
The following shows the form for declaring single-character host variables:
  auto | extern | static    const | volatile    [unsigned] char    variable-name | *pointer-name    [= expression] ;
Note: If you use the pointer notation of the host variable, you must use the DB2 coprocessor.
The following diagram shows the syntax for declaring NUL-terminated character host variables.
Notes:
1. If you use the pointer notation of the host variable, you must use the DB2 coprocessor.
2. Any string that is assigned to this variable must be NUL-terminated. Any string that is retrieved from this variable is NUL-terminated.
3. A NUL-terminated character host variable maps to a varying-length character string (except for the NUL).
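For example, a NUL-terminated character host variable for a varying-length character column might be declared and used as in the following sketch. The table MYTABLE, the column LASTNAME, and its assumed maximum length of 15 are hypothetical.

EXEC SQL BEGIN DECLARE SECTION;
  char lastname[16];            /* 15 characters plus the NUL terminator */
EXEC SQL END DECLARE SECTION;

  EXEC SQL SELECT LASTNAME INTO :lastname
           FROM MYTABLE WHERE ID = 7;
  /* lastname now holds a NUL-terminated string */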
The following diagram shows the syntax for declaring varying-length character host variables that use the VARCHAR structured form.
The VARCHAR structured form has the following general layout:
  struct [tag] { short int var-1; char var-2[length]; } variable-name;
Notes:
1. You can use the struct tag to define other variables, but you cannot use them as host variables in SQL.
2. You cannot use var-1 and var-2 as host variables in an SQL statement.
3. If you use the pointer notation of the host variable, you must use the DB2 coprocessor.
Example: The following example code shows valid and invalid declarations of the VARCHAR structured form:
EXEC SQL BEGIN DECLARE SECTION;
/* valid declaration of host variable VARCHAR vstring */
struct VARCHAR {
  short len;
  char s[10];
} vstring;
/* invalid declaration of host variable VARCHAR wstring */
struct VARCHAR wstring;
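As a sketch of how the valid declaration above might be used, the following fragment retrieves a value into vstring; the table MYTABLE and column SHORTNAME are hypothetical.

  EXEC SQL SELECT SHORTNAME INTO :vstring
           FROM MYTABLE WHERE ID = 3;

  /* DB2 sets vstring.len to the actual length of the retrieved    */
  /* value and places the (not NUL-terminated) data in vstring.s.  */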
For NUL-terminated string host variables, use the SQL processing options PADNTSTR and NOPADNTSTR to specify whether the variable should be padded with blanks. The option that you specify determines where the NUL-terminator is placed. If you assign a string of length n to a NUL-terminated string host variable, the variable has one of the values that is shown in the following table.
Table 55. Value of a NUL-terminated string host variable that is assigned a string of length n
Length of the NUL-terminated string host variable | Value of the variable
Less than or equal to n | The source string up to a length of n-1 and a NUL at the end of the string.(1) DB2 sets SQLWARN[1] to W and any indicator variable that you provide to the original length of the source string.
Equal to n+1 | The source string and a NUL at the end of the string.(1)
Greater than n+1 and the source is a fixed-length string | If PADNTSTR is in effect: the source string, blanks to pad the value, and a NUL at the end of the string. If NOPADNTSTR is in effect: the source string and a NUL at the end of the string.
Greater than n+1 and the source is a varying-length string | The source string and a NUL at the end of the string.(1)
Note:
1. In these cases, whether NOPADNTSTR or PADNTSTR is in effect is irrelevant.
Restriction: If you use the DB2 precompiler, you cannot use a host variable that is of the NUL-terminated form in either a PREPARE or DESCRIBE statement. However, if you use the DB2 coprocessor, you can use host variables of the NUL-terminated form in PREPARE, DESCRIBE, and EXECUTE IMMEDIATE statements.
v Use the sqldbchar data type that is defined in the typedef statement in one of the following files or libraries:
  - SQL library, sql.h
  - DB2 CLI library, sqlcli.h
  - SQLUDF file in data set DSN910.SDSNC.H
v Use the C data type unsigned short.
Using sqldbchar or unsigned short enables you to manipulate DBCS and Unicode UTF-16 data in the same format in which it is stored in DB2. Using sqldbchar also makes applications easier to port to other platforms.
The following diagrams show the syntax for forms other than DBCLOBs.
The following diagram shows the syntax for declaring single-graphic host variables.
  auto | extern | static    const | volatile    sqldbchar    variable-name | *pointer-name    [= expression] ;
Notes:
1. You cannot use array notation in variable-name.
2. The single-graphic form declares a fixed-length graphic string of length 1.
The following diagram shows the syntax for declaring NUL-terminated graphic host variables.
Notes:
1. If you use the pointer notation of the host variable, you must use the DB2 coprocessor.
2. length must be a decimal integer constant greater than 1 and not greater than 16352.
3. Any string that is assigned to this variable must be NUL-terminated. Any string that is retrieved from this variable is NUL-terminated.
4. The NUL-terminated graphic form does not accept single-byte characters for the variable.
The following diagram shows the syntax for declaring graphic host variables that use the VARGRAPHIC structured form.
The VARGRAPHIC structured form has the following general layout:
  struct [tag] { short int var-1; sqldbchar var-2[length]; } variable-name;
Notes:
1. You can use the struct tag to define other variables, but you cannot use them as host variables in SQL.
2. var-1 must be less than or equal to length.
3. You cannot use var-1 or var-2 as host variables in an SQL statement.
4. length must be a decimal integer constant greater than 1 and not greater than 16352.
5. If you use the pointer notation of the host variable, you must use the DB2 coprocessor.
Example: The following example shows valid and invalid declarations of graphic host variables that use the VARGRAPHIC structured form:
EXEC SQL BEGIN DECLARE SECTION;
/* valid declaration of host variable structured vgraph */
struct VARGRAPH {
  short len;
  sqldbchar d[10];
} vgraph;
/* invalid declaration of host variable structured wgraph */
struct VARGRAPH wgraph;
The following shows the form for declaring BINARY host variables:
  auto | extern | static    const | volatile    SQL TYPE IS BINARY ( length )    variable-name ;
The following shows the form for declaring VARBINARY host variables:
  auto | extern | static    const | volatile    SQL TYPE IS VARBINARY ( length )    variable-name ;
Note: For VARBINARY host variables, the length must be in the range from 1 to 32704.
The C language does not have variables that correspond to the SQL binary data types BINARY and VARBINARY. To create host variables that can be used with these data types, use the SQL TYPE IS clause. The SQL precompiler replaces this declaration with the C language structure in the output source member. When you reference a BINARY or VARBINARY host variable in an SQL statement, you must use the variable that you specify in the SQL TYPE declaration. When you reference the host variable in a host language statement, you must use the variable that DB2 generates. Examples of binary variable declarations: The following table shows examples of variables that DB2 generates when you declare binary host variables.
Table 56. Examples of BINARY and VARBINARY variable declarations for C
Variable declaration that you include in your C program | Corresponding variable that DB2 generates in the output source member
SQL TYPE IS BINARY(10) bin_var; | char bin_var[10];
SQL TYPE IS VARBINARY(10) vbin_var; | struct { short length; char data[10]; } vbin_var;
Recommendation: Be careful when you use binary host variables with C and C++. The SQL TYPE declaration for BINARY and VARBINARY does not account for the NUL-terminator that C expects, because binary strings are not NUL-terminated strings. Also, the binary host variable might contain zeroes at any point in the string.
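The following sketch shows one way such a VARBINARY host variable might be used as input. The table MYTABLE and its VARBINARY column BDATA are hypothetical; the length and data members are the fields of the structure that DB2 generates, as shown in Table 56.

EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS VARBINARY(10) vbin_var;   /* expands to a length/data structure */
EXEC SQL END DECLARE SECTION;

  /* Fill in the generated structure before using it as input. */
  vbin_var.length  = 3;
  vbin_var.data[0] = 0x01;
  vbin_var.data[1] = 0x02;
  vbin_var.data[2] = 0x03;

  EXEC SQL UPDATE MYTABLE SET BDATA = :vbin_var WHERE ID = 5;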
The following shows the form for declaring result set locators:
  auto | extern | static | register    const | volatile    SQL TYPE IS RESULT_SET_LOCATOR VARYING    variable-name ;
Table locators
The following diagram shows the syntax for declaring table locators.
  auto | extern | static | register    const | volatile    SQL TYPE IS TABLE LIKE table-name AS LOCATOR    variable-name ;
LOB host variables, LOB locators, and LOB file reference variables use the following forms:
  SQL TYPE IS BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB | DBCLOB ( length K|M|G )    variable-name ;
  SQL TYPE IS BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR | BLOB_FILE | CLOB_FILE | DBCLOB_FILE    variable-name ;
Note: Specify the initial value as a series of expressions. For example, specify ={expression, expression}. For BLOB_FILE, CLOB_FILE, and DBCLOB_FILE, specify ={name_length, data_length, file_option_map, file_name}.
XML host variables and XML file reference variables use the following forms:
  SQL TYPE IS XML AS BLOB | CLOB | DBCLOB ( length K|M|G )    const | volatile    variable-name | *pointer-name ;
  SQL TYPE IS XML AS BLOB_FILE | CLOB_FILE | DBCLOB_FILE    const | volatile    variable-name | *pointer-name ;
Note: Specify the initial value as a series of expressions. For example, specify ={expression, expression}. For BLOB_FILE, CLOB_FILE, and DBCLOB_FILE, specify ={name_length, data_length, file_option_map, file_name}.
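As a short sketch of the LOB and XML declaration forms above, the following declare section shows one variable of each kind; the variable names and sizes are arbitrary choices, not fixed requirements.

EXEC SQL BEGIN DECLARE SECTION;
  SQL TYPE IS CLOB(128K) resume_text;      /* CLOB value host variable      */
  SQL TYPE IS CLOB_LOCATOR resume_loc;     /* locator for a CLOB value      */
  SQL TYPE IS XML AS CLOB(10K) order_doc;  /* XML value retrieved as a CLOB */
EXEC SQL END DECLARE SECTION;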
Constants
The syntax for constants in C and C++ programs differs from the syntax for constants in SQL statements in the following ways:
v C/C++ uses various forms for numeric literals (possible suffixes are: ll, LL, u, U, f, F, l, L, df, DF, dd, DD, dl, DL, d, D). For example, in C/C++:
  - 4850976 is a decimal literal
  - 0x4bD is a hexadecimal integer literal
  - 03245 is an octal integer literal
  - 3.2E+4 is a double floating-point literal
  - 3.2E+4f is a float floating-point literal
  - 3.2E+4l is a long double floating-point literal
  - 0x4bDP+4 is a double hexadecimal floating-point literal
  - 22.2df is a _Decimal32 decimal floating-point literal
  - 0.00D is a fixed-point decimal literal (z/OS only when LANGLVL(EXTENDED) is specified)
v Use C/C++ literal form only outside of SQL statements. Within SQL statements, use numeric constants.
v In C, character constants and string constants can use escape sequences. You cannot use the escape sequences in SQL statements.
v Apostrophes and quotation marks have different meanings in C and SQL. In C, you can use double quotation marks to delimit string constants, and apostrophes to delimit character constants.
  Example of the use of quotation marks in C:
printf( "%d lines read. \n", num_lines);
In SQL, you can use double quotation marks to delimit identifiers and apostrophes to delimit string constants. Example of the use of quotation marks in SQL:
SELECT "COL#1" FROM TBL1;
v Character data in SQL is distinct from integer data. Character data in C is a subtype of integer data.
Related concepts:
Host variables on page 139
Rules for host variables in an SQL statement on page 147
Large objects (LOBs) on page 440
Related tasks:
Determining whether a retrieved value in a host variable is null or truncated on page 150
Inserting a single row by using a host variable on page 154
Inserting null values into columns by using indicator variables or arrays on page 154
Retrieving a single row of data into host variables on page 148
Retrieving a single row of data into a host structure on page 157
Updating data by using host variables on page 153
Related reference:
Descriptions of SQL processing options on page 959
  varying-length character arrays
  varying-length graphic arrays
  LOB arrays
In addition, the #pragma pack(1) directive cannot be in effect if you plan to use these arrays in multiple-row statements.
The following shows the form for declaring numeric host variable arrays:
  auto | extern | static    const | volatile    [unsigned]
  float, double, int, long, short, long long,
  decimal(precision, scale), _Decimal32, _Decimal64, _Decimal128
  variable-name[dimension] ;
Example: The following example shows a declaration of a numeric host variable array:
EXEC SQL BEGIN DECLARE SECTION;
/* declaration of numeric host variable array */
long serial_num[10];
...
EXEC SQL END DECLARE SECTION;
The following diagram shows the syntax for declaring NUL-terminated character host variable arrays.
Notes:
1. dimension must be an integer constant between 1 and 32767.
2. Any string that is assigned to this variable must be NUL-terminated. Any string that is retrieved from this variable is NUL-terminated.
3. The strings in a NUL-terminated character host variable array map to varying-length character strings (except for the NUL).
The following diagram shows the syntax for declaring varying-length character host variable arrays that use the VARCHAR structured form.
The VARCHAR structured form for host variable arrays has the following general layout:
  struct [tag] { short int var-1; char var-2[length]; } variable-name[dimension];
Notes:
1. You can use the struct tag to define other variables, but you cannot use them as host variable arrays in SQL.
2. var-1 must be a scalar numeric variable.
3. var-2 must be a scalar CHAR array variable.
4. dimension must be an integer constant between 1 and 32767.
Example: The following example shows valid and invalid declarations of VARCHAR host variable arrays.
EXEC SQL BEGIN DECLARE SECTION;
/* valid declaration of VARCHAR host variable array */
struct VARCHAR {
  short len;
  char s[18];
} name[10];
/* invalid declaration of VARCHAR host variable array */
struct VARCHAR name[10];
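A sketch of how such an array might be used in a multiple-row FETCH follows. The table MYTABLE and column NAME are hypothetical; the rowset cursor fetches up to 10 rows at a time into the name array declared above.

  EXEC SQL DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
           SELECT NAME FROM MYTABLE;

  EXEC SQL OPEN C1;

  /* Fetch up to 10 rows at a time into the host variable array.    */
  EXEC SQL FETCH NEXT ROWSET FROM C1 FOR 10 ROWS INTO :name;

  /* The third SQLERRD field of the SQLCA reports how many rows     */
  /* were actually fetched.                                         */
  EXEC SQL CLOSE C1;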
BINARY and VARBINARY host variable arrays use the following form:
  SQL TYPE IS BINARY | VARBINARY ( length )    variable-name[dimension] ;
v Use the sqldbchar data type that is defined in the typedef statement in the header files that are supplied by DB2. v Use the C data type unsigned short. The following diagram shows the syntax for declaring NUL-terminated graphic host variable arrays.
Notes:
1. dimension must be an integer constant between 1 and 32767.
2. length must be a decimal integer constant greater than 1 and not greater than 16352.
3. Any string that is assigned to this variable must be NUL-terminated. Any string that is retrieved from this variable is NUL-terminated.
4. Do not assign single-byte characters into a NUL-terminated graphic host variable array.
The following diagram shows the syntax for declaring graphic host variable arrays that use the VARGRAPHIC structured form.
The VARGRAPHIC structured form for host variable arrays has the following general layout:
  struct [tag] { short int var-1; sqldbchar var-2[length]; } variable-name[dimension];
Notes:
1. You can use the struct tag to define other variables, but you cannot use them as host variable arrays in SQL.
2. var-1 must be a scalar numeric variable.
3. var-2 must be a scalar char array variable.
4. length must be a decimal integer constant greater than 1 and not greater than 16352.
5. dimension must be an integer constant between 1 and 32767.
Example: The following example shows valid and invalid declarations of graphic host variable arrays that use the VARGRAPHIC structured form.
EXEC SQL BEGIN DECLARE SECTION;
/* valid declaration of host variable array vgraph */
struct VARGRAPH {
  short len;
  sqldbchar d[10];
} vgraph[20];
/* invalid declaration of host variable array vgraph */
struct VARGRAPH vgraph[20];
LOB host variable arrays, LOB locator arrays, and LOB file reference arrays use the following forms:
  SQL TYPE IS BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB | DBCLOB ( length K|M|G )    variable-name[dimension] ;
  SQL TYPE IS BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR | BLOB_FILE | CLOB_FILE | DBCLOB_FILE    variable-name[dimension] ;
XML host variable arrays and XML file reference arrays use the following form:
  SQL TYPE IS XML AS BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB | DBCLOB | BLOB_FILE | CLOB_FILE | DBCLOB_FILE ( length K|M|G )    variable-name[dimension] ;
The following shows the form for declaring ROWID host variable arrays:
  auto | extern | static | register    const | volatile    SQL TYPE IS ROWID    variable-name [ dimension ] ;
Related concepts: Host variable arrays in an SQL statement on page 155 Host variable arrays on page 139 Large objects (LOBs) on page 440 Related tasks: Inserting multiple rows of data from host variable arrays on page 157 Retrieving multiple rows of data into host variable arrays on page 156
Host structures in C
A C host structure contains an ordered group of data fields.
Host structures
The following diagram shows the syntax for declaring host structures.
A host structure declaration has the following general layout:
  struct [tag] {
    member declarations
  } variable-name;
Each member can have one of the following data types: float; double; int; short; sqlint32; long int; long long; decimal(precision, scale); _Decimal32; _Decimal64; _Decimal128; a varchar structure; a vargraphic structure; a binary structure; SQL TYPE IS ROWID; a LOB data type; char var-2[length] (optionally unsigned); or sqldbchar var-5[length]. A member declaration can include an initial value (=expression).
VARCHAR structures
The following diagram shows the syntax for VARCHAR structures that are used within declarations of host structures.
VARGRAPHIC structures
The following diagram shows the syntax for VARGRAPHIC structures that are used within declarations of host structures.
Binary structures
The following diagram shows the syntax for binary structures that are used within declarations of host structures:
  SQL TYPE IS BINARY | VARBINARY ( length )
LOB data types that are used within declarations of host structures take the following forms:
  SQL TYPE IS BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB | DBCLOB ( length K|M|G )
  SQL TYPE IS BLOB_LOCATOR | CLOB_LOCATOR | DBCLOB_LOCATOR
  SQL TYPE IS BLOB_FILE | CLOB_FILE | DBCLOB_FILE
Example
In the following example, the host structure is named target, and it contains the fields c1, c2, and c3. c1 and c3 are character arrays, and c2 is a host variable that is equivalent to the SQL VARCHAR data type. The target host structure can be part of another host structure but must be the deepest level of the nested structure.
struct { char c1[3];
         struct { short len;
                  char data[5];
                } c2;
         char c3[2];
       } target;
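As a sketch of how the target structure might be referenced as a single host structure, the following statement retrieves three columns into c1, c2, and c3 in order; the table MYTABLE and its column names are hypothetical.

  EXEC SQL SELECT COL1, COL2, COL3
           INTO :target
           FROM MYTABLE
           WHERE ID = 1;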
The following shows the form for declaring an indicator variable in C and C++:
  auto | extern | static    const | volatile    [signed] short int    variable-name    [= expression] ;
The following diagram shows the syntax for declaring an indicator array or a host structure indicator array in C and C++.
Example
The following example shows a FETCH statement with the declarations of the host variables that are needed for the FETCH statement and their associated indicator variables.
EXEC SQL FETCH CLS_CURSOR INTO :ClsCd, :Day :DayInd, :Bgn :BgnInd, :End :EndInd;
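The declarations that such a FETCH statement relies on might look like the following sketch; the variable types and lengths shown here are illustrative assumptions, not the ones used by the original sample.

EXEC SQL BEGIN DECLARE SECTION;
  char  ClsCd[8];               /* class code                          */
  char  Day[4];                 /* day value                           */
  char  Bgn[9];                 /* begin time                          */
  char  End[9];                 /* end time                            */
  short DayInd, BgnInd, EndInd; /* indicator variables; a negative     */
                                /* value after the FETCH means that    */
                                /* the corresponding column was null   */
EXEC SQL END DECLARE SECTION;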
Related concepts: Indicator variables, arrays, and structures on page 140 Related tasks: Inserting null values into columns by using indicator variables or arrays on page 154
Procedure
To reference pointer host variables in C and C++ programs: Specify the pointer host variable exactly as it was declared. The only exception is when you reference pointers to nul-terminated character arrays. In this case, you do not have to include the parentheses that were part of the declaration. Examples of scalar pointer host variable references:
Table 57. Example references to scalar pointer host variables
Declaration | Description | Reference
short *hvshortp; | hvshortp is a pointer host variable that points to two bytes of storage. | EXEC SQL SET :*hvshortp = 123;
double *hvdoubp; | hvdoubp is a pointer host variable that points to eight bytes of storage. |
char (*hvcharpn) [20]; | hvcharpn is a pointer host variable that points to a nul-terminated character array of up to 20 bytes. |
Example of a bounded character pointer host variable reference: Suppose that your program declares the following bounded character pointer host variable:
struct { unsigned long len; char * data; } hvbcharp;
The following example references this bounded character pointer host variable:
hvbcharp.len = dynlen;                              a
hvbcharp.data = (char *) malloc (hvbcharp.len);     b
EXEC SQL SET :hvbcharp = data buffer with length;   c
Note:
a. dynlen can be either a compile time constant or a variable with a value that is assigned at run time.
b. Storage is dynamically allocated for hvbcharp.data.
c. The SQL statement references the name of the structure, not an element within the structure.
Example of a structure array host variable reference: Suppose that your program declares the following pointer to the structure tbl_struct:
struct tbl_struct *ptr_tbl_struct = (struct tbl_struct *) malloc (sizeof (struct tbl_struct) * n);
To reference this data in SQL statements, use the pointer as shown in the following example. Assume that tbl_sel_cur is a declared cursor.
for (L_col_cnt = 0; L_col_cnt < n; L_col_cnt++)
{
  ...
  EXEC SQL FETCH tbl_sel_cur INTO :ptr_tbl_struct[L_col_cnt];
  ...
}
Procedure
To declare pointer host variables in C and C++ programs: Include an asterisk (*) in each variable declaration to indicate that the variable is a pointer. Restrictions:
v You cannot use pointer host variables that point to character data of an unknown length. For example, do not specify the following declaration: char * hvcharpu. Instead, specify the length of the data by using a bounded character pointer host variable. A bounded character pointer host variable is a host variable that is declared as a structure with the following elements:
  - A 4-byte field that contains the length of the storage area.
  - A pointer to the non-numeric dynamic storage area.
v You cannot use untyped pointers. For example, do not specify the following declaration: void * untypedprt.
Examples of scalar pointer host variable declarations:
Table 59. Example declarations of scalar pointer host variables
Declaration | Description
short *hvshortp; | hvshortp is a pointer host variable that points to two bytes of storage.
double *hvdoubp; | hvdoubp is a pointer host variable that points to eight bytes of storage.
char (*hvcharpn) [20]; | hvcharpn is a pointer host variable that points to a nul-terminated character array of up to 20 bytes.
Example of a bounded character pointer host variable declaration: The following example code declares a bounded character pointer host variable called hvbcharp with two elements: len and data.
struct { unsigned long len; char * data; } hvbcharp;
Example of a structure array host variable declaration: The following example code declares a table structure called tbl_struct.
struct tbl_struct {
  char colname[20];
  short int colno;
  short int coltype;
  short int collen;
};
The following example code declares a pointer to the structure tbl_struct. Storage is allocated dynamically for up to n rows.
struct tbl_struct *ptr_tbl_struct = (struct tbl_struct *) malloc (sizeof (struct tbl_struct) * n);
Table 61. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in C programs (continued)

C host variable data type | SQL data type
decimal | DECIMAL(p,s) (see note 2); SQLLEN is p in byte 1, s in byte 2
_Decimal32 | DECFLOAT(16) (see notes 7, 8)
_Decimal64 | DECFLOAT(16) (see note 8)
_Decimal128 | DECFLOAT(34) (see note 8)
float | FLOAT (single precision)
double | FLOAT (double precision)
SQL TYPE IS BINARY(n), 1<=n<=255 | BINARY(n)
SQL TYPE IS VARBINARY(n), 1<=n<=32704 | VARBINARY(n)
Single-character form | CHAR(1)
NUL-terminated character form | VARCHAR(n)
VARCHAR structured form, 1<=n<=255 | VARCHAR(n)
VARCHAR structured form, n>255 | VARCHAR(n)
Single-graphic form | GRAPHIC(1)
NUL-terminated graphic form | VARGRAPHIC(n)
VARGRAPHIC structured form, 1<=n<128 | VARGRAPHIC(n)
VARGRAPHIC structured form, n>127 | VARGRAPHIC(n)
SQL TYPE IS RESULT_SET_LOCATOR | Result set locator (see note 3)
SQL TYPE IS TABLE LIKE table-name AS LOCATOR | Table locator (see note 3)
SQL TYPE IS BLOB_LOCATOR | BLOB locator (see note 3)
SQL TYPE IS CLOB_LOCATOR | CLOB locator (see note 3)
SQL TYPE IS DBCLOB_LOCATOR | DBCLOB locator (see note 3)
SQL TYPE IS BLOB(n), 1<=n<=2147483647 | BLOB(n)
SQL TYPE IS CLOB(n), 1<=n<=2147483647 | CLOB(n)
SQL TYPE IS DBCLOB(n), 1<=n<=1073741823 | DBCLOB(n) (see note 4)
SQL TYPE IS XML AS BLOB(n) | XML
SQL TYPE IS XML AS CLOB(n) | XML
SQL TYPE IS XML AS DBCLOB(n) | XML
SQL TYPE IS BLOB_FILE | BLOB file reference (see note 3)
SQL TYPE IS CLOB_FILE | CLOB file reference (see note 3)
SQL TYPE IS DBCLOB_FILE | DBCLOB file reference (see note 3)
SQL TYPE IS XML AS BLOB_FILE | XML BLOB file reference (see note 3)
SQL TYPE IS XML AS CLOB_FILE | XML CLOB file reference (see note 3)
SQL TYPE IS XML AS DBCLOB_FILE | XML DBCLOB file reference (see note 3)
SQL TYPE IS ROWID | ROWID

Notes:
1. If a host variable includes an indicator variable, the SQLTYPE value is the base SQLTYPE value plus 1.
2. p is the precision; in SQL terminology, this is the total number of digits. In C, this is called the size. s is the scale; in SQL terminology, this is the number of digits to the right of the decimal point. In C, this is called the precision. C++ does not support the decimal data type.
3. Do not use this data type as a column type.
4. n is the number of double-byte characters.
5. No exact equivalent. Use DECIMAL(19,0).
6. Starting in Version 9, the C data type long no longer maps to the SQL data type DEC(19,0). Instead, the C data type long maps to the SQL data type BIGINT. This new mapping applies if the application is precompiled again.
7. A DFP host variable with a length of 4 is supported, but a DFP column can be defined only with a length of 8 (DECFLOAT(16)) or 16 (DECFLOAT(34)).
8. To use the decimal floating-point host data type, you must do the following:
   v Use z/OS 1.10 or above (z/OS V1R10 XL C/C++).
   v Compile with the C/C++ compiler option DFP.
   v Specify the SQL compiler option to enable the DB2 coprocessor.
   v Specify the C/C++ compiler option ARCH(7). It is required by the DFP compiler option if the DFP type is used in the source.
   v Specify the 'DEFINE(__STDC_WANT_DEC_FP__)' compiler option, because DFP is not officially part of the C/C++ Language Standard.
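As a minimal sketch of what such a declaration can look like, the following fragment declares decimal floating-point host variables and retrieves a value into one of them. The table and column names (ACCOUNTS, BALANCE, ACCT_ID) are hypothetical, and the fragment assumes the compiler options listed in note 8.

EXEC SQL BEGIN DECLARE SECTION;
  _Decimal64  hv_balance;       /* 8-byte DFP host variable                  */
  _Decimal128 hv_big_amount;    /* 16-byte DFP host variable                 */
  char        hv_acct_id[7];    /* NUL-terminated form for a CHAR(6) column  */
EXEC SQL END DECLARE SECTION;
...
EXEC SQL SELECT BALANCE
          INTO :hv_balance
          FROM ACCOUNTS
          WHERE ACCT_ID = :hv_acct_id;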
The following table shows equivalent C host variables for each SQL data type. Use this table to determine the C data type for host variables that you define to receive output from the database. For example, if you retrieve TIMESTAMP data, you can define a variable of the NUL-terminated character form or the VARCHAR structured form. This table shows direct conversions between SQL data types and C data types. However, a number of SQL data types are compatible. When you do assignments or comparisons of data that have compatible data types, DB2 converts those compatible data types.
Table 62. C host variable equivalents that you can use when retrieving data of a particular SQL data type

SQL data type | C host variable equivalent | Notes
SMALLINT | short int |
INTEGER | long int |
DECIMAL(p,s) or NUMERIC(p,s) | decimal | You can use the double data type if your C compiler does not have a decimal data type; however, double is not an exact equivalent.
FLOAT (single precision) | float | 1<=n<=21
FLOAT (double precision) | double | 22<=n<=53
DECFLOAT(16) | _Decimal32 |
DECFLOAT(34) | _Decimal128 |
BIGINT | long long, long long int, and sqlint64 |
BINARY(n) | SQL TYPE IS BINARY(n) | 1<=n<=255. If the data can contain character NULs (\0), certain C and C++ library functions might not handle the data correctly. Ensure that your application handles the data properly.
VARBINARY(n) | SQL TYPE IS VARBINARY(n) | 1<=n<=32704
CHAR(1) | single-character form |
CHAR(n) | no exact equivalent; use the NUL-terminated character form or the VARCHAR structured form | If n>1, use the NUL-terminated character form.
VARCHAR(n) | NUL-terminated character form | If the data can contain character NULs (\0), use the VARCHAR structured form. Allow at least n+1 to accommodate the NUL-terminator.
VARCHAR(n) | VARCHAR structured form |
GRAPHIC(1) | single-graphic form |
GRAPHIC(n) | no exact equivalent; use the NUL-terminated graphic form | If n>1, use the NUL-terminated graphic form. n is the number of double-byte characters.
VARGRAPHIC(n) | NUL-terminated graphic form | If the data can contain graphic NUL values (\0\0), use the VARGRAPHIC structured form. Allow at least n+1 to accommodate the NUL-terminator. n is the number of double-byte characters.
VARGRAPHIC(n) | VARGRAPHIC structured form | n is the number of double-byte characters.
DATE | NUL-terminated character form | If you are using a date exit routine, that routine determines the length. Otherwise, allow at least 11 characters to accommodate the NUL-terminator.
DATE | VARCHAR structured form | If you are using a date exit routine, that routine determines the length. Otherwise, allow at least 10 characters.
TIME | NUL-terminated character form | If you are using a time exit routine, the length is determined by that routine. Otherwise, the length must be at least 7; to include seconds, the length must be at least 9 to accommodate the NUL-terminator.
TIME | VARCHAR structured form | If you are using a time exit routine, the length is determined by that routine. Otherwise, the length must be at least 6; to include seconds, the length must be at least 8.
TIMESTAMP | NUL-terminated character form | The length must be at least 20. To include microseconds, the length must be 27. If the length is less than 27, truncation occurs on the microseconds part.
TIMESTAMP | VARCHAR structured form | The length must be at least 19. To include microseconds, the length must be 26. If the length is less than 26, truncation occurs on the microseconds part.
Result set locator | SQL TYPE IS RESULT_SET_LOCATOR | Use this data type only for receiving result sets. Do not use this data type as a column type.
Table locator | SQL TYPE IS TABLE LIKE table-name AS LOCATOR | Use this data type only in a user-defined function or stored procedure to receive rows of a transition table. Do not use this data type as a column type.
BLOB locator | SQL TYPE IS BLOB_LOCATOR | Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.
CLOB locator | SQL TYPE IS CLOB_LOCATOR | Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.
DBCLOB locator | SQL TYPE IS DBCLOB_LOCATOR | Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.
BLOB(n) | SQL TYPE IS BLOB(n) | 1<=n<=2147483647
CLOB(n) | SQL TYPE IS CLOB(n) | 1<=n<=2147483647
DBCLOB(n) | SQL TYPE IS DBCLOB(n) | n is the number of double-byte characters. 1<=n<=1073741823
XML | SQL TYPE IS XML AS BLOB(n) | 1<=n<=2147483647
XML | SQL TYPE IS XML AS CLOB(n) | 1<=n<=2147483647
XML | SQL TYPE IS XML AS DBCLOB(n) | n is the number of double-byte characters. 1<=n<=1073741823
BLOB file reference | SQL TYPE IS BLOB_FILE | Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.
CLOB file reference | SQL TYPE IS CLOB_FILE | Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.
DBCLOB file reference | SQL TYPE IS DBCLOB_FILE | Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.
XML BLOB file reference | SQL TYPE IS XML AS BLOB_FILE | Use this data type only to manipulate XML data as BLOB files. Do not use this data type as a column type.
XML CLOB file reference | SQL TYPE IS XML AS CLOB_FILE | Use this data type only to manipulate XML data as CLOB files. Do not use this data type as a column type.
XML DBCLOB file reference | SQL TYPE IS XML AS DBCLOB_FILE | Use this data type only to manipulate XML data as DBCLOB files. Do not use this data type as a column type.
ROWID | SQL TYPE IS ROWID |
Related concepts: Compatibility of SQL and language data types on page 144 LOB host variable, LOB locator, and LOB file reference variable declarations on page 741 Host variable data types for XML data in embedded SQL applications on page 216
Comments: You can include C comments (/* ... */) within SQL statements wherever you can use a blank, except between the keywords EXEC and SQL. You can use single-line comments (starting with //) in C language statements, but not in embedded SQL. You can use SQL comments within embedded SQL statements. You cannot nest comments.
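For example, a C comment can annotate an embedded SQL statement. The following sketch reuses the EMP table from the sample program later in this chapter; :lname is assumed to be a host variable that is declared elsewhere.

EXEC SQL
  SELECT LASTNAME              /* C comment: column to retrieve      */
    INTO :lname                /* host variable declared elsewhere   */
    FROM EMP
    WHERE EMPNO = '000010';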
To include EBCDIC DBCS characters in comments, you must delimit the characters by a shift-out and shift-in control character; the first shift-in character in the DBCS string signals the end of the DBCS string.

Continuation for SQL statements: You can use a backslash to continue a character-string constant or delimited identifier on the following line. However, EBCDIC DBCS string constants cannot be continued on a second line.

Declaring tables and views: Your C program should use the DECLARE TABLE statement to describe each table and view the program accesses. You can use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE statements. For more information, see DCLGEN (declarations generator) on page 125.

Including SQL statements and variable declarations in source code that is to be processed by the DB2 precompiler: To include SQL statements or C host variable declarations from a member of a partitioned data set, add the following SQL statement to the source code where you want to include the statements:
EXEC SQL INCLUDE member-name;
You cannot nest SQL INCLUDE statements. Do not use C #include statements to include SQL statements or C host variable declarations.

Margins: Code SQL statements in columns 1 through 72, unless you specify other margins to the DB2 precompiler. If EXEC SQL is not within the specified margins, the DB2 precompiler does not recognize the SQL statement. The margin rules do not apply to the DB2 coprocessor. The DB2 coprocessor allows variable-length source input.

Names: You can use any valid C name for a host variable, subject to the following restrictions:
v Do not use DBCS characters.
v Do not use external entry names or access plan names that begin with 'DSN', and do not use host variable names or macro names that begin with 'SQL' (in any combination of uppercase or lowercase letters). These names are reserved for DB2.

Nulls and NULs: C and SQL differ in the way they use the word null. The C language has a null character (NUL), a null pointer (NULL), and a null statement (just a semicolon). The C NUL is a single character that compares equal to 0. The C NULL is a special reserved pointer value that does not point to any valid data object. The SQL null value is a special value that is distinct from all non-null values and denotes the absence of a (nonnull) value. In this information, NUL (or NUL-terminator) refers to the null character in C and C++, and NULL refers to the SQL null value. (An example follows these notes.)

Sequence numbers: The DB2 precompiler generates statements without sequence numbers. (The DB2 coprocessor does not perform this action, because the source is read and modified by the compiler.)

Statement labels: You can precede SQL statements with a label.

Trigraph characters: Some characters from the C character set are not available on all keyboards. You can enter these characters into a C source program by using a sequence of three characters that is called a trigraph. The trigraph characters that DB2 supports are the same as those that the C compiler supports.
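The following sketch shows one way to distinguish an SQL null value from ordinary character data by using an indicator variable; it reuses the EMP table and its PHONENO CHAR(4) column from the sample program later in this chapter.

EXEC SQL BEGIN DECLARE SECTION;
  char  hv_phone[5];      /* NUL-terminated form for the CHAR(4) PHONENO column   */
  short hv_phone_ind;     /* indicator variable: a negative value means SQL NULL  */
EXEC SQL END DECLARE SECTION;

EXEC SQL SELECT PHONENO
          INTO :hv_phone :hv_phone_ind
          FROM EMP
          WHERE EMPNO = '000010';

if (hv_phone_ind < 0)
  printf("PHONENO is the SQL null value\n");   /* no value was returned for the column       */
else
  printf("PHONENO = %s\n", hv_phone);          /* hv_phone holds a NUL-terminated C string   */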
WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER statement must be within the scope of any SQL statements that the WHENEVER statement affects.

Special C/C++ considerations:
v Using the C/370 multi-tasking facility, in which multiple tasks execute SQL statements, causes unpredictable results.
v Except for the DB2 coprocessor, you must run the DB2 precompiler before running the C preprocessor.
v Except for the DB2 coprocessor, the DB2 precompiler does not support C preprocessor directives.
v If you use conditional compiler directives that contain C code, either place them after the first C token in your application program, or include them in the C program by using the #include preprocessor directive.
Refer to the appropriate C documentation for more information about C preprocessor directives.

To use the decimal floating-point host data type, you must do the following:
v Use z/OS 1.10 or above (z/OS V1R10 XL C/C++).
v Compile with the C/C++ compiler option DFP.
v Specify the SQL compiler option to enable the DB2 coprocessor.
v Specify the C/C++ compiler option ARCH(7). It is required by the DFP compiler option if the DFP type is used in the source.
v Specify the 'DEFINE(__STDC_WANT_DEC_FP__)' compiler option.

Handling SQL error return codes in C or C++

You can use the subroutine DSNTIAR to convert an SQL return code into a text message. DSNTIAR takes data from the SQLCA, formats it into a message, and places the result in a message output area that you provide in your application program. For concepts and more information about the behavior of DSNTIAR, see Displaying SQLCA fields by calling DSNTIAR on page 203.

You can also use the MESSAGE_TEXT condition item field of the GET DIAGNOSTICS statement to convert an SQL return code into a text message. Programs that require long token message support should code the GET DIAGNOSTICS statement instead of DSNTIAR. For more information about GET DIAGNOSTICS, see Checking the execution of SQL statements by using the GET DIAGNOSTICS statement on page 208.

DSNTIAR syntax:

rc = DSNTIAR(&sqlca, &message, &lrecl);

The DSNTIAR parameters have the following meanings:

&sqlca
   An SQL communication area.

&message
   An output area, in VARCHAR format, in which DSNTIAR places the message text. The first halfword contains the length of the remaining area; its minimum value is 240.
The output lines of text, each line being the length specified in &lrecl, are put into this area. For example, you could specify the format of the output area as:
#define data_len 132
#define data_dim 10
int length_of_line = data_len;
struct error_struct {
   short int error_len;
   char      error_text[data_dim][data_len];
} error_message = {data_dim * data_len};
   .
   .
   .
rc = DSNTIAR(&sqlca, &error_message, &length_of_line);
where error_message is the name of the message output area, data_dim is the number of lines in the message output area, and data_len is the length of each line. &lrecl A fullword containing the logical record length of output messages, between 72 and 240. To inform your compiler that DSNTIAR is an assembler language program, include one of the following statements in your application. For C, include:
#pragma linkage (DSNTIAR,OS)
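After DSNTIAR returns, the message lines in the output area can be printed. The following is a minimal sketch that assumes the error_struct declaration shown above and prints every line of the area; each line is a fixed-length field rather than a NUL-terminated string.

int i;
for (i = 0; i < data_dim; i++)
   printf("%.*s\n", data_len, error_message.error_text[i]);   /* print one fixed-length message line */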
Examples of calling DSNTIAR from an application appear in the DB2 sample C program DSN8BD3 and in the sample C++ program DSN8BE3. Both are in the library DSN8910.SDSNSAMP. See DB2 sample applications on page 1126 for instructions on how to access and print the source code for the sample programs. CICS: If your CICS application requires CICS storage handling, you must use the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
rc = DSNTIAC(&eib, &commarea, &sqlca, &message, &lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that use CICS commands.

&eib
   EXEC interface block

&commarea
   communication area

For more information about these parameters, see the appropriate application programming guide for CICS. The remaining parameter descriptions are the same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA in the same way.

You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you must also define them in the CSD. For an example of CSD entry generation statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP. Related concepts: Host variable arrays in an SQL statement on page 155 Related tasks: Including dynamic SQL in your program on page 158 Embedding SQL statements in your application on page 146 Handling SQL error codes on page 214 Limiting CPU time for dynamic SQL statements by using the resource limit facility on page 199
Programming examples in C
You can write DB2 programs in C and C++. These programs can access a local or remote DB2 subsystem and can execute static or dynamic SQL statements. This information contains several such programming examples. To prepare and run these applications, use the JCL in DSN910.SDSNSAMP as a model for your JCL. Related reference: Programming examples on page 226
/**********************************************************************/
/* External references:                                               */
/*    symbolic label/name = EMP                                       */
/*    description         = arbitrary table                           */
/* Output:                                                            */
/*    symbolic label/name = SYSPRINT                                  */
/*    description         = print results via printf                  */
/* Exit-normal = return code 0, normal completion                     */
/* Exit-error  = return code                                          */
/* Abend codes = none                                                 */
/* SQLCA       = none                                                 */
/**********************************************************************/
/* Logic specification: */ /* */ /* There are four SQL sections. */ /* */ /* 1) STATIC SQL 1: using static cursor with a SELECT statement. */ /* Two output host variables. */ /* 2) Dynamic SQL 2: Fixed-list SELECT, using same SELECT statement */ /* used in SQL 1 to show the difference. The prepared string */ /* :iptstr can be assigned with other dynamic-able SQL statements. */ /* 3) Dynamic SQL 3: Insert with parameter markers. */ /* Using four parameter markers which represent four input host */ /* variables within a host structure. */ /* 4) Dynamic SQL 4: EXECUTE IMMEDIATE */ /* A GRANT statement is executed immediately by passing it to DB2 */ /* via a varying string host variable. The example shows how to */ /* set up the host variable before passing it. */ /* */ /**********************************************************************/ #include "stdio.h" #include "stdefs.h" EXEC SQL INCLUDE SQLCA; EXEC SQL INCLUDE SQLDA; EXEC SQL BEGIN DECLARE SECTION; short edlevel; struct { short len; char x1[56]; } stmtbf1, stmtbf2, inpstr; struct { short len; char x1[15]; } lname; short hv1; struct { char deptno[4]; struct { short len; char x[36]; } deptname; char mgrno[7]; char admrdept[4]; } hv2; short ind[4]; EXEC SQL END DECLARE SECTION; EXEC SQL DECLARE EMP TABLE (EMPNO CHAR(6) FIRSTNAME VARCHAR(12) MIDINIT CHAR(1) LASTNAME VARCHAR(15)
, , , ,
WORKDEPT PHONENO HIREDATE JOBCODE EDLEVEL SEX BIRTHDATE SALARY FORFNAME FORMNAME FORLNAME FORADDR
CHAR(3) CHAR(4) DECIMAL(6) DECIMAL(3) SMALLINT CHAR(1) DECIMAL(6) DECIMAL(8,2) VARGRAPHIC(12) GRAPHIC(1) VARGRAPHIC(15) VARGRAPHIC(256) )
, , , , , , , , , , , ;
EXEC SQL DECLARE DEPT TABLE ( DEPTNO CHAR(3) , DEPTNAME VARCHAR(36) , MGRNO CHAR(6) , ADMRDEPT CHAR(3) ); main () { printf("??/n*** begin of program ***"); EXEC SQL WHENEVER SQLERROR GO TO HANDLERR; EXEC SQL WHENEVER SQLWARNING GO TO HANDWARN; EXEC SQL WHENEVER NOT FOUND GO TO NOTFOUND; /******************************************************************/ /* Assign values to host variables which will be input to DB2 */ /******************************************************************/ strcpy(hv2.deptno,"M92"); strcpy(hv2.deptname.x,"DDL"); hv2.deptname.len = strlen(hv2.deptname.x); strcpy(hv2.mgrno,"123456"); strcpy(hv2.admrdept,"abc"); /******************************************************************/ /* Static SQL 1: DECLARE CURSOR, OPEN, FETCH, CLOSE */ /* Select into :edlevel, :lname */ /******************************************************************/ printf("??/n*** begin declare ***"); EXEC SQL DECLARE C1 CURSOR FOR SELECT EDLEVEL, LASTNAME FROM EMP WHERE EMPNO = 000010; printf("??/n*** begin open ***"); EXEC SQL OPEN C1; printf("??/n*** begin fetch EXEC SQL FETCH C1 INTO :edlevel, :lname; printf("??/n*** returned values printf("??/n??/nedlevel = printf("??/nlname = ***"); ***");
printf("??/n*** begin close ***"); EXEC SQL CLOSE C1; /******************************************************************/ /* Dynamic SQL 2: PREPARE, DECLARE CURSOR, OPEN, FETCH, CLOSE */ /* Select into :edlevel, :lname */ /******************************************************************/ sprintf (inpstr.x1, "SELECT EDLEVEL, LASTNAME FROM EMP WHERE EMPNO = 000010"); inpstr.len = strlen(inpstr.x1); printf("??/n*** begin prepare ***"); EXEC SQL PREPARE STAT1 FROM :inpstr; printf("??/n*** begin declare ***"); EXEC SQL DECLARE C2 CURSOR FOR STAT1; printf("??/n*** begin open ***"); EXEC SQL OPEN C2; printf("??/n*** begin fetch EXEC SQL FETCH C2 INTO :edlevel, :lname; printf("??/n*** returned values ***"); ***");
printf("??/n??/nedlevel = printf("??/nlname = printf("??/n*** begin close ***"); EXEC SQL CLOSE C2; /******************************************************************/ /* Dynamic SQL 3: PREPARE with parameter markers */ /* Insert into with four values. */ /******************************************************************/ sprintf (stmtbf1.x1, "INSERT INTO DEPT VALUES (?,?,?,?)"); stmtbf1.len = strlen(stmtbf1.x1); printf("??/n*** begin prepare ***"); EXEC SQL PREPARE s1 FROM :stmtbf1; printf("??/n*** begin execute ***"); EXEC SQL EXECUTE s1 USING :hv2:ind; printf("??/n*** following are expected insert results ***"); printf("??/n hv2.deptno = printf("??/n hv2.deptname.len = printf("??/n hv2.deptname.x = printf("??/n hv2.mgrno = printf("??/n hv2.admrdept = EXEC SQL COMMIT; /******************************************************************/ /* Dynamic SQL 4: EXECUTE IMMEDIATE */ /* Grant select */ /******************************************************************/ sprintf (stmtbf2.x1, "GRANT SELECT ON EMP TO USERX"); stmtbf2.len = strlen(stmtbf2.x1); printf("??/n*** begin execute immediate ***"); EXEC SQL EXECUTE IMMEDIATE :stmtbf2; printf("??/n*** end of program ***"); goto progend; HANDWARN: HANDLERR: NOTFOUND: ; printf("??/n SQLCODE = printf("??/n SQLWARN0 = printf("??/n SQLWARN1 = printf("??/n SQLWARN2 = printf("??/n SQLWARN3 = printf("??/n SQLWARN4 = printf("??/n SQLWARN5 = printf("??/n SQLWARN6 = printf("??/n SQLWARN7 = progend: ; }
/* Declare variables that are not SQL-related. */ /************************************************************/ short int i; /* Loop counter */ /************************************************************/ /* Declare the following: */ /* - Parameters used to call stored procedure GETPRML */ /* - An SQLDA for DESCRIBE PROCEDURE */ /* - An SQLDA for DESCRIBE CURSOR */ /* - Result set variable locators for up to three result */ /* sets */ /************************************************************/ EXEC SQL BEGIN DECLARE SECTION; char procnm[19]; /* INPUT parm -- PROCEDURE name */ char schema[9]; /* INPUT parm -- Users schema */ long int out_code; /* OUTPUT -- SQLCODE from the */ /* SELECT operation. */ struct { short int parmlen; char parmtxt[254]; } parmlst; /* OUTPUT -- RUNOPTS values */ /* for the matching row in */ /* catalog table SYSROUTINES */ struct indicators { short int procnm_ind; short int schema_ind; short int out_code_ind; short int parmlst_ind; } parmind; /* Indicator variable structure */ struct sqlda *proc_da; /* SQLDA for DESCRIBE PROCEDURE */ struct sqlda *res_da; /* SQLDA for DESCRIBE CURSOR static volatile SQL TYPE IS RESULT_SET_LOCATOR *loc1, *loc2, *loc3; /* Locator variables EXEC SQL END DECLARE SECTION; */ */
/*************************************************************/ /* Allocate the SQLDAs to be used for DESCRIBE */ /* PROCEDURE and DESCRIBE CURSOR. Assume that at most */ /* three cursors are returned and that each result set */ /* has no more than five columns. */ /*************************************************************/ proc_da = (struct sqlda *)malloc(SQLDASIZE(3)); res_da = (struct sqlda *)malloc(SQLDASIZE(5)); /************************************************************/ /* Call the GETPRML stored procedure to retrieve the */ /* RUNOPTS values for the stored procedure. In this */ /* example, we request the PARMLIST definition for the */ /* stored procedure named DSN8EP2. */ /* */ /* The call should complete with SQLCODE +466 because */ /* GETPRML returns result sets. */ /************************************************************/ strcpy(procnm,"dsn8ep2 "); /* Input parameter -- PROCEDURE to be found */ strcpy(schema," "); /* Input parameter -- Schema name for proc */ parmind.procnm_ind=0; parmind.schema_ind=0; parmind.out_code_ind=0; /* Indicate that none of the input parameters */ /* have null values */ parmind.parmlst_ind=-1;
/* The parmlst parameter is an output parm.              */
/* Mark the PARMLST parameter as null, so the DB2        */
/* requester does not have to send the entire PARMLST    */
/* variable to the server. This helps reduce network     */
/* I/O time, because PARMLST is fairly large.            */
EXEC SQL CALL GETPRML(:procnm INDICATOR :parmind.procnm_ind, :schema INDICATOR :parmind.schema_ind, :out_code INDICATOR :parmind.out_code_ind, :parmlst INDICATOR :parmind.parmlst_ind); if(SQLCODE!=+466) /* If SQL CALL failed, { /* print the SQLCODE and any /* message tokens printf("SQL CALL failed due to SQLCODE = printf("sqlca.sqlerrmc = "); for(i=0;i<sqlca.sqlerrml;i++) printf("i]); printf("\n"); }
*/ */ */
else /* If the CALL worked, */ if(out_code!=0) /* Did GETPRML hit an error? */ printf("GETPRML failed due to RC = /**********************************************************/ /* If everything worked, do the following: */ /* - Print out the parameters returned. */ /* - Retrieve the result sets returned. */ /**********************************************************/ else { printf("RUNOPTS = /* Print out the runopts list */ /********************************************************/ /* Use the statement DESCRIBE PROCEDURE to */ /* return information about the result sets in the */ /* SQLDA pointed to by proc_da: */ /* - SQLD contains the number of result sets that were */ /* returned by the stored procedure. */ /* - Each SQLVAR entry has the following information */ /* about a result set: */ /* - SQLNAME contains the name of the cursor that */ /* the stored procedure uses to return the result */ /* set. */ /* - SQLIND contains an estimate of the number of */ /* rows in the result set. */ /* - SQLDATA contains the result locator value for */ /* the result set. */ /********************************************************/ EXEC SQL DESCRIBE PROCEDURE INTO :*proc_da; /********************************************************/ /* Assume that you have examined SQLD and determined */ /* that there is one result set. Use the statement */ /* ASSOCIATE LOCATORS to establish a result set locator */ /* for the result set. */ /********************************************************/ EXEC SQL ASSOCIATE LOCATORS (:loc1) WITH PROCEDURE GETPRML; /********************************************************/ /* Use the statement ALLOCATE CURSOR to associate a */ /* cursor for the result set. */ /********************************************************/ EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1; /********************************************************/ /* Use the statement DESCRIBE CURSOR to determine the */
/* columns in the result set. */ /********************************************************/ EXEC SQL DESCRIBE CURSOR C1 INTO :*res_da; /********************************************************/ /* Call a routine (not shown here) to do the following: */ /* - Allocate a buffer for data and indicator values */ /* fetched from the result table. */ /* - Update the SQLDATA and SQLIND fields in each */ /* SQLVAR of *res_da with the addresses at which to */ /* to put the fetched data and values of indicator */ /* variables. */ /********************************************************/ alloc_outbuff(res_da); /********************************************************/ /* Fetch the data from the result table. */ /********************************************************/ while(SQLCODE==0) EXEC SQL FETCH C1 USING DESCRIPTOR :*res_da; } return; }
The following example is a C stored procedure with linkage convention GENERAL WITH NULLS.
#pragma runopts(plist(os)) #include <stdlib.h> EXEC SQL INCLUDE SQLCA; /***************************************************************/ /* Declare C variables used for SQL operations on the */ /* parameters. These are local variables to the C program, */ /* which you must copy to and from the parameter list provided */ /* to the stored procedure. */ /***************************************************************/ EXEC SQL BEGIN DECLARE SECTION; char PROCNM[19]; char SCHEMA[9]; char PARMLST[255]; struct INDICATORS { short int PROCNM_IND; short int SCHEMA_IND; short int OUT_CODE_IND; short int PARMLST_IND; } PARM_IND;
EXEC SQL END DECLARE SECTION; /***************************************************************/ /* Declare cursors for returning result sets to the caller. */ /***************************************************************/ EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:SCHEMA; main(argc,argv) int argc; char *argv[]; { /********************************************************/ /* Copy the input parameters into the area reserved in */ /* the local program for SQL processing. */ /********************************************************/ strcpy(PROCNM, argv[1]); strcpy(SCHEMA, argv[2]); /********************************************************/ /* Copy null indicator values for the parameter list. */ /********************************************************/ memcpy(&PARM_IND,(struct INDICATORS *) argv[5], sizeof(PARM_IND)); /********************************************************/ /* If any input parameter is NULL, return an error */ /* return code and assign a NULL value to PARMLST. */ /********************************************************/ if (PARM_IND.PROCNM_IND<0 || PARM_IND.SCHEMA_IND<0 || { *(int *) argv[3] = 9999; /* set output return code */ PARM_IND.OUT_CODE_IND = 0; /* value is not NULL */ PARM_IND.PARMLST_IND = -1; /* PARMLST is NULL */ } else { /********************************************************/ /* If the input parameters are not NULL, issue the SQL */ /* SELECT against the SYSIBM.SYSROUTINES catalog */ /* table. */ /********************************************************/ strcpy(PARMLST, ""); /* Clear PARMLST */ EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.SYSROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA; /********************************************************/ /* Copy SQLCODE to the output parameter list. */ /********************************************************/ *(int *) argv[3] = SQLCODE; PARM_IND.OUT_CODE_IND = 0; /* OUT_CODE is not NULL */ } /********************************************************/ /* Copy the RUNOPTS value back to the output parameter */ /* area. */ /********************************************************/ strcpy(argv[4], PARMLST); /********************************************************/ /* Copy the null indicators back to the output parameter*/ /* area. */ /********************************************************/
memcpy((struct INDICATORS *) argv[5],&PARM_IND, sizeof(PARM_IND)); /********************************************************/ /* Open cursor C1 to cause DB2 to return a result set */ /* to the caller. */ /********************************************************/ EXEC SQL OPEN C1; }
Procedure
To define the SQL communications area, SQLSTATE, and SQLCODE: Choose one of the following actions:
1. Code the SQLCA directly in the program or use the following SQL INCLUDE statement to request a standard SQLCA declaration:

   EXEC SQL INCLUDE SQLCA

   You can specify INCLUDE SQLCA or a declaration for SQLCODE wherever you can specify a 77 level or a record description entry in the WORKING-STORAGE SECTION. DB2 sets the SQLCODE and SQLSTATE values in the SQLCA after each SQL statement executes. Your application should check these values to determine whether the last SQL statement was successful.
1. Declare the SQLCODE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as PIC S9(9) BINARY, PIC S9(9) COMP-4, PIC S9(9) COMP-5, or PICTURE S9(9) COMP. When you use the DB2 precompiler, you can declare a stand-alone SQLCODE variable in either the WORKING-STORAGE SECTION or LINKAGE SECTION. When you use the DB2 coprocessor, you can declare a stand-alone SQLCODE variable in the WORKING-STORAGE SECTION, LINKAGE SECTION or LOCAL-STORAGE SECTION. 2. Declare the SQLSTATE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as PICTURE X(5). Restriction: Do not declare an SQLSTATE variable as an element of a structure. Requirement: After you declare the SQLCODE and SQLSTATE variables, ensure that all SQL statements in the program are within the scope of the declaration of these variables.
Related tasks: Checking the execution of SQL statements on page 201 Checking the execution of SQL statements by using the SQLCA on page 202 Checking the execution of SQL statements by using SQLCODE and SQLSTATE on page 206 Defining the items that your program can use to check whether an SQL statement executed successfully on page 137
Procedure
To define SQL descriptor areas: Perform one of the following actions: v Code the SQLDA declarations directly in your program. When you use the DB2 precompiler, you must place SQLDA declarations in the WORKING-STORAGE SECTION or LINKAGE SECTION of your program, wherever you can specify a record description entry in that section. When you use the DB2 coprocessor, you must place SQLDA declarations in the WORKING-STORAGE SECTION, LINKAGE SECTION or LOCAL-STORAGE SECTION of your program, wherever you can specify a record description entry in that section. v Call a subroutine that is written in C, PL/I, or assembler language and that uses the INCLUDE SQLDA statement to define the SQLDA. The subroutine can also include SQL statements for any dynamic SQL functions that you need. Restrictions: v You must place SQLDA declarations before the first SQL statement that references the data descriptor, unless you use the TWOPASS SQL processing option. v You cannot use the SQL INCLUDE statement for the SQLDA, because it is not supported in COBOL. Related tasks: Defining SQL descriptor areas on page 137
Procedure
To declare host variables, host variable arrays, and host structures: 1. Declare the variables according to the following rules and guidelines: v You must explicitly declare all host variables and host variable arrays that are used in SQL statements in the WORKING-STORAGE SECTION or LINKAGE SECTION of your program's DATA DIVISION.
v You must explicitly declare each host variable and host variable array before using them in an SQL statement.
v You can specify OCCURS when defining an indicator structure, a host variable array, or an indicator variable array. You cannot specify OCCURS for any other type of host variable.
v You cannot implicitly declare any host variables through default typing or by using the IMPLICIT statement.
v If you specify the ONEPASS SQL processing option, you must explicitly declare each host variable and each host variable array before using them in an SQL statement. If you specify the TWOPASS precompiler option, you must declare each host variable before using it in the DECLARE CURSOR statement.
v If you specify the STDSQL(YES) SQL processing option, you must precede the host language statements that define the host variables and host variable arrays with the BEGIN DECLARE SECTION statement and follow the host language statements with the END DECLARE SECTION statement. Otherwise, these statements are optional.
v Ensure that any SQL statement that uses a host variable or host variable array is within the scope of the statement that declares that variable or array.
v If you are using the DB2 precompiler, ensure that the names of host variables and host variable arrays are unique within the program, even if the variables and variable arrays are in different blocks, classes, procedures, functions, or subroutines. You can qualify the names with a structure name to make them unique.
2. Optional: Define any associated indicator variables, arrays, and structures.
Related tasks: Declaring host variables and indicator variables on page 138
v Be careful of overflow. For example, suppose that you retrieve an INTEGER column value into a PICTURE S9(4) host variable and the column value is larger than 32767 or smaller than -32768. You get an overflow warning or an error, depending on whether you specify an indicator variable.
v Be careful of truncation. For example, if you retrieve an 80-character CHAR column value into a PICTURE X(70) host variable, the rightmost 10 characters of the retrieved string are truncated. Retrieving a double precision floating-point or decimal column value into a PIC S9(8) COMP host variable removes any fractional part of the value. Similarly, retrieving a column value with DECIMAL data type into a COBOL decimal variable with a lower precision might truncate the value.
v If your varying-length string host variables receive values whose length is greater than 9999 bytes, compile the applications in which you use those host variables with the option TRUNC(BIN). TRUNC(BIN) lets the length field for the string receive a value of up to 32767 bytes.
(2) 01 77 (1) level-1 variable-name IS USAGE COMPUTATIONAL-2 COMP-2 . IS VALUE numeric-constant COMPUTATIONAL-1 COMP-1 (3)
Notes: 1 2 3 level-1 indicates a COBOL level between 2 and 48. COMPUTATIONAL-1 and COMP-1 are equivalent. COMPUTATIONAL-2 and COMP-2 are equivalent.
The following diagram shows the syntax for declaring integer and small integer host variables.
01 77 (1) level-1 variable-name PICTURE PIC
IS S9(4) S9999 S9(9) S999999999 S9(18) (4) . IS VALUE (3) COMPUTATIONAL-5 COMP-5 COMPUTATIONAL COMP numeric-constant IS USAGE
Notes: 1 2 3 4 level-1 indicates a COBOL level between 2 and 48. The COBOL binary integer data types BINARY, COMPUTATIONAL, COMP, COMPUTATIONAL-4, and COMP-4 are equivalent. COMPUTATIONAL-5 (and COMP-5) are equivalent to the other COBOL binary integer data types if you compile the other data types with TRUNC(BIN). Any specification for scale is ignored.
The following diagram shows the syntax for declaring decimal host variables.
IS 01 77 (1) level-1 (3) PACKED-DECIMAL COMPUTATIONAL-3 COMP-3 IS DISPLAY NATIONAL SIGN LEADING SEPARATE CHARACTER variable-name PICTURE PIC picture-string
(2) IS USAGE
. IS VALUE numeric-constant
Notes:
1. level-1 indicates a COBOL level between 2 and 48.
2. The picture-string that is associated with SIGN LEADING SEPARATE must have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9, or S9...9V with i instances of 9).
3. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The picture-string that is associated with these types must have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9) or S9(i)V.
In COBOL, you declare the SMALLINT and INTEGER data types as a number of decimal digits. DB2 uses the full size of the integers (in a way that is similar to processing with the TRUNC(BIN) compiler option) and can place larger values in the host variable than would be allowed in the specified number of digits in the COBOL declaration. If you compile with TRUNC(OPT) or TRUNC(STD), ensure that the size of numbers in your application is within the declared number of digits. For small integers that can exceed 9999, use S9(4) COMP-5 or compile with TRUNC(BIN). For large integers that can exceed 999 999 999, use S9(10) COMP-3 to obtain the decimal data type. If you use COBOL for integers that exceed the COBOL PICTURE, specify the column as decimal to ensure that the data types match and perform well. If you are using a COBOL compiler that does not support decimal numbers of more than 18 digits, use one of the following data types to hold values of greater than 18 digits: v A decimal variable with a precision less than or equal to 18, if the actual data values fit. If you retrieve a decimal value into a decimal variable with a scale that is less than the source column in the database, the fractional part of the value might be truncated. v An integer or a floating-point variable, which converts the value. If you use an integer variable, you lose the fractional part of the number. If the decimal number might exceed the maximum value for an integer or if you want to preserve a fractional value, use a floating-point variable. Floating-point numbers
are approximations of real numbers. Therefore, when you assign a decimal number to a floating-point variable, the result might be different from the original number. v A character-string host variable. Use the CHAR function to retrieve a decimal value into it. Restriction: The SQL data type DECFLOAT has no equivalent in COBOL.
(2)
Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. The picture-string that is associated with these forms must be X(m) (or XX...X, with m instances of X), where m is up to COBOL's limitation. However, the maximum length of the CHAR data type (fixed-length character string) in DB2 is 255 bytes.
The following diagrams show the syntax for declaring varying-length character host variables.
01 (1) level-1
variable-name
(1) 49 var-1
IS S9(4) S9999
(3) IS USAGE
. IS VALUE numeric-constant
Notes: 1 2 3 You cannot use an intervening REDEFINE at level 49. You cannot directly reference var-1 as a host variable. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with TRUNC(STD) recognizes values up to only 9999. This behavior can cause data truncation errors when COBOL statements execute and might effectively limit the maximum length of variable-length character strings to 9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
(1) 49 var-2
IS picture-string
IS VALUE character-constant
Notes:
1. You cannot use an intervening REDEFINE at level 49.
2. You cannot directly reference var-2 as a host variable.
3. For fixed-length strings, the picture-string must be X(m) (or XX...X, with m instances of X), where m is up to COBOL's limitation. However, the maximum length of the VARCHAR data type in DB2 varies depending on the data page size.
The following diagram shows the syntax for declaring fixed-length graphic host variables.
(2)
. IS VALUE graphic-constant
IS USAGE
Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. For fixed-length strings, the picture-string is G(m) or N(m) (or, m instances of GG...G or NN...N), where m is up to COBOL's limitation. However, the maximum length of the GRAPHIC data type (fixed-length graphic string) in DB2 is 127 double-bytes. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is supported only by the DB2 coprocessor.
The following diagrams show the syntax for declaring varying-length graphic host variables.
01 (1) level-1
variable-name
IS S9(4) S9999
. IS VALUE numeric-constant
Notes: 1 2 You cannot directly reference var-1 as a host variable. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with TRUNC(STD) recognizes values up to only 9999. This behavior can cause data truncation errors when COBOL statements execute and might effectively limit the maximum length of variable-length character strings to 9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
IS picture-string
IS VALUE graphic-constant
Notes: 1 2 You cannot directly reference var-2 as a host variable. For fixed-length strings, the picture-string is G(m) or N(m) (or, m instances of GG...G or NN...N), where m is up to COBOL's limitation. However, the maximum length of the VARGRAPHIC data type in DB2 varies depending on the data page size. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. USAGE NATIONAL is supported only by the DB2 coprocessor.
The following diagram shows the syntax for declaring BINARY and VARBINARY host variables.
IS USAGE 01 variable-name SQL TYPE IS BINARY VARBINARY BINARY VARYING ( length (1) ) .
Notes:
1. For BINARY host variables, the length must be in the range from 1 to 255. For VARBINARY host variables, the length must be in the range from 1 to 32704.
COBOL does not have variables that correspond to the SQL binary types BINARY and VARBINARY. To create host variables that can be used with these data types, use the SQL TYPE IS clause. The SQL precompiler replaces this declaration with a COBOL language structure in the output source member. When you reference a BINARY or VARBINARY host variable in an SQL statement, you must use the variable that you specify in the SQL TYPE declaration. When you reference the host variable in a host language statement, you must use the variable that DB2 generates. Examples of binary variable declarations: The following table shows examples of variables that DB2 generates when you declare binary host variables.
Table 63. Examples of BINARY and VARBINARY variable declarations for COBOL

Variable declaration that you include in your COBOL program:
   01 BIN-VAR USAGE IS SQL TYPE IS BINARY(10).
Corresponding variable that DB2 generates in the output source member:
   01 BIN-VAR PIC X(10).

Variable declaration that you include in your COBOL program:
   01 VBIN-VAR USAGE IS SQL TYPE IS VARBINARY(10).
Corresponding variable that DB2 generates in the output source member:
   01 VBIN-VAR.
      49 VBIN-VAR-LEN  PIC S9(4) USAGE BINARY.
      49 VBIN-VAR-TEXT PIC X(10).
01
variable-name IS USAGE
Table Locators
The following diagram shows the syntax for declaring table locators.
01 (1) level-1
01 level-1
variable-name IS USAGE
SQL TYPE IS
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB-LOCATOR CLOB-LOCATOR DBCLOB-LOCATOR BLOB-FILE CLOB-FILE DBCLOB-FILE
length K M G
01 (1) level-1
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB-FILE CLOB-FILE DBCLOB-FILE
01 (1) level-1
variable-name IS USAGE
Notes: 1 level-1 indicates a COBOL level between 2 and 48. Related concepts: Host variables on page 139 Large objects (LOBs) on page 440 Related tasks: Embedding SQL statements in your application on page 146 Related reference: Limits in DB2 for z/OS(DB2 SQL)
(1) level-1 variable-name IS USAGE COMPUTATIONAL-1 (2) COMP-1 COMPUTATIONAL-2 (3) COMP-2
Notes: 1 2 3 4 level-1 indicates a COBOL level between 2 and 48. COMPUTATIONAL-1 and COMP-1 are equivalent. COMPUTATIONAL-2 and COMP-2 are equivalent. dimension must be an integer constant between 1 and 32767.
The following diagram shows the syntax for declaring integer and small integer host variable arrays.
(2) BINARY COMPUTATIONAL-4 COMP-4 COMPUTATIONAL-5 (3) COMP-5 COMPUTATIONAL COMP (5) .
Notes: 1 2 3 4 5 level-1 indicates a COBOL level between 2 and 48. The COBOL binary integer data types BINARY, COMPUTATIONAL, COMP, COMPUTATIONAL-4, and COMP-4 are equivalent. COMPUTATIONAL-5 (and COMP-5) are equivalent to the other COBOL binary integer data types if you compile the other data types with TRUNC(BIN). dimension must be an integer constant between 1 and 32767. Any specification for scale is ignored.
The following diagram shows the syntax for declaring decimal host variable arrays.
PACKED-DECIMAL COMPUTATIONAL-3 COMP-3 IS DISPLAY NATIONAL SIGN LEADING SEPARATE (3) CHARACTER
Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. PACKED-DECIMAL, COMPUTATIONAL-3, and COMP-3 are equivalent. The picture-string that is associated with these types must have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9) or S9(i)V. The picture-string that is associated with SIGN LEADING SEPARATE must have the form S9(i)V9(d) (or S9...9V9...9, with i and d instances of 9 or S9...9V with i instances of 9). dimension must be an integer constant between 1 and 32767.
3 4
IS picture-string
(2)
Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. The picture-string must be in the form X(m) (or XX...X, with m instances of X), where 1 <= m <= 32767 for fixed-length strings. However, the maximum length of the CHAR data type (fixed-length character string) in DB2 is 255 bytes. dimension must be an integer constant between 1 and 32767.
The following diagrams show the syntax for declaring varying-length character string arrays.
(2) . TIMES
Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. dimension must be an integer constant between 1 and 32767.
IS S9(4) S9999
. IS numeric-constant
Notes: 1 2 You cannot directly reference var-1 as a host variable array. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with TRUNC(STD) recognizes values up to only 9999. This behavior can cause data truncation errors when COBOL statements execute and might effectively limit the maximum length of variable-length character strings to 9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
IS picture-string
IS VALUE character-constant
Notes: 1 2 You cannot directly reference var-2 as a host variable array. The picture-string must be in the form X(m) (or XX...X, with m instances of X), where 1 <= m <= 32767 for fixed-length strings; for other strings, m cannot be greater than the maximum size of a varying-length character string. You cannot use an intervening REDEFINE at level 49.
Example: The following example shows declarations of a fixed-length character array and a varying-length character array.
01  OUTPUT-VARS.
    05  NAME OCCURS 10 TIMES.
        49  NAME-LEN   PIC S9(4)  COMP-4 SYNC.
        49  NAME-DATA  PIC X(40).
    05  SERIAL-NUMBER  PIC S9(9)  COMP-4 OCCURS 10 TIMES.
IS picture-string
(2)
(5) TIMES
Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. For fixed-length strings, the format for picture-string is G(m) or N(m) (or, m instances of GG...G or NN...N), where 1 <= m <= 127; for other strings, m cannot be greater than the maximum size of a varying-length graphic string. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. You can use USAGE NATIONAL only if you are using the DB2 coprocessor. dimension must be an integer constant between 1 and 32767.
3 4 5
The following diagrams show the syntax for declaring varying-length graphic string arrays.
(2) . TIMES
Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. dimension must be an integer constant between 1 and 32767.
IS S9(4) S9999
. IS numeric-constant
Notes: 1 2 You cannot directly reference var-1 as a host variable array. DB2 uses the full length of the S9(4) BINARY variable even though COBOL with TRUNC(STD) recognizes values up to only 9999. This behavior can cause data truncation errors when COBOL statements execute and might effectively limit the maximum length of variable-length character strings to 9999. Consider using the TRUNC(BIN) compiler option or USAGE COMP-5 to avoid data truncation.
IS picture-string
(2) USAGE
. IS VALUE graphic-constant
Notes: 1 2 You cannot directly reference var-2 as a host variable array. For fixed-length strings, the format for picture-string is G(m) or N(m) (or, m instances of GG...G or NN...N), where 1 <= m <= 127; for other strings, m cannot be greater than the maximum size of a varying-length graphic string. Use USAGE NATIONAL only for Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. You can use USAGE NATIONAL only if you are using the DB2 coprocessor.
3 4
(1) level-1 variable-name SQL TYPE IS BINARY BINARY VARYING VARBINARY ( length
(2) )
Notes: 1 2 3 level-1 indicates a COBOL level between 2 and 48. For BINARY host variables, the length must be in the range 1 to 255. For VARBINARY host variables, the length must be in the range 1 to 32704. dimension must be an integer constant between 1 and 32767.
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB-LOCATOR CLOB-LOCATOR DBCLOB-LOCATOR BLOB-FILE CLOB-FILE DBCLOB-FILE ( length K M G ) OCCURS dimension
(2) . TIMES
Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. dimension must be an integer constant between 1 and 32767.
(2) BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB-FILE CLOB-FILE DBCLOB-FILE ( length K M G ) OCCURS dimension TIMES .
| | | | | | Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. dimension must be an integer constant between 1 and 32767.
(1) level-1 variable-name IS USAGE . TIMES SQL TYPE IS ROWID OCCURS dimension
(2)
Notes: 1 2 level-1 indicates a COBOL level between 2 and 48. dimension must be an integer constant between 1 and 32767. Related concepts: Host variable arrays in an SQL statement on page 155 Host variable arrays on page 139 Large objects (LOBs) on page 440 Related tasks: Inserting multiple rows of data from host variable arrays on page 157 Retrieving multiple rows of data into host variable arrays on page 156
Requirements: Host structure declarations in COBOL must satisfy the following requirements:
v COBOL host structures can have a maximum of two levels, even though the host structure might occur within a structure with multiple levels. However, you can declare a varying-length character string, which must be level 49.
v A host structure name can be a group name whose subordinate levels name elementary data items.
v If you are using the DB2 precompiler, do not declare host variables or host structures on any subordinate levels after one of the following items:
  - A COBOL item that begins in area A
  - Any SQL statement (except SQL INCLUDE)
  - Any SQL statement within an included member
  When the DB2 precompiler encounters one of the preceding items in a host structure, it considers the structure to be complete.
When you write an SQL statement that contains a qualified host variable name (perhaps to identify a field within a structure), use the name of the structure followed by a period and the name of the field. For example, for structure B that contains field C1, specify B.C1 rather than C1 OF B or C1 IN B.
Host structures
The following diagram shows the syntax for declaring host structures.
(4)
PICTURE integer-decimal-usage . PIC picture-string char-inner-variable . varchar-inner-variables vargraphic-inner-variables SQL TYPE IS ROWID . IS USAGE SQL TYPE IS TABLE LIKE table-name AS LOCATOR . IS USAGE LOB data type . IS USAGE
Notes: 1 2 3 4 level-1 indicates a COBOL level between 1 and 47. level-2 indicates a COBOL level between 2 and 48. For elements within a structure, use any level 02 through 48 (rather than 01 or 77), up to a maximum of two levels. Using a FILLER or optional FILLER item within a host structure declaration can invalidate the whole structure.
IS USAGE
IS VALUE constant
IS USAGE
BINARY COMPUTATIONAL-4 COMP-4 COMPUTATIONAL-5 COMP-5 COMPUTATIONAL COMP PACKED-DECIMAL COMPUTATIONAL-3 COMP-3 IS DISPLAY NATIONAL SIGN LEADING SEPARATE CHARACTER
IS VALUE constant
IS VALUE constant
IS S9(4) S9999 USAGE IS BINARY COMPUTATIONAL-4 COMP-4 COMPUTATIONAL-5 COMP-5 COMPUTATIONAL COMP
. IS VALUE numeric-constant
Notes:
1. The number 49 has a special meaning to DB2. Do not specify another number.
IS 49 var-4 PICTURE PIC S9(4) S9999 USAGE IS BINARY COMPUTATIONAL-4 COMP-4 COMPUTATIONAL-5 COMP-5 COMPUTATIONAL COMP
. IS VALUE numeric-constant
(2) (3)
IS VALUE graphic-constant
Notes: 1 For fixed-length strings, the format of picture-string is G(m) or N(m) (or, m instances of GG...G or NN...N), where 1 <= m <= 127; for other strings, m cannot be greater than the maximum size of a varying-length graphic string. Use USAGE NATIONAL for only Unicode UTF-16 data. In the picture-string for USAGE NATIONAL, you must use N in place of G. You can use USAGE NATIONAL only if you are using the DB2 coprocessor.
2 3
SQL TYPE IS
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB-LOCATOR CLOB-LOCATOR DBCLOB-LOCATOR BLOB-FILE CLOB-FILE DBCLOB-FILE
length K M G
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB-FILE CLOB-FILE DBCLOB-FILE
length K M G
Example
In the following example, B is the name of a host structure that contains the elementary items C1 and C2.
01  A.
    02  B.
        03  C1  PICTURE ...
        03  C2  PICTURE ...
To reference the C1 field in an SQL statement, specify B.C1. Related concepts: Host structures on page 140
Indicator variables, indicator arrays, and host structure indicator arrays in COBOL
An indicator variable is a 2-byte integer (PIC S9(4) USAGE BINARY). An indicator variable array is an array of 2-byte integers (PIC S9(4) USAGE BINARY). You declare indicator variables in the same way as host variables, and you can mix the declarations of the two types of variables. You can define indicator variables as scalar variables, as array elements within a structure, or as an array variable by using a single-level OCCURS clause.
The following diagram shows the syntax for declaring an indicator variable in COBOL.
The declaration begins with level 01 or 77 and a variable-name, followed by PICTURE (or PIC) S9(4) or S9999, an optional USAGE IS clause with BINARY, COMPUTATIONAL-4, COMP-4, COMPUTATIONAL-5, COMP-5, COMPUTATIONAL, or COMP, an optional VALUE IS constant clause, and a terminating period.
The following diagram shows the syntax for declaring an indicator array in COBOL.
Notes:
1. level-1 must be an integer between 2 and 48.
2. dimension must be an integer constant between 1 and 32767.
Example
The following example shows a FETCH statement with the declarations of the host variables that are needed for the FETCH statement and their associated indicator variables.
EXEC SQL
  FETCH CLS_CURSOR INTO :CLS-CD,
                        :DAY :DAY-IND,
                        :BGN :BGN-IND,
                        :END :END-IND
END-EXEC.

In this example the host variables and their indicator variables are declared as level-77 items. The indicator variables DAY-IND, BGN-IND, and END-IND are 2-byte binary integers (PIC S9(4) USAGE BINARY); the declarations of CLS-CD, DAY, BGN, and END depend on the data types of the columns that the cursor retrieves.
Related concepts: Indicator variables, arrays, and structures on page 140 Related tasks: Inserting null values into columns by using indicator variables or arrays on page 154
Procedure
To control the CCSID for COBOL host variables, use one or more of the following items:

The NATIONAL data type
  Use this data type to declare Unicode values in the UTF-16 format (CCSID 1200). If you declare a host variable HV1 as USAGE NATIONAL, DB2 always handles HV1 as if you had used the following DECLARE VARIABLE statement:
DECLARE :HV1 VARIABLE CCSID 1200
The COBOL CODEPAGE compiler option
  Use this option to specify the default EBCDIC CCSID of character data items.

The SQLCCSID compiler option
  Use this option to control whether the CODEPAGE compiler option influences the processing of SQL host variables in your COBOL programs (available in Enterprise COBOL V3R4 or later).

  When you specify the SQLCCSID compiler option, the COBOL DB2 coprocessor uses the CCSID that is specified in the CODEPAGE compiler option. All host variables of character data type, other than NATIONAL, are processed with that CCSID unless they are explicitly overridden by a DECLARE VARIABLE statement.

  When you specify the NOSQLCCSID compiler option, the CCSID that is specified in the CODEPAGE compiler option is used only for processing COBOL statements within the COBOL program. That CCSID is not used for the processing of host variables in SQL statements. Instead, DB2 uses the CCSIDs that are specified through DB2 mechanisms and defaults as the host variable data value encodings.
The DECLARE VARIABLE statement. This statement explicitly sets the CCSID for individual host variables.
Example
Assume that the COBOL SQLCCSID compiler option is specified and that the COBOL CODEPAGE compiler option is specified as CODEPAGE(1141). The following code shows how you can control the CCSID:
DATA DIVISION.
01  HV1 PIC N(10) USAGE NATIONAL.
01  HV2 PIC X(20) USAGE DISPLAY.
01  HV3 PIC X(30) USAGE DISPLAY.
...
    EXEC SQL
      DECLARE :HV3 VARIABLE CCSID 1047
    END-EXEC.
...
PROCEDURE DIVISION.
...
    EXEC SQL
      SELECT C1, C2, C3 INTO :HV1, :HV2, :HV3 FROM T1
    END-EXEC.
Each of the host variables has the following CCSID:
HV1  1200
HV2  1141
HV3  1047
Assume that the COBOL NOSQLCCSID compiler option is specified, the COBOL CODEPAGE compiler option is specified as CODEPAGE(1141), and the DB2 default single-byte CCSID is set to 37. In this case, each of the host variables in this example has the following CCSID:
HV1  1200
HV2  37
HV3  1047
Related reference:
Host variables in COBOL on page 302
Compiler options (COBOL) (Enterprise COBOL for z/OS Programming Guide)
Table 64. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in COBOL programs

COBOL host variable data type | SQLTYPE of host variable (1) | SQLLEN of host variable | SQL data type
COMP-1 | 480 | 4 | REAL or FLOAT(n), 1<=n<=21
COMP-2 | 480 | 8 | DOUBLE PRECISION, or FLOAT(n), 22<=n<=53
S9(i)V9(d) COMP-3 or S9(i)V9(d) PACKED-DECIMAL | 484 | i+d in byte 1, d in byte 2 | DECIMAL(i+d,d) or NUMERIC(i+d,d)
S9(i)V9(d) DISPLAY SIGN LEADING SEPARATE | 504 | i+d in byte 1, d in byte 2 | No exact equivalent. Use DECIMAL(i+d,d) or NUMERIC(i+d,d)
S9(i)V9(d) NATIONAL SIGN LEADING SEPARATE | 504 | i+d in byte 1, d in byte 2 | No exact equivalent. Use DECIMAL(i+d,d) or NUMERIC(i+d,d)
S9(4) COMP-4, S9(4) COMP-5, S9(4) COMP, or S9(4) BINARY | 500 | 2 | SMALLINT
S9(9) COMP-4, S9(9) COMP-5, S9(9) COMP, or S9(9) BINARY | 496 | 4 | INTEGER
S9(18) COMP-4, S9(18) COMP-5, S9(18) COMP, or S9(18) BINARY | 492 | 8 | BIGINT
Fixed-length character string | 452 | n | CHAR(n)
Varying-length character string | 448 | n | VARCHAR(n)
Varying-length character string | 456 | m | VARCHAR(m)
Fixed-length graphic string | 468 | m | GRAPHIC(m)
Varying-length graphic string | 464 | m | VARGRAPHIC(m)
Varying-length graphic string | 472 | m | VARGRAPHIC(m)
SQL TYPE IS BINARY(n) | 912 | n | BINARY(n)
SQL TYPE IS VARBINARY(n) | 908 | n | VARBINARY(n)
SQL TYPE IS RESULT-SET-LOCATOR | 972 | 4 | Result set locator (2)
SQL TYPE IS TABLE LIKE table-name AS LOCATOR | 976 | 4 | Table locator (2)
USAGE IS SQL TYPE IS BLOB-LOCATOR | 960 | 4 | BLOB locator (2)
USAGE IS SQL TYPE IS CLOB-LOCATOR | 964 | 4 | CLOB locator (2)
USAGE IS SQL TYPE IS DBCLOB-LOCATOR | 968 | 4 | DBCLOB locator (2)
USAGE IS SQL TYPE IS BLOB(n) | 404 | n | BLOB(n)
USAGE IS SQL TYPE IS CLOB(n) | 408 | n | CLOB(n)
USAGE IS SQL TYPE IS DBCLOB(m) | 412 | m | DBCLOB(m) (3)
SQL TYPE IS XML AS BLOB(n) | 404 | 0 | XML
SQL TYPE IS XML AS CLOB(n) | 408 | 0 | XML
SQL TYPE IS XML AS DBCLOB(n) | 412 | 0 | XML
SQL TYPE IS BLOB-FILE | 916/917 | 267 | BLOB file reference (2)
SQL TYPE IS CLOB-FILE | 920/921 | 267 | CLOB file reference (2)
SQL TYPE IS DBCLOB-FILE | 924/925 | 267 | DBCLOB file reference (2)
SQL TYPE IS XML AS BLOB-FILE | 916/917 | 267 | XML BLOB file reference (2)
SQL TYPE IS XML AS CLOB-FILE | 920/921 | 267 | XML CLOB file reference (2)
SQL TYPE IS XML AS DBCLOB-FILE | 924/925 | 267 | XML DBCLOB file reference (2)
SQL TYPE IS ROWID | 904 | 40 | ROWID

Notes:
1. If a host variable includes an indicator variable, the SQLTYPE value is the base SQLTYPE value plus 1.
2. Do not use this data type as a column type.
3. m is the number of double-byte characters.
The following table shows the equivalent COBOL host variables for each SQL data type. Use this table to determine the COBOL data type for host variables that you define to receive output from the database. For example, if you retrieve TIMESTAMP data, you can define a fixed-length character string variable of length n.

This table shows direct conversions between SQL data types and COBOL data types. However, a number of SQL data types are compatible. When you assign or compare data that has compatible data types, DB2 converts between those compatible data types.
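For example, a sketch of a host variable to receive TIMESTAMP data, including microseconds (the variable name is illustrative):

01  TIMESTAMP-VAR PIC X(26).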
Table 65. COBOL host variable equivalents that you can use when retrieving data of a particular SQL data type

SQL data type | COBOL host variable equivalent | Notes
SMALLINT | S9(4) COMP-4, S9(4) COMP-5, S9(4) COMP, or S9(4) BINARY |
INTEGER | S9(9) COMP-4, S9(9) COMP-5, S9(9) COMP, or S9(9) BINARY |
DECIMAL(p,s) or NUMERIC(p,s) | S9(p-s)V9(s) COMP-3, S9(p-s)V9(s) PACKED-DECIMAL, S9(p-s)V9(s) DISPLAY SIGN LEADING SEPARATE, or S9(p-s)V9(s) NATIONAL SIGN LEADING SEPARATE | p is precision; s is scale. 0<=s<=p<=31. If s=0, use S9(p)V or S9(p). If s=p, use SV9(s). If the COBOL compiler does not support 31-digit decimal numbers, no exact equivalent exists. Use COMP-2.
REAL or FLOAT(n) | COMP-1 | 1<=n<=21
DOUBLE PRECISION, DOUBLE, or FLOAT(n) | COMP-2 | 22<=n<=53
BIGINT | S9(18) COMP-4, S9(18) COMP-5, S9(18) COMP, or S9(18) BINARY |
CHAR(n) | Fixed-length character string. For example, 01 VAR-NAME PIC X(n). | 1<=n<=255
VARCHAR(n) | Varying-length character string. For example, 01 VAR-NAME. 49 VAR-LEN PIC S9(4) USAGE BINARY. 49 VAR-TEXT PIC X(n). | The inner variables must have a level of 49.
GRAPHIC(n) | Fixed-length graphic string. For example, 01 VAR-NAME PIC G(n) USAGE IS DISPLAY-1. | n refers to the number of double-byte characters, not to the number of bytes. 1<=n<=127
VARGRAPHIC(n) | Varying-length graphic string. For example, 01 VAR-NAME. 49 VAR-LEN PIC S9(4) USAGE BINARY. 49 VAR-TEXT PIC G(n) USAGE IS DISPLAY-1. | n refers to the number of double-byte characters, not to the number of bytes. The inner variables must have a level of 49.
BINARY(n) | SQL TYPE IS BINARY(n) | 1<=n<=255
VARBINARY(n) | SQL TYPE IS VARBINARY(n) | 1<=n<=32 704
DATE | Fixed-length character string of length n. For example, 01 VAR-NAME PIC X(n). | If you are using a date exit routine, n is determined by that routine. Otherwise, n must be at least 10.
TIME | Fixed-length character string of length n. For example, 01 VAR-NAME PIC X(n). | If you are using a time exit routine, n is determined by that routine. Otherwise, n must be at least 6; to include seconds, n must be at least 8.
TIMESTAMP | Fixed-length character string of length n. For example, 01 VAR-NAME PIC X(n). | n must be at least 19. To include microseconds, n must be 26; if n is less than 26, truncation occurs on the microseconds part.
Result set locator | SQL TYPE IS RESULT-SET-LOCATOR | Use this data type only for receiving result sets. Do not use this data type as a column type.
Table locator | SQL TYPE IS TABLE LIKE table-name AS LOCATOR | Use this data type only in a user-defined function or stored procedure to receive rows of a transition table. Do not use this data type as a column type.
BLOB locator | USAGE IS SQL TYPE IS BLOB-LOCATOR | Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.
CLOB locator | USAGE IS SQL TYPE IS CLOB-LOCATOR | Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.
DBCLOB locator | USAGE IS SQL TYPE IS DBCLOB-LOCATOR | Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.
BLOB(n) | USAGE IS SQL TYPE IS BLOB(n) | 1<=n<=2147483647
CLOB(n) | USAGE IS SQL TYPE IS CLOB(n) | 1<=n<=2147483647
DBCLOB(n) | USAGE IS SQL TYPE IS DBCLOB(n) | n is the number of double-byte characters. 1<=n<=1073741823
XML | SQL TYPE IS XML AS BLOB(n) | 1<=n<=2147483647
XML | SQL TYPE IS XML AS CLOB(n) | 1<=n<=2147483647
XML | SQL TYPE IS XML AS DBCLOB(n) | n is the number of double-byte characters. 1<=n<=1073741823
BLOB file reference | USAGE IS SQL TYPE IS BLOB-FILE | Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.
CLOB file reference | USAGE IS SQL TYPE IS CLOB-FILE | Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.
DBCLOB file reference | USAGE IS SQL TYPE IS DBCLOB-FILE | Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.
XML BLOB file reference | SQL TYPE IS XML AS BLOB-FILE | Use this data type only to manipulate XML data as BLOB files. Do not use this data type as a column type.
XML CLOB file reference | SQL TYPE IS XML AS CLOB-FILE | Use this data type only to manipulate XML data as CLOB files. Do not use this data type as a column type.
XML DBCLOB file reference | SQL TYPE IS XML AS DBCLOB-FILE | Use this data type only to manipulate XML data as DBCLOB files. Do not use this data type as a column type.
Related concepts: Compatibility of SQL and language data types on page 144 LOB host variable, LOB locator, and LOB file reference variable declarations on page 741 Host variable data types for XML data in embedded SQL applications on page 216
1. If you use the DB2 coprocessor, you can use the LOCAL-STORAGE SECTION wherever WORKING-STORAGE SECTION is listed in the table.
2. When including host variable declarations, the INCLUDE statement must be in the WORKING-STORAGE SECTION or the LINKAGE SECTION.
You cannot put SQL statements in the DECLARATIVES section of a COBOL program.

Each SQL statement in a COBOL program must begin with EXEC SQL and end with END-EXEC. If you are using the DB2 precompiler, the EXEC and SQL keywords must appear on one line, but the remainder of the statement can appear on subsequent lines. If you are using the DB2 coprocessor, the EXEC and SQL keywords can be on different lines. Do not include any tokens between the two keywords EXEC and SQL except for COBOL comments, including debugging lines. Do not include SQL comments between the keywords EXEC and SQL.

If the SQL statement appears between two COBOL statements, the period after END-EXEC is optional and might not be appropriate. If the statement appears in an IF...THEN set of COBOL statements, omit the ending period to avoid inadvertently ending the IF statement. You might code an UPDATE statement in a COBOL program as follows:
EXEC SQL
  UPDATE DSN8910.DEPT
     SET MGRNO = :MGR-NUM
   WHERE DEPTNO = :INT-DEPT
END-EXEC.
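By contrast, when the statement is coded inside an IF...THEN construct, omit the period after END-EXEC so that the period does not end the IF statement prematurely. A minimal sketch; the condition name DEPT-CHANGED and the MOVE target UPDATE-DONE are illustrative, not from the sample application:

IF DEPT-CHANGED = 'Y'
   EXEC SQL
     UPDATE DSN8910.DEPT
        SET MGRNO = :MGR-NUM
      WHERE DEPTNO = :INT-DEPT
   END-EXEC
ELSE
   MOVE 'N' TO UPDATE-DONE.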
Comments: You can include COBOL comment lines (* in column 7) in SQL statements wherever you can use a blank. If you are using the DB2 precompiler, you cannot include COBOL comment lines between the keywords EXEC and SQL. The precompiler treats COBOL debugging lines and page-eject lines (/ in column 7) as comment lines. The DB2 coprocessor treats the debugging lines based on the COBOL rules, which depend on the WITH DEBUGGING mode setting. For an SQL INCLUDE statement, the DB2 precompiler treats any text that follows the period after END-EXEC, and on the same line as END-EXEC, as a comment. The DB2 coprocessor treats this text as part of the COBOL program syntax. In addition, you can include SQL comments ('--') in any embedded SQL statement.
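For example, a sketch of an embedded statement that contains both a COBOL comment line (* in column 7) and an SQL comment; the DEPT-NAME host variable is an illustrative name:

EXEC SQL
* THIS COBOL COMMENT LINE IS TREATED AS A BLANK IN THE STATEMENT
  SELECT DEPTNAME              -- SQL comment on the select list
    INTO :DEPT-NAME
    FROM DSN8910.DEPT
   WHERE DEPTNO = :INT-DEPT
END-EXEC.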
Debugging lines: The DB2 precompiler ignores the 'D' in column 7 on debugging lines and treats it as a blank. The DB2 coprocessor follows the COBOL language rules regarding debugging lines.

Continuation for SQL statements: The rules for continuing a character string constant from one line to the next in an SQL statement embedded in a COBOL program are the same as those for continuing a non-numeric literal in COBOL. However, you can use either a quote or an apostrophe as the first nonblank character in area B of the continuation line. The same rule applies to the continuation of delimited identifiers and does not depend on the string delimiter option.

To conform with the SQL standard, delimit a character string constant with an apostrophe, and use a quote as the first nonblank character in area B of the continuation line for a character string constant. Continued lines of an SQL statement can be in columns 8 through 72 when you use the DB2 precompiler and columns 12 through 72 when you use the DB2 coprocessor.
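For example, a sketch of a character string constant that is continued onto a second line; the hyphen represents the continuation indicator in column 7, and the literal is illustrative:

EXEC SQL
  UPDATE DSN8910.DEPT
     SET DEPTNAME = 'OPERATIONS AND SUPP
-                   'ORT CENTER'
   WHERE DEPTNO = :INT-DEPT
END-EXEC.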
COPY: If you use the DB2 precompiler, do not use a COBOL COPY statement within host variable declarations. If you use the DB2 coprocessor, you can use COBOL COPY.

REPLACE: If you use the DB2 precompiler, the REPLACE statement has no effect on SQL statements. It affects only the COBOL statements that the precompiler generates. If you use the DB2 coprocessor, the REPLACE statement replaces text strings in SQL statements as well as in generated COBOL statements.

Declaring tables and views: Your COBOL program should include a DECLARE TABLE statement to describe each table and view that the program accesses. You can use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE statements. You should include the DCLGEN members in the DATA DIVISION.

Dynamic SQL in a COBOL program: In general, COBOL programs can easily handle dynamic SQL statements. COBOL programs can handle SELECT statements if the data types and the number of fields returned are fixed. If you want to use variable-list SELECT statements, use an SQLDA.

Including code: To include SQL statements or COBOL host variable declarations from a member of a partitioned data set, use the SQL INCLUDE statement (EXEC SQL INCLUDE member-name END-EXEC) in the source code where you want to include the statements.
If you are using the DB2 precompiler, you cannot nest SQL INCLUDE statements. In this case, do not use COBOL verbs to include SQL statements or host variable declarations, and do not use the SQL INCLUDE statement to include CICS preprocessor-related code. In general, if you are using the DB2 precompiler, use the SQL INCLUDE statement only for SQL-related coding. If you are using the COBOL DB2 coprocessor, none of these restrictions apply. Use the EXEC SQL and END-EXEC keyword pair to include SQL statements only; COBOL statements, such as COPY or REPLACE, are not allowed.

Margins: You must code SQL statements that begin with EXEC SQL in columns 12 through 72. Otherwise, the DB2 precompiler does not recognize the SQL statement.

Names: You can use any valid COBOL name for a host variable. Do not use external entry names or access plan names that begin with 'DSN', and do not use host variable names that begin with 'SQL'. These names are reserved for DB2.

Sequence numbers: The source statements that the DB2 precompiler generates do not include sequence numbers.

Statement labels: You can precede executable SQL statements in the PROCEDURE DIVISION with a paragraph name.

WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER statement must be a section name or an unqualified paragraph name in the PROCEDURE DIVISION.

Special COBOL considerations: The following considerations apply to programs written in COBOL:
v In a COBOL program that uses elements in a multi-level structure as host variable names, the DB2 precompiler generates the lowest two-level names.
v Whether to use the COBOL compiler option DYNAM or NODYNAM depends on the operating environment.
  TSO and IMS: You can specify the option DYNAM when compiling a COBOL program if you use the following guidelines. IMS and DB2 share a common alias name, DSNHLI, for the language interface module. You must do the following when you concatenate your libraries:
  - If you use IMS with the COBOL option DYNAM, be sure to concatenate the IMS library first.
  - If you run your application program only under DB2, be sure to concatenate the DB2 library first.
  CICS, CAF, and RRSAF: You must specify the NODYNAM option when you compile a COBOL program that either includes CICS statements or is translated by a separate CICS translator or the integrated CICS translator. In these cases, you cannot specify the DYNAM option. If your CICS program has a subroutine that is not translated by a separate CICS translator or the integrated CICS translator but contains SQL statements, you can specify the DYNAM option. However, in this case, you must concatenate the CICS libraries before the DB2 libraries.
You can compile COBOL stored procedures with either the DYNAM option or the NODYNAM option. If you use DYNAM, ensure that the correct DB2 language interface module is loaded dynamically by performing one of the following actions:
  - Use the ATTACH(RRSAF) precompiler option.
  - Copy the DSNRLI module into a load library that is concatenated in front of the DB2 libraries, and use the member name DSNHLI.
v To avoid truncating numeric values, use either of the following methods:
  - Use the COMP-5 data type for binary integer host variables.
  - Specify the COBOL compiler option:
    - TRUNC(OPT) if you are certain that the data being moved to each binary variable by the application does not have a larger precision than is defined in the PICTURE clause of the binary variable.
    - TRUNC(BIN) if the precision of data being moved to each binary variable might exceed the value in the PICTURE clause.
  DB2 assigns values to binary integer host variables as if you had specified the COBOL compiler option TRUNC(BIN) or used the COMP-5 data type.
v If you are using the DB2 precompiler and your COBOL program contains several entry points or is called several times, the USING clause of the entry statement that executes before the first SQL statement executes must contain the SQLCA and all linkage section entries that any SQL statement uses as host variables.
v If you use the DB2 precompiler, no compiler directives should appear between the PROCEDURE DIVISION and the DECLARATIVES statement.
v Do not use COBOL figurative constants (such as ZERO and SPACE), symbolic characters, reference modification, or subscripts within SQL statements.
v Observe the rules for naming SQL identifiers. However, for COBOL only, the names of SQL identifiers can follow the rules for naming COBOL words, if the names do not exceed the allowable length for the DB2 object. For example, the name 1ST-TIME is a valid cursor name because it is a valid COBOL word, but the name 1_TIME is not valid because it is not a valid SQL identifier or a valid COBOL word.
v Observe these rules for hyphens:
  - Surround hyphens used as subtraction operators with spaces. DB2 usually interprets a hyphen with no spaces around it as part of a host variable name.
  - You can use hyphens in SQL identifiers under either of the following circumstances:
    - The application program is a local application that runs on DB2 for z/OS Version 7 or later.
    - The application program accesses remote sites, and the local site and remote sites are DB2 for z/OS Version 7 or later.
v If you include an SQL statement in a COBOL PERFORM ... THRU paragraph and also specify the SQL statement WHENEVER ... GO, the COBOL compiler returns the warning message IGYOP3094. That message might indicate a problem. This usage is not recommended.
v If you are using the DB2 precompiler, all SQL statements and any host variables that they reference must be within the first program when you use nested programs or batch compilation.
v If you are using the DB2 precompiler, your COBOL programs must have a DATA DIVISION and a PROCEDURE DIVISION. Both divisions and the WORKING-STORAGE SECTION must be present in programs that contain SQL statements.
However, if your COBOL program requires the LOCAL-STORAGE SECTION, use the DB2 coprocessor instead of the DB2 precompiler.

If your program uses the DB2 precompiler, uses parameters that are defined in the LINKAGE SECTION as host variables to DB2, and the address of an input parameter might change on subsequent invocations of your program, your program must reset the variable SQL-INIT-FLAG. This flag is generated by the DB2 precompiler. Resetting this flag indicates that the storage must be initialized when the next SQL statement executes. To reset the flag, insert the statement MOVE ZERO TO SQL-INIT-FLAG in the called program's PROCEDURE DIVISION, ahead of any executable SQL statements that use the host variables. If you use the COBOL DB2 coprocessor, the called program does not need to reset SQL-INIT-FLAG.

You can use the MESSAGE_TEXT condition item field of the GET DIAGNOSTICS statement to convert an SQL return code into a text message. Programs that require long token message support should code the GET DIAGNOSTICS statement instead of DSNTIAR.

You can also use the subroutine DSNTIAR to convert an SQL return code into a text message. DSNTIAR takes data from the SQLCA, formats it into a message, and places the result in a message output area that you provide in your application program.

DSNTIAR syntax:

CALL 'DSNTIAR' USING sqlca message lrecl.

The DSNTIAR parameters have the following meanings:

sqlca
  An SQL communication area.

message
  An output area, in VARCHAR format, in which DSNTIAR places the message text. The first halfword contains the length of the remaining area; its minimum value is 240. The output lines of text, each line being the length specified in lrecl, are put into this area. For example, you could specify the format of the output area as:
01 ERROR-MESSAGE.
   02 ERROR-LEN   PIC S9(4)  COMP VALUE +1320.
   02 ERROR-TEXT  PIC X(132) OCCURS 10 TIMES
                  INDEXED BY ERROR-INDEX.
77 ERROR-TEXT-LEN PIC S9(9)  COMP VALUE +132.
where ERROR-MESSAGE is the name of the message output area containing 10 lines of length 132 each, and ERROR-TEXT-LEN is the length of each line. lrecl A fullword containing the logical record length of output messages, between 72 and 240. An example of calling DSNTIAR from an application appears in the DB2 sample assembler program DSN8BC3, which is contained in the library DSN8910.
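A minimal sketch of calling DSNTIAR after an SQL error and displaying the formatted lines; the paragraph names are illustrative, and ERROR-MESSAGE, ERROR-TEXT, ERROR-INDEX, and ERROR-TEXT-LEN are the areas declared above:

DB2-ERROR.
    CALL 'DSNTIAR' USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN.
    IF RETURN-CODE = ZERO
       PERFORM PRINT-ERROR-LINE
           VARYING ERROR-INDEX FROM 1 BY 1
           UNTIL ERROR-INDEX > 10.

PRINT-ERROR-LINE.
    DISPLAY ERROR-TEXT (ERROR-INDEX).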
CICS: If you call DSNTIAR dynamically from a CICS COBOL application program, be sure you do the following:
v Compile the COBOL application with the NODYNAM option.
v Define DSNTIAR in the CSD.

If your CICS application requires CICS storage handling, you must use the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC USING eib commarea sqlca msg lrecl.
DSNTIAC has extra parameters, which you must use for calls to routines that use CICS commands.

eib
  EXEC interface block

commarea
  communication area

For more information on these parameters, see the appropriate application programming guide for CICS. The remaining parameter descriptions are the same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA in the same way.

You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you must also define them in the CSD. For an example of CSD entry generation statements for use with DSNTIAC, see job DSNTEJ5A.

The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP.
Related concepts:
DB2 sample applications on page 1126
DCLGEN (declarations generator) on page 125
Host variable arrays in an SQL statement on page 155
SQL identifiers (DB2 SQL)
Related tasks:
Including dynamic SQL in your program on page 158
Embedding SQL statements in your application on page 146
Checking the execution of SQL statements by using the GET DIAGNOSTICS statement on page 208
Defining SQL descriptor areas on page 137
Displaying SQLCA fields by calling DSNTIAR on page 203
Limiting CPU time for dynamic SQL statements by using the resource limit facility on page 199
Example
Use EXEC SQL and END-EXEC. to delimit an SQL statement in a COBOL program:
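For example, a minimal sketch using the sample DEPT table; the host variable name is illustrative:

EXEC SQL
  DELETE FROM DSN8910.DEPT
   WHERE DEPTNO = :INT-DEPT
END-EXEC.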
In this case, you know the number of columns returned and their data types when you write the program.
v Varying-list SELECT statements. In this case, you do not know the number of columns returned and their data types when you write the program.

This section documents a technique for coding varying-list SELECT statements in COBOL. The example program does not support the BLOB, CLOB, or DBCLOB data types.
* DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION * * UNLOAD PROGRAM * * BATCH * * IBM ENTERPRISE COBOL FOR Z/OS * * * * COPYRIGHT = 5740-XYR (C) COPYRIGHT IBM CORP 1982, 1987 * * REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 * * * * STATUS = VERSION 1 RELEASE 3, LEVEL 0 * * * * FUNCTION = THIS MODULE PROVIDES THE STORAGE NEEDED BY * * UNLDBCU2 AND CALLS THAT PROGRAM. * * * * NOTES = * * DEPENDENCIES = ENTERPRISE COBOL FOR Z/OS IS REQUIRED. * * SEVERAL NEW FACILITIES ARE USED. * * * * RESTRICTIONS = * * THE MAXIMUM NUMBER OF COLUMNS IS 750, * * WHICH IS THE SQL LIMIT. * * * * DATA RECORDS ARE LIMITED TO 32700 BYTES, * * INCLUDING DATA, LENGTHS FOR VARCHAR DATA, * * AND SPACE FOR NULL INDICATORS. * * * * MODULE TYPE = IBM ENTERPRISE COBOL PROGRAM * * PROCESSOR = ENTERPRISE COBOL FOR Z/OS * * MODULE SIZE = SEE LINK EDIT * * ATTRIBUTES = REENTRANT * * * * ENTRY POINT = UNLDBCU1 * * PURPOSE = SEE FUNCTION * * LINKAGE = INVOKED FROM DSN RUN * * INPUT = NONE * * OUTPUT = NONE * * * * EXIT-NORMAL = RETURN CODE 0 NORMAL COMPLETION * * * * EXIT-ERROR = * * RETURN CODE = NONE * * ABEND CODES = NONE * * ERROR-MESSAGES = NONE * * * * EXTERNAL REFERENCES = * * ROUTINES/SERVICES = * * UNLDBCU2 - ACTUAL UNLOAD PROGRAM * * * * DATA-AREAS = NONE * * CONTROL-BLOCKS = NONE * * * * TABLES = NONE * * CHANGE-ACTIVITY = NONE * * * * *PSEUDOCODE* * * * * PROCEDURE * * CALL UNLDBCU2. * * END. * *---------------------------------------------------------------* / IDENTIFICATION DIVISION. *----------------------PROGRAM-ID. UNLDBCU1 * ENVIRONMENT DIVISION. * CONFIGURATION SECTION.
 DATA DIVISION.
*
 WORKING-STORAGE SECTION.
*
 01  WORKAREA-IND.
     02  WORKIND       PIC S9(4) COMP OCCURS 750 TIMES.
 01  RECWORK.
     02  RECWORK-LEN   PIC S9(8) COMP VALUE 32700.
     02  RECWORK-CHAR  PIC X(1) OCCURS 32700 TIMES.
*
 PROCEDURE DIVISION.
*
     CALL 'UNLDBCU2' USING WORKAREA-IND RECWORK.
     GOBACK.
The following example is the called program that does pointer manipulation.
**** UNLDBCU2- DB2 SAMPLE BATCH COBOL UNLOAD PROGRAM *********** * * * MODULE NAME = UNLDBCU2 * * * * DESCRIPTIVE NAME = DB2 SAMPLE APPLICATION * * UNLOAD PROGRAM * * BATCH * * ENTERPRISE COBOL FOR Z/OS * * * * COPYRIGHT = 5740-XYR (C) COPYRIGHT IBM CORP 1982, 1987 * * REFER TO COPYRIGHT INSTRUCTIONS FORM NUMBER G120-2083 * * * * STATUS = VERSION 1 RELEASE 3, LEVEL 0 * * * * FUNCTION = THIS MODULE ACCEPTS A TABLE NAME OR VIEW NAME * * AND UNLOADS THE DATA IN THAT TABLE OR VIEW. * * READ IN A TABLE NAME FROM SYSIN. * * PUT DATA FROM THE TABLE INTO DD SYSREC01. * * WRITE RESULTS TO SYSPRINT. * * * * NOTES = * * DEPENDENCIES = IBM ENTERPRISE COBOL FOR Z/OS * * IS REQUIRED. * * * * RESTRICTIONS = * * THE SQLDA IS LIMITED TO 33016 BYTES. * * THIS SIZE ALLOWS FOR THE DB2 MAXIMUM * * OF 750 COLUMNS. * * * * DATA RECORDS ARE LIMITED TO 32700 BYTES, * * INCLUDING DATA, LENGTHS FOR VARCHAR DATA, * * AND SPACE FOR NULL INDICATORS. * * * * TABLE OR VIEW NAMES ARE ACCEPTED, AND ONLY * * ONE NAME IS ALLOWED PER RUN. * * * * MODULE TYPE = ENTERPRISE COBOL FOR Z/OS * * PROCESSOR = DB2 PRECOMPILER, COBOL COMPILER * * MODULE SIZE = SEE LINK EDIT * * ATTRIBUTES = REENTRANT * * * * ENTRY POINT = UNLDBCU2 * * PURPOSE = SEE FUNCTION * * LINKAGE = * * CALL UNLDBCU2 USING WORKAREA-IND RECWORK. * * * * INPUT = SYMBOLIC LABEL/NAME = WORKAREA-IND * * DESCRIPTION = INDICATOR VARIABLE ARRAY * * 01 WORKAREA-IND. * * 02 WORKIND PIC S9(4) COMP OCCURS 750 TIMES. *
* SYMBOLIC LABEL/NAME = RECWORK * DESCRIPTION = WORK AREA FOR OUTPUT RECORD * 01 RECWORK. * 02 RECWORK-LEN PIC S9(8) COMP. * 02 RECWORK-CHAR PIC X(1) OCCURS 32700 TIMES.* * SYMBOLIC LABEL/NAME = SYSIN * DESCRIPTION = INPUT REQUESTS - TABLE OR VIEW * * OUTPUT = SYMBOLIC LABEL/NAME = SYSPRINT * DESCRIPTION = PRINTED RESULTS * * SYMBOLIC LABEL/NAME = SYSREC01 * DESCRIPTION = UNLOADED TABLE DATA * * EXIT-NORMAL = RETURN CODE 0 NORMAL COMPLETION * EXIT-ERROR = * RETURN CODE = NONE * ABEND CODES = NONE * ERROR-MESSAGES = * DSNT490I SAMPLE COBOL DATA UNLOAD PROGRAM RELEASE 3.0* - THIS IS THE HEADER, INDICATING A NORMAL * - START FOR THIS PROGRAM. * DSNT493I SQL ERROR, SQLCODE = NNNNNNNN * - AN SQL ERROR OR WARNING WAS ENCOUNTERED * - ADDITIONAL INFORMATION FROM DSNTIAR * - FOLLOWS THIS MESSAGE. * DSNT495I SUCCESSFUL UNLOAD XXXXXXXX ROWS OF * TABLE TTTTTTTT * - THE UNLOAD WAS SUCCESSFUL. XXXXXXXX IS * - THE NUMBER OF ROWS UNLOADED. TTTTTTTT * - IS THE NAME OF THE TABLE OR VIEW FROM * - WHICH IT WAS UNLOADED. * DSNT496I UNRECOGNIZED DATA TYPE CODE OF NNNNN * - THE PREPARE RETURNED AN INVALID DATA * - TYPE CODE. NNNNN IS THE CODE, PRINTED * - IN DECIMAL. USUALLY AN ERROR IN * - THIS ROUTINE OR A NEW DATA TYPE. * DSNT497I RETURN CODE FROM MESSAGE ROUTINE DSNTIAR * - THE MESSAGE FORMATTING ROUTINE DETECTED * - AN ERROR. SEE THAT ROUTINE FOR RETURN * - CODE INFORMATION. USUALLY AN ERROR IN * - THIS ROUTINE. * DSNT498I ERROR, NO VALID COLUMNS FOUND * - THE PREPARE RETURNED DATA WHICH DID NOT * - PRODUCE A VALID OUTPUT RECORD. * - USUALLY AN ERROR IN THIS ROUTINE. * DSNT499I NO ROWS FOUND IN TABLE OR VIEW * - THE CHOSEN TABLE OR VIEWS DID NOT * - RETURN ANY ROWS. * ERROR MESSAGES FROM MODULE DSNTIAR * - WHEN AN ERROR OCCURS, THIS MODULE * - PRODUCES CORRESPONDING MESSAGES. * OTHER MESSAGES: * THE TABLE COULD NOT BE UNLOADED. EXITING. * * EXTERNAL REFERENCES = * ROUTINES/SERVICES = * DSNTIAR - TRANSLATE SQLCA INTO MESSAGES * DATA-AREAS = NONE * CONTROL-BLOCKS = * SQLCA - SQL COMMUNICATION AREA * * TABLES = NONE * CHANGE-ACTIVITY = NONE * *
* *PSEUDOCODE* * * PROCEDURE * * EXEC SQL DECLARE DT CURSOR FOR SEL END-EXEC. * * EXEC SQL DECLARE SEL STATEMENT END-EXEC. * * INITIALIZE THE DATA, OPEN FILES. * * OBTAIN STORAGE FOR THE SQLDA AND THE DATA RECORDS. * * READ A TABLE NAME. * * OPEN SYSREC01. * * BUILD THE SQL STATEMENT TO BE EXECUTED * * EXEC SQL PREPARE SQL STATEMENT INTO SQLDA END-EXEC. * * SET UP ADDRESSES IN THE SQLDA FOR DATA. * * INITIALIZE DATA RECORD COUNTER TO 0. * * EXEC SQL OPEN DT END-EXEC. * * DO WHILE SQLCODE IS 0. * * EXEC SQL FETCH DT USING DESCRIPTOR SQLDA END-EXEC. * * ADD IN MARKERS TO DENOTE NULLS. * * WRITE THE DATA TO SYSREC01. * * INCREMENT DATA RECORD COUNTER. * * END. * * EXEC SQL CLOSE DT END-EXEC. * * INDICATE THE RESULTS OF THE UNLOAD OPERATION. * * CLOSE THE SYSIN, SYSPRINT, AND SYSREC01 FILES. * * END. * *---------------------------------------------------------------* / IDENTIFICATION DIVISION. *----------------------PROGRAM-ID. UNLDBCU2 * ENVIRONMENT DIVISION. *-------------------CONFIGURATION SECTION. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT SYSIN ASSIGN TO DA-S-SYSIN. SELECT SYSPRINT ASSIGN TO UT-S-SYSPRINT. SELECT SYSREC01 ASSIGN TO DA-S-SYSREC01. * DATA DIVISION. *------------* FILE SECTION. FD SYSIN RECORD CONTAINS 80 CHARACTERS BLOCK CONTAINS 0 RECORDS LABEL RECORDS ARE OMITTED RECORDING MODE IS F. 01 CARDREC PIC X(80). * FD SYSPRINT RECORD CONTAINS 120 CHARACTERS LABEL RECORDS ARE OMITTED DATA RECORD IS MSGREC RECORDING MODE IS F. 01 MSGREC PIC X(120). * FD SYSREC01 RECORD CONTAINS 5 TO 32704 CHARACTERS LABEL RECORDS ARE OMITTED DATA RECORD IS REC01 RECORDING MODE IS V. 01 REC01. 02 REC01-LEN PIC S9(8) COMP. 02 REC01-CHAR PIC X(1) OCCURS 1 TO 32700 TIMES
DEPENDING ON REC01-LEN. / WORKING-STORAGE SECTION. * ***************************************************** * STRUCTURE FOR INPUT * ***************************************************** 01 IOAREA. 02 TNAME PIC X(72). 02 FILLER PIC X(08). 01 STMTBUF. 49 STMTLEN PIC S9(4) COMP VALUE 92. 49 STMTCHAR PIC X(92). 01 STMTBLD. 02 FILLER PIC X(20) VALUE SELECT * FROM. 02 STMTTAB PIC X(72). * ***************************************************** * REPORT HEADER STRUCTURE * ***************************************************** 01 HEADER. 02 FILLER PIC X(35) VALUE DSNT490I SAMPLE COBOL DATA UNLOAD . 02 FILLER PIC X(85) VALUE PROGRAM RELEASE 3.0. 01 MSG-SQLERR. 02 FILLER PIC X(31) VALUE DSNT493I SQL ERROR, SQLCODE = . 02 MSG-MINUS PIC X(1). 02 MSG-PRINT-CODE PIC 9(8). 02 FILLER PIC X(81) VALUE . 01 MSG-OTHER-ERR. 02 FILLER PIC X(42) VALUE THE TABLE COULD NOT BE UNLOADED. EXITING.. 02 FILLER PIC X(78) VALUE . 01 UNLOADED. 02 FILLER PIC X(28) VALUE DSNT495I SUCCESSFUL UNLOAD . 02 ROWS PIC 9(8). 02 FILLER PIC X(15) VALUE ROWS OF TABLE . 02 TABLENAM PIC X(72) VALUE . 01 BADTYPE. 02 FILLER PIC X(42) VALUE DSNT496I UNRECOGNIZED DATA TYPE CODE OF . 02 TYPCOD PIC 9(8). 02 FILLER PIC X(71) VALUE . 01 MSGRETCD. 02 FILLER PIC X(42) VALUE DSNT497I RETURN CODE FROM MESSAGE ROUTINE. 02 FILLER PIC X(9) VALUE DSNTIAR . 02 RETCODE PIC 9(8). 02 FILLER PIC X(62) VALUE . 01 MSGNOCOL. 02 FILLER PIC X(120) VALUE DSNT498I ERROR, NO VALID COLUMNS FOUND. 01 MSG-NOROW. 02 FILLER PIC X(120) VALUE DSNT499I NO ROWS FOUND IN TABLE OR VIEW. ***************************************************** * WORKAREAS * ***************************************************** 77 NOT-FOUND PIC S9(8) COMP VALUE +100. ***************************************************** * VARIABLES FOR ERROR-MESSAGE FORMATTING * ***************************************************** 01 ERROR-MESSAGE. 02 ERROR-LEN PIC S9(4) COMP VALUE +960. 02 ERROR-TEXT PIC X(120) OCCURS 8 TIMES
INDEXED BY ERROR-INDEX. 77 ERROR-TEXT-LEN PIC S9(8) COMP VALUE +120. ***************************************************** * SQL DESCRIPTOR AREA * ***************************************************** 01 SQLDA. 02 SQLDAID PIC X(8) VALUE SQLDA . 02 SQLDABC PIC S9(8) COMPUTATIONAL VALUE 33016. 02 SQLN PIC S9(4) COMPUTATIONAL VALUE 750. 02 SQLD PIC S9(4) COMPUTATIONAL VALUE 0. 02 SQLVAR OCCURS 1 TO 750 TIMES DEPENDING ON SQLN. 03 SQLTYPE PIC S9(4) COMPUTATIONAL. 03 SQLLEN PIC S9(4) COMPUTATIONAL. 03 SQLDATA POINTER. 03 SQLIND POINTER. 03 SQLNAME. 49 SQLNAMEL PIC S9(4) COMPUTATIONAL. 49 SQLNAMEC PIC X(30). * * DATA TYPES FOUND IN SQLTYPE, AFTER REMOVING THE NULL BIT * 77 VARCTYPE PIC S9(4) COMP VALUE +448. 77 CHARTYPE PIC S9(4) COMP VALUE +452. 77 VARLTYPE PIC S9(4) COMP VALUE +456. 77 VARGTYPE PIC S9(4) COMP VALUE +464. 77 GTYPE PIC S9(4) COMP VALUE +468. 77 LVARGTYP PIC S9(4) COMP VALUE +472. 77 FLOATYPE PIC S9(4) COMP VALUE +480. 77 DECTYPE PIC S9(4) COMP VALUE +484. 77 INTTYPE PIC S9(4) COMP VALUE +496. 77 HWTYPE PIC S9(4) COMP VALUE +500. 77 DATETYP PIC S9(4) COMP VALUE +384. 77 TIMETYP PIC S9(4) COMP VALUE +388. 77 TIMESTMP PIC S9(4) COMP VALUE +392. * 01 RECPTR POINTER. 01 RECNUM REDEFINES RECPTR PICTURE S9(8) COMPUTATIONAL. 01 IRECPTR POINTER. 01 IRECNUM REDEFINES IRECPTR PICTURE S9(8) COMPUTATIONAL. 01 I PICTURE S9(4) COMPUTATIONAL. 01 J PICTURE S9(4) COMPUTATIONAL. 01 DUMMY PICTURE S9(4) COMPUTATIONAL. 01 MYTYPE PICTURE S9(4) COMPUTATIONAL. 01 COLUMN-IND PICTURE S9(4) COMPUTATIONAL. 01 COLUMN-LEN PICTURE S9(4) COMPUTATIONAL. 01 COLUMN-PREC PICTURE S9(4) COMPUTATIONAL. 01 COLUMN-SCALE PICTURE S9(4) COMPUTATIONAL. 01 INDCOUNT PIC S9(4) COMPUTATIONAL. 01 ROWCOUNT PIC S9(4) COMPUTATIONAL. 01 ERR-FOUND PICTURE X(1). 01 WORKAREA2. 02 WORKINDPTR POINTER OCCURS 750 TIMES. ***************************************************** * DECLARE CURSOR AND STATEMENT FOR DYNAMIC SQL ***************************************************** * EXEC SQL DECLARE DT CURSOR FOR SEL END-EXEC. EXEC SQL DECLARE SEL STATEMENT END-EXEC. * ***************************************************** * SQL INCLUDE FOR SQLCA * ***************************************************** EXEC SQL INCLUDE SQLCA END-EXEC. * 77 ONE PIC S9(4) COMP VALUE +1. 77 TWO PIC S9(4) COMP VALUE +2.
77  FOUR            PIC S9(4) COMP VALUE +4.
77  QMARK           PIC X(1)  VALUE '?'.
*
LINKAGE SECTION. 01 LINKAREA-IND. 02 IND PIC S9(4) COMP OCCURS 750 TIMES. 01 LINKAREA-REC. 02 REC1-LEN PIC S9(8) COMP. 02 REC1-CHAR PIC X(1) OCCURS 1 TO 32700 TIMES DEPENDING ON REC1-LEN. 01 LINKAREA-QMARK. 02 INDREC PIC X(1). / PROCEDURE DIVISION USING LINKAREA-IND LINKAREA-REC. * ***************************************************** * SQL RETURN CODE HANDLING * ***************************************************** EXEC SQL WHENEVER SQLERROR GOTO DBERROR END-EXEC. EXEC SQL WHENEVER SQLWARNING GOTO DBERROR END-EXEC. EXEC SQL WHENEVER NOT FOUND CONTINUE END-EXEC. * ***************************************************** * MAIN PROGRAM ROUTINE * ***************************************************** SET IRECPTR TO ADDRESS OF REC1-CHAR(1). * **OPEN FILES MOVE N TO ERR-FOUND. * **INITIALIZE * ** ERROR FLAG OPEN INPUT SYSIN OUTPUT SYSPRINT OUTPUT SYSREC01. * WRITE MSGREC FROM HEADER AFTER ADVANCING 2 LINES. * READ SYSIN * * PROG-END. * CLOSE SYSIN SYSPRINT SYSREC01. GOBACK. / *************************************************************** * * * PERFORMED SECTION: * * PROCESSING FOR THE TABLE OR VIEW JUST READ * * * *************************************************************** PROCESS-INPUT. * MOVE TNAME TO STMTTAB. MOVE STMTBLD TO STMTCHAR. MOVE +750 TO SQLN. EXEC SQL PREPARE SEL INTO :SQLDA FROM :STMTBUF END-EXEC. *************************************************************** * * * SET UP ADDRESSES IN THE SQLDA FOR DATA. * * * *************************************************************** IF SQLD = ZERO THEN **CLOSE FILES RECORD INTO IOAREA. **GET FIRST INPUT **MAIN ROUTINE PERFORM PROCESS-INPUT THROUGH IND-RESULT. **WRITE HEADER
WRITE MSGREC FROM MSGNOCOL AFTER ADVANCING 2 LINES MOVE Y TO ERR-FOUND GO TO IND-RESULT. MOVE ZERO TO ROWCOUNT. MOVE ZERO TO REC1-LEN. SET RECPTR TO IRECPTR. MOVE ONE TO I. PERFORM COLADDR UNTIL I > SQLD. **************************************************************** * * * SET LENGTH OF OUTPUT RECORD. * * EXEC SQL OPEN DT END-EXEC. * * DO WHILE SQLCODE IS 0. * * EXEC SQL FETCH DT USING DESCRIPTOR :SQLDA END-EXEC. * * ADD IN MARKERS TO DENOTE NULLS. * * WRITE THE DATA TO SYSREC01. * * INCREMENT DATA RECORD COUNTER. * * END. * * * **************************************************************** * **OPEN CURSOR EXEC SQL OPEN DT END-EXEC. PERFORM BLANK-REC. EXEC SQL FETCH DT USING DESCRIPTOR :SQLDA END-EXEC. * **NO ROWS FOUND * **PRINT ERROR MESSAGE IF SQLCODE = NOT-FOUND WRITE MSGREC FROM MSG-NOROW AFTER ADVANCING 2 LINES MOVE Y TO ERR-FOUND ELSE * **WRITE ROW AND * **CONTINUE UNTIL * **NO MORE ROWS PERFORM WRITE-AND-FETCH UNTIL SQLCODE IS NOT EQUAL TO ZERO. * EXEC SQL WHENEVER NOT FOUND GOTO CLOSEDT END-EXEC. * CLOSEDT. EXEC SQL CLOSE DT END-EXEC. * **************************************************************** * * * INDICATE THE RESULTS OF THE UNLOAD OPERATION. * * * **************************************************************** IND-RESULT. IF ERR-FOUND = N THEN MOVE TNAME TO TABLENAM MOVE ROWCOUNT TO ROWS WRITE MSGREC FROM UNLOADED AFTER ADVANCING 2 LINES ELSE WRITE MSGREC FROM MSG-OTHER-ERR AFTER ADVANCING 2 LINES MOVE +0012 TO RETURN-CODE GO TO PROG-END. * WRITE-AND-FETCH. * ADD IN MARKERS TO DENOTE NULLS. MOVE ONE TO INDCOUNT. PERFORM NULLCHK UNTIL INDCOUNT = SQLD. MOVE REC1-LEN TO REC01-LEN. WRITE REC01 FROM LINKAREA-REC. ADD ONE TO ROWCOUNT.
PERFORM BLANK-REC. EXEC SQL FETCH DT USING DESCRIPTOR :SQLDA END-EXEC. * NULLCHK. IF IND(INDCOUNT) < 0 THEN SET ADDRESS OF LINKAREA-QMARK TO WORKINDPTR(INDCOUNT) MOVE QMARK TO INDREC. ADD ONE TO INDCOUNT. ***************************************************** * BLANK OUT RECORD TEXT FIRST * ***************************************************** BLANK-REC. MOVE ONE TO J. PERFORM BLANK-MORE UNTIL J > REC1-LEN. BLANK-MORE. MOVE TO REC1-CHAR(J). ADD ONE TO J. * COLADDR. SET SQLDATA(I) TO RECPTR. **************************************************************** * * DETERMINE THE LENGTH OF THIS COLUMN (COLUMN-LEN) * THIS DEPENDS UPON THE DATA TYPE. MOST DATA TYPES HAVE * THE LENGTH SET, BUT VARCHAR, GRAPHIC, VARGRAPHIC, AND * DECIMAL DATA NEED TO HAVE THE BYTES CALCULATED. * THE NULL ATTRIBUTE MUST BE SEPARATED TO SIMPLIFY MATTERS. * **************************************************************** MOVE SQLLEN(I) TO COLUMN-LEN. * COLUMN-IND IS 0 FOR NO NULLS AND 1 FOR NULLS DIVIDE SQLTYPE(I) BY TWO GIVING DUMMY REMAINDER COLUMN-IND. * MYTYPE IS JUST THE SQLTYPE WITHOUT THE NULL BIT MOVE SQLTYPE(I) TO MYTYPE. SUBTRACT COLUMN-IND FROM MYTYPE. * SET THE COLUMN LENGTH, DEPENDENT UPON DATA TYPE EVALUATE MYTYPE WHEN CHARTYPE CONTINUE, WHEN DATETYP CONTINUE, WHEN TIMETYP CONTINUE, WHEN TIMESTMP CONTINUE, WHEN FLOATYPE CONTINUE, WHEN VARCTYPE ADD TWO TO COLUMN-LEN, WHEN VARLTYPE ADD TWO TO COLUMN-LEN, WHEN GTYPE MULTIPLY COLUMN-LEN BY TWO GIVING COLUMN-LEN, WHEN VARGTYPE PERFORM CALC-VARG-LEN, WHEN LVARGTYP PERFORM CALC-VARG-LEN, WHEN HWTYPE MOVE TWO TO COLUMN-LEN, WHEN INTTYPE MOVE FOUR TO COLUMN-LEN, WHEN DECTYPE PERFORM CALC-DECIMAL-LEN, WHEN OTHER PERFORM UNRECOGNIZED-ERROR, END-EVALUATE. ADD COLUMN-LEN TO RECNUM. ADD COLUMN-LEN TO REC1-LEN. **************************************************************** * * * IF THIS COLUMN CAN BE NULL, AN INDICATOR VARIABLE IS * * NEEDED. WE ALSO RESERVE SPACE IN THE OUTPUT RECORD TO *
* NOTE THAT THE VALUE IS NULL. * * * **************************************************************** MOVE ZERO TO IND(I). IF COLUMN-IND = ONE THEN SET SQLIND(I) TO ADDRESS OF IND(I) SET WORKINDPTR(I) TO RECPTR ADD ONE TO RECNUM ADD ONE TO REC1-LEN. * ADD ONE TO I. * PERFORMED PARAGRAPH TO CALCULATE COLUMN LENGTH * FOR A DECIMAL DATA TYPE COLUMN CALC-DECIMAL-LEN. DIVIDE COLUMN-LEN BY 256 GIVING COLUMN-PREC REMAINDER COLUMN-SCALE. MOVE COLUMN-PREC TO COLUMN-LEN. ADD ONE TO COLUMN-LEN. DIVIDE COLUMN-LEN BY TWO GIVING COLUMN-LEN. * PERFORMED PARAGRAPH TO CALCULATE COLUMN LENGTH * FOR A VARGRAPHIC DATA TYPE COLUMN CALC-VARG-LEN. MULTIPLY COLUMN-LEN BY TWO GIVING COLUMN-LEN. ADD TWO TO COLUMN-LEN. * PERFORMED PARAGRAPH TO NOTE AN UNRECOGNIZED * DATA TYPE COLUMN UNRECOGNIZED-ERROR. * * ERROR MESSAGE FOR UNRECOGNIZED DATA TYPE * MOVE SQLTYPE(I) TO TYPCOD MOVE Y TO ERR-FOUND WRITE MSGREC FROM BADTYPE AFTER ADVANCING 2 LINES GO TO IND-RESULT. * ***************************************************** * SQL ERROR OCCURRED - GET MESSAGE * ***************************************************** DBERROR. * **SQL ERROR MOVE Y TO ERR-FOUND. MOVE SQLCODE TO MSG-PRINT-CODE. IF SQLCODE < 0 THEN MOVE - TO MSG-MINUS. WRITE MSGREC FROM MSG-SQLERR AFTER ADVANCING 2 LINES. CALL DSNTIAR USING SQLCA ERROR-MESSAGE ERROR-TEXT-LEN. IF RETURN-CODE = ZERO PERFORM ERROR-PRINT VARYING ERROR-INDEX FROM 1 BY 1 UNTIL ERROR-INDEX GREATER THAN 8 ELSE * **ERROR FOUND IN DSNTIAR * **PRINT ERROR MESSAGE MOVE RETURN-CODE TO RETCODE WRITE MSGREC FROM MSGRETCD AFTER ADVANCING 2 LINES. GO TO IND-RESULT. * ***************************************************** * PRINT MESSAGE TEXT * ***************************************************** ERROR-PRINT. WRITE MSGREC FROM ERROR-TEXT (ERROR-INDEX) AFTER ADVANCING 1 LINE.
* PSEUDOCODE * * * * MAINLINE. * * Perform CONNECT-TO-SITE-1 to establish * * a connection to the local connection. * * If the previous operation was successful Then * * Do. * * | Perform PROCESS-CURSOR-SITE-1 to obtain the * * | information about an employee that is * * | transferring to another location. * * | If the information about the employee was obtained * * | successfully Then * * | Do. * * | | Perform UPDATE-ADDRESS to update the information * * | | to contain current information about the * * | | employee. * * | | Perform CONNECT-TO-SITE-2 to establish * * | | a connection to the site where the employee is * * | | transferring to. * * | | If the connection is established successfully * * | | Then * * | | Do. * * | | | Perform PROCESS-SITE-2 to insert the * * | | | employee information at the location * * | | | where the employee is transferring to. * * | | End if the connection was established * * | | successfully. * * | End if the employee information was obtained * * | successfully. * * End if the previous operation was successful. * * Perform COMMIT-WORK to COMMIT the changes made to STLEC1 * * and STLEC2. * * * * PROG-END. * * Close the printer. * * Return. * * * * CONNECT-TO-SITE-1. * * Provide a text description of the following step. * * Establish a connection to the location where the * * employee is transferring from. * * Print the SQLCA out. * * * * PROCESS-CURSOR-SITE-1. * * Provide a text description of the following step. * * Open a cursor that will be used to retrieve information * * about the transferring employee from this site. * * Print the SQLCA out. * * If the cursor was opened successfully Then * * Do. * * | Perform FETCH-DELETE-SITE-1 to retrieve and * * | delete the information about the transferring * * | employee from this site. * * | Perform CLOSE-CURSOR-SITE-1 to close the cursor. * * End if the cursor was opened successfully. * * * * * * * * * * FETCH-DELETE-SITE-1. Provide a text description of the following step. Fetch information about the transferring employee. Print the SQLCA out. If the information was retrieved successfully Then Do. | Perform DELETE-SITE-1 to delete the employee * * * * * * *
* | at this site. * * End if the information was retrieved successfully. * * * * DELETE-SITE-1. * * Provide a text description of the following step. * * Delete the information about the transferring employee * * from this site. * * Print the SQLCA out. * * * * CLOSE-CURSOR-SITE-1. * * Provide a text description of the following step. * * Close the cursor used to retrieve information about * * the transferring employee. * * Print the SQLCA out. * * * * UPDATE-ADDRESS. * * Update the address of the employee. * * Update the city of the employee. * * Update the location of the employee. * * * * CONNECT-TO-SITE-2. * * Provide a text description of the following step. * * Establish a connection to the location where the * * employee is transferring to. * * Print the SQLCA out. * * * * PROCESS-SITE-2. * * Provide a text description of the following step. * * Insert the employee information at the location where * * the employee is being transferred to. * * Print the SQLCA out. * * * * COMMIT-WORK. * * COMMIT all the changes made to STLEC1 and STLEC2. * * * ***************************************************************** ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT PRINTER, ASSIGN TO S-OUT1. DATA DIVISION. FILE SECTION. FD PRINTER RECORD CONTAINS 120 CHARACTERS DATA RECORD IS PRT-TC-RESULTS LABEL RECORD IS OMITTED. 01 PRT-TC-RESULTS. 03 PRT-BLANK PIC X(120). WORKING-STORAGE SECTION. ***************************************************************** * Variable declarations * ***************************************************************** 01 H-EMPTBL. 05 H-EMPNO PIC X(6). 05 H-NAME. 49 H-NAME-LN PIC S9(4) COMP-4. 49 H-NAME-DA PIC X(32). 05 H-ADDRESS. 49 H-ADDRESS-LN PIC S9(4) COMP-4. 49 H-ADDRESS-DA PIC X(36). 05 H-CITY. 49 H-CITY-LN PIC S9(4) COMP-4.
        49 H-CITY-DA    PIC X(36).
     05 H-EMPLOC        PIC X(4).
     05 H-SSNO          PIC X(11).
     05 H-BORN          PIC X(10).
     05 H-SEX           PIC X(1).
     05 H-HIRED         PIC X(10).
     05 H-DEPTNO        PIC X(3).
     05 H-JOBCODE       PIC S9(3)V COMP-3.
     05 H-SRATE         PIC S9(5) COMP.
     05 H-EDUC          PIC S9(5) COMP.
     05 H-SAL           PIC S9(6)V9(2) COMP-3.
     05 H-VALIDCHK      PIC S9(6)V COMP-3.
 01  H-EMPTBL-IND-TABLE.
     02 H-EMPTBL-IND    PIC S9(4) COMP OCCURS 15 TIMES.
*****************************************************************
* Includes for the variables used in the COBOL standard        *
* language procedures and the SQLCA.                           *
*****************************************************************
     EXEC SQL INCLUDE COBSVAR END-EXEC.
     EXEC SQL INCLUDE SQLCA END-EXEC.
*****************************************************************
* Declaration for the table that contains employee information *
*****************************************************************
     EXEC SQL DECLARE SYSADM.EMP TABLE
              (EMPNO    CHAR(6)      NOT NULL,
               NAME     VARCHAR(32),
               ADDRESS  VARCHAR(36),
               CITY     VARCHAR(36),
               EMPLOC   CHAR(4)      NOT NULL,
               SSNO     CHAR(11),
               BORN     DATE,
               SEX      CHAR(1),
               HIRED    CHAR(10),
               DEPTNO   CHAR(3)      NOT NULL,
               JOBCODE  DECIMAL(3),
               SRATE    SMALLINT,
               EDUC     SMALLINT,
               SAL      DECIMAL(8,2) NOT NULL,
               VALCHK   DECIMAL(6))
     END-EXEC.
***************************************************************** * Constants * ***************************************************************** 77 77 77 77 77 SITE-1 SITE-2 TEMP-EMPNO TEMP-ADDRESS-LN TEMP-CITY-LN PIC PIC PIC PIC PIC X(16) X(16) X(6) 99 99 VALUE VALUE VALUE VALUE VALUE STLEC1. STLEC2. 080000. 15. 18.
***************************************************************** * Declaration of the cursor that will be used to retrieve * * information about a transferring employee * ***************************************************************** EXEC SQL DECLARE C1 CURSOR FOR SELECT EMPNO, NAME, ADDRESS, CITY, EMPLOC, SSNO, BORN, SEX, HIRED, DEPTNO, JOBCODE, SRATE, EDUC, SAL, VALCHK FROM SYSADM.EMP WHERE EMPNO = :TEMP-EMPNO
END-EXEC. PROCEDURE DIVISION. A101-HOUSE-KEEPING. OPEN OUTPUT PRINTER. ***************************************************************** * An employee is transferring from location STLEC1 to STLEC2. * * Retrieve information about the employee from STLEC1, delete * * the employee from STLEC1 and insert the employee at STLEC2 * * using the information obtained from STLEC1. * ***************************************************************** MAINLINE. PERFORM CONNECT-TO-SITE-1 IF SQLCODE IS EQUAL TO 0 PERFORM PROCESS-CURSOR-SITE-1 IF SQLCODE IS EQUAL TO 0 PERFORM UPDATE-ADDRESS PERFORM CONNECT-TO-SITE-2 IF SQLCODE IS EQUAL TO 0 PERFORM PROCESS-SITE-2. PERFORM COMMIT-WORK. PROG-END. CLOSE PRINTER. GOBACK. ***************************************************************** * Establish a connection to STLEC1 * ***************************************************************** CONNECT-TO-SITE-1. MOVE CONNECT TO STLEC1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL CONNECT TO :SITE-1 END-EXEC. PERFORM PTSQLCA. ***************************************************************** * When a connection has been established successfully at STLEC1,* * open the cursor that will be used to retrieve information * * about the transferring employee. * ***************************************************************** PROCESS-CURSOR-SITE-1. MOVE OPEN CURSOR C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL OPEN C1 END-EXEC. PERFORM PTSQLCA. IF SQLCODE IS EQUAL TO ZERO PERFORM FETCH-DELETE-SITE-1 PERFORM CLOSE-CURSOR-SITE-1. ***************************************************************** * Retrieve information about the transferring employee. * * Provided that the employee exists, perform DELETE-SITE-1 to * * delete the employee from STLEC1. * ***************************************************************** FETCH-DELETE-SITE-1. MOVE FETCH C1 TO STNAME
WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL FETCH C1 INTO :H-EMPTBL:H-EMPTBL-IND END-EXEC. PERFORM PTSQLCA. IF SQLCODE IS EQUAL TO ZERO PERFORM DELETE-SITE-1. ***************************************************************** * Delete the employee from STLEC1. * ***************************************************************** DELETE-SITE-1. MOVE DELETE EMPLOYEE TO STNAME WRITE PRT-TC-RESULTS FROM STNAME MOVE DELETE EMPLOYEE TO STNAME EXEC SQL DELETE FROM SYSADM.EMP WHERE EMPNO = :TEMP-EMPNO END-EXEC. PERFORM PTSQLCA. ***************************************************************** * Close the cursor used to retrieve information about the * * transferring employee. * ***************************************************************** CLOSE-CURSOR-SITE-1. MOVE CLOSE CURSOR C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL CLOSE C1 END-EXEC. PERFORM PTSQLCA. ***************************************************************** * Update certain employee information in order to make it * * current. * ***************************************************************** UPDATE-ADDRESS. MOVE TEMP-ADDRESS-LN MOVE 1500 NEW STREET MOVE TEMP-CITY-LN MOVE NEW CITY, CA 97804 MOVE SJCA TO TO TO TO TO H-ADDRESS-LN. H-ADDRESS-DA. H-CITY-LN. H-CITY-DA. H-EMPLOC.
***************************************************************** * Establish a connection to STLEC2 * ***************************************************************** CONNECT-TO-SITE-2. MOVE CONNECT TO STLEC2 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL CONNECT TO :SITE-2 END-EXEC. PERFORM PTSQLCA. ***************************************************************** * Using the employee information that was retrieved from STLEC1 * * and updated previously, insert the employee at STLEC2. ***************************************************************** PROCESS-SITE-2.
MOVE INSERT EMPLOYEE TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL INSERT INTO SYSADM.EMP VALUES (:H-EMPNO, :H-NAME, :H-ADDRESS, :H-CITY, :H-EMPLOC, :H-SSNO, :H-BORN, :H-SEX, :H-HIRED, :H-DEPTNO, :H-JOBCODE, :H-SRATE, :H-EDUC, :H-SAL, :H-VALIDCHK) END-EXEC. PERFORM PTSQLCA. ***************************************************************** * COMMIT any changes that were made at STLEC1 and STLEC2. * ***************************************************************** COMMIT-WORK. MOVE COMMIT WORK TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL COMMIT END-EXEC. PERFORM PTSQLCA. ***************************************************************** * Include COBOL standard language procedures * ***************************************************************** INCLUDE-SUBS. EXEC SQL INCLUDE COBSSUB END-EXEC.
FUNCTION = THIS MODULE DEMONSTRATES DISTRIBUTED DATA ACCESS USING 2 PHASE COMMIT BY TRANSFERRING AN EMPLOYEE FROM ONE LOCATION TO ANOTHER.
* * * * NOTE: THIS PROGRAM ASSUMES THE EXISTENCE OF THE * TABLE SYSADM.ALLEMPLOYEES AT LOCATIONS STLEC1 AND STLEC2. * * MODULE TYPE = COBOL PROGRAM * PROCESSOR = DB2 PRECOMPILER, ENTERPRISE COBOL FOR Z/OS * MODULE SIZE = SEE LINK EDIT * ATTRIBUTES = NOT REENTRANT OR REUSABLE * * ENTRY POINT = * PURPOSE = TO ILLUSTRATE 2 PHASE COMMIT * LINKAGE = INVOKE FROM DSN RUN * INPUT = NONE * OUTPUT = * SYMBOLIC LABEL/NAME = SYSPRINT * DESCRIPTION = PRINT OUT THE DESCRIPTION OF EACH * STEP AND THE RESULTANT SQLCA * * EXIT NORMAL = RETURN CODE 0 FROM NORMAL COMPLETION * * EXIT ERROR = NONE * * EXTERNAL REFERENCES = * ROUTINE SERVICES = NONE * DATA-AREAS = NONE * CONTROL-BLOCKS = * SQLCA SQL COMMUNICATION AREA * * TABLES = NONE * * CHANGE-ACTIVITY = NONE * * * * PSEUDOCODE * * MAINLINE. * Perform PROCESS-CURSOR-SITE-1 to obtain the information * about an employee that is transferring to another * location. * If the information about the employee was obtained * successfully Then * Do. * | Perform UPDATE-ADDRESS to update the information to * | contain current information about the employee. * | Perform PROCESS-SITE-2 to insert the employee * | information at the location where the employee is * | transferring to. * End if the employee information was obtained * successfully. * Perform COMMIT-WORK to COMMIT the changes made to STLEC1 * and STLEC2. * * PROG-END. * Close the printer. * Return. * * PROCESS-CURSOR-SITE-1. * Provide a text description of the following step. * Open a cursor that will be used to retrieve information * about the transferring employee from this site. * Print the SQLCA out. * If the cursor was opened successfully Then *
* Do. * * | Perform FETCH-DELETE-SITE-1 to retrieve and * * | delete the information about the transferring * * | employee from this site. * * | Perform CLOSE-CURSOR-SITE-1 to close the cursor. * * End if the cursor was opened successfully. * * * * FETCH-DELETE-SITE-1. * * Provide a text description of the following step. * * Fetch information about the transferring employee. * * Print the SQLCA out. * * If the information was retrieved successfully Then * * Do. * * | Perform DELETE-SITE-1 to delete the employee * * | at this site. * * End if the information was retrieved successfully. * * * * DELETE-SITE-1. * * Provide a text description of the following step. * * Delete the information about the transferring employee * * from this site. * * Print the SQLCA out. * * * * CLOSE-CURSOR-SITE-1. * * Provide a text description of the following step. * * Close the cursor used to retrieve information about * * the transferring employee. * * Print the SQLCA out. * * * * UPDATE-ADDRESS. * * Update the address of the employee. * * Update the city of the employee. * * Update the location of the employee. * * * * PROCESS-SITE-2. * * Provide a text description of the following step. * * Insert the employee information at the location where * * the employee is being transferred to. * * Print the SQLCA out. * * * * COMMIT-WORK. * * COMMIT all the changes made to STLEC1 and STLEC2. * * * ***************************************************************** ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT PRINTER, ASSIGN TO S-OUT1. DATA DIVISION. FILE SECTION. FD PRINTER RECORD CONTAINS 120 CHARACTERS DATA RECORD IS PRT-TC-RESULTS LABEL RECORD IS OMITTED. 01 PRT-TC-RESULTS. 03 PRT-BLANK PIC X(120). WORKING-STORAGE SECTION. ***************************************************************** * Variable declarations * ***************************************************************** 01 H-EMPTBL.
H-EMPNO PIC X(6). H-NAME. 49 H-NAME-LN PIC S9(4) COMP-4. 49 H-NAME-DA PIC X(32). 05 H-ADDRESS. 49 H-ADDRESS-LN PIC S9(4) COMP-4. 49 H-ADDRESS-DA PIC X(36). 05 H-CITY. 49 H-CITY-LN PIC S9(4) COMP-4. 49 H-CITY-DA PIC X(36). 05 H-EMPLOC PIC X(4). 05 H-SSNO PIC X(11). 05 H-BORN PIC X(10). 05 H-SEX PIC X(1). 05 H-HIRED PIC X(10). 05 H-DEPTNO PIC X(3). 05 H-JOBCODE PIC S9(3)V COMP-3. 05 H-SRATE PIC S9(5) COMP. 05 H-EDUC PIC S9(5) COMP. 05 H-SAL PIC S9(6)V9(2) COMP-3. 05 H-VALIDCHK PIC S9(6)V COMP-3. 01 H-EMPTBL-IND-TABLE. 02 H-EMPTBL-IND PIC S9(4) COMP OCCURS 15 TIMES. ***************************************************************** * Includes for the variables used in the COBOL standard * * language procedures and the SQLCA. * ***************************************************************** EXEC SQL INCLUDE COBSVAR END-EXEC. EXEC SQL INCLUDE SQLCA END-EXEC. ***************************************************************** * Declaration for the table that contains employee information * ***************************************************************** EXEC SQL DECLARE SYSADM.ALLEMPLOYEES TABLE (EMPNO CHAR(6) NOT NULL, NAME VARCHAR(32), ADDRESS VARCHAR(36) , CITY VARCHAR(36) , EMPLOC CHAR(4) NOT NULL, SSNO CHAR(11), BORN DATE, SEX CHAR(1), HIRED CHAR(10), DEPTNO CHAR(3) NOT NULL, JOBCODE DECIMAL(3), SRATE SMALLINT, EDUC SMALLINT, SAL DECIMAL(8,2) NOT NULL, VALCHK DECIMAL(6)) END-EXEC. ***************************************************************** * Constants * ***************************************************************** 77 77 77 TEMP-EMPNO TEMP-ADDRESS-LN TEMP-CITY-LN PIC X(6) VALUE 080000. PIC 99 VALUE 15. PIC 99 VALUE 18.
***************************************************************** * Declaration of the cursor that will be used to retrieve * * information about a transferring employee * *****************************************************************
EXEC SQL DECLARE C1 CURSOR FOR SELECT EMPNO, NAME, ADDRESS, CITY, EMPLOC, SSNO, BORN, SEX, HIRED, DEPTNO, JOBCODE, SRATE, EDUC, SAL, VALCHK FROM STLEC1.SYSADM.ALLEMPLOYEES WHERE EMPNO = :TEMP-EMPNO END-EXEC. PROCEDURE DIVISION. A101-HOUSE-KEEPING. OPEN OUTPUT PRINTER. ***************************************************************** * An employee is transferring from location STLEC1 to STLEC2. * * Retrieve information about the employee from STLEC1, delete * * the employee from STLEC1 and insert the employee at STLEC2 * * using the information obtained from STLEC1. * ***************************************************************** MAINLINE. PERFORM PROCESS-CURSOR-SITE-1 IF SQLCODE IS EQUAL TO 0 PERFORM UPDATE-ADDRESS PERFORM PROCESS-SITE-2. PERFORM COMMIT-WORK. PROG-END. CLOSE PRINTER. GOBACK. ***************************************************************** * Open the cursor that will be used to retrieve information * * about the transferring employee. * ***************************************************************** PROCESS-CURSOR-SITE-1. MOVE OPEN CURSOR C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL OPEN C1 END-EXEC. PERFORM PTSQLCA. IF SQLCODE IS EQUAL TO ZERO PERFORM FETCH-DELETE-SITE-1 PERFORM CLOSE-CURSOR-SITE-1. ***************************************************************** * Retrieve information about the transferring employee. * * Provided that the employee exists, perform DELETE-SITE-1 to * * delete the employee from STLEC1. * ***************************************************************** FETCH-DELETE-SITE-1. MOVE FETCH C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL FETCH C1 INTO :H-EMPTBL:H-EMPTBL-IND END-EXEC. PERFORM PTSQLCA. IF SQLCODE IS EQUAL TO ZERO PERFORM DELETE-SITE-1. ***************************************************************** * Delete the employee from STLEC1. * ***************************************************************** DELETE-SITE-1.
MOVE DELETE EMPLOYEE TO STNAME WRITE PRT-TC-RESULTS FROM STNAME MOVE DELETE EMPLOYEE TO STNAME EXEC SQL DELETE FROM STLEC1.SYSADM.ALLEMPLOYEES WHERE EMPNO = :TEMP-EMPNO END-EXEC. PERFORM PTSQLCA. ***************************************************************** * Close the cursor used to retrieve information about the * * transferring employee. * ***************************************************************** CLOSE-CURSOR-SITE-1. MOVE CLOSE CURSOR C1 TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL CLOSE C1 END-EXEC. PERFORM PTSQLCA. ***************************************************************** * Update certain employee information in order to make it * * current. * ***************************************************************** UPDATE-ADDRESS. MOVE TEMP-ADDRESS-LN TO H-ADDRESS-LN. MOVE 1500 NEW STREET TO H-ADDRESS-DA. MOVE TEMP-CITY-LN TO H-CITY-LN. MOVE NEW CITY, CA 97804 TO H-CITY-DA. MOVE SJCA TO H-EMPLOC. **************************************************************** * Using the employee information that was retrieved from STLEC1 * * and updated previously, insert the employee at STLEC2. * ***************************************************************** PROCESS-SITE-2. MOVE INSERT EMPLOYEE TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL INSERT INTO STLEC2.SYSADM.ALLEMPLOYEES VALUES (:H-EMPNO, :H-NAME, :H-ADDRESS, :H-CITY, :H-EMPLOC, :H-SSNO, :H-BORN, :H-SEX, :H-HIRED, :H-DEPTNO, :H-JOBCODE, :H-SRATE, :H-EDUC, :H-SAL, :H-VALIDCHK) END-EXEC. PERFORM PTSQLCA. ***************************************************************** * COMMIT any changes that were made at STLEC1 and STLEC2. * *****************************************************************
COMMIT-WORK. MOVE COMMIT WORK TO STNAME WRITE PRT-TC-RESULTS FROM STNAME EXEC SQL COMMIT END-EXEC. PERFORM PTSQLCA. ***************************************************************** * Include COBOL standard language procedures * ***************************************************************** INCLUDE-SUBS. EXEC SQL INCLUDE COBSSUB END-EXEC.
Example COBOL stored procedure with a GENERAL WITH NULLS linkage convention
You can call a stored procedure that uses the GENERAL WITH NULLS linkage convention from a COBOL program. This example stored procedure does the following:
v Searches the DB2 SYSIBM.SYSROUTINES catalog table for a row that matches the input parameters from the client program. The two input parameters contain values for NAME and SCHEMA.
v Searches the DB2 catalog table SYSTABLES for all tables in which the value of CREATOR matches the value of input parameter SCHEMA. The stored procedure uses a cursor to return the table names.
The linkage convention for this stored procedure is GENERAL WITH NULLS. The output parameters from this stored procedure contain the SQLCODE from the SELECT operation and the value of the RUNOPTS column that is retrieved from the SYSIBM.SYSROUTINES table.
The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(IN PROCNM CHAR(18),
                         IN SCHEMA CHAR(8),
                         OUT OUTCODE INTEGER,
                         OUT PARMLST VARCHAR(254))
       LANGUAGE COBOL
       DETERMINISTIC
       READS SQL DATA
       EXTERNAL NAME 'GETPRML'
       COLLID GETPRML
       ASUTIME NO LIMIT
       PARAMETER STYLE GENERAL WITH NULLS
       STAY RESIDENT NO
       RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
       WLM ENVIRONMENT SAMPPROG
       PROGRAM TYPE MAIN
       SECURITY DB2
       RESULT SETS 2
       COMMIT ON RETURN NO;
The following example is a COBOL stored procedure with linkage convention GENERAL WITH NULLS.
 CBL RENT
 IDENTIFICATION DIVISION.
 PROGRAM-ID. GETPRML.
 AUTHOR. EXAMPLE.
 DATE-WRITTEN. 03/25/98.
ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. FILE-CONTROL. DATA DIVISION. FILE SECTION. * WORKING-STORAGE SECTION. * EXEC SQL INCLUDE SQLCA END-EXEC. * *************************************************** * DECLARE A HOST VARIABLE TO HOLD INPUT SCHEMA *************************************************** 01 INSCHEMA PIC X(8). *************************************************** * DECLARE CURSOR FOR RETURNING RESULT SETS *************************************************** * EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA END-EXEC. * LINKAGE SECTION. *************************************************** * DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE *************************************************** 01 PROCNM PIC X(18). 01 SCHEMA PIC X(8). *************************************************** * DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE *************************************************** 01 OUT-CODE PIC S9(9) USAGE BINARY. 01 PARMLST. 49 PARMLST-LEN PIC S9(4) USAGE BINARY. 49 PARMLST-TEXT PIC X(254). *************************************************** * DECLARE THE STRUCTURE CONTAINING THE NULL * INDICATORS FOR THE INPUT AND OUTPUT PARAMETERS. *************************************************** 01 IND-PARM. 03 PROCNM-IND PIC S9(4) USAGE BINARY. 03 SCHEMA-IND PIC S9(4) USAGE BINARY. 03 OUT-CODE-IND PIC S9(4) USAGE BINARY. 03 PARMLST-IND PIC S9(4) USAGE BINARY. PROCEDURE DIVISION USING PROCNM, SCHEMA, OUT-CODE, PARMLST, IND-PARM. ******************************************************* * If any input parameter is null, return a null value * for PARMLST and set the output return code to 9999. ******************************************************* IF PROCNM-IND < 0 OR SCHEMA-IND < 0 MOVE 9999 TO OUT-CODE MOVE 0 TO OUT-CODE-IND MOVE -1 TO PARMLST-IND ELSE ******************************************************* * Issue the SQL SELECT against the SYSIBM.SYSROUTINES * DB2 catalog table. ******************************************************* EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.SYSROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA
END-EXEC MOVE 0 TO PARMLST-IND ******************************************************* * COPY SQLCODE INTO THE OUTPUT PARAMETER AREA ******************************************************* MOVE SQLCODE TO OUT-CODE MOVE 0 TO OUT-CODE-IND. * ******************************************************* * OPEN CURSOR C1 TO CAUSE DB2 TO RETURN A RESULT SET * TO THE CALLER. ******************************************************* EXEC SQL OPEN C1 END-EXEC. PROG-END. GOBACK.
The following example is the same GETPRML stored procedure coded with the GENERAL linkage convention. Because parameters that are passed with this convention cannot be null, the program contains no indicator variable structure.
ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. FILE-CONTROL. DATA DIVISION. FILE SECTION. WORKING-STORAGE SECTION. EXEC SQL INCLUDE SQLCA END-EXEC. *************************************************** * DECLARE A HOST VARIABLE TO HOLD INPUT SCHEMA *************************************************** 01 INSCHEMA PIC X(8). *************************************************** * DECLARE CURSOR FOR RETURNING RESULT SETS *************************************************** * EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR=:INSCHEMA END-EXEC. * LINKAGE SECTION. *************************************************** * DECLARE THE INPUT PARAMETERS FOR THE PROCEDURE *************************************************** 01 PROCNM PIC X(18). 01 SCHEMA PIC X(8). ******************************************************* * DECLARE THE OUTPUT PARAMETERS FOR THE PROCEDURE ******************************************************* 01 OUT-CODE PIC S9(9) USAGE BINARY. 01 PARMLST. 49 PARMLST-LEN PIC S9(4) USAGE BINARY. 49 PARMLST-TEXT PIC X(254). PROCEDURE DIVISION USING PROCNM, SCHEMA, OUT-CODE, PARMLST. ******************************************************* * Issue the SQL SELECT against the SYSIBM.SYSROUTINES * DB2 catalog table. ******************************************************* EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.ROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA END-EXEC. ******************************************************* * COPY SQLCODE INTO THE OUTPUT PARAMETER AREA ******************************************************* MOVE SQLCODE TO OUT-CODE. ******************************************************* * OPEN CURSOR C1 TO CAUSE DB2 TO RETURN A RESULT SET * TO THE CALLER. ******************************************************* EXEC SQL OPEN C1 END-EXEC. PROG-END. GOBACK.
The following example COBOL program calls the GETPRML stored procedure. Because the stored procedure returns result sets, the calling program checks for result sets and retrieves their contents.
IDENTIFICATION DIVISION. PROGRAM-ID. CALPRML. ENVIRONMENT DIVISION. CONFIGURATION SECTION. INPUT-OUTPUT SECTION. FILE-CONTROL. SELECT REPOUT ASSIGN TO UT-S-SYSPRINT. DATA DIVISION. FILE SECTION. FD REPOUT RECORD CONTAINS 127 CHARACTERS LABEL RECORDS ARE OMITTED DATA RECORD IS REPREC. 01 REPREC PIC X(127). WORKING-STORAGE SECTION. ***************************************************** * MESSAGES FOR SQL CALL * ***************************************************** 01 SQLREC. 02 BADMSG PIC X(34) VALUE SQL CALL FAILED DUE TO SQLCODE = . 02 BADCODE PIC +9(5) USAGE DISPLAY. 02 FILLER PIC X(80) VALUE SPACES. 01 ERRMREC. 02 ERRMMSG PIC X(12) VALUE SQLERRMC = . 02 ERRMCODE PIC X(70). 02 FILLER PIC X(38) VALUE SPACES. 01 CALLREC. 02 CALLMSG PIC X(28) VALUE GETPRML FAILED DUE TO RC = . 02 CALLCODE PIC +9(5) USAGE DISPLAY. 02 FILLER PIC X(42) VALUE SPACES. 01 RSLTREC. 02 RSLTMSG PIC X(15) VALUE TABLE NAME IS . 02 TBLNAME PIC X(18) VALUE SPACES. 02 FILLER PIC X(87) VALUE SPACES. ***************************************************** * WORK AREAS * ***************************************************** 01 PROCNM PIC X(18). 01 SCHEMA PIC X(8). 01 OUT-CODE PIC S9(9) USAGE COMP. 01 PARMLST. 49 PARMLEN PIC S9(4) USAGE COMP. 49 PARMTXT PIC X(254). 01 PARMBUF REDEFINES PARMLST. 49 PARBLEN PIC S9(4) USAGE COMP. 49 PARMARRY PIC X(127) OCCURS 2 TIMES. 01 NAME. 49 NAMELEN PIC S9(4) USAGE COMP. 49 NAMETXT PIC X(18). 77 PARMIND PIC S9(4) COMP. 77 I PIC S9(4) COMP. 77 NUMLINES PIC S9(4) COMP. ***************************************************** * DECLARE A RESULT SET LOCATOR FOR THE RESULT SET * * THAT IS RETURNED. * *****************************************************
01  LOC USAGE IS SQL TYPE IS RESULT-SET-LOCATOR VARYING.
***************************************************** * SQL INCLUDE FOR SQLCA * ***************************************************** EXEC SQL INCLUDE SQLCA END-EXEC. PROCEDURE DIVISION. *-----------------PROG-START. OPEN OUTPUT REPOUT. * OPEN OUTPUT FILE MOVE DSN8EP2 TO PROCNM. * INPUT PARAMETER -- PROCEDURE TO BE FOUND MOVE SPACES TO SCHEMA. * INPUT PARAMETER -- SCHEMA IN SYSROUTINES MOVE -1 TO PARMIND. * THE PARMLST PARAMETER IS AN OUTPUT PARM. * MARK PARMLST PARAMETER AS NULL, SO THE DB2 * REQUESTER DOES NOT HAVE TO SEND THE ENTIRE * PARMLST VARIABLE TO THE SERVER. THIS * HELPS REDUCE NETWORK I/O TIME, BECAUSE * PARMLST IS FAIRLY LARGE. EXEC SQL CALL GETPRML(:PROCNM, :SCHEMA, :OUT-CODE, :PARMLST INDICATOR :PARMIND) END-EXEC. * MAKE THE CALL IF SQLCODE NOT EQUAL TO +466 THEN * IF CALL RETURNED BAD SQLCODE MOVE SQLCODE TO BADCODE WRITE REPREC FROM SQLREC MOVE SQLERRMC TO ERRMCODE WRITE REPREC FROM ERRMREC ELSE PERFORM GET-PARMS PERFORM GET-RESULT-SET. PROG-END. CLOSE REPOUT. * CLOSE OUTPUT FILE GOBACK. PARMPRT. MOVE SPACES TO REPREC. WRITE REPREC FROM PARMARRY(I) AFTER ADVANCING 1 LINE. GET-PARMS. * IF THE CALL WORKED, IF OUT-CODE NOT EQUAL TO 0 THEN * DID GETPRML HIT AN ERROR? MOVE OUT-CODE TO CALLCODE WRITE REPREC FROM CALLREC ELSE * EVERYTHING WORKED DIVIDE 127 INTO PARMLEN GIVING NUMLINES ROUNDED * FIND OUT HOW MANY LINES TO PRINT PERFORM PARMPRT VARYING I FROM 1 BY 1 UNTIL I GREATER THAN NUMLINES. GET-RESULT-SET. ***************************************************** * ASSUME YOU KNOW THAT ONE RESULT SET IS RETURNED, * * AND YOU KNOW THE FORMAT OF THAT RESULT SET. * * ALLOCATE A CURSOR FOR THE RESULT SET, AND FETCH * * THE CONTENTS OF THE RESULT SET. * *****************************************************
EXEC SQL ASSOCIATE LOCATORS (:LOC) WITH PROCEDURE GETPRML END-EXEC. * LINK THE RESULT SET TO THE LOCATOR EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :LOC END-EXEC. * LINK THE CURSOR TO THE RESULT SET PERFORM GET-ROWS VARYING I FROM 1 BY 1 UNTIL SQLCODE EQUAL TO +100. GET-ROWS. EXEC SQL FETCH C1 INTO :NAME END-EXEC. MOVE NAME TO TBLNAME. WRITE REPREC FROM RSLTREC AFTER ADVANCING 1 LINE.
Chapter 7. Coding SQL statements in Fortran application programs
Procedure
To define the SQL communications area, SQLSTATE, and SQLCODE: Choose one of the following actions:
Option: To define the SQL communications area
Description:
1. Code the SQLCA directly in the program or use the following SQL INCLUDE statement to request a standard SQLCA declaration:
   EXEC SQL INCLUDE SQLCA
DB2 sets the SQLCODE and SQLSTATE values in the SQLCA after each SQL statement executes. Your application should check these values to determine whether the last SQL statement was successful.
Option: To define SQL variables SQLCODE and SQLSTATE
Description:
1. Declare the SQLCODE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as INTEGER*4. This variable can also be called SQLCOD.
2. Declare the SQLSTATE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as CHARACTER*5. This variable can also be called SQLSTA.
Restriction: Do not declare an SQLSTATE variable as an element of a structure.
Requirement: After you declare the SQLCODE and SQLSTATE variables, ensure that all SQL statements in the program are within the scope of the declaration of these variables.
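For example, a minimal sketch of the second option in a Fortran program (only the declarations are shown):

*     Stand-alone SQLCODE and SQLSTATE declarations
      EXEC SQL BEGIN DECLARE SECTION
      INTEGER*4   SQLCODE
      CHARACTER*5 SQLSTATE
      EXEC SQL END DECLARE SECTION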
Related tasks: Checking the execution of SQL statements on page 201 Checking the execution of SQL statements by using the SQLCA on page 202 Checking the execution of SQL statements by using SQLCODE and SQLSTATE on page 206 Defining the items that your program can use to check whether an SQL statement executed successfully on page 137
Procedure
To define SQL descriptor areas: Call a subroutine that is written in C, PL/I, or assembler language and that uses the INCLUDE SQLDA statement to define the SQLDA. The subroutine can also include SQL statements for any dynamic SQL functions that you need.
Restrictions:
v You must place SQLDA declarations before the first SQL statement that references the data descriptor, unless you use the TWOPASS SQL processing option.
v You cannot use the SQL INCLUDE statement for the SQLDA, because it is not supported in Fortran.
Procedure
To declare host variables, host variable arrays, and host structures:
1. Declare the variables according to the following rules and guidelines:
v When you declare a character host variable, do not use an expression to define the length of the character variable. You can use a character host variable with an undefined length (for example, CHARACTER *(*)). The length of any such variable is determined when the associated SQL statement executes.
v Host variables must be scalar variables; they cannot be elements of vectors or arrays (subscripted variables).
v Be careful when calling subroutines that might change the attributes of a host variable. Such alteration can cause an error while the program is running.
v If you specify the ONEPASS SQL processing option, you must explicitly declare each host variable and each host variable array before using them in an SQL statement. If you specify the TWOPASS precompiler option, you must declare each host variable before using it in the DECLARE CURSOR statement.
v If you specify the STDSQL(YES) SQL processing option, you must precede the host language statements that define the host variables and host variable arrays with the BEGIN DECLARE SECTION statement and follow the host language statements with the END DECLARE SECTION statement. Otherwise, these statements are optional.
v Ensure that any SQL statement that uses a host variable or host variable array is within the scope of the statement that declares that variable or array.
v If you are using the DB2 precompiler, ensure that the names of host variables and host variable arrays are unique within the program, even if the variables and variable arrays are in different blocks, classes, procedures, functions, or subroutines. You can qualify the names with a structure name to make them unique.
2. Optional: Define any associated indicator variables, arrays, and structures.
Related tasks:
Declaring host variables and indicator variables on page 138
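As an illustration of these rules only (the variable names and lengths are assumptions, not taken from the sample application), host variable and indicator variable declarations in a Fortran program might look like the following sketch. The BEGIN DECLARE SECTION and END DECLARE SECTION statements are required only with STDSQL(YES):

      EXEC SQL BEGIN DECLARE SECTION
*     Host variables for an employee number and a city name
      CHARACTER*6  EMPNO
      CHARACTER*36 CITY
*     Indicator variable for CITY
      INTEGER*2    CITYIN
      EXEC SQL END DECLARE SECTION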
Fortran supports some data types with no SQL equivalent (for example, REAL*16 and COMPLEX). In most cases, you can use Fortran statements to convert between the unsupported data types and the data types that SQL allows.
v You cannot use locators as column types. The following locator data types are Fortran data types as well as SQL data types:
– Result set locator
– LOB locators
v Because Fortran does not support graphic data types, Fortran applications can process only Unicode tables that use UTF-8 encoding.
Recommendations:
v Be careful of overflow. For example, if you retrieve an INTEGER column value into an INTEGER*2 host variable and the column value is larger than 32767 or smaller than -32768, you get an overflow warning or an error, depending on whether you provided an indicator variable.
v Be careful of truncation. For example, if you retrieve an 80-character CHAR column value into a CHARACTER*70 host variable, the rightmost ten characters of the retrieved string are truncated. Retrieving a double-precision floating-point or decimal column value into an INTEGER*4 host variable removes any fractional value.
Restrictions:
v Fortran does not provide an equivalent for the decimal data type. To hold a decimal value, use one of the following variables:
– An integer or floating-point variable, which converts the value. If you use an integer variable, you lose the fractional part of the number. If the decimal number can exceed the maximum value for an integer or you want to preserve a fractional value, use a floating-point variable. Floating-point numbers are approximations of real numbers. Therefore, when you assign a decimal number to a floating-point variable, the result might be different from the original number.
– A character string host variable. Use the CHAR function to retrieve a decimal value into it.
| v The SQL data type DECFLOAT has no equivalent in Fortran.
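For example, the following sketch retrieves a DECIMAL column into a character string host variable by using the CHAR function, as described in the last option above. It assumes the DB2 sample table DSN8910.EMP; the variable names are illustrative:

      CHARACTER*6  EMPNO
      CHARACTER*12 SALSTR
*     Retrieve the DECIMAL column SALARY as a character string
      EXEC SQL SELECT CHAR(SALARY)
     C    INTO :SALSTR
     C    FROM DSN8910.EMP
     C    WHERE EMPNO = :EMPNO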
LOB and LOB locator host variables in Fortran are declared with the SQL TYPE IS clause, as shown in the following syntax summary:

SQL TYPE IS { BINARY LARGE OBJECT | BLOB | CHARACTER LARGE OBJECT | CHAR LARGE OBJECT | CLOB } ( length [K|M|G] ) variable-name

SQL TYPE IS { BLOB_LOCATOR | CLOB_LOCATOR } variable-name
Constants
The syntax for constants in Fortran programs differs from the syntax for constants in SQL statements in the following ways:
v Fortran interprets a string of digits with a decimal point to be a real constant. An SQL statement interprets such a string to be a decimal constant. Therefore, use exponent notation when specifying a real (that is, floating-point) constant in an SQL statement. v In Fortran, a real (floating-point) constant that has a length of 8 bytes uses a D as the exponent indicator (for example, 3.14159D+04). An 8-byte floating-point constant in an SQL statement must use an E (for example, 3.14159E+04). Related concepts: Host variables on page 139 Rules for host variables in an SQL statement on page 147 Large objects (LOBs) on page 440 Related tasks: Determining whether a retrieved value in a host variable is null or truncated on page 150 Inserting a single row by using a host variable on page 154 Inserting null values into columns by using indicator variables or arrays on page 154 Retrieving a single row of data into host variables on page 148 Updating data by using host variables on page 153
Example
The following example shows a FETCH statement with the declarations of the host variables that are needed for the FETCH statement and their associated indicator variables.
      EXEC SQL FETCH CLS_CURSOR INTO :CLSCD,
     C                               :DAY :DAYIND,
     C                               :BGN :BGNIND,
     C                               :END :ENDIND
Related concepts: Indicator variables, arrays, and structures on page 140 Related tasks: Inserting null values into columns by using indicator variables or arrays on page 154
Notes:
1. If a host variable includes an indicator variable, the SQLTYPE value is the base SQLTYPE value plus 1.

The following table shows equivalent Fortran host variables for each SQL data type. Use this table to determine the Fortran data type for host variables that you define to receive output from the database. For example, if you retrieve TIMESTAMP data, you can define a variable of type CHARACTER*n. This table shows direct conversions between SQL data types and Fortran data types. However, a number of SQL data types are compatible. When you do assignments or comparisons of data that have compatible data types, DB2 converts those compatible data types.
Table 68. Fortran host variable equivalents that you can use when retrieving data of a particular SQL data type

SQL data type | Fortran host variable equivalent | Notes
SMALLINT | INTEGER*2 |
INTEGER | INTEGER*4 |
BIGINT | not supported |
DECIMAL(p,s) or NUMERIC(p,s) | no exact equivalent | Use REAL*8
FLOAT(n) single precision | REAL*4 | 1<=n<=21
FLOAT(n) double precision | REAL*8 | 22<=n<=53
CHAR(n) | CHARACTER*n | 1<=n<=255
VARCHAR(n) | no exact equivalent | Use a character host variable that is large enough to contain the largest expected VARCHAR value.
BINARY | not supported |
VARBINARY | not supported |
GRAPHIC(n) | not supported |
VARGRAPHIC(n) | not supported |
DATE | CHARACTER*n | If you are using a date exit routine, n is determined by that routine; otherwise, n must be at least 10.
TIME | CHARACTER*n | If you are using a time exit routine, n is determined by that routine. Otherwise, n must be at least 6; to include seconds, n must be at least 8.
TIMESTAMP | CHARACTER*n | n must be at least 19. To include microseconds, n must be 26; if n is less than 26, truncation occurs on the microseconds part.
Result set locator | SQL TYPE IS RESULT_SET_LOCATOR | Use this data type only for receiving result sets. Do not use this data type as a column type.
BLOB locator | SQL TYPE IS BLOB_LOCATOR | Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.
CLOB locator | SQL TYPE IS CLOB_LOCATOR | Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.
DBCLOB locator | not supported |
BLOB(n) | SQL TYPE IS BLOB(n) | 1<=n<=2147483647
CLOB(n) | SQL TYPE IS CLOB(n) | 1<=n<=2147483647
DBCLOB(n) | not supported |
ROWID | SQL TYPE IS ROWID |
XML | not supported |
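For example, a DATE value can be retrieved into a CHARACTER*10 host variable. The following sketch assumes the DB2 sample table DSN8910.EMP; the variable names are illustrative:

      CHARACTER*6  EMPNO
      CHARACTER*10 HIREDT
*     HIREDATE is a DATE column; 10 characters are enough for any
*     supported date format
      EXEC SQL SELECT HIREDATE
     C    INTO :HIREDT
     C    FROM DSN8910.EMP
     C    WHERE EMPNO = :EMPNO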
Related concepts: Compatibility of SQL and language data types on page 144 LOB host variable, LOB locator, and LOB file reference variable declarations on page 741
You cannot follow an SQL statement with another SQL statement or Fortran statement on the same line.
Fortran does not require blanks to delimit words within a statement, but the SQL language requires blanks. The rules for embedded SQL follow the rules for SQL syntax, which require you to use one or more blanks as a delimiter.
Comments: You can include Fortran comment lines within embedded SQL statements wherever you can use a blank, except between the keywords EXEC and SQL. You can include SQL comments in any embedded SQL statement. The DB2 precompiler does not support the exclamation point (!) as a comment recognition character in Fortran programs.
Continuation for SQL statements: The line continuation rules for SQL statements are the same as those for Fortran statements, except that you must specify EXEC SQL on one line. The SQL examples in this topic have Cs in the sixth column to indicate that they are continuations of EXEC SQL.
Declaring tables and views: Your Fortran program should also include the DECLARE TABLE statement to describe each table and view that the program accesses.
Dynamic SQL in a Fortran program: In general, Fortran programs can easily handle dynamic SQL statements. SELECT statements can be handled if the data types and the number of returned fields are fixed. If you want to use variable-list SELECT statements, you need to use an SQLDA, as described in Defining SQL descriptor areas on page 137. You can use a Fortran character variable in the statements PREPARE and EXECUTE IMMEDIATE, even if it is fixed-length.
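For example, the following sketch runs a dynamic statement held in a fixed-length Fortran character variable by using EXECUTE IMMEDIATE. The statement text and the sample table DSN8910.EMP are illustrative assumptions:

      CHARACTER*80 STMT
*     Trailing blanks in the fixed-length variable are ignored
      STMT = 'DELETE FROM DSN8910.EMP WHERE WORKDEPT = ''E21'''
      EXEC SQL EXECUTE IMMEDIATE :STMT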
Including code: To include SQL statements or Fortran host variable declarations from a member of a partitioned data set, use the following SQL statement in the source code where you want to include the statements:
EXEC SQL INCLUDE member-name
You cannot nest SQL INCLUDE statements. You cannot use the Fortran INCLUDE compiler directive to include SQL statements or Fortran host variable declarations. Margins: Code the SQL statements between columns 7 through 72, inclusive. If EXEC SQL starts before the specified left margin, the DB2 precompiler does not recognize the SQL statement. Names: You can use any valid Fortran name for a host variable. Do not use external entry names that begin with 'DSN' or host variable names that begin with 'SQL'. These names are reserved for DB2. Do not use the word DEBUG, except when defining a Fortran DEBUG packet. Do not use the words FUNCTION, IMPLICIT, PROGRAM, and SUBROUTINE to define variables. Sequence numbers: The source statements that the DB2 precompiler generates do not include sequence numbers. Statement labels: You can specify statement numbers for SQL statements in columns 1 to 5. However, during program preparation, a labeled SQL statement generates a Fortran CONTINUE statement with that label before it generates the code that executes the SQL statement. Therefore, a labeled SQL statement should never be the last statement in a DO loop. In addition, you should not label SQL statements (such as INCLUDE and BEGIN DECLARE SECTION) that occur before the first executable SQL statement, because an error might occur. WHENEVER statement: The target for the GOTO clause in the SQL WHENEVER statement must be a label in the Fortran source code and must refer to a statement in the same subprogram. The WHENEVER statement only applies to SQL statements in the same subprogram. Special Fortran considerations: The following considerations apply to programs written in Fortran: v You cannot use the @PROCESS statement in your source code. Instead, specify the compiler options in the PARM field. v You cannot use the SQL INCLUDE statement to include the following statements: PROGRAM, SUBROUTINE, BLOCK, FUNCTION, or IMPLICIT. DB2 supports Version 3 Release 1 (or later) of VS Fortran with the following restrictions: v The parallel option is not supported. Applications that contain SQL statements must not use Fortran parallelism. v You cannot use the byte data type within embedded SQL, because byte is not a recognizable host data type. Handling SQL error return codes in Fortran: You can use the subroutine DSNTIR to convert an SQL return code into a text message. DSNTIR builds a parameter list and calls DSNTIAR for you. DSNTIAR takes data from the SQLCA, formats it into a message, and places the result in a
message output area that you provide in your application program. For concepts and more information on the behavior of DSNTIAR, see Displaying SQLCA fields by calling DSNTIAR on page 203. You can also use the MESSAGE_TEXT condition item field of the GET DIAGNOSTICS statement to convert an SQL return code into a text message. Programs that require long token message support should code the GET DIAGNOSTICS statement instead of DSNTIAR. For more information about GET DIAGNOSTICS, see Checking the execution of SQL statements by using the GET DIAGNOSTICS statement on page 208. DSNTIR syntax: CALL DSNTIR ( error-length, message, return-code ) The DSNTIR parameters have the following meanings: error-length The total length of the message output area. message An output area, in VARCHAR format, in which DSNTIAR places the message text. The first halfword contains the length of the remaining area; its minimum value is 240. The output lines of text are put into this area. For example, you could specify the format of the output area as:
      INTEGER       ERRLEN /1320/
      CHARACTER*132 ERRTXT(10)
      INTEGER       ICODE
      .
      .
      .
      CALL DSNTIR ( ERRLEN, ERRTXT, ICODE )
where ERRLEN is the total length of the message output area, ERRTXT is the name of the message output area, and ICODE is the return code. return-code Accepts a return code from DSNTIAR. An example of calling DSNTIR (which then calls DSNTIAR) from an application appears in the DB2 sample assembler program DSN8BF3, which is contained in the library DSN8910.SDSNSAMP. See DB2 sample applications on page 1126 for instructions on how to access and print the source code for the sample program. Related tasks: Including dynamic SQL in your program on page 158 Embedding SQL statements in your application on page 146 Handling SQL error codes on page 214 Limiting CPU time for dynamic SQL statements by using the resource limit facility on page 199
Chapter 8. Coding SQL statements in PL/I application programs
Procedure
To define the SQL communications area, SQLSTATE, and SQLCODE: Choose one of the following actions:
Option: To define the SQL communications area
Description:
1. Code the SQLCA directly in the program or use the following SQL INCLUDE statement to request a standard SQLCA declaration:
   EXEC SQL INCLUDE SQLCA
DB2 sets the SQLCODE and SQLSTATE values in the SQLCA after each SQL statement executes. Your application should check these values to determine whether the last SQL statement was successful.
Option: To define SQL variables SQLCODE and SQLSTATE
Description:
1. Declare the SQLCODE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as BIN FIXED (31).
2. Declare the SQLSTATE variable within a BEGIN DECLARE SECTION statement and an END DECLARE SECTION statement in your program declarations as CHARACTER(5).
Restriction: Do not declare an SQLSTATE variable as an element of a structure.
Requirement: After you declare the SQLCODE and SQLSTATE variables, ensure that all SQL statements in the program are within the scope of the declaration of these variables.
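For example, a minimal sketch of the second option in a PL/I program (only the declarations are shown):

EXEC SQL BEGIN DECLARE SECTION;
  /* Stand-alone SQLCODE and SQLSTATE declarations */
  DCL SQLCODE  BIN FIXED(31);
  DCL SQLSTATE CHAR(5);
EXEC SQL END DECLARE SECTION;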
Related tasks: Checking the execution of SQL statements on page 201 Checking the execution of SQL statements by using the SQLCA on page 202 Checking the execution of SQL statements by using SQLCODE and SQLSTATE on page 206 Defining the items that your program can use to check whether an SQL statement executed successfully on page 137
Procedure
To define SQL descriptor areas: Code the SQLDA directly in the program, or use the following SQL INCLUDE statement to request a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA
Restriction: You must place SQLDA declarations before the first SQL statement that references the data descriptor, unless you use the TWOPASS SQL processing option. Related tasks: Defining SQL descriptor areas on page 137
Procedure
To declare host variables, host variable arrays, and host structures:
1. Declare the variables according to the following rules and guidelines:
v If you specify the ONEPASS SQL processing option, you must explicitly declare each host variable and each host variable array before using them in an SQL statement. If you specify the TWOPASS precompiler option, you must declare each host variable before using it in the DECLARE CURSOR statement.
v If you specify the STDSQL(YES) SQL processing option, you must precede the host language statements that define the host variables and host variable arrays with the BEGIN DECLARE SECTION statement and follow the host language statements with the END DECLARE SECTION statement. Otherwise, these statements are optional.
v Ensure that any SQL statement that uses a host variable or host variable array is within the scope of the statement that declares that variable or array.
v If you are using the DB2 precompiler, ensure that the names of host variables and host variable arrays are unique within the program, even if the variables and variable arrays are in different blocks, classes, procedures, functions, or subroutines. You can qualify the names with a structure name to make them unique.
2. Optional: Define any associated indicator variables, arrays, and structures.
Related tasks:
Declaring host variables and indicator variables on page 138
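As an illustration of these rules only (the variable names are assumptions, not taken from the sample application), host variable and indicator variable declarations in a PL/I program might look like the following sketch. The BEGIN DECLARE SECTION and END DECLARE SECTION statements are required only with STDSQL(YES):

EXEC SQL BEGIN DECLARE SECTION;
  DCL HEMPNO  CHAR(6);           /* host variable for an employee number */
  DCL HSALARY DEC FIXED(9,2);    /* host variable for a DECIMAL column    */
  DCL HSALIND BIN FIXED(15);     /* indicator variable for HSALARY        */
EXEC SQL END DECLARE SECTION;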
v You cannot use locators as column types. The following locator data types are PL/I data types as well as SQL data types:
– Result set locator
– Table locator
– LOB locators
v The precompiler does not support PL/I scoping rules.
Recommendations:
v Be careful of overflow. For example, if you retrieve an INTEGER column value into a BIN FIXED(15) host variable and the column value is larger than 32767 or smaller than -32768, you get an overflow warning or an error, depending on whether you provided an indicator variable.
v Be careful of truncation. For example, if you retrieve an 80-character CHAR column value into a CHAR(70) host variable, the rightmost ten characters of the retrieved string are truncated. Retrieving a double-precision floating-point or decimal column value into a BIN FIXED(31) host variable removes any fractional part of the value. Similarly, retrieving a column value with a DECIMAL data type into a PL/I decimal variable with a lower precision might truncate the value.
DECLARE DCL
variable-name , ( variable-name )
(2) FIXED ( precision (1) ,scale FLOAT ( precision ) ) Alignment and/or Scope and/or Storage
Notes: 1 2 You can specify a scale only for DECIMAL FIXED. You can specify host variable attributes in any order that is acceptable to PL/I. For example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
For floating-point data types, use the FLOAT SQL processing option to specify whether the host variable is in IEEE binary floating-point or z/Architecture hexadecimal floating-point format. DB2 does not check if the format of the host
variable contents match the format that you specified with the FLOAT SQL processing option. Therefore, you need to ensure that your floating-point host variable contents match the format that you specified with the FLOAT SQL processing option. DB2 converts all floating-point input data to z/Architecture hexadecimal floating-point format before storing it.
If the PL/I compiler that you are using does not support a decimal data type with a precision greater than 15, use one of the following variable types for decimal data:
v Decimal variables with precision less than or equal to 15, if the actual data values fit. If you retrieve a decimal value into a decimal variable with a scale that is less than the source column in the database, the fractional part of the value might truncate.
v An integer or a floating-point variable, which converts the value. If you use an integer variable, you lose the fractional part of the number. If the decimal number can exceed the maximum value for an integer or you want to preserve a fractional value, use a floating-point variable. Floating-point numbers are approximations of real numbers. Therefore, when you assign a decimal number to a floating-point variable, the result might be different from the original number.
v A character string host variable. Use the CHAR function to retrieve a decimal value into it.
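For example, the following sketch uses the CHAR function to retrieve a DECIMAL column into a character string host variable. It assumes the DB2 sample table DSN8910.EMP; the variable names are illustrative:

DCL HEMPNO  CHAR(6);
DCL HSALSTR CHAR(12);
/* Retrieve the DECIMAL column SALARY as a character string */
EXEC SQL SELECT CHAR(SALARY)
           INTO :HSALSTR
           FROM DSN8910.EMP
           WHERE EMPNO = :HEMPNO;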
DECLARE DCL
variable-name , ( variable-name )
CHARACTER CHAR
length
) VARYING VAR
The following diagram shows the syntax for declaring graphic host variables other than DBCLOBs.
DECLARE DCL
variable-name , ( variable-name )
length
) VARYING VAR
Notes: 1 Use WIDECHAR only for UNICODE UTF-16 data. WIDECHAR is supported only by the DB2 coprocessor.
DECLARE DCL
variable-name , ( variable-name )
SQL TYPE IS
(1) ( length ) ;
Notes: 1 For BINARY host variables, the length must be in the range from 1 to 255. For VARBINARY host variables, the length must be in the range from 1 to 32 704.
PL/I does not have variables that correspond to the SQL binary data types BINARY and VARBINARY. To create host variables that can be used with these data types, use the SQL TYPE IS clause. When you reference a BINARY or VARBINARY host variable in an SQL statement, you must use the variable that you specify in the SQL TYPE declaration. When you reference the host variable in a host language statement, you must use the variable that DB2 generates. Examples of binary variable declarations: The following table shows examples of variables that DB2 generates when you declare binary host variables.
Table 69. Examples of BINARY and VARBINARY variable declarations for PL/I

Variable declaration that you include in your PL/I program | Corresponding variable that DB2 generates in the output source member
DCL BIN_VAR SQL TYPE IS BINARY(10); | DCL BIN_VAR CHAR(10);
DCL VBIN_VAR SQL TYPE IS VARBINARY(10); | DCL VBIN_VAR CHAR(10) VAR;
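For example, after the declaration, you reference the declared name in SQL statements. The following sketch assumes a hypothetical table MYSCHEMA.MEDIA with a VARBINARY(10) column TAGBYTES:

DCL VBIN_VAR SQL TYPE IS VARBINARY(10);
/* In PL/I statements, VBIN_VAR behaves as the generated CHAR(10) VAR   */
/* variable; in SQL statements, reference the declared name directly.   */
EXEC SQL INSERT INTO MYSCHEMA.MEDIA (TAGBYTES)
           VALUES (:VBIN_VAR);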
DECLARE DCL
variable-name , ( variable-name )
Table locators
The following diagram shows the syntax for declaring table locators.
DCL DECLARE
variable-name , ( variable-name )
table-name
AS LOCATOR
variable-name , ( variable-name )
SQL TYPE IS
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB_LOCATOR CLOB_LOCATOR DBCLOB_LOCATOR BLOB_FILE CLOB_FILE DBCLOB_FILE ( length K M G )
(2)
Notes: 1 2 A single PL/I declaration that contains a LOB variable declaration is limited to no more than 1000 lines of source code. Variable attributes such as STATIC and AUTOMATIC are ignored if specified on a LOB variable declaration.
DCL DECLARE
Note: Variable attributes such as STATIC and AUTOMATIC are ignored if specified on a LOB variable declaration.
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB_FILE CLOB_FILE DBCLOB_FILE
) K M G
DCL DECLARE
variable-name , ( variable-name )
Related concepts: Host variables on page 139 Rules for host variables in an SQL statement on page 147 Large objects (LOBs) on page 440 Decimal floating-point (DECFLOAT) (DB2 SQL) Related tasks: Determining whether a retrieved value in a host variable is null or truncated on page 150 Inserting a single row by using a host variable on page 154 Inserting null values into columns by using indicator variables or arrays on page 154 Retrieving a single row of data into host variables on page 148 Retrieving a single row of data into a host structure on page 157 Updating data by using host variables on page 153
v You must specify the ALIGNED attribute when you declare varying-length character arrays or varying-length graphic arrays that are to be used in multiple-row INSERT and FETCH statements.
(1) DECLARE DCL variable-name , ( , (1) ( variable-name ( dimension ) ) (3) BINARY BIN DECIMAL DEC FIXED ( precision (2) ,scale FLOAT ( precision ) ) variable-name ) ( dimension )
Notes: 1 2 3 dimension must be an integer constant between 1 and 32767. You can specify the scale for only DECIMAL FIXED. You can specify host variable array attributes in any order that is acceptable to PL/I. For example, BIN FIXED(31), BINARY FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
(1) DECLARE DCL variable-name , ( , (1) ( CHARACTER CHAR ( variable-name ( length ) VARYING VAR Alignment and/or Scope and/or Storage dimension ) ) variable-name ) ( dimension )
Example: The following example shows the declarations needed to retrieve 10 rows of the department number and name from the department table:
DCL DEPTNO(10)   CHAR(3);       /* Array of ten CHAR(3) variables     */
DCL DEPTNAME(10) CHAR(29) VAR;  /* Array of ten VARCHAR(29) variables */
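These arrays can then receive a rowset from a multiple-row FETCH. The following sketch assumes a DEPT table whose DEPTNO and DEPTNAME columns are compatible with the arrays; the cursor name is illustrative:

EXEC SQL DECLARE C10 CURSOR WITH ROWSET POSITIONING FOR
           SELECT DEPTNO, DEPTNAME
             FROM DSN8910.DEPT;
EXEC SQL OPEN C10;
/* Fetch up to 10 rows at a time into the host variable arrays */
EXEC SQL FETCH NEXT ROWSET FROM C10
           FOR 10 ROWS
           INTO :DEPTNO, :DEPTNAME;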
(1) DECLARE DCL variable-name , ( , (1) ( GRAPHIC ( length ) VARYING VAR Alignment and/or Scope and/or Storage variable-name ( dimension ) ) variable-name ) ( dimension )
DCL DECLARE
( dimension
SQL TYPE IS
BINARY VARBINARY
dimension
(1) DCL DECLARE variable-name , ( , (1) ( variable-name ( ( dimension ) K M G ) ) variable-name ) ( dimension ) SQL TYPE IS
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB_LOCATOR CLOB_LOCATOR DBCLOB_LOCATOR BLOB_FILE CLOB_FILE DBCLOB_FILE
length
(1) DCL DECLARE variable-name , ( , (1) ( variable-name ( ( dimension ) K M G ) ) variable-name ) ( dimension ) SQL TYPE IS XML AS
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB_FILE CLOB_FILE DBCLOB_FILE
length
(1) DCL DECLARE variable-name , ( , (1) ( variable-name ( dimension ) ) variable-name ) ( dimension ) SQL TYPE IS ROWID
Related concepts: Host variable arrays in an SQL statement on page 155 Host variable arrays on page 139 Large objects (LOBs) on page 440 Decimal floating-point (DECFLOAT) (DB2 SQL) Related tasks: Inserting multiple rows of data from host variable arrays on page 157 Retrieving multiple rows of data into host variable arrays on page 156
v You can specify host variable attributes in any order that is acceptable to PL/I. For example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable. When you reference a host variable, you can qualify it with a structure name. For example, you can specify STRUCTURE.FIELD.
Host structures
The following diagram shows the syntax for declaring host structures.
DECLARE DCL
Data types
The following diagram shows the syntax for data types that are used within declarations of host structures.
integer )
integer ) FIXED
SQL TYPE IS
CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BINARY LARGE OBJECT BLOB CLOB_LOCATOR DBCLOB_LOCATOR BLOB_LOCATOR CLOB_FILE DBCLOB_FILE BLOB_FILE
length K M G
BINARY LARGE OBJECT BLOB CHARACTER LARGE OBJECT CHAR LARGE OBJECT CLOB DBCLOB BLOB_FILE CLOB_FILE DBCLOB_FILE
length K M G
Example
In the following example, B is the name of a host structure that contains the scalars C1 and C2.
DCL 1 A, 2 B, 3 C1 CHAR(...), 3 C2 CHAR(...);
Notes: 1 You can specify host variable attributes in any order that is acceptable to PL/I. For example, BIN FIXED(31), BIN(31) FIXED, and FIXED BIN(31) are all acceptable.
The following diagram shows the syntax for declaring an indicator array in PL/I.
DECLARE DCL
) (1)
BINARY BIN ) ; )
dimension
Example
The following example shows a FETCH statement with the declarations of the host variables that are needed for the FETCH statement and their associated indicator variables.
DCL (DAY_IND, BGN_IND, END_IND) BIN FIXED(15);

EXEC SQL FETCH CLS_CURSOR INTO :CLS_CD,
                               :DAY :DAY_IND,
                               :BGN :BGN_IND,
                               :END :END_IND;
Related concepts: Indicator variables, arrays, and structures on page 140 Related tasks: Inserting null values into columns by using indicator variables or arrays on page 154
| FIXED BIN(63)
DEC FIXED(p,s) 0<=p<=31 and 0<=s<=p2
Table 70. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in PL/I programs (continued) PL/I host variable data type BIN FLOAT(p) 1<=p<=21 BIN FLOAT(p) 22<=p<=53 DEC FLOAT(m) 1<=m<=6 DEC FLOAT(m) 7<=m<=16 CHAR(n) CHAR(n) VARYING 1<=n<=255 CHAR(n) VARYING n>255 GRAPHIC(n) GRAPHIC(n) VARYING 1<=n<=127 GRAPHIC(n) VARYING n>127 SQLTYPE of host variable1 480 480 480 480 452 448 456 468 464 472 912 908 972 976 960 964 968 404 408 412 404 408 412 916/917 920/921 924/925 916/917 920/921 924/925 SQLLEN of host variable 4 8 4 8 n n n n n n n n 4 4 4 4 4 n n n 0 0 0 267 267 267 267 267 267 SQL data type REAL or FLOAT(n) 1<=n<=21 DOUBLE PRECISION or FLOAT(n) 22<=n<=53 FLOAT (single precision) FLOAT (double precision) CHAR(n) VARCHAR(n) VARCHAR(n) GRAPHIC(n) VARGRAPHIC(n) VARGRAPHIC(n) BINARY(n) VARBINARY(n) Result set locator3 Table locator3 BLOB locator3 CLOB locator3 DBCLOB locator3 BLOB(n) CLOB(n) DBCLOB(n)4 XML XML XML BLOB file reference3 CLOB file reference3 DBCLOB file reference3 XML BLOB file reference3 XML CLOB file reference3 XML DBCLOB file reference3
| SQL TYPE IS XML AS BLOB(n) | SQL TYPE IS XML AS CLOB(n) | SQL TYPE IS XML AS | DBCLOB(n) | SQL TYPE IS BLOB_FILE | SQL TYPE IS CLOB_FILE | SQL TYPE IS DBCLOB_FILE | SQL TYPE IS XML AS | BLOB_FILE | SQL TYPE IS XML AS | CLOB_FILE | SQL TYPE IS XML AS | DBCLOB_FILE
Table 70. SQL data types, SQLLEN values, and SQLTYPE values that the precompiler uses for host variables in PL/I programs (continued) PL/I host variable data type SQL TYPE IS ROWID Notes: 1. If a host variable includes an indicator variable, the SQLTYPE value is the base SQLTYPE value plus 1. 2. If p=0, DB2 interprets it as DECIMAL(31). For example, DB2 interprets a PL/I data type of DEC FIXED(0,0) to be DECIMAL(31,0), which equates to the SQL data type of DECIMAL(31,0). 3. Do not use this data type as a column type. 4. n is the number of double-byte characters. SQLTYPE of host variable1 904 SQLLEN of host variable 40 SQL data type ROWID
The following table shows equivalent PL/I host variables for each SQL data type. Use this table to determine the PL/I data type for host variables that you define to receive output from the database. For example, if you retrieve TIMESTAMP data, you can define a variable of type CHAR(n). This table shows direct conversions between SQL data types and PL/I data types. However, a number of SQL data types are compatible. When you do assignments or comparisons of data that have compatible data types, DB2 converts those compatible data types.
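For example, a TIMESTAMP value can be retrieved into a CHAR(26) host variable, as in the following sketch (the variable name is illustrative):

DCL HTSTAMP CHAR(26);
/* 26 characters hold a timestamp including microseconds */
EXEC SQL SELECT CURRENT TIMESTAMP
           INTO :HTSTAMP
           FROM SYSIBM.SYSDUMMY1;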
Table 71. PL/I host variable equivalents that you can use when retrieving data of a particular SQL data type SQL data type SMALLINT INTEGER PL/I host variable equivalent BIN FIXED(n) BIN FIXED(n) FIXED BIN(63) If p<16: DEC FIXED(p) or DEC FIXED(p,s) p is precision; s is scale. 1<=p<=31 and 0<=s<=p If p>15, the PL/I compiler must support 31-digit decimal variables. REAL or FLOAT(n) DOUBLE PRECISION, DOUBLE, or FLOAT(n) CHAR(n) VARCHAR(n) GRAPHIC(n) BIN FLOAT(p) or DEC FLOAT(m) BIN FLOAT(p) or DEC FLOAT(m) CHAR(n) CHAR(n) VAR GRAPHIC(n) n refers to the number of double-byte characters, not to the number of bytes. 1<=n<=127 n refers to the number of double-byte characters, not to the number of bytes. 1<=n<=255 1<=n<=32 704 If you are using a date exit routine, that routine determines n; otherwise, n must be at least 10. 1<=n<=21, 1<=p<=21, and 1<=m<=6 22<=n<=53, 22<=p<=53, and 7<=m<=16 1<=n<=255 Notes 1<=n<=15 16<=n<=31
| BIGINT
DECIMAL(p,s) or NUMERIC(p,s)
VARGRAPHIC(n)
| BINARY(n) | VARBINARY(n)
DATE
Table 71. PL/I host variable equivalents that you can use when retrieving data of a particular SQL data type (continued) SQL data type TIME PL/I host variable equivalent CHAR(n) Notes If you are using a time exit routine, that routine determines n. Otherwise, n must be at least 6; to include seconds, n must be at least 8. n must be at least 19. To include microseconds, n must be 26; if n is less than 26, the microseconds part is truncated. Use this data type only for receiving result sets. Do not use this data type as a column type. Use this data type only in a user-defined function or stored procedure to receive rows of a transition table. Do not use this data type as a column type. Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.1 Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.1 Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.1 1n21474836471 1n21474836471 n is the number of double-byte characters. 1n10737418231 1n2147483647 1n2147483647
2 2
TIMESTAMP
CHAR(n)
Table locator
BLOB locator
CLOB locator
DBCLOB locator
SQL TYPE IS BLOB(n) SQL TYPE IS CLOB(n) SQL TYPE IS DBCLOB(n) SQL TYPE IS XML AS BLOB(n) SQL TYPE IS XML AS CLOB(n) SQL TYPE IS XML AS DBCLOB(n) SQL TYPE IS BLOB_FILE
| XML | XML | XML | | BLOB file reference | | | CLOB file reference | | | DBCLOB file reference | | | XML BLOB file reference | | | XML CLOB file reference | |
n is the number of double-byte characters. 1n1073741823 2 Use this data type only to manipulate data in BLOB columns. Do not use this data type as a column type.1 Use this data type only to manipulate data in CLOB columns. Do not use this data type as a column type.1 Use this data type only to manipulate data in DBCLOB columns. Do not use this data type as a column type.1 Use this data type only to manipulate XML data as BLOB files. Do not use this data type as a column type. 2 Use this data type only to manipulate XML data as CLOB files. Do not use this data type as a column type. 2
Table 71. PL/I host variable equivalents that you can use when retrieving data of a particular SQL data type (continued) SQL data type PL/I host variable equivalent SQL TYPE IS XML AS DBCLOB_FILE Notes Use this data type only to manipulate XML data as DBCLOB files. Do not use this data type as a column type.2
ROWID
Related concepts: Compatibility of SQL and language data types on page 144 LOB host variable, LOB locator, and LOB file reference variable declarations on page 741 Host variable data types for XML data in embedded SQL applications on page 216
Comments: You can include PL/I comments in embedded SQL statements wherever you can use a blank, except between the keywords EXEC and SQL. You can also include SQL comments in any SQL statement. To include DBCS characters in comments, you must delimit the characters by a shift-out and shift-in control character; the first shift-in character in the DBCS string signals the end of the DBCS string. Continuation for SQL statements: The line continuation rules for SQL statements are the same as those for other PL/I statements, except that you must specify EXEC SQL on one line. Declaring tables and views: Your PL/I program should include a DECLARE TABLE statement to describe each table and view the program accesses. You can use the DB2 declarations generator (DCLGEN) to generate the DECLARE TABLE statements. Including code: You can use SQL statements or PL/I host variable declarations from a member of a partitioned data set by using the following SQL statement in the source code where you want to include the statements:
EXEC SQL INCLUDE member-name;
You cannot nest SQL INCLUDE statements. Do not use the PL/I %INCLUDE statement to include SQL statements or host variable DCL statements. You must use the PL/I preprocessor to resolve any %INCLUDE statements before you use the DB2 precompiler. Do not use PL/I preprocessor directives within SQL statements.

Margins: Code SQL statements in columns 2 through 72, unless you have specified other margins to the DB2 precompiler. If EXEC SQL starts before the specified left margin, the DB2 precompiler does not recognize the SQL statement.

Names: You can use any valid PL/I name for a host variable. Do not use external entry names or access plan names that begin with 'DSN', and do not use host variable names that begin with 'SQL'. These names are reserved for DB2.

Sequence numbers: The source statements that the DB2 precompiler generates do not include sequence numbers. IEL0378I messages from the PL/I compiler identify lines of code without sequence numbers. You can ignore these messages.

Statement labels: You can specify a statement label for executable SQL statements. However, the INCLUDE text-file-name and END DECLARE SECTION statements cannot have statement labels.

WHENEVER statement: The target for the GOTO clause in an SQL WHENEVER statement must be a label in the PL/I source code and must be within the scope of any SQL statements that WHENEVER affects.

Using double-byte character set (DBCS) characters: The following considerations apply to using DBCS in PL/I programs with SQL statements:
v If you use DBCS in the PL/I source, DB2 rules for the following language elements apply:
  – Graphic strings
  – Graphic string constants
  – Host identifiers
  – Mixed data in character strings
  – MIXED DATA option
v The PL/I preprocessor transforms the format of DBCS constants. If you do not want that transformation, run the DB2 precompiler before the preprocessor.
v If you use graphic string constants or mixed data in dynamically prepared SQL statements, and if your application requires the PL/I Version 2 (or later) compiler, the dynamically prepared statements must use the PL/I mixed constant format. If you prepare the statement from a host variable, change the string assignment to a PL/I mixed string. If you prepare the statement from a PL/I string, change that to a host variable, and then change the string assignment to a PL/I mixed string. Example:
SQLSTMT = 'SELECT <dbdb> FROM table-name'M;
EXEC SQL PREPARE STMT FROM :SQLSTMT;
v If you want a DBCS identifier to resemble a PL/I graphic string, you must use a delimited identifier.
v If you include DBCS characters in comments, you must delimit the characters with a shift-out and shift-in control character. The first shift-in character signals the end of the DBCS string.
v You can declare host variable names that use DBCS characters in PL/I application programs. The rules for using DBCS variable names in PL/I follow existing rules for DBCS SQL ordinary identifiers, except for length. The maximum length for a host variable is 128 Unicode bytes in DB2. For information about the rules for DBCS SQL ordinary identifiers, see the information about SQL identifiers.
  Restrictions: DBCS variable names must contain DBCS characters only. Mixing single-byte character set (SBCS) characters with DBCS characters in a DBCS variable name produces unpredictable results. A DBCS variable name cannot continue to the next line.
v The PL/I preprocessor changes non-Kanji DBCS characters into extended binary coded decimal interchange code (EBCDIC) SBCS characters. To avoid this change, use Kanji DBCS characters for DBCS variable names, or run the PL/I compiler without the PL/I preprocessor.

Special PL/I considerations: The following considerations apply to programs written in PL/I:
v When compiling a PL/I program that includes SQL statements, you must use the PL/I compiler option CHARSET (60 EBCDIC).
v In unusual cases, the generated comments in PL/I can contain a semicolon. The semicolon generates compiler message IEL0239I, which you can ignore.
v The generated code in a PL/I declaration can contain the ADDR function of a field defined as character varying. This produces either message IBM1051I I or IBM1180I W, both of which you can ignore.
v The precompiler-generated code in PL/I source can contain the NULL() function. This produces message IEL0533I, which you can ignore unless you also use NULL as a PL/I variable. If you use NULL as a PL/I variable in a DB2 application, you must also declare NULL as a built-in function (DCL NULL BUILTIN;) to avoid PL/I compiler errors.
v The PL/I macro processor can generate SQL statements or host variable DCL statements if you run the macro processor before running the DB2 precompiler. If you use the PL/I macro processor, do not use the PL/I *PROCESS statement in the source to pass options to the PL/I compiler. You can specify the needed options on the COPTION parameter of the DSNH command or the option PARM.PLI=options of the EXEC statement in the DSNHPLI procedure.
v Using the PL/I multitasking facility, in which multiple tasks execute SQL statements, causes unpredictable results.

You can use the subroutine DSNTIAR to convert an SQL return code into a text message. DSNTIAR takes data from the SQLCA, formats it into a message, and places the result in a message output area that you provide in your application program. For concepts and more information on the behavior of DSNTIAR, see Displaying SQLCA fields by calling DSNTIAR on page 203. You can also use the MESSAGE_TEXT condition item field of the GET DIAGNOSTICS statement to convert an SQL return code into a text message. Programs that require long token message support should code the GET DIAGNOSTICS statement instead of DSNTIAR. For more information about GET DIAGNOSTICS, see Checking the execution of SQL statements by using the GET DIAGNOSTICS statement on page 208.

DSNTIAR syntax:
CALL DSNTIAR ( sqlca, message, lrecl );

The DSNTIAR parameters have the following meanings:

sqlca
  An SQL communication area.

message
  An output area, in VARCHAR format, in which DSNTIAR places the message text. The first halfword contains the length of the remaining area; its minimum value is 240. The output lines of text, each line being the length specified in lrecl, are put into this area. For example, you could specify the format of the output area as:

  DCL DATA_LEN FIXED BIN(31) INIT(132);
  DCL DATA_DIM FIXED BIN(31) INIT(10);
  DCL 1 ERROR_MESSAGE AUTOMATIC,
      3 ERROR_LEN  FIXED BIN(15) UNAL INIT((DATA_LEN*DATA_DIM)),
      3 ERROR_TEXT(DATA_DIM) CHAR(DATA_LEN);
   .
   .
   .
  CALL DSNTIAR ( SQLCA, ERROR_MESSAGE, DATA_LEN );

  where ERROR_MESSAGE is the name of the message output area, DATA_DIM is the number of lines in the message output area, and DATA_LEN is the length of each line.

lrecl
  A fullword containing the logical record length of output messages, between 72 and 240.

Because DSNTIAR is an assembler language program, you must include the following directives in your PL/I application:
DCL DSNTIAR ENTRY OPTIONS (ASM,INTER,RETCODE);
An example of calling DSNTIAR from an application appears in the DB2 sample assembler program DSN8BP3, contained in the library DSN8910.SDSNSAMP. See DB2 sample applications on page 1126 for instructions on how to access and print the source code for the sample program. CICS: If your CICS application requires CICS storage handling, you must use the subroutine DSNTIAC instead of DSNTIAR. DSNTIAC has the following syntax:
CALL DSNTIAC (eib, commarea, sqlca, msg, lrecl);
DSNTIAC has extra parameters, which you must use for calls to routines that use CICS commands.

eib
  EXEC interface block

commarea
  communication area

For more information on these parameters, see the appropriate application programming guide for CICS. The remaining parameter descriptions are the same as those for DSNTIAR. Both DSNTIAC and DSNTIAR format the SQLCA in the same way. You must define DSNTIA1 in the CSD. If you load DSNTIAR or DSNTIAC, you must also define them in the CSD. For an example of CSD entry generation statements for use with DSNTIAC, see job DSNTEJ5A.
The assembler source code for DSNTIAC and job DSNTEJ5A, which assembles and link-edits DSNTIAC, are in the data set prefix.SDSNSAMP. Related concepts: DCLGEN (declarations generator) on page 125 Host variable arrays in an SQL statement on page 155 SQL identifiers (DB2 SQL) Related tasks: Including dynamic SQL in your program on page 158 Embedding SQL statements in your application on page 146 Handling SQL error codes on page 214 Limiting CPU time for dynamic SQL statements by using the resource limit facility on page 199
*PROCESS SYSTEM(MVS); CALPRML: PROC OPTIONS(MAIN); /************************************************************/ /* Declare the parameters used to call the GETPRML */ /* stored procedure. */ /************************************************************/ DECLARE PROCNM CHAR(18), /* INPUT parm -- PROCEDURE name */ SCHEMA CHAR(8), /* INPUT parm -- Users schema */ OUT_CODE FIXED BIN(31), /* OUTPUT -- SQLCODE from the */ /* SELECT operation. */ PARMLST CHAR(254) /* OUTPUT -- RUNOPTS for */ VARYING, /* the matching row in the */ /* catalog table SYSROUTINES */ PARMIND FIXED BIN(15); /* PARMLST indicator variable */ /************************************************************/ /* Include the SQLCA */ /************************************************************/ EXEC SQL INCLUDE SQLCA; /************************************************************/ /* Call the GETPRML stored procedure to retrieve the */ /* RUNOPTS values for the stored procedure. In this */ /* example, we request the RUNOPTS values for the */ /* stored procedure named DSN8EP2. */ /************************************************************/ PROCNM = DSN8EP2; /* Input parameter -- PROCEDURE to be found */ SCHEMA = ; /* Input parameter -- SCHEMA in SYSROUTINES */ PARMIND = -1; /* The PARMLST parameter is an output parm. */ /* Mark PARMLST parameter as null, so the DB2 */ /* requester does not have to send the entire */ /* PARMLST variable to the server. This */ /* helps reduce network I/O time, because */ /* PARMLST is fairly large. */ EXEC SQL CALL GETPRML(:PROCNM, :SCHEMA, :OUT_CODE, :PARMLST INDICATOR :PARMIND); IF SQLCODE=0 THEN /* If SQL CALL failed, DO; PUT SKIP EDIT(SQL CALL failed due to SQLCODE = , SQLCODE) (A(34),A(14)); PUT SKIP EDIT(SQLERRM = , SQLERRM) (A(10),A(70)); END; ELSE /* If the CALL worked, IF OUT_CODE=0 THEN /* Did GETPRML hit an error? PUT SKIP EDIT(GETPRML failed due to RC = , OUT_CODE) (A(33),A(14)); ELSE /* Everything worked. PUT SKIP EDIT(RUNOPTS = , PARMLST) (A(11),A(200)); RETURN; END CALPRML; Figure 21. Calling a stored procedure from a PL/I program */
The following example is a PL/I stored procedure with linkage convention GENERAL.
*PROCESS SYSTEM(MVS); GETPRML: PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST) OPTIONS(MAIN NOEXECOPS REENTRANT); DECLARE PROCNM CHAR(18), SCHEMA CHAR(8), /* INPUT parm -- PROCEDURE name */ /* INPUT parm -- Users SCHEMA */
OUT_CODE FIXED BIN(31), /* OUTPUT -- SQLCODE from */ /* the SELECT operation. */ PARMLST CHAR(254) /* OUTPUT -- RUNOPTS for */ VARYING; /* the matching row in */ /* SYSIBM.SYSROUTINES */ EXEC SQL INCLUDE SQLCA; /************************************************************/ /* Execute SELECT from SYSIBM.SYSROUTINES in the catalog. */ /************************************************************/ EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.SYSROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA;
Example PL/I stored procedure with a GENERAL WITH NULLS linkage convention
You can call a stored procedure that uses the GENERAL WITH NULLS linkage convention from a PL/I program. This example stored procedure searches the DB2 SYSIBM.SYSROUTINES catalog table for a row that matches the input parameters from the client program. The two input parameters contain values for NAME and SCHEMA. The linkage convention for this stored procedure is GENERAL WITH NULLS. The output parameters from this stored procedure contain the SQLCODE from the SELECT operation, and the value of the RUNOPTS column retrieved from the SYSIBM.SYSROUTINES table. The CREATE PROCEDURE statement for this stored procedure might look like this:
CREATE PROCEDURE GETPRML(IN  PROCNM  CHAR(18),
                         IN  SCHEMA  CHAR(8),
                         OUT OUTCODE INTEGER,
                         OUT PARMLST VARCHAR(254))
       LANGUAGE PLI
       DETERMINISTIC
       READS SQL DATA
       EXTERNAL NAME "GETPRML"
       COLLID GETPRML
       ASUTIME NO LIMIT
       PARAMETER STYLE GENERAL WITH NULLS
       STAY RESIDENT NO
       RUN OPTIONS "MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)"
       WLM ENVIRONMENT SAMPPROG
       PROGRAM TYPE MAIN
       SECURITY DB2
       RESULT SETS 0
       COMMIT ON RETURN NO;
The following example is a PL/I stored procedure with linkage convention GENERAL WITH NULLS.
*PROCESS SYSTEM(MVS); GETPRML: PROC(PROCNM, SCHEMA, OUT_CODE, PARMLST, INDICATORS) OPTIONS(MAIN NOEXECOPS REENTRANT); DECLARE PROCNM CHAR(18), SCHEMA CHAR(8), /* INPUT parm -- PROCEDURE name */ /* INPUT parm -- Users schema */ */ */ */ */ */ */ */
OUT_CODE FIXED BIN(31), /* OUTPUT -- SQLCODE from /* the SELECT operation. PARMLST CHAR(254) /* OUTPUT -- PARMLIST for VARYING; /* the matching row in /* SYSIBM.SYSROUTINES DECLARE 1 INDICATORS, /* Declare null indicators for /* input and output parameters. 3 PROCNM_IND FIXED BIN(15), 3 SCHEMA_IND FIXED BIN(15), 3 OUT_CODE_IND FIXED BIN(15), 3 PARMLST_IND FIXED BIN(15);
EXEC SQL INCLUDE SQLCA; IF PROCNM_IND<0 | SCHEMA_IND<0 THEN DO; OUT_CODE = 9999; OUT_CODE_IND = 0;
/* Output return code is not NULL.*/ PARMLST_IND = -1; /* Assign NULL value to PARMLST. */ END; ELSE /* If input parms are not NULL, */ DO; /* */ /************************************************************/ /* Issue the SQL SELECT against the SYSIBM.SYSROUTINES */ /* DB2 catalog table. */ /************************************************************/ EXEC SQL SELECT RUNOPTS INTO :PARMLST FROM SYSIBM.SYSROUTINES WHERE NAME=:PROCNM AND SCHEMA=:SCHEMA; PARMLST_IND = 0; /* Mark PARMLST as not NULL. */ OUT_CODE = SQLCODE; OUT_CODE_IND = 0; OUT_CODE_IND = 0; END; RETURN; END GETPRML; /* return SQLCODE to caller */
Procedure
To define SQL descriptor areas: Code the SQLDA declarations directly in your program. Each SQLDA consists of a set of REXX variables with a common stem. The stem must be a REXX variable name that contains no periods and is the same as the value of descriptor-name that you specify when you use the SQLDA in an SQL statement. Restrictions:
v You must place SQLDA declarations before the first SQL statement that references the data descriptor, unless you use the TWOPASS SQL processing option.
v You cannot use the SQL INCLUDE statement for the SQLDA, because the INCLUDE statement is not supported in REXX.

Related tasks:
Defining SQL descriptor areas on page 137
Table 72. SQL input data types and REXX data formats

SQL data type assigned by DB2 | SQLTYPE for data type | REXX input data format
BIGINT | 492/493 | A string of numerics that does not contain a decimal point or an exponent identifier. The first character can be a plus (+) or minus (-) sign. The number that is represented is outside the range of INTEGER but within the range of BIGINT.
DECIMAL(p,s) | 484/485 | A string of numerics that contains a decimal point but no exponent identifier. The first character can be a plus (+) or minus (-) sign. p is the total number of numerics, and s is the number of numerics to the right of the decimal point.
FLOAT | 480/481 | One of the following formats:
  v A string of numerics that does not contain a decimal point or an exponent identifier. The first character can be a plus (+) or minus (-) sign. The number that is represented is less than -9223372036854775808 or greater than 9223372036854775807.
  v A string that represents a number in scientific notation. The string consists of a series of numerics followed by an exponent identifier (an E or e followed by an optional plus (+) or minus (-) sign and a series of numerics). The string can begin with a plus (+) or minus (-) sign.
Table 72. SQL input data types and REXX data formats (continued)

SQL data type assigned by DB2 | SQLTYPE for data type | REXX input data format
VARCHAR(n) | 448/449 | One of the following formats:
  v A string of length n, enclosed in single or double quotation marks.
  v The character X or x, followed by a string enclosed in single or double quotation marks. The string within the quotation marks has a length of 2*n bytes and is the hexadecimal representation of a string of n characters.
  v A string of length n that does not have a numeric or graphic format, and does not satisfy either of the previous conditions.
VARGRAPHIC(n) | 464/465 | One of the following formats:
  v The character G, g, N, or n, followed by a string enclosed in single or double quotation marks. The string within the quotation marks begins with a shift-out character (X'0E') and ends with a shift-in character (X'0F'). Between the shift-out character and shift-in character are n double-byte characters.
  v The characters GX, Gx, gX, or gx, followed by a string enclosed in single or double quotation marks. The string within the quotation marks has a length of 4*n bytes and is the hexadecimal representation of a string of n double-byte characters.
For example, when DB2 executes the following statements to update the MIDINIT column of the EMP table, DB2 must determine a data type for HVMIDINIT:
SQLSTMT="UPDATE EMP" , "SET MIDINIT = ?" , "WHERE EMPNO = 000200" "EXECSQL PREPARE S100 FROM :SQLSTMT" HVMIDINIT=H "EXECSQL EXECUTE S100 USING" , ":HVMIDINIT"
Because the data that is assigned to HVMIDINIT has a format that fits a character data type, DB2 REXX Language Support assigns a VARCHAR type to the input data. If you do not assign a value to a host variable before you assign the host variable to a column, DB2 returns an error code. Related concepts: Compatibility of SQL and language data types on page 144
v DESCRIBE INPUT
v DESCRIBE PROCEDURE
v EXECUTE
v EXECUTE IMMEDIATE
v FETCH
v OPEN
v PREPARE
v RELEASE connection
v SET CONNECTION
v SET CURRENT PACKAGE PATH
v SET CURRENT PACKAGESET
v SET host-variable = CURRENT DATE
v SET host-variable = CURRENT DEGREE
v SET host-variable = CURRENT MEMBER
v SET host-variable = CURRENT PACKAGESET
v SET host-variable = CURRENT PATH
v SET host-variable = CURRENT SERVER
v SET host-variable = CURRENT SQLID
v SET host-variable = CURRENT TIME
v SET host-variable = CURRENT TIMESTAMP
v SET host-variable = CURRENT TIMEZONE
Each SQL statement in a REXX program must begin with EXECSQL, in either upper-, lower-, or mixed-case. One of the following items must follow EXECSQL: v An SQL statement enclosed in single or double quotation marks. v A REXX variable that contains an SQL statement. The REXX variable must not be preceded by a colon. For example, you can use either of the following methods to execute the COMMIT statement in a REXX program:
EXECSQL "COMMIT" rexxvar="COMMIT" EXECSQL rexxvar
The following dynamic statements must be executed using EXECUTE IMMEDIATE or PREPARE and EXECUTE under DSNREXX; a short example follows this list:
v DECLARE GLOBAL TEMPORARY TABLE
v SET CURRENT DEBUG MODE
v SET CURRENT DECFLOAT ROUNDING MODE
v SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION
v SET CURRENT QUERY ACCELERATION
v SET CURRENT REFRESH AGE
v SET CURRENT ROUTINE VERSION
v SET SCHEMA

You cannot execute a SELECT, INSERT, UPDATE, MERGE, or DELETE statement that contains host variables. Instead, you must execute PREPARE on the statement, with parameter markers substituted for the host variables, and then use the host variables in an EXECUTE, OPEN, or FETCH statement. See Host variables on page 139 for more information.
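For example, DECLARE GLOBAL TEMPORARY TABLE, the first statement in the preceding list, can be run through PREPARE and EXECUTE. The following REXX fragment is a minimal sketch; the temporary table name and its column are illustrative and are not part of the original examples:

/* Build the dynamic statement, then prepare and execute it under DSNREXX */
SQLSTMT = "DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMPEMP" ,
          " (EMPNO CHAR(6) NOT NULL)"
"EXECSQL PREPARE S2 FROM :SQLSTMT"
"EXECSQL EXECUTE S2"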
An SQL statement follows rules that apply to REXX commands. The SQL statement can optionally end with a semicolon and can be enclosed in single or double quotation marks, as in the following example:
EXECSQL "COMMIT;"
Comments: You cannot include REXX comments (/* ... */) or SQL comments (--) within SQL statements. However, you can include REXX comments anywhere else in the program. Names:Continuation for SQL statements: SQL statements that span lines follow REXX rules for statement continuation. You can break the statement into several strings, each of which fits on a line, and separate the strings with commas or with concatenation operators followed by commas. For example, either of the following statements is valid:
EXECSQL ,
  "UPDATE DSN8910.DEPT" ,
  "SET MGRNO = '000010'" ,
  "WHERE DEPTNO = 'D11'"

"EXECSQL " || ,
  " UPDATE DSN8910.DEPT " || ,
  " SET MGRNO = '000010'" || ,
  " WHERE DEPTNO = 'D11'"
Including code: The EXECSQL INCLUDE statement is not valid for REXX. You therefore cannot include externally defined SQL statements in a program.

Margins: Like REXX commands, SQL statements can begin and end anywhere on a line.

Names: You can use any valid REXX name that does not end with a period as a host variable. However, host variable names should not begin with 'SQL', 'RDI', 'DSN', 'RXSQL', or 'QRW'. Variable names can be at most 64 bytes.

Nulls: A REXX null value and an SQL null value are different. The REXX language has a null string (a string of length 0) and a null clause (a clause that contains only blanks and comments). The SQL null value is a special value that is distinct from all nonnull values and denotes the absence of a value. Assigning a REXX null value to a DB2 column does not make the column value null.

Statement labels: You can precede an SQL statement with a label, in the same way that you label REXX commands.

Handling errors and warnings: DB2 does not support the SQL WHENEVER statement in a REXX program. To handle SQL errors and warnings, use the following methods:
v To test for SQL errors or warnings, test the SQLCODE or SQLSTATE value and the SQLWARN. values after each EXECSQL call. This method does not detect errors in the REXX interface to DB2.
v To test for SQL errors or warnings or errors or warnings from the REXX interface to DB2, test the REXX RC variable after each EXECSQL call. The following table lists the values of the RC variable; a short sketch of this technique follows the table. You can also use the REXX SIGNAL ON ERROR and SIGNAL ON FAILURE keyword instructions to detect negative values of the RC variable and transfer control to an error routine.
Table 73. REXX return codes after SQL statements

Return code | Meaning
0  | No SQL warning or error occurred.
+1 | An SQL warning occurred.
-1 | An SQL error occurred.
-3 | The first token after ADDRESS DSNREXX is in error. For a description of the tokens allowed, see Accessing the DB2 REXX language support application programming interfaces.
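The following REXX fragment is a minimal sketch of testing the RC variable after an EXECSQL call. The messages and the exit value are illustrative, not part of the original text:

"EXECSQL COMMIT"
If RC < 0 Then
  Do                                   /* REXX interface or SQL error     */
    Say 'EXECSQL failed, RC='RC' SQLCODE='SQLCODE
    Exit 8
  End
If RC = 1 Then                         /* SQL warning: continue processing */
  Say 'SQL warning, SQLCODE='SQLCODE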
Related tasks: Including dynamic SQL in your program on page 158 Embedding SQL statements in your application on page 146 Handling SQL error codes on page 214 Limiting CPU time for dynamic SQL statements by using the resource limit facility on page 199
ADDRESS DSNREXX 'CONNECT' 'subsystem-ID'|REXX-variable

Notes:
1. CALL SQLDBS 'ATTACH TO' ssid is equivalent to ADDRESS DSNREXX 'CONNECT' ssid.
2. The REXX-variable or 'subsystem-ID' string may also be a single member name in a data sharing group or the group attachment name.

DSNREXX EXECSQL
Executes SQL statements in REXX programs. The syntax of the DSNREXX EXECSQL command is:

ADDRESS DSNREXX 'EXECSQL' "SQL-statement"|REXX-variable

Notes:
1. CALL 'SQLEXEC' "SQL-statement" is equivalent to ADDRESS DSNREXX 'EXECSQL' "SQL-statement".
2. 'EXECSQL' and "SQL-statement" can be enclosed in either single or double quotation marks.

DSNREXX DISCONNECT
Deallocates the DSNREXX plan and removes the REXX task as a connected user of DB2. You should execute the DSNREXX DISCONNECT command to release resources that are held by DB2. Otherwise, resources are not released until the REXX task terminates. Do not use the DSNREXX DISCONNECT command from a stored procedure. The syntax of the DSNREXX DISCONNECT command is:

ADDRESS DSNREXX 'DISCONNECT'

Note: CALL SQLDBS 'DETACH' is equivalent to ADDRESS DSNREXX 'DISCONNECT'.

These application programming interfaces are available through the DSNREXX host command environment. To make DSNREXX available to the application, invoke the RXSUBCOM function. The syntax is:

RXSUBCOM ( 'ADD'|'DELETE' , 'DSNREXX' , 'DSNREXX' )
The ADD function adds DSNREXX to the REXX host command environment table. The DELETE function deletes DSNREXX from the REXX host command environment table. The following figure shows an example of REXX code that makes DSNREXX available to an application.
SUBCOM DSNREXX                                  /* HOST CMD ENV AVAILABLE?   */
IF RC THEN                                      /* IF NOT, MAKE IT AVAILABLE */
  S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')    /* ADD HOST CMD ENVIRONMENT  */
ADDRESS DSNREXX                                 /* SEND ALL COMMANDS OTHER   */
                                                /* THAN REXX INSTRUCTIONS TO */
                                                /* DSNREXX                   */
                                                /* CALL CONNECT, EXECSQL, AND*/
                                                /* DISCONNECT INTERFACES     */
 .
 .
 .
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')   /* WHEN DONE WITH            */
                                                /* DSNREXX, REMOVE IT.       */
Ensuring that DB2 correctly interprets character input data in REXX programs
DB2 REXX Language Support might incorrectly interpret character literals as graphic or numeric literals unless you mark them correctly.
Procedure
To ensure that DB2 correctly interprets character input data in REXX programs: Precede and follow character literals with a double quotation mark, followed by a single quotation mark, followed by another double quotation mark ("'"). Example: Specify the string '100' as "'100'". Enclosing the string in apostrophes is not adequate, because REXX removes the apostrophes when it assigns a literal to a variable. For example, suppose that you want to pass the value in a host variable called stringvar to DB2. The value that you want to pass is the string '100'. First, you assign the string to the host variable by issuing the following REXX command:
stringvar = '100'
After the command executes, stringvar contains the characters 100 (without the apostrophes). DB2 REXX Language Support then passes the numeric value 100 to DB2, which is not what you intended. However, suppose that you write the following command:
stringvar = ""100""
In this case, REXX assigns the string '100' to stringvar, including the single quotation marks. DB2 REXX Language Support then passes the string '100' to DB2, which is the result that you want.
Passing the data type of an input data type to DB2 for REXX programs
In certain situations, you should tell DB2 the data type to use for input data in a REXX program. For example, if you are assigning or comparing input data to columns of type SMALLINT, CHAR, or GRAPHIC, you should tell DB2 to use those data types.
Procedure
To pass the data type of an input data type to DB2 for REXX programs: Use an SQLDA.
Examples
Example of specifying CHAR as an input data type: Suppose that you want to tell DB2 that the data with which you update the MIDINIT column of the EMP table is of type CHAR, rather than VARCHAR. You need to set up an SQLDA that contains a description of a CHAR column, and then prepare and execute the UPDATE statement using that SQLDA, as shown in the following example.
INSQLDA.SQLD = 1              /* SQLDA contains one variable    */
INSQLDA.1.SQLTYPE = 453       /* Type of the variable is CHAR,  */
                              /* and the value can be null      */
INSQLDA.1.SQLLEN = 1          /* Length of the variable is 1    */
INSQLDA.1.SQLDATA = 'H'       /* Value in variable is H         */
INSQLDA.1.SQLIND = 0          /* Input variable is not null     */

SQLSTMT="UPDATE EMP" ,
        "SET MIDINIT = ?" ,
        "WHERE EMPNO = '000200'"
"EXECSQL PREPARE S100 FROM :SQLSTMT"
"EXECSQL EXECUTE S100 USING DESCRIPTOR :INSQLDA"
Example of specifying the input data type as DECIMAL with precision and scale: Suppose that you want to tell DB2 that the data is of type DECIMAL with precision and nonzero scale. You need to set up an SQLDA that contains a description of a DECIMAL column, as shown in the following example.
INSQLDA.SQLD = 1                         /* SQLDA contains one variable */
INSQLDA.1.SQLTYPE = 484                  /* Type of variable is DECIMAL */
INSQLDA.1.SQLLEN.SQLPRECISION = 18       /* Precision of variable is 18 */
INSQLDA.1.SQLLEN.SQLSCALE = 8            /* Scale of variable is 8      */
INSQLDA.1.SQLDATA = 9876543210.87654321  /* Value in variable           */
Procedure
To set the isolation level of SQL statements in a REXX program: Execute the SET CURRENT PACKAGESET statement to select one of the following DB2 REXX Language Support packages with the isolation level that you need.
Table 74. DB2 REXX Language Support packages and associated isolation levels

Package name (see Note 1) | Isolation level
DSNREXRR | Repeatable read (RR)
DSNREXRS | Read stability (RS)
DSNREXCS | Cursor stability (CS)
DSNREXUR | Uncommitted read (UR)
Note: 1. These packages enable your program to access DB2 and are bound when you install DB2 REXX Language Support. For example, to change the isolation level to cursor stability, execute the following SQL statement:
"EXECSQL SET CURRENT PACKAGESET=DSNREXCS"
Table 75. SQL output data types and REXX data formats

SQL data type | REXX output data format
FLOAT(n), REAL, or DOUBLE | One of the following formats:
  v A string of numerics that does not contain a decimal point or an exponent identifier. The numeric value is less than -9223372036854775808 or greater than 9223372036854775807. If the value is negative, it begins with a minus (-) sign.
  v A string that represents a number in scientific notation. The string consists of a numeric, a decimal point, a series of numerics, and an exponent identifier. The exponent identifier is an E followed by a minus (-) sign and a series of numerics if the number is between -1 and 1. Otherwise, the exponent identifier is an E followed by a series of numerics. If the string represents a negative number, it begins with a minus (-) sign.
Table 75. SQL output data types and REXX data formats (continued)

SQL data type | REXX output data format
DECFLOAT | REXX emulates the DECFLOAT data type with DOUBLE, so support for DECFLOAT is limited to the REXX support for DOUBLE. The following special values are not supported: INFINITY, SNAN, and NAN.
CHAR(n) or VARCHAR(n) | A character string of length n bytes. The string is not enclosed in single or double quotation marks.
GRAPHIC(n) or VARGRAPHIC(n) | A string of length 2*n bytes. Each pair of bytes represents a double-byte character. This string does not contain a leading G, is not enclosed in quotation marks, and does not contain shift-out or shift-in characters.
Because you cannot use the SELECT INTO statement in a REXX procedure, to retrieve data from a DB2 table you must prepare a SELECT statement, open a cursor for the prepared statement, and then fetch rows into host variables or an SQLDA using the cursor. The following example demonstrates how you can retrieve data from a DB2 table using an SQLDA:
SQLSTMT= ,
  "SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME," ,
  " WORKDEPT, PHONENO, HIREDATE, JOB," ,
  " EDLEVEL, SEX, BIRTHDATE, SALARY," ,
  " BONUS, COMM" ,
  " FROM EMP"
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 INTO :OUTSQLDA FROM :SQLSTMT"
"EXECSQL OPEN C1"
Do Until(SQLCODE ¬= 0)
  "EXECSQL FETCH C1 USING DESCRIPTOR :OUTSQLDA"
  If SQLCODE = 0 Then Do
    Line = ''
    Do I = 1 To OUTSQLDA.SQLD
      Line = Line OUTSQLDA.I.SQLDATA
    End I
    Say Line
  End
End
s1 to s100 Prepared statement names for DECLARE STATEMENT, PREPARE, DESCRIBE, and EXECUTE statements. Use only the predefined names for cursors and statements. When you associate a cursor name with a statement name in a DECLARE CURSOR statement, the cursor name and the statement must have the same number. For example, if you declare cursor c1, you need to declare it for statement s1:
EXECSQL DECLARE C1 CURSOR FOR S1
DRAW syntax:
%DRAW object-name ( SSID=ssid TYPE= { SELECT | INSERT | UPDATE | LOAD }
DRAW parameters:

object-name
  The name of the table or view for which DRAW builds an SQL statement or utility control statement. The name can be a one-, two-, or three-part name. The table or view to which object-name refers must exist before DRAW can run. object-name is a required parameter.
SSID=ssid
  Specifies the name of the local DB2 subsystem. S can be used as an abbreviation for SSID. If you invoke DRAW from the command line of the edit session in SPUFI, SSID=ssid is an optional parameter. DRAW uses the subsystem ID from the DB2I Defaults panel.

TYPE=operation-type
  The type of statement that DRAW builds. T can be used as an abbreviation for TYPE. operation-type has one of the following values:

  SELECT Builds a SELECT statement in which the result table contains all columns of object-name. S can be used as an abbreviation for SELECT.

  INSERT Builds a template for an INSERT statement that inserts values into all columns of object-name. The template contains comments that indicate where the user can place column values. I can be used as an abbreviation for INSERT.

  UPDATE Builds a template for an UPDATE statement that updates columns of object-name. The template contains comments that indicate where the user can place column values and qualify the update operation for selected rows. U can be used as an abbreviation for UPDATE.

  LOAD Builds a template for a LOAD utility control statement for object-name. L can be used as an abbreviation for LOAD.

  TYPE=operation-type is an optional parameter. The default is TYPE=SELECT.

DRAW data sets:

Edit data set
  The data set from which you issue the DRAW command when you are in an ISPF edit session. If you issue the DRAW command from a SPUFI session, this data set is the data set that you specify in field 1 of the main SPUFI panel (DSNESP01). The output from the DRAW command goes into this data set.

DRAW return codes:

Return code | Meaning
0  | Successful completion.
12 | An error occurred when DRAW edited the input file.
20 | One of the following errors occurred:
     v No input parameters were specified.
     v One of the input parameters was not valid.
     v An SQL error occurred when the output statement was generated.
Examples of DRAW invocation: Generate a SELECT statement for table DSN8910.EMP at the local subsystem. Use the default DB2I subsystem ID. The DRAW invocation is:
DRAW DSN8910.EMP (TYPE=SELECT
Generate a template for an INSERT statement that inserts values into table DSN8910.EMP at location SAN_JOSE. The local subsystem ID is DSN. The DRAW invocation is:
DRAW SAN_JOSE.DSN8910.EMP (TYPE=INSERT SSID=DSN
Generate a template for an UPDATE statement that updates values of table DSN8910.EMP. The local subsystem ID is DSN. The DRAW invocation is:
DRAW DSN8910.EMP (TYPE=UPDATE SSID=DSN
Generate a LOAD control statement to load values into table DSN8910.EMP. The local subsystem ID is DSN. The draw invocation is:
DRAW DSN8910.EMP (TYPE=LOAD SSID=DSN
"PHONENO" , "HIREDATE" , "JOB" , "EDLEVEL" , "SEX" , "BIRTHDATE" , "SALARY" , "BONUS" , "COMM" FROM DSN8910.EMP If you include a location qualifier, the query looks like this: SELECT "EMPNO" , "FIRSTNME" , "MIDINIT" , "LASTNAME" , "WORKDEPT" , "PHONENO" , "HIREDATE" , "JOB" , "EDLEVEL" , "SEX" , "BIRTHDATE" , "SALARY" , "BONUS" , "COMM" FROM STLEC1.DSN8910.EMP To use this SELECT query, type the other clauses you need. If you are selecting from more than one table, use a DRAW command for each table name you want represented. Insert Composes a basic query to insert data into the columns of a table or view. The following example shows an INSERT query of EMP that DRAW composed: INSERT INTO DSN8910.EMP ( "EMPNO" , "FIRSTNME" , "MIDINIT" , "LASTNAME" , "WORKDEPT" , "PHONENO" , "HIREDATE" , "JOB" , "EDLEVEL" , "SEX" , "BIRTHDATE" , "SALARY" , "BONUS" , "COMM" ) VALUES ( -- ENTER VALUES BELOW COLUMN NAME DATA TYPE , -- EMPNO CHAR(6) NOT NULL , -- FIRSTNME VARCHAR(12) NOT NULL , -- MIDINIT CHAR(1) NOT NULL , -- LASTNAME VARCHAR(15) NOT NULL , -- WORKDEPT CHAR(3) , -- PHONENO CHAR(4) , -- HIREDATE DATE , -- JOB CHAR(8) , -- EDLEVEL SMALLINT , -- SEX CHAR(1) , -- BIRTHDATE DATE , -- SALARY DECIMAL(9,2) , -- BONUS DECIMAL(9,2) ) -- COMM DECIMAL(9,2) To insert values into EMP, type values to the left of the column names. Update Composes a basic query to change the data in a table or view. The following example shows an UPDATE query of EMP composed by DRAW: UPDATE DSN8910.EMP SET -- COLUMN NAME ENTER VALUES BELOW DATA TYPE "EMPNO"= -- CHAR(6) NOT NULL , "FIRSTNME"= -- VARCHAR(12) NOT NULL , "MIDINIT"= -- CHAR(1) NOT NULL , "LASTNAME"= -- VARCHAR(15) NOT NULL , "WORKDEPT"= -- CHAR(3) , "PHONENO"= -- CHAR(4) , "HIREDATE"= -- DATE , "JOB"= -- CHAR(8) , "EDLEVEL"= -- SMALLINT , "SEX"= -- CHAR(1) , "BIRTHDATE"= -- DATE , "SALARY"= -- DECIMAL(9,2) , "BONUS"= -- DECIMAL(9,2) , "COMM"= -- DECIMAL(9,2) WHERE To use this UPDATE query, type the changes you want to make to the right of the column names, and delete the lines you do not need. Be sure to complete the WHERE clause. Load Composes a load statement to load the data in a table. The following example shows a LOAD statement of EMP composed by DRAW: LOAD DATA INDDN SYSREC INTO TABLE DSN8910 .EMP
(  "EMPNO"              POSITION(     1)  CHAR(6)
 , "FIRSTNME"           POSITION(     8)  VARCHAR
 , "MIDINIT"            POSITION(    21)  CHAR(1)
 , "LASTNAME"           POSITION(    23)  VARCHAR
 , "WORKDEPT"           POSITION(    39)  CHAR(3)
                          NULLIF(    39)='?'
 , "PHONENO"            POSITION(    43)  CHAR(4)
                          NULLIF(    43)='?'
 , "HIREDATE"           POSITION(    48)  DATE EXTERNAL
                          NULLIF(    48)='?'
 , "JOB"                POSITION(    59)  CHAR(8)
                          NULLIF(    59)='?'
 , "EDLEVEL"            POSITION(    68)  SMALLINT
                          NULLIF(    68)='?'
 , "SEX"                POSITION(    71)  CHAR(1)
                          NULLIF(    71)='?'
 , "BIRTHDATE"          POSITION(    73)  DATE EXTERNAL
                          NULLIF(    73)='?'
 , "SALARY"             POSITION(    84)  DECIMAL EXTERNAL(9,2)
                          NULLIF(    84)='?'
 , "BONUS"              POSITION(    90)  DECIMAL EXTERNAL(9,2)
                          NULLIF(    90)='?'
 , "COMM"               POSITION(    96)  DECIMAL EXTERNAL(9,2)
                          NULLIF(    96)='?'
 )
To use this LOAD statement, type the changes you want to make, and delete the lines you do not need. */ L2 = WHEREAMI() /**********************************************************************/ /* TRACE ?R */ /**********************************************************************/ Address ISPEXEC "ISREDIT MACRO (ARGS) NOPROCESS" If ARGS = "" Then Do Do I = L1+2 To L2-2;Say SourceLine(I);End Exit (20) End Parse Upper Var Args Table "(" Parms Parms = Translate(Parms," ",",") Type = "SELECT" /* Default */ SSID = "" /* Default */ "VGET (DSNEOV01)" If RC = 0 Then SSID = DSNEOV01 If (Parms <> "") Then Do Until(Parms = "") Parse Var Parms Var "=" Value Parms If Var = "T" | Var = "TYPE" Then Type = Value Else If Var = "S" | Var = "SSID" Then SSID = Value Else Exit (20) End "CONTROL ERRORS RETURN" "ISREDIT (LEFTBND,RIGHTBND) = BOUNDS" "ISREDIT (LRECL) = DATA_WIDTH" /*LRECL*/ BndSize = RightBnd - LeftBnd + 1 If BndSize > 72 Then BndSize = 72 "ISREDIT PROCESS DEST" Select When rc = 0 Then ISREDIT (ZDEST) = LINENUM .ZDEST When rc <= 8 Then /* No A or B entered */ Do zedsmsg = Enter "A"/"B" line cmd zedlmsg = DRAW requires an "A" or "B" line command
SETMSG MSG(ISRZ001) Exit 12 End When rc < 20 Then /* Conflicting line commands - edit sets message */ Exit 12 When rc = 20 Then zdest = 0 Otherwise Exit 12 End SQLTYPE. = "UNKNOWN TYPE" VCHTYPE = 448; SQLTYPES.VCHTYPE = VARCHAR CHTYPE = 452; SQLTYPES.CHTYPE = CHAR LVCHTYPE = 456; SQLTYPES.LVCHTYPE = VARCHAR VGRTYP = 464; SQLTYPES.VGRTYP = VARGRAPHIC GRTYP = 468; SQLTYPES.GRTYP = GRAPHIC LVGRTYP = 472; SQLTYPES.LVGRTYP = VARGRAPHIC FLOTYPE = 480; SQLTYPES.FLOTYPE = FLOAT DCTYPE = 484; SQLTYPES.DCTYPE = DECIMAL INTYPE = 496; SQLTYPES.INTYPE = INTEGER SMTYPE = 500; SQLTYPES.SMTYPE = SMALLINT DATYPE = 384; SQLTYPES.DATYPE = DATE TITYPE = 388; SQLTYPES.TITYPE = TIME TSTYPE = 392; SQLTYPES.TSTYPE = TIMESTAMP Address TSO "SUBCOM DSNREXX" /* HOST CMD ENV AVAILABLE? */ IF RC THEN /* NO, LETS MAKE ONE */ S_RC = RXSUBCOM(ADD,DSNREXX,DSNREXX) /* ADD HOST CMD ENV */ Address DSNREXX "CONNECT" SSID If SQLCODE ^= 0 Then Call SQLCA Address DSNREXX "EXECSQL DESCRIBE TABLE :TABLE INTO :SQLDA" If SQLCODE ^= 0 Then Call SQLCA Address DSNREXX "EXECSQL COMMIT" Address DSNREXX "DISCONNECT" If SQLCODE ^= 0 Then Call SQLCA Select When (Left(Type,1) = "S") Then Call DrawSelect When (Left(Type,1) = "I") Then Call DrawInsert When (Left(Type,1) = "U") Then Call DrawUpdate When (Left(Type,1) = "L") Then Call DrawLoad Otherwise EXIT (20) End Do I = LINE.0 To 1 By -1 LINE = COPIES(" ",LEFTBND-1)||LINE.I ISREDIT LINE_AFTER zdest = DATALINE (Line) End line1 = zdest + 1 ISREDIT CURSOR = line1 0 Exit /**********************************************************************/ WHEREAMI:; RETURN SIGL /**********************************************************************/ /* Draw SELECT */ /**********************************************************************/ DrawSelect: Line.0 = 0 Line = "SELECT" Do I = 1 To SQLDA.SQLD If I > 1 Then Line = Line , ColName = "SQLDA.I.SQLNAME" Null = SQLDA.I.SQLTYPE//2 If Length(Line)+Length(ColName)+LENGTH(" ,") > BndSize THEN Do
L = Line.0 + 1; Line.0 = L Line.L = Line Line = " " End Line = Line ColName End I If Line ^= "" Then Do L = Line.0 + 1; Line.0 = L Line.L = Line Line = " " End L = Line.0 + 1; Line.0 = L Line.L = "FROM" TABLE Return /**********************************************************************/ /* Draw INSERT */ /**********************************************************************/ DrawInsert: Line.0 = 0 Line = "INSERT INTO" TABLE "(" Do I = 1 To SQLDA.SQLD If I > 1 Then Line = Line , ColName = "SQLDA.I.SQLNAME" If Length(Line)+Length(ColName) > BndSize THEN Do L = Line.0 + 1; Line.0 = L Line.L = Line Line = " " End Line = Line ColName If I = SQLDA.SQLD Then Line = Line ) End I If Line ^= "" Then Do L = Line.0 + 1; Line.0 = L Line.L = Line Line = " " End L = Line.0 + 1; Line.0 = L Line.L = " VALUES (" L = Line.0 + 1; Line.0 = L Line.L = , "-- ENTER VALUES BELOW COLUMN NAME DATA TYPE" Do I = 1 To SQLDA.SQLD If SQLDA.SQLD > 1 & I < SQLDA.SQLD Then Line = " , --" Else Line = " ) --" Line = Line Left(SQLDA.I.SQLNAME,18) Type = SQLDA.I.SQLTYPE Null = Type//2 If Null Then Type = Type - 1 Len = SQLDA.I.SQLLEN Prcsn = SQLDA.I.SQLLEN.SQLPRECISION Scale = SQLDA.I.SQLLEN.SQLSCALE Select When (Type = CHTYPE , |Type = VCHTYPE , |Type = LVCHTYPE , |Type = GRTYP , |Type = VGRTYP , |Type = LVGRTYP ) THEN Type = SQLTYPES.Type"("STRIP(LEN)")" When (Type = FLOTYPE ) THEN Type = SQLTYPES.Type"("STRIP((LEN*4)-11) ")" When (Type = DCTYPE ) THEN
Type = SQLTYPES.Type"("STRIP(PRCSN)","STRIP(SCALE)")" Otherwise Type = SQLTYPES.Type End Line = Line Type If Null = 0 Then Line = Line "NOT NULL" L = Line.0 + 1; Line.0 = L Line.L = Line End I Return /**********************************************************************/ /* Draw UPDATE */ /**********************************************************************/ DrawUpdate: Line.0 = 1 Line.1 = "UPDATE" TABLE "SET" L = Line.0 + 1; Line.0 = L Line.L = , "-- COLUMN NAME ENTER VALUES BELOW DATA TYPE" Do I = 1 To SQLDA.SQLD If I = 1 Then Line = " " Else Line = " ," Line = Line Left("SQLDA.I.SQLNAME"=,21) Line = Line Left(" ",20) Type = SQLDA.I.SQLTYPE Null = Type//2 If Null Then Type = Type - 1 Len = SQLDA.I.SQLLEN Prcsn = SQLDA.I.SQLLEN.SQLPRECISION Scale = SQLDA.I.SQLLEN.SQLSCALE Select When (Type = CHTYPE , |Type = VCHTYPE , |Type = LVCHTYPE , |Type = GRTYP , |Type = VGRTYP , |Type = LVGRTYP ) THEN Type = SQLTYPES.Type"("STRIP(LEN)")" When (Type = FLOTYPE ) THEN Type = SQLTYPES.Type"("STRIP((LEN*4)-11) ")" When (Type = DCTYPE ) THEN Type = SQLTYPES.Type"("STRIP(PRCSN)","STRIP(SCALE)")" Otherwise Type = SQLTYPES.Type End Line = Line "--" Type If Null = 0 Then Line = Line "NOT NULL" L = Line.0 + 1; Line.0 = L Line.L = Line End I L = Line.0 + 1; Line.0 = L Line.L = "WHERE" Return /**********************************************************************/ /* Draw LOAD */ /**********************************************************************/ DrawLoad: Line.0 = 1 Line.1 = "LOAD DATA INDDN SYSREC INTO TABLE" TABLE Position = 1 Do I = 1 To SQLDA.SQLD If I = 1 Then
Line = " (" Else Line = " ," Line = Line Left("SQLDA.I.SQLNAME",20) Line = Line "POSITION("RIGHT(POSITION,5)")" Type = SQLDA.I.SQLTYPE Null = Type//2 If Null Then Type = Type - 1 Len = SQLDA.I.SQLLEN Prcsn = SQLDA.I.SQLLEN.SQLPRECISION Scale = SQLDA.I.SQLLEN.SQLSCALE Select When (Type = CHTYPE , |Type = GRTYP ) THEN Type = SQLTYPES.Type"("STRIP(LEN)")" When (Type = FLOTYPE ) THEN Type = SQLTYPES.Type"("STRIP((LEN*4)-11) ")" When (Type = DCTYPE ) THEN Do Type = SQLTYPES.Type "EXTERNAL" Type = Type"("STRIP(PRCSN)","STRIP(SCALE)")" Len = (PRCSN+2)%2 End When (Type = DATYPE , |Type = TITYPE , |Type = TSTYPE ) THEN Type = SQLTYPES.Type "EXTERNAL" Otherwise Type = SQLTYPES.Type End If (Type = GRTYP , |Type = VGRTYP , |Type = LVGRTYP ) THEN Len = Len * 2 If (Type = VCHTYPE , |Type = LVCHTYPE , |Type = VGRTYP , |Type = LVGRTYP ) THEN Len = Len + 2 Line = Line Type L = Line.0 + 1; Line.0 = L Line.L = Line If Null = 1 Then Do Line = " " Line = Line Left(,20) Line = Line " NULLIF("RIGHT(POSITION,5)")=?" L = Line.0 + 1; Line.0 = L Line.L = Line End Position = Position + Len + 1 End I L = Line.0 + 1; Line.0 = L Line.L = " )" Return /**********************************************************************/ /* Display SQLCA */ /**********************************************************************/ SQLCA: "ISREDIT LINE_AFTER "zdest" = MSGLINE SQLSTATE="SQLSTATE"" "ISREDIT LINE_AFTER "zdest" = MSGLINE SQLWARN ="SQLWARN.0",", || SQLWARN.1",", || SQLWARN.2",", || SQLWARN.3",", || SQLWARN.4",", || SQLWARN.5",", || SQLWARN.6",",
"ISREDIT
|| SQLWARN.7",", || SQLWARN.8",", || SQLWARN.9",", || SQLWARN.10"" LINE_AFTER "zdest" = || SQLERRD.2",", || SQLERRD.3",", || SQLERRD.4",", || SQLERRD.5",", || SQLERRD.6"" LINE_AFTER "zdest" = LINE_AFTER "zdest" = LINE_AFTER "zdest" =
In the following program, the phone number for employee Haas is selected into variable HVPhone. After the SELECT statement executes, if no phone number for employee Haas is found, indicator variable INDPhone contains -1.
SUBCOM DSNREXX
IF RC THEN ,
  S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
ADDRESS DSNREXX 'CONNECT' 'DSN'
SQLSTMT = ,
  "SELECT PHONENO FROM DSN8910.EMP WHERE LASTNAME='HAAS'"
"EXECSQL DECLARE C1 CURSOR FOR S1"
"EXECSQL PREPARE S1 FROM :SQLSTMT"
Say "SQLCODE from PREPARE is "SQLCODE
"EXECSQL OPEN C1"
Say "SQLCODE from OPEN is "SQLCODE
"EXECSQL FETCH C1 INTO :HVPhone :INDPhone"
Say "SQLCODE from FETCH is "SQLCODE
If INDPhone < 0 Then ,
  Say 'Phone number for Haas is null.'
"EXECSQL CLOSE C1"
Say "SQLCODE from CLOSE is "SQLCODE
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')
Creating tables
Creating a table provides a logical place to store related data on a DB2 subsystem.
Separate each column description from the next with a comma, and enclose the entire list of column descriptions in parentheses. Example: The following SQL statement creates a table named PRODUCT:
CREATE TABLE PRODUCT
  (SERIAL      CHAR(8)      NOT NULL,
   DESCRIPTION VARCHAR(60)  DEFAULT,
   MFGCOST     DECIMAL(8,2),
   MFGDEPT     CHAR(3),
   MARKUP      SMALLINT,
   SALESDEPT   CHAR(3),
   CURDATE     DATE         DEFAULT);
For more information about referential constraints, see Referential constraints on page 446. For more information about check constraints, see Check constraints on page 444.

Identifying column defaults and constraining column inputs: If you want to constrain the input or identify the default of a column, you can use the following values:
v NOT NULL, when the column cannot contain null values.
v UNIQUE, when the value for each row must be unique, and the column cannot contain null values.
v DEFAULT, when the column has one of the following DB2-assigned defaults:
  – For numeric columns, 0 (zero) is the default value.
  – For character or graphic fixed-length strings, blank is the default value.
  – For binary fixed-length strings, a set of hexadecimal zeros is the default value.
  – For variable-length strings, including LOB strings, the empty string (a string of zero length) is the default value.
  – For datetime columns, the current value of the associated special register is the default value.
v DEFAULT value, when you want to identify one of the following values as the default value:
  – A constant
  – NULL
  – SESSION_USER, which specifies the value of the SESSION_USER special register at the time when a default value is needed for the column
  – CURRENT SQLID, which specifies the value of the CURRENT SQLID special register at the time when a default value is needed for the column
  – The name of a cast function that casts a default value (of a built-in data type) to the distinct type of a column
An example follows this list.
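Example: The following CREATE TABLE statement is a sketch that combines several of these options: a NOT NULL column, a constant default, DB2-assigned defaults, and a SESSION_USER default. The table and column names are illustrative only and do not come from the sample database:

CREATE TABLE ORDER_LOG
  (ORDERNO   CHAR(8)      NOT NULL,
   STATUS    CHAR(1)      DEFAULT 'N',
   NOTES     VARCHAR(100) DEFAULT,
   ENTERED   TIMESTAMP    DEFAULT,
   ENTEREDBY CHAR(8)      DEFAULT SESSION_USER);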
Data types
When you create a DB2 table, you define each column to have a specific data type. The data type of a column determines what you can and cannot do with the column. When you perform operations on columns, the data must be compatible with the data type of the referenced column. For example, you cannot insert character data, such as a last name, into a column whose data type is numeric. Similarly, you cannot compare columns that contain incompatible data types. The data type for a column can be a distinct type, which is a user-defined data type, or a DB2 built-in data type. DB2 built-in data types have four general categories: datetime, string, numeric, and row identifier (ROWID).
The following table shows whether operands of any two data types are compatible, Y (Yes), or incompatible, N (No). Notes are indicated either as a superscript number next to Y or N or as a value in the column of the table.
Table 76. Supported casts between built-in data types

The table lists, for each pair of the built-in data types SMALLINT, INTEGER, BIGINT, DECIMAL, DECFLOAT, REAL, DOUBLE, CHAR, VARCHAR, CLOB, GRAPHIC, VARGRAPHIC, DBCLOB, BINARY, VARBINARY, BLOB, DATE, TIME, TIMESTAMP, ROWID, and XML, whether a cast from the one data type to the other is supported.

Note:
1. Other synonyms for the listed data types are considered to be the same as the synonym listed. Some exceptions exist when the cast involves character string data if the subtype is FOR BIT DATA.
2. The result length for these casts is 3 * LENGTH(graphic string).
3. These data types are castable between each other only if the data is Unicode.
Related concepts: Distinct types on page 489 Data types (DB2 SQL)
Procedure
To store LOB data in DB2: 1. Define one or more columns of the appropriate LOB type and optionally a row identifier (ROWID) column by executing a CREATE TABLE statement or one or more ALTER TABLE statements.
Define only one ROWID column, even if the table is to have multiple LOB columns. If you do not create a ROWID column before you define a LOB column, DB2 creates an implicitly hidden ROWID column and appends it as the last column of the table. If you add a ROWID column after you add a LOB column, the table has two ROWID columns: the implicitly created, hidden column and the explicitly created column. In this case, DB2 ensures that the values of the two ROWID columns are always identical. If DB2 implicitly creates the table space for this table or CURRENT RULES is set to STD, DB2 creates the necessary auxiliary objects for you, and you can skip steps 2 and 3.
2. If you explicitly created the table space for this table and the CURRENT RULES special register is not set to STD, create a LOB table space and auxiliary table by using the CREATE LOB TABLESPACE and CREATE AUXILIARY TABLE statements.
v If your base table is nonpartitioned, create one LOB table space and, for each column, create one auxiliary table.
v If your base table is partitioned, create one LOB table space for each partition and one auxiliary table for each column. For example, if your base table has three partitions, you must create three LOB table spaces and three auxiliary tables for each LOB column.
3. If you explicitly created the table space for this table and the CURRENT RULES special register is not set to STD, create one index for each auxiliary table by using the CREATE INDEX statement.
4. Insert the LOB data into DB2 by using one of the following techniques:
v If the total length of a LOB column and the base table row is less than 32 KB, use the LOAD utility and specify the base table.
v Otherwise, use INSERT, UPDATE, or MERGE statements and specify the base table. If you use the INSERT statement, ensure that your application has enough storage available to hold the entire value that is to be put into the LOB column.
Results
Example: Adding a CLOB column: Suppose that you want to add a resume for each employee to the employee table. The employee resumes are no more than 5 MB in size. Because the employee resumes contain single-byte characters, you can define the resumes to DB2 as CLOBs. You therefore need to add a column of data type CLOB with a length of 5 MB to the employee table. If you want to define a ROWID column explicitly, you must define it before you define the CLOB column. First, execute an ALTER TABLE statement to add the ROWID column, and then execute another ALTER TABLE statement to add the CLOB column. The following statements create these columns:
ALTER TABLE EMP
  ADD ROW_ID ROWID NOT NULL GENERATED ALWAYS;
COMMIT;
ALTER TABLE EMP
  ADD EMP_RESUME CLOB(5M);
COMMIT;
If you explicitly created the table space for this table and the CURRENT RULES special register is not set to STD, you then need to define a LOB table space and an auxiliary table to hold the employee resumes. You also need to define an index on
the auxiliary table. You must define the LOB table space in the same database as the associated base table. The following statements create these objects:
CREATE LOB TABLESPACE RESUMETS
  IN DSN8D91A
  LOG NO;
COMMIT;
CREATE AUXILIARY TABLE EMP_RESUME_TAB
  IN DSN8D91A.RESUMETS
  STORES DSN8910.EMP
  COLUMN EMP_RESUME;
COMMIT;
CREATE UNIQUE INDEX XEMP_RESUME
  ON EMP_RESUME_TAB;
COMMIT;
You can then load your employee resumes into DB2. In your application, you can define a host variable to hold the resume, copy the resume data from a file into the host variable, and then execute an UPDATE statement to copy the data into DB2. Although the LOB data is stored in the auxiliary table, your UPDATE statement specifies the name of the base table. The following code declares a host variable to store the resume in the C language:
SQL TYPE is CLOB (5M) resumedata;
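The UPDATE statement that copies the resume from the host variable into the base table then has the following general form. This is a sketch: the EMPNO column and the employeenum host variable are assumed from the surrounding description and are not shown in the original statement:

EXEC SQL
  UPDATE EMP
    SET EMP_RESUME = :resumedata
    WHERE EMPNO = :employeenum;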
In this statement, employeenum is a host variable that identifies the employee who is associated with a resume.
only if you reference the column directly. The column is not included in the results of SELECT * statements or DESCRIBE statements. DB2 assigns the GENERATED ALWAYS attribute and the name DB2_GENERATED_ROWID_FOR_LOBSnn to an implicitly hidden ROWID column. DB2 appends the identifier nn only if the column name already exists in the table. If so, DB2 appends 00 and increments by 1 until the name is unique within the row.

Related reference:
ALTER TABLE (DB2 SQL)
ALTER VIEW (DB2 SQL)
CREATE TABLE (DB2 SQL)
select-clause (DB2 SQL)
Identity columns
An identity column contains a unique numeric value for each row in the table. DB2 can automatically generate sequential numeric values for this column as rows are inserted into the table. Thus, identity columns are ideal for primary key values, such as employee numbers or product numbers.
v If you define the column as GENERATED ALWAYS, DB2 always generates a value for the column, and you cannot insert data into that column. If you want the values to be unique, you must define the identity column with GENERATED ALWAYS and NO CYCLE and define a unique index on that column. The values that DB2 generates for an identity column depend on how the column is defined. The START WITH option determines the first value that DB2 generates. The values advance by the INCREMENT BY value in ascending or descending order. The MINVALUE and MAXVALUE options determine the minimum and maximum values that DB2 generates. The CYCLE or NO CYCLE option determines whether DB2 wraps values when it has generated all values between the START WITH value and MAXVALUE if the values are ascending, or between the START WITH value and MINVALUE if the values are descending.
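The discussion that follows assumes a table T1 that was created with a statement like the following sketch, which is reconstructed from the START WITH, INCREMENT BY, MINVALUE, MAXVALUE, and CYCLE values that the example describes:

CREATE TABLE T1
  (CHARCOL1  CHAR(1),
   IDENTCOL1 SMALLINT GENERATED ALWAYS AS IDENTITY
                      (START WITH -1,
                       INCREMENT BY 1,
                       MINVALUE -3,
                       MAXVALUE 3,
                       CYCLE));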
Now suppose that you execute the following INSERT statement eight times:
INSERT INTO T1 (CHARCOL1)
  VALUES ('A');
When DB2 generates values for IDENTCOL1, it starts with -1 and increments by 1 until it reaches the MAXVALUE of 3 on the fifth INSERT. To generate the value for the sixth INSERT, DB2 cycles back to MINVALUE, which is -3. T1 looks like this after the eight INSERTs are executed:
CHARCOL1    IDENTCOL1
========    =========
A                  -1
A                   0
A                   1
A                   2
A                   3
A                  -3
A                  -2
A                  -1
The value of IDENTCOL1 for the eighth INSERT repeats the value of IDENTCOL1 for the first INSERT.
When you insert a new employee into the EMPLOYEE table, to retrieve the value for the EMPNO column, you can use the following SELECT from INSERT statement:
EXEC SQL SELECT EMPNO INTO :hv_empno
  FROM FINAL TABLE (INSERT INTO EMPLOYEE (NAME, SALARY, WORKDEPT)
    VALUES ('New Employee', 75000.00, 11));
The SELECT statement returns the DB2-generated identity value for the EMPNO column in the host variable :hv_empno. You can then use the value in :hv_empno to update the MGRNO column in the DEPARTMENT table with the new employee as the department manager:
EXEC SQL UPDATE DEPARTMENT SET MGRNO = :hv_empno WHERE DEPTNO = 11;
Related concepts: Rules for inserting data into an identity column on page 641 Related tasks: Selecting values while inserting data on page 645 Related reference: IDENTITY_VAL_LOCAL (DB2 SQL)
Constraints are rules that limit the values that you can insert, delete, or update in a table. There are two types of constraints:
v Check constraints determine the values that a column can contain. Check constraints are discussed in Check constraints.
v Referential constraints preserve relationships between tables. Referential constraints are discussed in Referential constraints on page 446. A specific type of referential constraint, the informational referential constraint, is discussed in Informational referential constraints on page 448.
Triggers are a series of actions that are invoked when a table is updated. Triggers are discussed in Creating triggers on page 468.
Check constraints:
A check constraint is a rule that specifies the values that are allowed in one or more columns of every row of a base table. For example, you can define a check constraint to ensure that all values in a column that contains ages are positive numbers.
Check constraints designate the values that specific columns of a base table can contain, providing you with a method of controlling the integrity of data that is entered into tables. You can create tables with check constraints by using the CREATE TABLE statement, or you can add the constraints with the ALTER TABLE statement. However, if check integrity is compromised or cannot be guaranteed for a table, the table space or partition that contains the table is placed in a check-pending state. Check integrity is the condition that exists when each row of a table conforms to the check constraints that are defined on that table.
For example, you might want to make sure that no salary can be below 15000 dollars. To do this, you can create the following check constraint:
CREATE TABLE EMPSAL
  (ID     INTEGER NOT NULL,
   SALARY INTEGER NOT NULL,
   CHECK (SALARY >= 15000));
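You can also add a check constraint to an existing table. A minimal sketch, assuming the EMPSAL table above and a hypothetical constraint name MINSAL:

ALTER TABLE EMPSAL
  ADD CONSTRAINT MINSAL CHECK (SALARY >= 15000);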
Using check constraints makes your programming task easier, because you do not need to enforce those constraints within application programs or with a validation routine. Define check constraints on one or more columns in a table when that table is created or altered.
Check constraint considerations
The syntax of a check constraint is checked when the constraint is defined, but the meaning of the constraint is not checked. The following examples show mistakes that are not caught. Column C1 is defined as INTEGER NOT NULL.
Allowable but mistaken check constraints:
v A self-contradictory check constraint:
CHECK (C1 > 5 AND C1 < 2)
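v Two check constraints on the same column with conditions that cannot both be satisfied (a sketch of another allowable but mistaken case, assuming the same column C1):

CHECK (C1 > 5)
CHECK (C1 < 2)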
A check constraint is not checked for consistency with other types of constraints. For example, a column in a dependent table can have a referential constraint with a delete rule of SET NULL. You can also define a check constraint that prohibits nulls in the column. As a result, an attempt to delete a parent row fails, because setting the dependent row to null violates the check constraint.
Similarly, a check constraint is not checked for consistency with a validation routine, which is applied to a table before a check constraint. If the routine requires a column to be greater than or equal to 10 and a check constraint requires the same column to be less than 10, table inserts are not possible.
Plans and packages do not need to be rebound after check constraints are defined on or removed from a table.
When check constraints are enforced
After check constraints are defined on a table, any change must satisfy those constraints if it is made by:
v The LOAD utility with the option ENFORCE CONSTRAINTS
v An SQL insert operation
v An SQL update operation
A row satisfies a check constraint if its condition evaluates either to true or to unknown. A condition can evaluate to unknown for a row if one of the named columns contains the null value for that row. Any constraint that is defined on columns of a base table applies to the views that are defined on that base table.
When you use ALTER TABLE to add a check constraint to an already populated table, the enforcement of the check constraint is determined by the value of the CURRENT RULES special register as follows:
v If the value is STD, the check constraint is enforced immediately when it is defined. If a row does not conform, the check constraint is not added to the table and an error occurs.
v If the value is DB2, the check constraint is added to the table description but its enforcement is deferred. Because there might be rows in the table that violate the check constraint, the table is placed in CHECK-pending status.
CHECK-pending status:
To maintain data integrity, DB2 enforces check constraints and referential constraints on data in a table. When these types of constraints are violated or might be violated, DB2 places the table space or partition that contains the table in CHECK-pending status.
Table check violations place a table space or partition in CHECK-pending status when any of these conditions exist:
v A check constraint is defined on a populated table by using the ALTER TABLE statement, and the value of the CURRENT RULES special register is DB2.
v The LOAD utility is run with ENFORCE NO, and check constraints are defined on the table.
v CHECK DATA is run on a table that contains violations of check constraints.
v A point-in-time RECOVER introduces violations of check constraints. Referential constraints: A referential constraint is a rule that specifies that the only valid values for a particular column are those values that exist in another specified table column. For example, a referential constraint can ensure that all customer IDs in a transaction table exist in the ID column of a customer table. A table can serve as the master list of all occurrences of an entity. In the sample application, the employee table serves that purpose for employees; the numbers that appear in that table are the only valid employee numbers. Likewise, the department table provides a master list of all valid department numbers; the project activity table provides a master list of activities performed for projects; and so on. The following figure shows the relationships that exist among the tables in the sample application. Arrows point from parent tables to dependent tables.
Figure: Relationships among the sample application tables. Arrows run from parent tables to dependent tables (DEPT, EMP, PROJ, PROJACT, EMPPROJACT, ACT), with delete rules of CASCADE, SET NULL, and RESTRICT on the individual relationships.
When a table refers to an entity for which there is a master list, it should identify an occurrence of the entity that actually appears in the master list; otherwise, either the reference is invalid or the master list is incomplete. Referential constraints enforce the relationship between a table and a master list. Restrictions on cycles of dependent tables: A cycle is a set of two or more tables. The tables are ordered so that each is a dependent of the one before it, and the first is a dependent of the last. Every table in the cycle is a descendent of itself. DB2 restricts certain operations on cycles.
In the sample application, the employee and department tables are a cycle; each is a dependent of the other. DB2 does not allow you to create a cycle in which a delete operation on a table involves that same table. Enforcing that principle creates rules about adding a foreign key to a table:
v In a cycle of two tables, neither delete rule can be CASCADE.
v In a cycle of more than two tables, two or more delete rules must not be CASCADE. For example, in a cycle with three tables, two of the delete rules must be other than CASCADE.
This concept is illustrated in the following figure. The cycle on the left is valid because two or more of the delete rules are not CASCADE. The cycle on the right is invalid because it contains two cascading deletes.
Figure: A valid cycle and an invalid cycle among TABLE1, TABLE2, and TABLE3. In the valid cycle, only one of the delete rules is CASCADE and the others are SET NULL; in the invalid cycle, two of the delete rules are CASCADE.
Alternatively, a delete operation on a self-referencing table must involve the same table, and the delete rule there must be CASCADE or NO ACTION. Recommendation: Avoid creating a cycle in which all the delete rules are RESTRICT and none of the foreign keys allows nulls. If you do this, no row of any of the tables can ever be deleted. Referential constraints on tables with multilevel security with row-level granularity: You cannot use referential constraints on a security label column, which is used for multilevel security with row-level granularity. However, you can use referential constraints on other columns in the row. DB2 does not enforce multilevel security with row-level granularity when it is already enforcing referential constraints. Referential constraints are enforced when the following situations occur: v An insert operation is applied to a dependent table. v An update operation is applied to a foreign key of a dependent table, or to the parent key of a parent table. v A delete operation is applied to a parent table. In addition to all referential constraints being enforced, the DB2 system enforces all delete rules for all dependent rows that are affected by the delete operation. If all referential constraints and delete rules are not satisfied, the delete operation will not succeed. v The LOAD utility with the ENFORCE CONSTRAINTS option is run on a dependent table. v The CHECK DATA utility is run.
Related concepts: Multilevel security (Managing Security) Informational referential constraints: An informational referential constraint is a referential constraint that DB2 does not enforce during normal operations. Use these constraints only when referential integrity can be enforced by another means, such as when retrieving data from other sources. These constraints might improve performance by enabling the query to qualify for automatic query rewrite. DB2 ignores informational referential constraints during insert, update, and delete operations. Some utilities ignore these constraints; other utilities recognize them. For example, CHECK DATA and LOAD ignore these constraints. QUIESCE TABLESPACESET recognizes these constraints by quiescing all table spaces related to the specified table space. You should use this type of referential constraint only when an application process verifies the data in a referential integrity relationship. For example, when inserting a row in a dependent table, the application should verify that a foreign key exists as a primary or unique key in the parent table. To define an informational referential constraint, use the NOT ENFORCED option of the referential constraint definition in a CREATE TABLE or ALTER TABLE statement. Informational referential constraints are often useful, especially in a data warehouse environment, for several reasons: v To avoid the overhead of enforcement by DB2. Typically, data in a data warehouse has been extracted and cleansed from other sources. Referential integrity might already be guaranteed. In this situation, enforcement by DB2 is unnecessary. v To allow more queries to qualify for automatic query rewrite. Automatic query rewrite is a process that examines a submitted query that references source tables and, if appropriate, rewrites the query so that it executes against a materialized query table that has been derived from those source tables. This process uses informational referential constraints to determine whether the query can use a materialized query table. Automatic query rewrite results in a significant reduction in query run time, especially for decision-support queries that operate over huge amounts of data. Related tasks: Using materialized query tables to improve SQL performance (DB2 Performance) Related reference: CREATE TABLE (DB2 SQL)
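A minimal sketch of an informational referential constraint as described above, using hypothetical warehouse tables SALESFACT and PRODDIM:

ALTER TABLE SALESFACT
  ADD CONSTRAINT FKPROD FOREIGN KEY (PRODID)
    REFERENCES PRODDIM (PRODID)
    NOT ENFORCED;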
Related concepts: Ways to maintain data integrity on page 443 Related reference: ALTER TABLE (DB2 SQL) CREATE TABLE (DB2 SQL)
Parent key columns:
A parent key is either a primary key or a unique key in the parent table of a referential constraint. This key consists of a column or set of columns. The values of a parent key determine the valid values of the foreign key in the constraint.
If every row in a table represents relationships for a unique entity, the table should have one column or a set of columns that provides a unique identifier for the rows of the table. This column (or set of columns) is called the parent key of the table. To ensure that the parent key does not contain duplicate values, you must create a unique index on the column or columns that constitute the parent key. Defining the parent key is called entity integrity, because it requires each entity to have a unique key.
In some cases, using a timestamp as part of the key can be helpful, for example when a table does not have a natural unique key or if arrival sequence is the key.
Primary keys for some of the sample tables are:
Table              Key Column
Employee table     EMPNO
Department table   DEPTNO
Project table      PROJNO
Table 77 shows part of the project table, which has the primary key column, PROJNO.
Table 77. Part of the project table with the primary key column, PROJNO
PROJNO  PROJNAME              DEPTNO
MA2100  WELD LINE AUTOMATION  D01
MA2110  W L PROGRAMMING       D11
Table 78 shows part of the project activity table, which has a primary key that contains more than one column. The primary key is a composite key, which consists of the PROJNO, ACTNO, and ACSTDATE columns.
Table 78. Part of the project activities table with a composite primary key
PROJNO  ACTNO  ACSTAFF  ACSTDATE    ACENDATE
AD3100  10     .50      1982-01-01  1982-07-01
AD3110  10     1.00     1982-01-01  1983-01-01
Table 78. Part of the project activities table with a composite primary key (continued)
PROJNO  ACTNO  ACSTAFF  ACSTDATE    ACENDATE
AD3111  60     .50      1982-03-15  1982-04-15
Procedure
To define a foreign key, use one of the following approaches:
v Issue a CREATE TABLE statement and specify a FOREIGN KEY clause.
1. Choose a constraint name for the relationship that is defined by a foreign key. If you do not choose a name, DB2 generates one from the name of the first column of the foreign key, in the same way that it generates the name of an implicitly created table space. For example, the names of the relationships in which the employee-to-project activity table is a dependent would, by default, be recorded (in column RELNAME of SYSIBM.SYSFOREIGNKEYS) as EMPNO and PROJNO. The name is used in error messages, queries to the catalog, and DROP FOREIGN KEY statements. Hence, you might want to choose one if you are experimenting with your database design and have more than one foreign key that begins with the same column (otherwise DB2 generates the name).
2. Specify column names that identify the columns of the parent key. A foreign key can refer to either a unique or a primary key of the parent table. If the foreign key refers to a non-primary unique key, you must specify the column names of the key explicitly. If the column names of the key are not specified explicitly, the default is to refer to the column names of the primary key of the parent table.
v Issue an ALTER TABLE statement and specify the FOREIGN KEY clause. You can add a foreign key to an existing table; in fact, that is sometimes the only way to proceed. To make a table self-referencing, you must add a foreign key after creating it. When a foreign key is added to a populated table, the table space is put into check-pending status.
Example
The following example shows a CREATE TABLE statement that specifies constraint names REPAPA and REPAE for the foreign keys in the employee-to-project activity table.
CREATE TABLE DSN8910.EMPPROJACT
  (EMPNO   CHAR(6)  NOT NULL,
   PROJNO  CHAR(6)  NOT NULL,
   ACTNO   SMALLINT NOT NULL,
   CONSTRAINT REPAPA FOREIGN KEY (PROJNO, ACTNO)
     REFERENCES DSN8910.PROJACT ON DELETE RESTRICT,
   CONSTRAINT REPAE FOREIGN KEY (EMPNO)
     REFERENCES DSN8910.EMP ON DELETE RESTRICT)
  IN DATABASE DSN8D91A;
What to do next
Although not required, an index on a foreign key is strongly recommended if rows of the parent table are often deleted. The validity of the delete statement, and its possible effect on the dependent table, can be checked through the index. You can create an index on the columns of a foreign key in the same way you create one on any other set of columns. Most often it is not a unique index. If you do create a unique index on a foreign key, it introduces an additional constraint on the values of the columns.
The index on the foreign key can be used on the dependent table for delete operations on a parent table. For the index to qualify, the leading columns of the index must be identical to and in the same order as all columns in the foreign key. The index can include additional columns, but the leading columns match the definition of the foreign key. Indexes that use expressions cannot be used for this purpose.
A foreign key can also be the primary key; then the primary index is also a unique index on the foreign key. In that case, every row of the parent table has at most one dependent row. The dependent table might be used to hold information that pertains to only a few of the occurrences of the entity described by the parent table. For example, a dependent of the employee table might contain information that applies only to employees working in a different country.
The primary key can share columns of the foreign key if the first n columns of the foreign key are the same as the columns of the primary key. Again, the primary index serves as an index on the foreign key. In the sample project activity table, the primary index (on PROJNO, ACTNO, ACSTDATE) serves as an index on the foreign key on PROJNO. It does not serve as an index on the foreign key on ACTNO, because ACTNO is not the first column of the index.
Related concepts: Implications of adding parent or foreign keys (DB2 Administration Guide)
Related tasks: Adding parent keys and foreign keys (DB2 Administration Guide)
Related reference: CREATE TABLE (DB2 SQL) ALTER TABLE (DB2 SQL) SYSIBM.SYSFOREIGNKEYS table (DB2 SQL)
Creating work tables for the EMP and DEPT sample tables
Before testing SQL statements that insert, update, and delete rows in the DSN8910.EMP and DSN8910.DEPT sample tables, you should create duplicates of these tables. Create duplicates so that the original sample tables remain intact. These duplicate tables are called work tables.
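To create the YDEPT work table, you can copy the column definitions of the sample department table with the LIKE clause. A minimal sketch, assuming the DSN8910.DEPT sample table:

CREATE TABLE YDEPT
  LIKE DSN8910.DEPT;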
If you want DEPTNO to be a primary key, as in the sample table, explicitly define the key. Use an ALTER TABLE statement, as in the following example:
ALTER TABLE YDEPT PRIMARY KEY(DEPTNO);
You can use an INSERT statement to copy the rows of the result table of a fullselect from one table to another. The following statement copies all of the rows from DSN8910.DEPT to your own YDEPT work table:
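A minimal sketch of that statement, assuming the YDEPT work table created above:

INSERT INTO YDEPT
  SELECT *
  FROM DSN8910.DEPT;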
For information about using the INSERT statement, see Inserting rows by using the INSERT statement on page 637. You can use the following statements to create a new employee table called YEMP:
CREATE TABLE YEMP
  (EMPNO     CHAR(6)        PRIMARY KEY NOT NULL,
   FIRSTNME  VARCHAR(12)    NOT NULL,
   MIDINIT   CHAR(1)        NOT NULL,
   LASTNAME  VARCHAR(15)    NOT NULL,
   WORKDEPT  CHAR(3)        REFERENCES YDEPT ON DELETE SET NULL,
   PHONENO   CHAR(4)        UNIQUE NOT NULL,
   HIREDATE  DATE,
   JOB       CHAR(8),
   EDLEVEL   SMALLINT,
   SEX       CHAR(1),
   BIRTHDATE DATE,
   SALARY    DECIMAL(9, 2),
   BONUS     DECIMAL(9, 2),
   COMM      DECIMAL(9, 2));
This statement also creates a referential constraint between the foreign key in YEMP (WORKDEPT) and the primary key in YDEPT (DEPTNO). It also restricts all phone numbers to unique numbers.
If you want to change a table definition after you create it, use the ALTER TABLE statement. If you want to change a table name after you create it, use the RENAME statement.
You can change a table definition by using the ALTER TABLE statement only in certain ways. For example, you can add and drop constraints on columns in a table. You can also change the data type of a column within character data types, within numeric data types, and within graphic data types. You can add a column to a table. However, you cannot use the ALTER TABLE statement to drop a column from a table.
Related tasks: Altering DB2 tables (DB2 Administration Guide)
Related reference: ALTER TABLE (DB2 SQL) RENAME (DB2 SQL)
You create the definition of a created temporary table using the SQL CREATE GLOBAL TEMPORARY TABLE statement. Example: The following statement creates the definition of a table called TEMPPROD:
CREATE GLOBAL TEMPORARY TABLE TEMPPROD
  (SERIAL       CHAR(8)      NOT NULL,
   DESCRIPTION  VARCHAR(60)  NOT NULL,
   MFGCOST      DECIMAL(8,2),
   MFGDEPT      CHAR(3),
   MARKUP       SMALLINT,
   SALESDEPT    CHAR(3),
   CURDATE      DATE         NOT NULL);
Example: You can also create this same definition by copying the definition of a base table (named PROD) by using the LIKE clause:
CREATE GLOBAL TEMPORARY TABLE TEMPPROD LIKE PROD;
The SQL statements in the previous examples create identical definitions for the TEMPPROD table, but these tables differ slightly from the PROD sample table. The PROD sample table contains two columns, DESCRIPTION and CURDATE, that are defined as NOT NULL WITH DEFAULT. Because created temporary tables do not support non-null default values, the DESCRIPTION and CURDATE columns in the TEMPPROD table are defined as NOT NULL and do not have defaults.
After you run one of the two CREATE statements, the definition of TEMPPROD exists, but no instances of the table exist. To create an instance of TEMPPROD, you must use TEMPPROD in an application. DB2 creates an instance of the table when TEMPPROD is specified in one of the following SQL statements:
v OPEN
v SELECT
v INSERT
v DELETE
Restriction: You cannot use the MERGE statement with created temporary tables.
An instance of a created temporary table exists at the current server until one of the following actions occurs:
v The application process ends.
v The remote server connection through which the instance was created terminates.
v The unit of work in which the instance was created completes.
When you run a ROLLBACK statement, DB2 deletes the instance of the created temporary table. When you run a COMMIT statement, DB2 deletes the instance of the created temporary table unless a cursor for accessing the created temporary table is defined with the WITH HOLD clause and is open.
Example: Suppose that you create a definition of TEMPPROD and then run an application that contains the following statements:
EXEC SQL DECLARE C1 CURSOR FOR SELECT * FROM TEMPPROD;
EXEC SQL INSERT INTO TEMPPROD SELECT * FROM PROD;
EXEC SQL OPEN C1;
   .
   .
   .
EXEC SQL COMMIT;
When you run the INSERT statement, DB2 creates an instance of TEMPPROD and populates that instance with rows from table PROD. When the COMMIT statement runs, DB2 deletes all rows from TEMPPROD. However, assume that you change the declaration of cursor C1 to the following declaration:
EXEC SQL DECLARE C1 CURSOR WITH HOLD FOR SELECT * FROM TEMPPROD;
In this case, DB2 does not delete the contents of TEMPPROD until the application ends because C1, a cursor that is defined with the WITH HOLD clause, is open when the COMMIT statement runs. In either case, DB2 drops the instance of TEMPPROD when the application ends. To drop the definition of TEMPPROD, you must run the following statement:
DROP TABLE TEMPPROD;
Temporary tables
Use temporary tables when you need to store data for only the duration of an application process. Depending on whether you want to share the table definition, you can create a created temporary table or a declared temporary table. The two kinds of temporary tables are:
v Created temporary tables, which you define by using a CREATE GLOBAL TEMPORARY TABLE statement
v Declared temporary tables, which you define by using a DECLARE GLOBAL TEMPORARY TABLE statement
SQL statements that use temporary tables can run faster for the following reasons:
v DB2 does no logging (for created temporary tables) or limited logging (for declared temporary tables).
v For created temporary tables, DB2 provides no locking. For declared temporary tables, DB2 provides limited locking.
Temporary tables are especially useful when you need to sort or query intermediate result tables that contain a large number of rows, but you want to store only a small subset of those rows permanently. Temporary tables can also return result sets from stored procedures. The following topics provide more details about created temporary tables and declared temporary tables:
v Creating created temporary tables on page 454
v Creating declared temporary tables
For more information, see Writing an external procedure to return result sets to a DRDA client on page 622.
Example: The following statement defines a declared temporary table called TEMPPROD by copying the definition of a base table. The base table has an identity column that the declared temporary table also uses as an identity column.
DECLARE GLOBAL TEMPORARY TABLE TEMPPROD LIKE BASEPROD INCLUDING IDENTITY COLUMN ATTRIBUTES;
Example: The following statement defines a declared temporary table called TEMPPROD by selecting columns from a view. The view has an identity column
that the declared temporary table also uses as an identity column. The declared temporary table inherits its default column values from the default column values of a base table on which the view is based.
DECLARE GLOBAL TEMPORARY TABLE TEMPPROD AS (SELECT * FROM PRODVIEW) DEFINITION ONLY INCLUDING IDENTITY COLUMN ATTRIBUTES INCLUDING COLUMN DEFAULTS;
After you run a DECLARE GLOBAL TEMPORARY TABLE statement, the definition of the declared temporary table exists as long as the application process runs. If you need to delete the definition before the application process completes, you can do that with the DROP TABLE statement. For example, to drop the definition of TEMPPROD, run the following statement:
DROP TABLE SESSION.TEMPPROD;
DB2 creates an empty instance of a declared temporary table when it runs the DECLARE GLOBAL TEMPORARY TABLE statement. You can then perform the following actions:
v Populate the declared temporary table by using INSERT statements
v Modify the table by using searched or positioned UPDATE or DELETE statements
v Query the table by using SELECT statements
v Create indexes on the declared temporary table (see the sketch after the next example)
The ON COMMIT clause that you specify in the DECLARE GLOBAL TEMPORARY TABLE statement determines whether DB2 keeps or deletes all the rows from the table when you run a COMMIT statement in an application with a declared temporary table. ON COMMIT DELETE ROWS, which is the default, causes all rows to be deleted from the table at a commit point, unless a held cursor is open on the table at the commit point. ON COMMIT PRESERVE ROWS causes the rows to remain past the commit point.
Example: Suppose that you run the following statement in an application program:
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE TEMPPROD
  AS (SELECT * FROM BASEPROD)
  DEFINITION ONLY
  INCLUDING IDENTITY COLUMN ATTRIBUTES
  INCLUDING COLUMN DEFAULTS
  ON COMMIT PRESERVE ROWS;
EXEC SQL INSERT INTO SESSION.TEMPPROD SELECT * FROM BASEPROD;
   .
   .
   .
EXEC SQL COMMIT;
   .
   .
   .
When DB2 runs the preceding DECLARE GLOBAL TEMPORARY TABLE statement, DB2 creates an empty instance of TEMPPROD. The INSERT statement populates that instance with rows from table BASEPROD. The qualifier, SESSION, must be specified in any statement that references TEMPPROD. When DB2 executes the COMMIT statement, DB2 keeps all rows in TEMPPROD because TEMPPROD is defined with ON COMMIT PRESERVE ROWS. When the program ends, DB2 drops TEMPPROD.
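As noted in the list above, you can also create indexes on a declared temporary table. A minimal sketch, assuming that BASEPROD (and therefore SESSION.TEMPPROD) has a SERIAL column and using a hypothetical index name; like the table name, the index name is qualified with SESSION:

CREATE INDEX SESSION.TEMPPRODIX
  ON SESSION.TEMPPROD (SERIAL);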
v Creating a unique index on the unique key or altering the table to drop the unique key.
v Defining a unique index on the ROWID column.
v Creating the necessary LOB objects.
Example of creating a primary index: To create the primary index for the project activity table, issue the following SQL statement:
CREATE UNIQUE INDEX XPROJAC1 ON DSN8910.PROJACT (PROJNO, ACTNO, ACSTDATE);
Dropping tables
When you drop a table, you delete the data and the table definition. You also delete all synonyms, views, indexes, referential constraints, and check constraints that are associated with that table.
Use the DROP TABLE statement with care: Dropping a table is not equivalent to deleting all its rows. When you drop a table, you lose more than its data and its definition. You lose all synonyms, views, indexes, and referential and check constraints that are associated with that table. You also lose all authorities that are granted on the table. Related reference: DROP (DB2 SQL)
Defining a view
A view is a named specification of a result table. Use views to control which users have access to certain data or to simplify writing SQL statements.
When a program accesses the data that is defined by a view, DB2 uses the view definition to return a set of rows that the program can access with SQL statements. Example: To see the departments that are administered by department D01 and the managers of those departments, run the following statement, which returns information from the VDEPTM view:
SELECT DEPTNO, LASTNAME FROM VDEPTM
  WHERE ADMRDEPT = 'D01';
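The VDEPTM view is defined elsewhere in the manual; the following is a sketch of a definition that this query could run against, assuming the DSN8910.DEPT and DSN8910.EMP sample tables (the exact column list is an assumption):

CREATE VIEW VDEPTM (DEPTNO, MGRNO, LASTNAME, ADMRDEPT) AS
  SELECT DEPTNO, MGRNO, LASTNAME, ADMRDEPT
  FROM DSN8910.DEPT, DSN8910.EMP
  WHERE DSN8910.EMP.EMPNO = DSN8910.DEPT.MGRNO;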
When you create a view, you can reference the SESSION_USER and CURRENT SQLID special registers in the CREATE VIEW statement. When referencing the view, DB2 uses the value of the SESSION_USER or CURRENT SQLID special register that belongs to the user of the SQL statement (SELECT, UPDATE, INSERT, or DELETE) rather than the creator of the view. In other words, a reference to a special register in a view definition refers to its run-time value.
A column in a view might be based on a column in a base table that is an identity column. The column in the view is also an identity column, except under any of the following circumstances:
v The column appears more than once in the view.
v The view is based on a join of two or more tables.
v The view is based on the union of two or more tables.
v Any column in the view is derived from an expression that refers to an identity column.
You can use views to limit access to certain kinds of data, such as salary information. Alternatively, you can use the IMPLICITLY HIDDEN clause of a CREATE TABLE statement to define a column of a table to be hidden from some operations.
You can also use views for the following actions:
v Make a subset of a table's data available to an application. For example, a view based on the employee table might contain rows only for a particular department.
v Combine columns from two or more tables and make the combined data available to an application. By using a SELECT statement that matches values in one table with those in another table, you can create a view that presents data from both tables. However, you can only select data from this type of view. You cannot update, delete, or insert data using a view that joins two or more tables.
v Combine rows from two or more tables and make the combined data available to an application. By using two or more subselects that are connected by a set operator such as UNION, you can create a view that presents data from several tables. However, you can only select data from this type of view. You cannot update, delete, or insert data using a view that contains UNION operations.
v Present computed data, and make the resulting data available to an application. You can compute such data using any function or operation that you can use in a SELECT statement.
Related information: Implementing DB2 views (DB2 Administration Guide)
Views
A view does not contain data; it is a stored definition of a set of rows and columns. A view can present any or all of the data in one or more tables. Although you cannot modify an existing view, you can drop it and create a new one if your base tables change in a way that affects the view. Dropping and creating views does not affect the base tables or their data.
Dropping a view
When you drop a view, you also drop all views that are defined on that view. The base table is not affected.
Example
The following SQL statement drops the VDEPTM view:
DROP VIEW VDEPTM;
aggregation. First, you need to determine the total pay for each department by using the SUM function and group the results by using the GROUP BY clause. You then need to find the department with the highest total pay based on the total pay for each department.
WITH DTOTAL (deptno, totalpay) AS
  (SELECT deptno, sum(salary+bonus)
   FROM DSN8810.EMP
   GROUP BY deptno)
SELECT deptno
FROM DTOTAL
WHERE totalpay = (SELECT max(totalpay) FROM DTOTAL);
The result table for the common table expression, DTOTAL, contains the department number and total pay for each department in the employee table. The fullselect in the previous example uses the result table for DTOTAL to find the department with the highest total pay. The result table for the entire statement looks similar to the following results:
DEPTNO
======
D11
Using common table expressions with views: You can use common table expressions before a fullselect in a CREATE VIEW statement. This technique is useful if you need to use the results of a common table expression in more than one query. Example: Using a WITH clause in a CREATE VIEW statement: The following statement finds the departments that have a greater-than-average total pay and saves the results as the view RICH_DEPT:
CREATE VIEW RICH_DEPT (deptno) AS
  WITH DTOTAL (deptno, totalpay) AS
    (SELECT deptno, sum(salary+bonus)
     FROM DSN8910.EMP
     GROUP BY deptno)
  SELECT deptno
  FROM DTOTAL
  WHERE totalpay > (SELECT AVG(totalpay) FROM DTOTAL);
The fullselect in the previous example uses the result table for DTOTAL to find the departments that have a greater-than-average total pay. The result table is saved as the RICH_DEPT view and looks similar to the following results:
DEPTNO
======
A00
D11
D21
Using common table expressions when you use INSERT: You can use common table expressions before a fullselect in an INSERT statement. Example: Using a common table expression in an INSERT statement: The following statement uses the result table for VITALDEPT to find the manager's number for each department that has a greater-than-average number of senior engineers. Each manager's number is then inserted into the vital_mgr table.
INSERT INTO vital_mgr (mgrno)
  WITH VITALDEPT (deptno, se_count) AS
    (SELECT deptno, count(*)
     FROM DSN8910.EMP
     WHERE job = 'senior engineer'
     GROUP BY deptno)
  SELECT d.manager
  FROM DSN8910.DEPT d, VITALDEPT s
  WHERE d.deptno = s.deptno
    AND s.se_count > (SELECT AVG(se_count) FROM VITALDEPT);
Assume that the PARTLIST table is populated with the values that are in the following table:
Table 79. PARTLIST table
PART  SUBPART  QUANTITY
00    01       5
00    05       3
01    02       2
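The discussion that follows refers to a recursive query over PARTLIST. A minimal sketch consistent with that discussion (a parts explosion of part '01' that uses a common table expression named RPL):

WITH RPL (PART, SUBPART, QUANTITY) AS
  (SELECT ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
   FROM PARTLIST ROOT
   WHERE ROOT.PART = '01'
   UNION ALL
   SELECT CHILD.PART, CHILD.SUBPART, CHILD.QUANTITY
   FROM RPL PARENT, PARTLIST CHILD
   WHERE PARENT.SUBPART = CHILD.PART)
SELECT DISTINCT PART, SUBPART, QUANTITY
FROM RPL
ORDER BY PART, SUBPART, QUANTITY;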
The preceding query includes a common table expression, identified by the name RPL, that expresses the recursive part of this query. It illustrates the basic elements of a recursive common table expression.
The first operand (fullselect) of the UNION, referred to as the initialization fullselect, gets the direct subparts of part '01'. The FROM clause of this fullselect refers to the source table and will never refer to itself (RPL in this case). The result of this first fullselect goes into the common table expression RPL. As in this example, the UNION must always be a UNION ALL.
The second operand (fullselect) of the UNION uses RPL to compute subparts of subparts by using the FROM clause to refer to the common table expression RPL and the source table PARTLIST with a join of a part from the source table (child) to a subpart of the current result contained in RPL (parent). The result then goes back into RPL again. The second operand of the UNION is used repeatedly until no more subparts exist.
The SELECT DISTINCT in the main fullselect of this query ensures the same part/subpart is not listed more than once. The result of the query is shown in the following table:
Table 80. Result table for example 1
PART  SUBPART  QUANTITY
01    02       2
01    03       3
01    04       4
01    06       3
02    05       7
02    06       6
03    07       6
04    08       10
04    09       11
05    10       10
05    11       10
06    12       10
06    13       10
07    12       8
07    14       8
Observe in the result that part '01' contains subpart '02' which contains subpart '06' and so on. Further, notice that part '06' is reached twice, once through part '01' directly and another time through part '02'. In the output, however, the subparts of part '06' are listed only once (this is the result of using a SELECT DISTINCT). Remember that with recursive common table expressions it is possible to introduce an infinite loop. In this example, an infinite loop would be created if the search condition of the second operand that joins the parent and child tables was coded as follows:
WHERE PARENT.SUBPART = CHILD.SUBPART
This infinite loop is created by not coding what is intended. You should carefully determine what to code so that there is a definite end of the recursion cycle.
The result produced by this example could be produced in an application program without using a recursive common table expression. However, such an application would require coding a different query for every level of recursion. Furthermore, the application would need to put all of the results back in the database to order the final result. This approach complicates the application logic and does not perform well. The application logic becomes more difficult and inefficient for other bill of material queries, such as summarized and indented explosion queries.
quantity of subparts required for the part whenever it is required. It does not indicate how many of each subpart is needed to build part '01'.
WITH RPL (PART, SUBPART, QUANTITY) AS
  (SELECT ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
   FROM PARTLIST ROOT
   WHERE ROOT.PART = '01'
   UNION ALL
   SELECT PARENT.PART, CHILD.SUBPART, PARENT.QUANTITY*CHILD.QUANTITY
   FROM RPL PARENT, PARTLIST CHILD
   WHERE PARENT.SUBPART = CHILD.PART)
SELECT PART, SUBPART, SUM(QUANTITY) AS "Total QTY Used"
FROM RPL
GROUP BY PART, SUBPART
ORDER BY PART, SUBPART;
In the preceding query, the select list of the second operand of the UNION in the recursive common table expression, identified by the name RPL, shows the aggregation of the quantity. To determine how many of each subpart is used, the quantity of the parent is multiplied by the quantity per parent of a child. If a part is used multiple times in different places, it requires another final aggregation. This is done by grouping the parts and subparts in the common table expression RPL and using the SUM column function in the select list of the main fullselect. The result of the query is shown in the following table:
Table 81. Result table for example 2
PART  SUBPART  Total QTY Used
01    02       2
01    03       3
01    04       4
01    05       14
01    06       15
01    07       18
01    08       40
01    09       44
01    10       140
01    11       140
01    12       294
01    13       150
01    14       144
Consider the total quantity for subpart '06'. The value of 15 is derived from a quantity of 3 directly for part '01' and a quantity of 6 for part '02' which is needed two times by part '01'.
WITH RPL (LEVEL, PART, SUBPART, QUANTITY) AS
  (SELECT 1, ROOT.PART, ROOT.SUBPART, ROOT.QUANTITY
   FROM PARTLIST ROOT
   WHERE ROOT.PART = '01'
   UNION ALL
   SELECT PARENT.LEVEL+1, CHILD.PART, CHILD.SUBPART, CHILD.QUANTITY
   FROM RPL PARENT, PARTLIST CHILD
   WHERE PARENT.SUBPART = CHILD.PART
     AND PARENT.LEVEL < 2)
SELECT PART, LEVEL, SUBPART, QUANTITY
FROM RPL;
This query is similar to the query in example 1. The column LEVEL is introduced to count the level each subpart is from the original part. In the initialization fullselect, the value for the LEVEL column is initialized to 1. In the subsequent fullselect, the level from the parent table increments by 1. To control the number of levels in the result, the second fullselect includes the condition that the level of the parent must be less than 2. This ensures that the second fullselect only processes children to the second level. The result of the query is shown in the following table:
Table 82. Result table for example 3
PART  LEVEL  SUBPART  QUANTITY
01    1      02       2
01    1      03       3
01    1      04       4
01    1      06       3
02    2      05       7
02    2      06       6
03    2      07       6
04    2      08       10
04    2      09       11
06    2      12       10
06    2      13       10
Creating triggers
A trigger is a set of SQL statements that execute when a certain event occurs in a table. Use triggers to control changes in DB2 databases. Triggers are more powerful than constraints because they can monitor a broader range of changes and perform a broader range of actions.
is above a certain amount, the trigger might substitute a valid value and call a user-defined function to send a notice to an administrator about the invalid update. Triggers also move application logic into DB2, which can result in faster application development and easier maintenance. For example, you can write applications to control salary changes in the employee table, but each application program that changes the salary column must include logic to check those changes. A better method is to define a trigger that controls changes to the salary column. Then DB2 does the checking for any application that modifies salaries. Example of creating and using a trigger: Triggers automatically execute a set of SQL statements whenever a specified event occurs. These SQL statements can perform tasks such as validation and editing of table changes, reading and modifying tables, or invoking functions or stored procedures that perform operations both inside and outside DB2. You create triggers using the CREATE TRIGGER statement. The following figure shows an example of a CREATE TRIGGER statement.
1  CREATE TRIGGER REORDER
2    AFTER
3    UPDATE OF ON_HAND, MAX_STOCKED
4    ON PARTS
5    REFERENCING NEW AS N_ROW
6    FOR EACH ROW MODE DB2SQL
7    WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED)
8    BEGIN ATOMIC
       CALL ISSUE_SHIP_REQUEST(N_ROW.MAX_STOCKED - N_ROW.ON_HAND, N_ROW.PARTNO);
     END
The parts of this trigger are:
1  Trigger name (REORDER)
2  Trigger activation time (AFTER)
3  Triggering event (UPDATE)
4  Subject table name (PARTS)
5  New transition variable correlation name (N_ROW)
6  Granularity (FOR EACH ROW)
7  Trigger condition (WHEN...)
8  Trigger body (BEGIN ATOMIC...END;)
When you execute this CREATE TRIGGER statement, DB2 creates a trigger package called REORDER and associates the trigger package with table PARTS. DB2 records the timestamp when it creates the trigger. If you define other triggers on the PARTS table, DB2 uses this timestamp to determine which trigger to activate first. The trigger is now ready to use.
After DB2 updates columns ON_HAND or MAX_STOCKED in any row of table PARTS, trigger REORDER is activated. The trigger calls a stored procedure called ISSUE_SHIP_REQUEST if, after a row is updated, the quantity of parts on hand is less than 10% of the maximum quantity stocked. In the trigger condition, the qualifier N_ROW represents a value in a modified row after the triggering event. When you no longer want to use trigger REORDER, you can delete the trigger by executing the statement:
DROP TRIGGER REORDER;
Executing this statement drops trigger REORDER and its associated trigger package named REORDER. If you drop table PARTS, DB2 also drops trigger REORDER and its trigger package.
Parts of a trigger:
A trigger contains the following parts:
v trigger name
v subject table
v trigger activation time
v triggering event
v granularity
v transition variables
v transition tables
v triggered action
Trigger name: Use an ordinary identifier to name your trigger. You can use a qualifier or let DB2 determine the qualifier. When DB2 creates a trigger package for the trigger, it uses the qualifier for the collection ID of the trigger package. DB2 uses these rules to determine the qualifier:
v If you use static SQL to execute the CREATE TRIGGER statement, DB2 uses the authorization ID in the bind option QUALIFIER for the plan or package that contains the CREATE TRIGGER statement. If the bind command does not include the QUALIFIER option, DB2 uses the owner of the package or plan.
v If you use dynamic SQL to execute the CREATE TRIGGER statement, DB2 uses the authorization ID in special register CURRENT SCHEMA.
Subject table: When you perform an insert, update, or delete operation on this table, the trigger is activated. You must name a local table in the CREATE TRIGGER statement. You cannot define a trigger on a catalog table or on a view.
Trigger activation time: The two choices for trigger activation time are NO CASCADE BEFORE and AFTER. NO CASCADE BEFORE means that the trigger is activated before DB2 makes any changes to the subject table, and that the triggered action does not activate any other triggers. AFTER means that the trigger is activated after DB2 makes changes to the subject table and can activate other triggers. Triggers with an
activation time of NO CASCADE BEFORE are known as before triggers. Triggers with an activation time of AFTER are known as after triggers.
Triggering event: Every trigger is associated with an event. A trigger is activated when the triggering event occurs in the subject table. The triggering event is one of the following SQL operations:
v insert
v update
v delete
A triggering event can also be an update or delete operation that occurs as the result of a referential constraint with ON DELETE SET NULL or ON DELETE CASCADE. Triggers are not activated as the result of updates made to tables by DB2 utilities, with the exception of the LOAD utility when it is specified with the RESUME YES and SHRLEVEL CHANGE options.
When the triggering event for a trigger is an update operation, the trigger is called an update trigger. Similarly, triggers for insert operations are called insert triggers, and triggers for delete operations are called delete triggers. The SQL statement that performs the triggering SQL operation is called the triggering SQL statement. Each triggering event is associated with one subject table and one SQL operation.
The following trigger is defined with an insert triggering event:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END
If the triggering SQL operation is an update operation, the event can be associated with specific columns of the subject table. In this case, the trigger is activated only if the update operation updates any of the specified columns. The following trigger, PAYROLL1, which invokes a user-defined function named PAYROLL_LOG, is activated only if an update operation is performed on the SALARY or BONUS column of table PAYROLL:
CREATE TRIGGER PAYROLL1
  AFTER UPDATE OF SALARY, BONUS ON PAYROLL
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    VALUES(PAYROLL_LOG(USER, 'UPDATE', CURRENT TIME, CURRENT DATE));
  END
Granularity: The triggering SQL statement might modify multiple rows in the table. The granularity of the trigger determines whether the trigger is activated only once for the triggering SQL statement or once for every row that the SQL statement modifies. The granularity values are: v FOR EACH ROW
The trigger is activated once for each row that DB2 modifies in the subject table. If the triggering SQL statement modifies no rows, the trigger is not activated. However, if the triggering SQL statement updates a value in a row to the same value, the trigger is activated. For example, if an UPDATE trigger is defined on table COMPANY_STATS, the following SQL statement will activate the trigger.
UPDATE COMPANY_STATS SET NBEMP = NBEMP;
v FOR EACH STATEMENT The trigger is activated once when the triggering SQL statement executes. The trigger is activated even if the triggering SQL statement modifies no rows. Triggers with a granularity of FOR EACH ROW are known as row triggers. Triggers with a granularity of FOR EACH STATEMENT are known as statement triggers. Statement triggers can only be after triggers. The following statement is an example of a row trigger:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END
Trigger NEW_HIRE is activated once for every row inserted into the employee table.
Transition variables: When you code a row trigger, you might need to refer to the values of columns in each updated row of the subject table. To do this, specify transition variables in the REFERENCING clause of your CREATE TRIGGER statement. The two types of transition variables are:
v Old transition variables, specified with the OLD transition-variable clause, capture the values of columns before the triggering SQL statement updates them. You can define old transition variables for update and delete triggers.
v New transition variables, specified with the NEW transition-variable clause, capture the values of columns after the triggering SQL statement updates them. You can define new transition variables for update and insert triggers.
The following example uses transition variables and invocations of the IDENTITY_VAL_LOCAL function to access values that are assigned to identity columns. Suppose that you have created tables T and S, with the following definitions:
CREATE TABLE T
  (ID SMALLINT GENERATED BY DEFAULT AS IDENTITY (START WITH 100),
   C2 SMALLINT,
   C3 SMALLINT,
   C4 SMALLINT);

CREATE TABLE S
  (ID SMALLINT GENERATED ALWAYS AS IDENTITY,
   C1 SMALLINT);
Define a before insert trigger on T that uses the IDENTITY_VAL_LOCAL built-in function to retrieve the current value of identity column ID, and uses transition variables to update the other columns of T with the identity column value.
CREATE TRIGGER TR1
  NO CASCADE BEFORE INSERT ON T
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    SET N.C3 = N.ID;
    SET N.C4 = IDENTITY_VAL_LOCAL();
    SET N.ID = N.C2 * 10;
    SET N.C2 = IDENTITY_VAL_LOCAL();
  END
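Suppose that you first insert a row into S. A minimal sketch consistent with the description that follows (5 for column C1; DB2 generates 1 for identity column ID):

INSERT INTO S (C1) VALUES (5);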
This statement inserts a row into S with a value of 5 for column C1 and a value of 1 for identity column ID. Next, suppose that you execute the following SQL statement, which activates trigger TR1:
INSERT INTO T (C2) VALUES (IDENTITY_VAL_LOCAL());
This insert statement, and the subsequent activation of trigger TR1, have the following results:
v The INSERT statement obtains the most recent value that was assigned to an identity column (1), and inserts that value into column C2 of table T. 1 is the value that DB2 inserted into identity column ID of table S.
v When the INSERT statement executes, DB2 inserts the value 100 into identity column ID of table T.
v The first statement in the body of trigger TR1 inserts the value of transition variable N.ID (100) into column C3. N.ID is the value that identity column ID contains after the INSERT statement executes.
v The second statement in the body of trigger TR1 inserts the null value into column C4. By definition, the result of the IDENTITY_VAL_LOCAL function in the triggered action of a before insert trigger is the null value.
v The third statement in the body of trigger TR1 inserts 10 times the value of transition variable N.C2 (10*1) into identity column ID of table T. N.C2 is the value that column C2 contains after the INSERT is executed.
v The fourth statement in the body of trigger TR1 inserts the null value into column C2. By definition, the result of the IDENTITY_VAL_LOCAL function in the triggered action of a before insert trigger is the null value.
Transition tables: If you want to refer to the entire set of rows that a triggering SQL statement modifies, rather than to individual rows, use a transition table. Like transition variables, transition tables can appear in the REFERENCING clause of a CREATE TRIGGER statement. Transition tables are valid for both row triggers and statement triggers. The two types of transition tables are:
v Old transition tables, specified with the OLD TABLE transition-table-name clause, capture the values of columns before the triggering SQL statement updates them. You can define old transition tables for update and delete triggers.
v New transition tables, specified with the NEW TABLE transition-table-name clause, capture the values of columns after the triggering SQL statement updates them. You can define new transition tables for update and insert triggers.
The scope of old and new transition table names is the trigger body. If another table exists that has the same name as a transition table, any unqualified reference to that name in the trigger body points to the transition table. To reference the other table in the trigger body, you must use the fully qualified table name. The following example uses a new transition table to capture the set of rows that are inserted into the INVOICE table:
CREATE TRIGGER LRG_ORDR
  AFTER INSERT ON INVOICE
  REFERENCING NEW TABLE AS N_TABLE
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    SELECT LARGE_ORDER_ALERT(CUST_NO, TOTAL_PRICE, DELIVERY_DATE)
    FROM N_TABLE
    WHERE TOTAL_PRICE > 10000;
  END
The SELECT statement in LRG_ORDR causes user-defined function LARGE_ORDER_ALERT to execute for each row in transition table N_TABLE that satisfies the WHERE clause (TOTAL_PRICE > 10000).
Triggered action: When a trigger is activated, a triggered action occurs. Every trigger has one triggered action, which consists of a trigger condition and a trigger body.
Trigger condition: If you want the triggered action to occur only when certain conditions are true, code a trigger condition. A trigger condition is similar to a predicate in a SELECT, except that the trigger condition begins with WHEN, rather than WHERE. If you do not include a trigger condition in your triggered action, the trigger body executes every time the trigger is activated.
For a row trigger, DB2 evaluates the trigger condition once for each modified row of the subject table. For a statement trigger, DB2 evaluates the trigger condition once for each execution of the triggering SQL statement. If the trigger condition of a before trigger has a fullselect, the fullselect cannot reference the subject table.
The following example shows a trigger condition that causes the trigger body to execute only when the number of ordered items is greater than the number of available items:
CREATE TRIGGER CK_AVAIL
  NO CASCADE BEFORE INSERT ON ORDERS
  REFERENCING NEW AS NEW_ORDER
  FOR EACH ROW MODE DB2SQL
  WHEN (NEW_ORDER.QUANTITY >
         (SELECT ON_HAND FROM PARTS
          WHERE NEW_ORDER.PARTNO = PARTS.PARTNO))
  BEGIN ATOMIC
    VALUES(ORDER_ERROR(NEW_ORDER.PARTNO, NEW_ORDER.QUANTITY));
  END
Trigger body:
In the trigger body, you code the SQL statements that you want to execute whenever the trigger condition is true. If the trigger body consists of more than one statement, it must begin with BEGIN ATOMIC and end with END. You cannot include host variables or parameter markers in your trigger body. If the trigger body contains a WHERE clause that references transition variables, the comparison operator cannot be LIKE.
The statements you can use in a trigger body depend on the activation time of the trigger. For a list of valid SQL statements for triggers, see the "Allowable SQL statements" table in the CREATE TRIGGER (DB2 SQL) topic. The following list provides more detailed information about SQL statements that are valid in triggers:
v fullselect, CALL, and VALUES
Use a fullselect or the VALUES statement in a trigger body to conditionally or unconditionally invoke a user-defined function. Use the CALL statement to invoke a stored procedure. See Invoking a stored procedure or user-defined function from a trigger on page 476 for more information on invoking user-defined functions and stored procedures from triggers. A fullselect in the trigger body of a before trigger cannot reference the subject table.
v SIGNAL
Use the SIGNAL statement in the trigger body to report an error condition and back out any changes that are made by the trigger, as well as actions that result from referential constraints on the subject table. When DB2 executes the SIGNAL statement, it returns an SQLCA to the application with SQLCODE -438. The SQLCA also includes the following values, which you supply in the SIGNAL statement:
– A 5-character value that DB2 uses as the SQLSTATE
– An error message that DB2 places in the SQLERRMC field
In the following example, the SIGNAL statement causes DB2 to return an SQLCA with SQLSTATE 75001 and terminate the salary update operation if an employee's salary increase is over 20%:
CREATE TRIGGER SAL_ADJ
  NO CASCADE BEFORE UPDATE OF SALARY ON EMP
  REFERENCING OLD AS OLD_EMP
              NEW AS NEW_EMP
  FOR EACH ROW MODE DB2SQL
  WHEN (NEW_EMP.SALARY > (OLD_EMP.SALARY * 1.20))
  BEGIN ATOMIC
    SIGNAL SQLSTATE '75001'
      ('Invalid Salary Increase - Exceeds 20%');
  END
v SET transition-variable Because before triggers operate on rows of a table before those rows are modified, you cannot perform operations in the body of a before trigger that directly modify the subject table. You can, however, use the SET transition-variable statement to modify the values in a row before those values go into the table. For example, this trigger uses a new transition variable to fill in today's date for the new employee's hire date:
CREATE TRIGGER HIREDATE
  NO CASCADE BEFORE INSERT ON EMP
  REFERENCING NEW AS NEW_VAR
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    SET NEW_VAR.HIRE_DATE = CURRENT_DATE;
  END
v INSERT, DELETE (searched), UPDATE (searched), and MERGE
  Because you can include INSERT, DELETE (searched), UPDATE (searched), and MERGE statements in your trigger body, execution of the trigger body might cause activation of other triggers. See Trigger cascading on page 480 for more information.

If any SQL statement in the trigger body fails during trigger execution, DB2 rolls back all changes that are made by the triggering SQL statement and the triggered SQL statements. However, if the trigger body executes actions that are outside of DB2's control or are not under the same commit coordination as the DB2 subsystem in which the trigger executes, DB2 cannot undo those actions. Examples of external actions that are not under DB2's control are:
v Performing updates that are not under RRS commit control
v Sending an electronic mail message

If the trigger executes external actions that are under the same commit coordination as the DB2 subsystem under which the trigger executes, and an error occurs during trigger execution, DB2 places the application process that issued the triggering statement in a must-rollback state. The application must then execute a rollback operation to roll back those external actions. Examples of external actions that are under the same commit coordination as the triggering SQL operation are:
v Executing a distributed update operation
v From a user-defined function or stored procedure, executing an external action that affects an external resource manager that is under RRS commit control
Related reference:
CREATE TRIGGER (DB2 SQL)
LOAD (DB2 Utilities)
Procedure
To invoke a stored procedure or user-defined function from a trigger:
1. Ensure that the stored procedure or user-defined function is defined before the trigger is defined.
   v Define procedures by using the CREATE PROCEDURE statement.
   v Define user-defined functions by using the CREATE FUNCTION statement.
2. Invoke the user-defined function or stored procedure by performing one of the following actions:
   v To invoke a user-defined function, include the user-defined function in one of the following statements in the trigger:
     SELECT statement
       Use a SELECT statement to execute the function conditionally. The number of times that the user-defined function executes depends on the number of rows in the result table of the SELECT statement. For example, in the following trigger, the SELECT statement invokes user-defined function LARGE_ORDER_ALERT. This function executes once for each row in transition table N_TABLE with an order price of more than 10000:
CREATE TRIGGER LRG_ORDR
  AFTER INSERT ON INVOICE
  REFERENCING NEW TABLE AS N_TABLE
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    SELECT LARGE_ORDER_ALERT(CUST_NO, TOTAL_PRICE, DELIVERY_DATE)
      FROM N_TABLE WHERE TOTAL_PRICE > 10000;
  END
     VALUES statement
       Use the VALUES statement to execute a function unconditionally. The function executes once for each execution of a statement trigger or once for each row in a row trigger.
       In the following example, user-defined function PAYROLL_LOG executes every time the trigger PAYROLL1 is activated. This trigger is activated when an update operation occurs.
CREATE TRIGGER PAYROLL1
  AFTER UPDATE ON PAYROLL
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    VALUES(PAYROLL_LOG(USER, 'UPDATE', CURRENT TIME, CURRENT DATE));
  END
   v To invoke a stored procedure, include a CALL statement in the trigger. The parameters of this stored procedure call must be constants, transition variables, table locators, or expressions.
3. To pass transition tables from the trigger to the user-defined function or stored procedure, use table locators.
   When you call a user-defined function or stored procedure from a trigger, you might want to give the function or procedure access to the entire set of modified rows. In this case, use table locators to pass a pointer to the old or new transition table. Most of the code for using a table locator is in the function or stored procedure that receives the locator.
   To pass the transition table from a trigger, specify the parameter TABLE transition-table-name when you invoke the function or stored procedure. This parameter causes DB2 to pass a table locator for the transition table to the user-defined function or stored procedure. For example, the following trigger passes a table locator for a transition table NEWEMPS to stored procedure CHECKEMP:
CREATE TRIGGER EMPRAISE
  AFTER UPDATE ON EMP
  REFERENCING NEW TABLE AS NEWEMPS
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    CALL CHECKEMP(TABLE NEWEMPS);
  END
Related concepts:
User-defined functions on page 495
Triggers (Introduction to DB2 for z/OS)
Related tasks:
Accessing transition tables in a user-defined function or stored procedure on page 527
Creating a stored procedure on page 535
Defining a user-defined function on page 493
Related reference:
CALL (DB2 SQL)
CREATE FUNCTION (DB2 SQL)
CREATE PROCEDURE (DB2 SQL)
select-statement (DB2 SQL)
VALUES (DB2 SQL)
Procedure
To insert, update, or delete data in a view by using INSTEAD OF triggers:
1. Define one or more INSTEAD OF triggers on the view by using a CREATE TRIGGER statement. You can create one trigger for each of the following operations: INSERT, UPDATE, and DELETE. These triggers define the action that DB2 is to take for each of these operations.
2. Submit an INSERT, UPDATE, or DELETE statement on the view. DB2 executes the appropriate INSTEAD OF trigger.
Results
Example: Suppose that you create the following view on the sample tables DSN8910.EMP and DSN8910.DEPT:
CREATE VIEW EMPV (EMPNO, FIRSTNME, MIDINIT, LASTNAME, PHONENO, HIREDATE, DEPTNAME)
  AS SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, PHONENO, HIREDATE, DEPTNAME
       FROM DSN8910.EMP, DSN8910.DEPT
       WHERE DSN8910.EMP.WORKDEPT = DSN8910.DEPT.DEPTNO
Suppose that you also define the following three INSTEAD OF triggers:
CREATE TRIGGER EMPV_INSERT
  INSTEAD OF INSERT ON EMPV
  REFERENCING NEW AS NEWEMP
  FOR EACH ROW MODE DB2SQL
  INSERT INTO DSN8910.EMP
      (EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, HIREDATE)
    VALUES(NEWEMP.EMPNO, NEWEMP.FIRSTNME, NEWEMP.MIDINIT, NEWEMP.LASTNAME,
           COALESCE((SELECT D.DEPTNO FROM DSN8910.DEPT AS D
                       WHERE D.DEPTNAME = NEWEMP.DEPTNAME),
                    RAISE_ERROR('70001', 'Unknown department name')),
           NEWEMP.PHONENO, NEWEMP.HIREDATE)

CREATE TRIGGER EMPV_UPDATE
  INSTEAD OF UPDATE ON EMPV
  REFERENCING NEW AS NEWEMP OLD AS OLDEMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE DSN8910.EMP AS E
      SET (E.FIRSTNME, E.MIDINIT, E.LASTNAME, E.WORKDEPT, E.PHONENO, E.HIREDATE)
        = (NEWEMP.FIRSTNME, NEWEMP.MIDINIT, NEWEMP.LASTNAME,
           COALESCE((SELECT D.DEPTNO FROM DSN8910.DEPT AS D
                       WHERE D.DEPTNAME = OLDEMP.DEPTNAME),
                    RAISE_ERROR('70001', 'Unknown department name')),
           NEWEMP.PHONENO, NEWEMP.HIREDATE)
      WHERE NEWEMP.EMPNO = E.EMPNO;
    UPDATE DSN8910.DEPT D
      SET D.DEPTNAME=NEWEMP.DEPTNAME
      WHERE D.DEPTNAME=OLDEMP.DEPTNAME;
  END

CREATE TRIGGER EMPV_DELETE
  INSTEAD OF DELETE ON EMPV
  REFERENCING OLD AS OLDEMP
  FOR EACH ROW MODE DB2SQL
  DELETE FROM DSN8910.EMP AS E
    WHERE E.EMPNO = OLDEMP.EMPNO
Because the view is on a query with an inner join, the view is read-only. However, the INSTEAD OF triggers make insert, update, and delete operations possible. The following table describes what happens for various insert, update, and delete operations on the EMPV view.
Table 83. Results of INSTEAD OF triggers

SQL statement:
  INSERT INTO EMPV VALUES (...)
Result:
  The EMPV_INSERT trigger is activated. This trigger inserts the row into the base table DSN8910.EMP if the department name matches a value in the WORKDEPT column in the DSN8910.DEPT table. Otherwise, an error is returned. If a query had been used instead of a VALUES clause on the INSERT statement, the trigger body would be processed for each row from the query.

SQL statement:
  UPDATE EMPV SET DEPTNAME='PLANNING & STRATEGY' WHERE DEPTNAME='PLANNING'
Result:
  The EMPV_UPDATE trigger is activated. This trigger updates the DEPTNAME column in the DSN8910.DEPT table for any qualifying rows.

SQL statement:
  DELETE FROM EMPV WHERE HIREDATE<'1910-01-01'
Result:
  The EMPV_DELETE trigger is activated. This trigger deletes the qualifying rows from the DSN8910.EMP table.
Trigger packages
A trigger package is a special type of package that is created only when you execute a CREATE TRIGGER statement. A trigger package executes only when its associated trigger is activated.

As with any other package, DB2 marks a trigger package invalid when you drop a table, index, or view on which the trigger package depends. DB2 executes an automatic rebind the next time the trigger is activated. However, if the automatic rebind fails, DB2 marks the trigger package as inoperative.

Unlike other packages, a trigger package is freed if you drop the table on which the trigger is defined, so you can re-create the trigger package only by re-creating the table and the trigger.

You can use the subcommand REBIND TRIGGER PACKAGE to rebind a trigger package that DB2 has marked as inoperative, as in the example that follows. You can also use REBIND TRIGGER PACKAGE to change the option values with which DB2 originally bound the trigger package. You can change only a limited subset of the default bind options that DB2 used when creating the package.
Related reference:
REBIND TRIGGER PACKAGE (DSN) (DB2 Commands)
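For illustration, a minimal sketch of the subcommand follows; the collection ID, trigger name, and option shown are hypothetical and would be replaced with your own values. For a trigger package, the collection ID is the schema of the trigger and the package ID is the trigger name.

REBIND TRIGGER PACKAGE(MYSCHEMA.SAL_ADJ) EXPLAIN(YES)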
Trigger cascading
When a trigger performs an SQL operation, it might modify the subject table or other tables with triggers, so DB2 also activates those triggers. This situation is called trigger cascading. A trigger that is activated as the result of another trigger can be activated at the same level as the original trigger or at a different level. Two triggers, A and B, are activated at different levels if trigger B is activated after trigger A is activated and
completes before trigger A completes. If trigger B is activated after trigger A is activated and completes after trigger A completes, then the triggers are at the same level.

For example, in these cases, trigger A and trigger B are activated at the same level:
v Table X has two triggers that are defined on it, A and B. A is a before trigger and B is an after trigger. An update to table X causes both trigger A and trigger B to activate.
v Trigger A updates table X, which has a referential constraint with table Y, which has trigger B defined on it. The referential constraint causes table Y to be updated, which activates trigger B.
In these cases, trigger A and trigger B are activated at different levels:
v Trigger A is defined on table X, and trigger B is defined on table Y. Trigger B is an update trigger. An update to table X activates trigger A, which contains an UPDATE statement on table Y in its trigger body. This UPDATE statement activates trigger B.
v Trigger A calls a stored procedure. The stored procedure contains an INSERT statement for table X, which has insert trigger B defined on it. When the INSERT statement on table X executes, trigger B is activated.

When triggers are activated at different levels, it is called trigger cascading. Trigger cascading can occur only for after triggers because DB2 does not support cascading of before triggers.

To prevent the possibility of endless trigger cascading, DB2 supports only 16 levels of cascading of triggers, stored procedures, and user-defined functions. If a trigger, user-defined function, or stored procedure at the 17th level is activated, DB2 returns SQLCODE -724 and backs out all SQL changes in the 16 levels of cascading. However, as with any other SQL error that occurs during trigger execution, if any action occurs that is outside the control of DB2, that action is not backed out.

You can write a monitor program that issues IFI READS requests to collect DB2 trace information about the levels of cascading of triggers, user-defined functions, and stored procedures in your programs.
Related tasks:
Invoking IFI from a monitor program (DB2 Performance)
In this example, triggers NEWHIRE1 and NEWHIRE2 have the same triggering event (INSERT), the same subject table (EMP), and the same activation time (AFTER). Suppose that the CREATE TRIGGER statement for NEWHIRE1 is run before the CREATE TRIGGER statement for NEWHIRE2:
CREATE TRIGGER NEWHIRE1
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END

CREATE TRIGGER NEWHIRE2
  AFTER INSERT ON EMP
  REFERENCING NEW AS N_EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE DEPTS SET NBEMP = NBEMP + 1
      WHERE DEPT_ID = N_EMP.DEPT_ID;
  END
When an insert operation occurs on table EMP, DB2 activates NEWHIRE1 first because NEWHIRE1 was created first. Now suppose that someone drops and re-creates NEWHIRE1. NEWHIRE1 now has a later timestamp than NEWHIRE2, so the next time an insert operation occurs on EMP, NEWHIRE2 is activated before NEWHIRE1. If two row triggers are defined for the same action, the trigger that was created earlier is activated first for all affected rows. Then the second trigger is activated for all affected rows. In the previous example, suppose that an INSERT statement with a fullselect inserts 10 rows into table EMP. NEWHIRE1 is activated for all 10 rows, then NEWHIRE2 is activated for all 10 rows.
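For example, an INSERT statement such as the following sketch (the column names other than DEPT_ID and the literal values are hypothetical) activates both NEWHIRE1 and NEWHIRE2, in the order determined by their creation timestamps:

INSERT INTO EMP (EMPNO, LASTNAME, DEPT_ID)
  VALUES ('200340', 'SMITH', 'D11');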
3. DB2 makes the changes that are specified in statement S1 to table T1, unless an INSTEAD OF trigger is defined for that action. If an appropriate INSTEAD OF trigger is defined, DB2 executes the trigger instead of the statement and skips the remaining steps in this list. If an error occurs, DB2 rolls back all changes that are made by S1.
4. If M1 is not empty, DB2 applies all the following constraints and checks that are defined on table T1:
   v Referential constraints
   v Check constraints
   v Checks that are due to updates of the table through views defined WITH CHECK OPTION
   The application of referential constraints with rules of DELETE CASCADE or DELETE SET NULL activates before delete triggers or before update triggers on the dependent tables. If any constraint is violated, DB2 rolls back all changes that are made by constraint actions or by statement S1.
5. DB2 processes all after triggers that are defined on T1, and all after triggers on tables that are modified as the result of referential constraint actions, in order of creation.
   Each after row trigger executes the triggered action once for each row in M1. If M1 is empty, the triggered action does not execute.
   Each after statement trigger executes the triggered action once for each execution of S1, even if M1 is empty.
   If any triggered actions contain SQL insert, update, or delete operations, DB2 repeats steps 1 through 5 for each operation.
   If an error occurs when the triggered action executes, or if a triggered action is at the 17th level of trigger cascading, DB2 rolls back all changes that are made in step 5 and all previous steps.

For example, table DEPT is a parent table of EMP, with these conditions:
v The DEPTNO column of DEPT is the primary key.
v The WORKDEPT column of EMP is the foreign key.
v The constraint is ON DELETE SET NULL.
Suppose the following trigger is defined on EMP:
CREATE TRIGGER EMPRAISE
  AFTER UPDATE ON EMP
  REFERENCING NEW TABLE AS NEWEMPS
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    VALUES(CHECKEMP(TABLE NEWEMPS));
  END
Also suppose that an SQL statement deletes the row with department number E21 from DEPT. Because of the constraint, DB2 finds the rows in EMP with a WORKDEPT value of E21 and sets WORKDEPT in those rows to null. This is equivalent to an update operation on EMP, which has update trigger EMPRAISE. Therefore, because EMPRAISE is an after trigger, EMPRAISE is activated after the constraint action sets WORKDEPT values to null.
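For instance, the triggering statement described above might look like this minimal sketch:

DELETE FROM DEPT
  WHERE DEPTNO = 'E21';

The ON DELETE SET NULL constraint then sets WORKDEPT to null in the dependent EMP rows, and that update activates the EMPRAISE trigger.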
Interactions between triggers and tables that have multilevel security with row-level granularity
A BEFORE trigger affects the value of the transition variable that is associated with a security label column.

If a subject table has a security label column, the column in the transition table or transition variable that corresponds to the security label column in the subject table does not inherit the security label attribute. This means that the multilevel security check with row-level granularity is not enforced for the transition table or the transition variable. If you add a security label column to a subject table using the ALTER TABLE statement, the rules are the same as when you add any column to a subject table because the column in the transition table or the transition variable that corresponds to the security label column does not inherit the security label attribute.

If the ID you are using does not have write-down privilege and you execute an insert or update operation, the security label value of your ID is assigned to the security label column for the rows that you are inserting or updating.

When a BEFORE trigger is activated, the value of the transition variable that corresponds to the security label column is the security label of the ID if either of the following conditions is true:
v The user does not have write-down privilege
v The value for the security label column is not specified

If the user does not have write-down privilege, and the trigger changes the transition variable that corresponds to the security label column, the value of the security label column is changed back to the security label value of the user before the row is written to the page.
Related concepts:
Multilevel security (Managing Security)
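The following sketch illustrates the behavior that is described above. The table, trigger, and column names are hypothetical, and the statements are shown only as an illustration of a security label column on a subject table that has a before trigger:

CREATE TABLE ORDERS_SEC
  (ORDERNO    INTEGER NOT NULL,
   ENTERED_BY CHAR(8),
   SECLABEL   CHAR(8) NOT NULL WITH DEFAULT AS SECURITY LABEL);

CREATE TRIGGER ORDSEC1
  NO CASCADE BEFORE INSERT ON ORDERS_SEC
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    SET N.ENTERED_BY = USER;
  END

Even if the trigger body also assigned a value to N.SECLABEL, a user without write-down privilege would have that value replaced with the user's own security label before the row is written, as described above.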
Now suppose that an application executes the following statements to perform a positioned update operation:
EXEC SQL BEGIN DECLARE SECTION;
  long hv1;
EXEC SQL END DECLARE SECTION;
. . .
EXEC SQL DECLARE C1 CURSOR FOR
  SELECT A1 FROM T1
    WHERE A1 IN (SELECT B1 FROM T2)
    FOR UPDATE OF A1;
. . .
EXEC SQL OPEN C1;
. . .
while(SQLCODE>=0 && SQLCODE!=100)
{
  EXEC SQL FETCH C1 INTO :hv1;
  EXEC SQL UPDATE T1 SET A1=5 WHERE CURRENT OF C1;
}
When DB2 executes the FETCH statement that positions cursor C1 for the first time, DB2 evaluates the subselect, SELECT B1 FROM T2, to produce a result table that contains the two values in column B1 of table T2:
1 2
When DB2 executes the positioned UPDATE statement for the first time, trigger TR1 is activated. When the body of trigger TR1 executes, the row with value 2 is deleted from T2. However, because SELECT B1 FROM T2 is evaluated only once, when the FETCH statement is executed again, DB2 finds the second row of T1, even though the second row of T2 was deleted. The FETCH statement positions the cursor to the second row of T1, and the second row of T1 is updated. The update operation causes the trigger to be activated again, which causes DB2 to attempt to delete the second row of T2, even though that row was already deleted. To avoid processing of the second row after it should have been deleted, use a correlated subquery in the cursor declaration:
DECLARE C1 CURSOR FOR
  SELECT A1 FROM T1 X
    WHERE EXISTS (SELECT B1 FROM T2 WHERE X.A1 = B1)
    FOR UPDATE OF A1;
In this case, the subquery, SELECT B1 FROM T2 WHERE X.A1 = B1, is evaluated for each FETCH statement. The first time that the FETCH statement executes, it positions the cursor to the first row of T1. The positioned UPDATE operation activates the trigger, which deletes the second row of T2. Therefore, when the FETCH statement executes again, no row is selected, so no update operation or triggered action occurs.
Example: Effect of row processing order on a triggered action: The following example shows how the order of processing rows can change the outcome of an after row trigger. Suppose that tables T1, T2, and T3 look like this:
Table T1     Table T2     Table T3
A1           B1           C1
==           ==           ==
1            (empty)      (empty)
2
The contents of tables T2 and T3 after the UPDATE statement executes depend on the order in which DB2 updates the rows of T1. If DB2 updates the first row of T1 first, after the UPDATE statement and the trigger execute for the first time, the values in the three tables are:
Table T1     Table T2     Table T3
A1           B1           C1
==           ==           ==
2            2            2
2
After the second row of T1 is updated, the values in the three tables are:
Table T1     Table T2     Table T3
A1           B1           C1
==           ==           ==
2            2            2
3            3            2
                          3
However, if DB2 updates the second row of T1 first, after the UPDATE statement and the trigger execute for the first time, the values in the three tables are:
Table T1     Table T2     Table T3
A1           B1           C1
==           ==           ==
1            3            3
3
After the first row of T1 is updated, the values in the three tables are:
Table T1     Table T2     Table T3
A1           B1           C1
==           ==           ==
2            3            3
3            2            3
                          2
Sequence objects
A sequence is a user-defined object that generates a sequence of numeric values according to the specification with which the sequence was created.

Sequences, unlike identity columns, are not associated with tables. Applications refer to a sequence object to get its current or next value. The sequence of numeric values is generated in a monotonically ascending or descending order. The relationship between sequences and tables is controlled by the application, not by DB2.

Your application can reference a sequence object and coordinate the value as keys across multiple rows and tables. However, a table column that gets its values from a sequence object does not necessarily have unique values in that column. Even if the sequence object has been defined with the NO CYCLE clause, some other application might insert values into that table column other than values you obtain by referencing that sequence object.

DB2 always generates sequence numbers in order of request. However, in a data sharing group where the sequence values are cached by multiple DB2 members simultaneously, the sequence value assignments might not be in numeric order. Additionally, you might have gaps in sequence number values for the following reasons:
v If DB2 terminates abnormally before it assigns all the cached values
v If your application rolls back a transaction that increments the sequence
v If the statement containing NEXT VALUE fails after it increments the sequence

You create a sequence object with the CREATE SEQUENCE statement, alter it with the ALTER SEQUENCE statement, and drop it with the DROP SEQUENCE statement. You grant access to a sequence with the GRANT (privilege) ON SEQUENCE statement, and revoke access to the sequence with the REVOKE (privilege) ON SEQUENCE statement.

The values that DB2 generates for a sequence depend on how the sequence is created. The START WITH option determines the first value that DB2 generates. The values advance by the INCREMENT BY value in ascending or descending order. The MINVALUE and MAXVALUE options determine the minimum and maximum values that DB2 generates. The CYCLE or NO CYCLE option determines whether DB2 wraps values when it has generated all values between the START WITH value and MAXVALUE if the values are ascending, or between the START WITH value and MINVALUE if the values are descending.

Keys across multiple tables: You can use the same sequence number as a key value in two separate tables by first generating the sequence value with a NEXT VALUE expression to insert the first row in the first table. You can then reference this same sequence value with a PREVIOUS VALUE expression to insert the other rows in the second table.

Example: Suppose that an ORDERS table and an ORDER_ITEMS table are defined in the following way:
CREATE TABLE ORDERS
  (ORDERNO INTEGER NOT NULL,
   ORDER_DATE DATE DEFAULT,
   CUSTNO SMALLINT,
   PRIMARY KEY (ORDERNO));

CREATE TABLE ORDER_ITEMS
  (ORDERNO INTEGER NOT NULL,
   PARTNO INTEGER NOT NULL,
   QUANTITY SMALLINT NOT NULL,
   PRIMARY KEY (ORDERNO,PARTNO),
   CONSTRAINT REF_ORDERNO FOREIGN KEY (ORDERNO)
     REFERENCES ORDERS (ORDERNO) ON DELETE CASCADE);
You create a sequence named ORDER_SEQ to use as key values for both the ORDERS and ORDER_ITEMS tables:
CREATE SEQUENCE ORDER_SEQ AS INTEGER
  START WITH 1
  INCREMENT BY 1
  NO MAXVALUE
  NO CYCLE
  CACHE 20;
You can then use the same sequence number as a primary key value for the ORDERS table and as part of the primary key value for the ORDER_ITEMS table:
INSERT INTO ORDERS (ORDERNO, CUSTNO)
  VALUES (NEXT VALUE FOR ORDER_SEQ, 12345);
INSERT INTO ORDER_ITEMS (ORDERNO, PARTNO, QUANTITY)
  VALUES (PREVIOUS VALUE FOR ORDER_SEQ, 987654, 2);
The NEXT VALUE expression in the first INSERT statement generates a sequence number value for the sequence object ORDER_SEQ. The PREVIOUS VALUE expression in the second INSERT statement retrieves that same value because it was the sequence number most recently generated for that sequence object within the current application process.
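If you need to see the value that was most recently generated for the sequence in the current application process, you can also reference a PREVIOUS VALUE expression in a query. The following is a minimal sketch that uses the SYSIBM.SYSDUMMY1 catalog table as a convenient single-row table:

SELECT PREVIOUS VALUE FOR ORDER_SEQ
  FROM SYSIBM.SYSDUMMY1;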
A distinct type is a user-defined data type that shares its internal representation with a built-in data type but is considered to be a separate and incompatible type for semantic purposes. For example, you might want to define a picture type or an audio type, both of which have quite different semantics, but which use the built-in data type BLOB for their internal representation. For a detailed discussion of distinct types, see Distinct types.
v User-defined functions
  The built-in functions that are supplied with DB2 are a useful set of functions, but they might not satisfy all of your requirements. For those cases, you can use user-defined functions. For example, a built-in function might perform a calculation you need, but the function does not accept the distinct types you want to pass to it. You can then define a function based on a built-in function, called a sourced user-defined function, that accepts your distinct types. You might need to perform another calculation in your SQL statements for which no built-in function exists. In that situation, you can define and write an SQL function or an external function.
  For a detailed discussion of user-defined functions, see User-defined functions on page 495.
Procedure
To create a distinct type: Issue the CREATE DISTINCT TYPE statement. For example, you can create distinct types for euros and yen by issuing the following SQL statements:
CREATE DISTINCT TYPE EURO AS DECIMAL(9,2);
CREATE DISTINCT TYPE JAPANESE_YEN AS DECIMAL(9,2);
Distinct types
A distinct type is a user-defined data type that shares its internal representation with a built-in data type (its source type), but is considered to be a separate and incompatible data type for most operations. Each distinct type has the same internal representation as a built-in data type. Suppose you want to define some audio and video data in a DB2 table. You can define columns for both types of data as BLOB, but you might want to use a data type that more specifically describes the data. To do that, define distinct types. You can then use those types when you define columns in a table or manipulate the data in those columns. For example, you can define distinct types for the audio and video data like this:
CREATE DISTINCT TYPE AUDIO AS BLOB (1M);
CREATE DISTINCT TYPE VIDEO AS BLOB (1M);
For more information on LOB data, see Large objects (LOBs) on page 440. After you define distinct types and columns of those types, you can use those data types in the same way you use built-in types. You can use the data types in assignments, comparisons, function invocations, and stored procedure calls. However, when you assign one column value to another or compare two column values, those values must be of the same distinct type. For example, you must assign a column value of type VIDEO to a column of type VIDEO, and you can compare a column value of type AUDIO only to a column of type AUDIO. When you assign a host variable value to a column with a distinct type, you can use any host data type that is compatible with the source data type of the distinct type. For example, to receive an AUDIO or VIDEO value, you can define a host variable like this:
SQL TYPE IS BLOB (1M) HVAV;
When you use a distinct type as an argument to a function, a version of that function that accepts that distinct type must exist. For example, if function SIZE takes a BLOB type as input, you cannot automatically use a value of type AUDIO as input. However, you can create a sourced user-defined function that takes the AUDIO type as input. For example:
CREATE FUNCTION SIZE(AUDIO) RETURNS INTEGER SOURCE SIZE(BLOB(1M));
Using distinct types in application programs: The main reason to use distinct types is because DB2 enforces strong typing for distinct types. Strong typing ensures that only functions, procedures, comparisons, and assignments that are defined for a data type can be used. For example, if you have defined a user-defined function to convert U.S. dollars to euro currency, you do not want anyone to use this same user-defined function to convert Japanese yen to euros because the U.S. dollars to euros function returns the wrong amount. Suppose you define three distinct types:
CREATE DISTINCT TYPE US_DOLLAR AS DECIMAL(9,2);
CREATE DISTINCT TYPE EURO AS DECIMAL(9,2);
CREATE DISTINCT TYPE JAPANESE_YEN AS DECIMAL(9,2);
If a conversion function is defined that takes an input parameter of type US_DOLLAR as input, DB2 returns an error if you try to execute the function with an input parameter of type JAPANESE_YEN.
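When you create a distinct type, DB2 also generates cast functions between the distinct type and its source type. The following is a hedged sketch of how such a generated cast function might be used in a predicate; the EXPENSES table and its AMOUNT column of type US_DOLLAR are hypothetical:

SELECT *
  FROM EXPENSES
  WHERE AMOUNT > US_DOLLAR(100.00);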
v Inserting data from a host variable into a distinct type column based on a LOB column
v Executing a query that contains a user-defined function invocation
v Casting a LOB locator to the input data type of a user-defined function

Suppose that you keep electronic mail documents that are sent to your company in a DB2 table. The DB2 data type of an electronic mail document is a CLOB, but you define it as a distinct type so that you can control the types of operations that are performed on the electronic mail. The distinct type is defined like this:
CREATE DISTINCT TYPE E_MAIL AS CLOB(5M);
You have also defined and written user-defined functions to search for and return the following information about an electronic mail document:
v Subject
v Sender
v Date sent
v Message content
v Indicator of whether the document contains a user-specified string
The user-defined function definitions look like this:
CREATE FUNCTION SUBJECT(E_MAIL)
  RETURNS VARCHAR(200)
  EXTERNAL NAME SUBJECT
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION;
CREATE FUNCTION SENDER(E_MAIL)
  RETURNS VARCHAR(200)
  EXTERNAL NAME SENDER
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION;
CREATE FUNCTION SENDING_DATE(E_MAIL)
  RETURNS DATE
  EXTERNAL NAME SENDDATE
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION;
CREATE FUNCTION CONTENTS(E_MAIL)
  RETURNS CLOB(1M)
  EXTERNAL NAME CONTENTS
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION;
CREATE FUNCTION CONTAINS(E_MAIL, VARCHAR (200))
  RETURNS INTEGER
  EXTERNAL NAME CONTAINS
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION;
The table that contains the electronic mail documents is defined like this:
CREATE TABLE DOCUMENTS
  (LAST_UPDATE_TIME TIMESTAMP,
   DOC_ROWID ROWID NOT NULL GENERATED ALWAYS,
   A_DOCUMENT E_MAIL);
Because the table contains a column with a source data type of CLOB, the table requires an associated LOB table space, auxiliary table, and index on the auxiliary table. Use statements like this to define the LOB table space, the auxiliary table, and the index:
CREATE LOB TABLESPACE DOCTSLOB
  LOG YES
  GBPCACHE SYSTEM;
CREATE AUX TABLE DOCAUX_TABLE
  IN DOCTSLOB
  STORES DOCUMENTS COLUMN A_DOCUMENT;
CREATE INDEX A_IX_DOC ON DOCAUX_TABLE;
To populate the document table, you write code that executes an INSERT statement to put the first part of a document in the table, and then executes multiple UPDATE statements to concatenate the remaining parts of the document. For example:
EXEC SQL BEGIN DECLARE SECTION;
  char hv_current_time[26];
  SQL TYPE IS CLOB (1M) hv_doc;
EXEC SQL END DECLARE SECTION;

/* Determine the current time and put this value  */
/* into host variable hv_current_time.            */
/* Read up to 1 MB of document data from a file   */
/* into host variable hv_doc.                     */
. . .
/* Insert the time value and the first 1 MB of    */
/* document data into the table.                  */
EXEC SQL INSERT INTO DOCUMENTS
  VALUES(:hv_current_time, DEFAULT, E_MAIL(:hv_doc));

/* Although there is more document data in the    */
/* file, read up to 1 MB more of data, and then   */
/* use an UPDATE statement like this one to       */
/* concatenate the data in the host variable      */
/* to the existing data in the table.             */
EXEC SQL UPDATE DOCUMENTS
  SET A_DOCUMENT = A_DOCUMENT || E_MAIL(:hv_doc)
  WHERE LAST_UPDATE_TIME = :hv_current_time;
Now that the data is in the table, you can execute queries to learn more about the documents. For example, you can execute this query to determine which documents contain the word "performance":
SELECT SENDER(A_DOCUMENT), SENDING_DATE(A_DOCUMENT), SUBJECT(A_DOCUMENT)
  FROM DOCUMENTS
  WHERE CONTAINS(A_DOCUMENT,'performance') = 1;
Because the electronic mail documents can be very large, you might want to use LOB locators to manipulate the document data instead of fetching all of a document into a host variable. You can use a LOB locator on any distinct type that is defined on one of the LOB types. The following example shows how you can cast a LOB locator as a distinct type, and then use the result in a user-defined function that takes a distinct type as an argument:
EXEC SQL BEGIN DECLARE SECTION;
  long hv_len;
  char hv_subject[200];
  SQL TYPE IS CLOB_LOCATOR hv_email_locator;
EXEC SQL END DECLARE SECTION;
. . .
/* Select a document into a CLOB locator.        */
EXEC SQL SELECT A_DOCUMENT, SUBJECT(A_DOCUMENT)
  INTO :hv_email_locator, :hv_subject
  FROM DOCUMENTS
  WHERE LAST_UPDATE_TIME = :hv_current_time;
. . .
/* Extract the subject from the document. The    */
/* SUBJECT function takes an argument of type    */
/* E_MAIL, so cast the CLOB locator as E_MAIL.   */
EXEC SQL SET :hv_subject =
  SUBJECT(CAST(:hv_email_locator AS E_MAIL));
. . .
Procedure
To define a user-defined function:
1. Determine the characteristics of the user-defined function, such as the user-defined function name, schema (qualifier), and number and data types of the input parameters and the types of the values returned.
2. Execute a CREATE FUNCTION statement to register the information in the DB2 catalog.
Results
If you discover after you define the function that any of these characteristics is not appropriate for the function, you can use an ALTER FUNCTION statement to change information in the definition. You cannot use ALTER FUNCTION to change some of the characteristics of a user-defined function definition.
Examples
Example: Definition for an external user-defined scalar function: A programmer develops a user-defined function that searches for a string of maximum length 200 in a CLOB value whose maximum length is 500 KB. This CREATE FUNCTION statement defines the user-defined function:
CREATE FUNCTION FINDSTRING (CLOB(500K), VARCHAR(200))
  RETURNS INTEGER
  CAST FROM FLOAT
  SPECIFIC FINDSTRINCLOB
  EXTERNAL NAME FINDSTR
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED
  STOP AFTER 3 FAILURES;
The output from the user-defined function is of type float, but users require integer output for their SQL statements. The user-defined function is written in C and contains no SQL statements. The function is defined to stop when the number of abnormal terminations is equal to 3.

Example: Definition for an external user-defined scalar function that overloads an operator: A programmer has written a user-defined function that overloads the built-in SQL division operator (/). That is, this user-defined function is invoked when an application program executes a statement like either of the following:
UPDATE TABLE1 SET INTCOL1=INTCOL2/INTCOL3;
UPDATE TABLE1 SET INTCOL1="/"(INTCOL2,INTCOL3);
The user-defined function takes two integer values as input. The output from the user-defined function is of type integer. The user-defined function is in the MATH schema, is written in assembler, and contains no SQL statements. This CREATE FUNCTION statement defines the user-defined function:
CREATE FUNCTION MATH."/" (INT, INT)
  RETURNS INTEGER
  SPECIFIC DIVIDE
  EXTERNAL NAME DIVIDE
  LANGUAGE ASSEMBLE
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED;
Suppose that you want the FINDSTRING user-defined function to work on BLOB data types, as well as CLOB types. You can define another instance of the user-defined function that specifies a BLOB type as input:
CREATE FUNCTION FINDSTRING (BLOB(500K), VARCHAR(200))
  RETURNS INTEGER
  CAST FROM FLOAT
  SPECIFIC FINDSTRINBLOB
  EXTERNAL NAME FNDBLOB
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED
  STOP AFTER 3 FAILURES;
Each instance of FINDSTRING uses a different application program to implement the user-defined function.

Example: Definition for a sourced user-defined function: Suppose you need a user-defined function that finds a string in a value with a distinct type of BOAT. BOAT is based on a BLOB data type. User-defined function FINDSTRING has already been defined. FINDSTRING takes a BLOB data type and performs the required function. The specific name for FINDSTRING is FINDSTRINBLOB. You can therefore define a sourced user-defined function based on FINDSTRING to do the string search on values of type BOAT. This CREATE FUNCTION statement defines the sourced user-defined function:
CREATE FUNCTION FINDSTRING (BOAT, VARCHAR(200))
  RETURNS INTEGER
  SPECIFIC FINDSTRINBOAT
  SOURCE SPECIFIC FINDSTRINBLOB;
Example: Definition for an SQL user-defined function: You can define an SQL user-defined function for the tangent of a value by using the existing built-in SIN and COS functions:
CREATE FUNCTION TAN (X DOUBLE)
  RETURNS DOUBLE
  LANGUAGE SQL
  CONTAINS SQL
  NO EXTERNAL ACTION
  DETERMINISTIC
  RETURN SIN(X)/COS(X);
Example: Definition for an external user-defined table function: An application programmer develops a user-defined function that receives two values and returns a table. The two input values are:
v A character string of maximum length 30 that describes a subject
v A character string of maximum length 255 that contains text to search for
The user-defined function scans documents on the subject for the search string and returns a list of documents that match the search criteria, with an abstract for each document. The list is in the form of a two-column table. The first column is a character column of length 16 that contains document IDs. The second column is a varying-character column of maximum length 5000 that contains document abstracts.

The user-defined function is written in COBOL, uses SQL only to perform queries, always produces the same output for given input, and should not execute as a parallel task. The program is reentrant, and successive invocations of the user-defined function share information. You expect an invocation of the user-defined function to return about 20 rows. The following CREATE FUNCTION statement defines the user-defined function:
CREATE FUNCTION DOCMATCH (VARCHAR(30), VARCHAR(255))
  RETURNS TABLE (DOC_ID CHAR(16), DOC_ABSTRACT VARCHAR(5000))
  EXTERNAL NAME DOCMTCH
  LANGUAGE COBOL
  PARAMETER STYLE SQL
  READS SQL DATA
  DETERMINISTIC
  NO EXTERNAL ACTION
  FENCED
  SCRATCHPAD
  FINAL CALL
  DISALLOW PARALLEL
  CARDINALITY 20;
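An invocation of this table function in a query might look like the following sketch; the literal subject and search string are hypothetical:

SELECT T.DOC_ID, T.DOC_ABSTRACT
  FROM TABLE(DOCMATCH('INDEX MAINTENANCE', 'performance')) AS T;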
Related reference:
Components of a user-defined function definition on page 498
ALTER FUNCTION (external) (DB2 SQL)
ALTER FUNCTION (SQL scalar) (DB2 SQL)
User-defined functions
A user-defined function is an extension to the SQL language. A user-defined function is a small program that you write, similar to a host language subprogram or function. However, a user-defined function is often the better choice for an SQL application because you can invoke it in an SQL statement.
This section contains information that applies to all user-defined functions and specific information about user-defined functions in languages other than Java.

The types of user-defined functions are:
v Sourced user-defined functions, which are based on existing built-in functions or user-defined functions
v External user-defined functions, which a programmer writes in a host language
v SQL user-defined functions, which contain the source code for the user-defined function in the user-defined function definition
User-defined functions can also be categorized as user-defined scalar functions or user-defined table functions:
v A user-defined scalar function returns a single-value answer each time it is invoked
v A user-defined table function returns a table to the SQL statement that references it
External user-defined functions can be user-defined scalar functions or user-defined table functions. Sourced and SQL user-defined functions can only be user-defined scalar functions.

Creating and using a user-defined function involves these steps:
v Setting up the environment for user-defined functions
  A systems administrator probably performs this step. The user-defined function environment is shown in the following figure.
Figure: The environment for user-defined functions, in which the function program runs in a WLM-established stored procedures address space.
It contains an application address space, from which a program invokes a user-defined function; a DB2 system, where the packages from the user-defined function are run; and a WLM-established address space, where the user-defined function is executed. The steps for setting up and maintaining the user-defined function environment are the same as for setting up and maintaining the environment for stored procedures in WLM-established address spaces.
v Writing and preparing the user-defined function
  This step is necessary only for an external user-defined function.
  The person who performs this step is called the user-defined function implementer.
v Defining the user-defined function to DB2
  The person who performs this step is called the user-defined function definer.
v Invoking the user-defined function from an SQL application
  The person who performs this step is called the user-defined function invoker.
Related concepts:
Java stored procedures and user-defined functions (DB2 Application Programming for Java)
Sourced functions
A sourced function is a function that invokes another function that already exists at the server. The function inherits the attributes of the underlying source function. The source function can be built-in, external, SQL, or sourced. Use sourced functions to build upon existing built-in functions or other user-defined functions. Sourced functions are useful to extend built-in aggregate and scalar functions for use on distinct types. To implement a sourced function, issue a CREATE FUNCTION statement with the name of the function upon which you want to base the sourced function.
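For example, assuming the EURO distinct type that was created earlier, a sourced function could extend the built-in AVG aggregate function to that distinct type. This is a sketch, not a definitive definition:

CREATE FUNCTION AVG (EURO)
  RETURNS EURO
  SOURCE SYSIBM.AVG (DECIMAL(9,2));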
Table 84. Characteristics of a user-defined function

For each characteristic of a user-defined function, the table identifies the corresponding CREATE FUNCTION or ALTER FUNCTION option and indicates whether the option is valid in a sourced function, an external function, or an SQL function. The characteristics are:
v User-defined function name
v Input parameter types and encoding schemes
v Output parameter types and encoding schemes
v Specific name
v External name
v Language
v Name of source function
v Parameter style
v Address space for user-defined functions
v Call with null input
v External actions
v Scratchpad specification
v Call function after SQL processing
v Consider function for parallel processing
v Package collection (NO COLLID or COLLID collection-id)
v WLM environment (WLM ENVIRONMENT name or WLM ENVIRONMENT name,*)
v CPU time for a function invocation (ASUTIME NO LIMIT or ASUTIME LIMIT integer)
v Load module stays in memory (STAY RESIDENT NO or STAY RESIDENT YES)
v Program type (PROGRAM TYPE MAIN or PROGRAM TYPE SUB)
v Security (SECURITY DB2, SECURITY USER, or SECURITY DEFINER)
v Run time options (RUN OPTIONS options)
v Pass DB2 environment information (NO DBINFO or DBINFO)
v Expected number of rows returned (CARDINALITY integer)
v Function resolution is based on the declared parameter types (STATIC DISPATCH)
v SQL expression that evaluates to the value returned by the function
v Encoding scheme for all string parameters
v For functions that are defined as LANGUAGE C, the representation of VARCHAR parameters and, if applicable, the returned result
v Number of abnormal terminations before the function is stopped (STOP AFTER SYSTEM DEFAULT FAILURES, STOP AFTER n FAILURES, or CONTINUE AFTER FAILURE)
v Package path (PACKAGE PATH package-path or NO PACKAGE PATH), which identifies the list of package collections that is to be used when the stored procedure is executed

The package collection, WLM environment, CPU time, load module residency, program type, security, number-of-abnormal-terminations, and package path options are valid only in external functions; they are not valid in sourced or SQL functions.

Notes:
1. RETURNS TABLE and CARDINALITY are valid only for user-defined table functions. For a single query, you can override the CARDINALITY value by specifying a CARDINALITY clause for the invocation of a user-defined table function in the SELECT statement.
2. An SQL user-defined function can return only one scalar value.
3. LANGUAGE SQL is not valid for an external user-defined function.
4. Only LANGUAGE SQL is valid for an SQL user-defined function.
5. MODIFIES SQL DATA and ALLOW PARALLEL are not valid for user-defined table functions.
6. MODIFIES SQL DATA and NO SQL are not valid for SQL user-defined functions.
7. PARAMETER STYLE JAVA is valid only with LANGUAGE JAVA. PARAMETER STYLE SQL is valid only with LANGUAGE values other than LANGUAGE JAVA.
8. RETURNS NULL ON NULL INPUT is not valid for an SQL user-defined function.
9. The PARAMETER VARCHAR clause can be specified in CREATE FUNCTION statements only.
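As note 1 indicates, the CARDINALITY value can be overridden for a single invocation of a user-defined table function. The following is a hedged sketch that reuses the DOCMATCH table function from the earlier example, with hypothetical argument values:

SELECT T.DOC_ID
  FROM TABLE(DOCMATCH('INDEX MAINTENANCE', 'performance') CARDINALITY 100) AS T;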
v If your user-defined function is not defined with parameters SCRATCHPAD or EXTERNAL ACTION, the user-defined function is not guaranteed to execute under the same task each time it is invoked.
v You cannot execute COMMIT or ROLLBACK statements in your user-defined function.
v You must close all cursors that were opened within a user-defined scalar function. DB2 returns an SQL error if a user-defined scalar function does not close all cursors that it opened before it completes.
v When you choose the language in which to write a user-defined function program, be aware of restrictions on the number of parameters that can be passed to a routine in that language. User-defined table functions in particular can require large numbers of parameters. Consult the programming guide for the language in which you plan to write the user-defined function for information about the number of parameters that can be passed.
v You cannot pass LOB file reference variables as parameters to user-defined functions.
v User-defined functions cannot return LOB file reference variables.
v You cannot pass parameters with the type XML to user-defined functions. You can specify tables or views that contain XML columns as table locator parameters. However, you cannot reference the XML columns in the body of the user-defined function.

Coding your user-defined function as a main program or as a subprogram: You can code your user-defined function as either a main program or a subprogram. The way that you code your program must agree with the way you defined the user-defined function: with the PROGRAM TYPE MAIN or PROGRAM TYPE SUB parameter. The main difference is that when a main program starts, Language Environment allocates the application program storage that the external user-defined function uses. When a main program ends, Language Environment closes files and releases dynamically allocated storage.

If you code your user-defined function as a subprogram and manage the storage and files yourself, you can get better performance. The user-defined function should always free any allocated storage before it exits. To keep data between invocations of the user-defined function, use a scratchpad.

You must code a user-defined table function that accesses external resources as a subprogram. Also ensure that the definer specifies the EXTERNAL ACTION parameter in the CREATE FUNCTION or ALTER FUNCTION statement. Program variables for a subprogram persist between invocations of the user-defined function, and use of the EXTERNAL ACTION parameter ensures that the user-defined function stays in the same address space from one invocation to another.

Parallelism considerations: If the definer specifies the parameter ALLOW PARALLEL in the definition of a user-defined scalar function, and the invoking SQL statement runs in parallel, the function can run under a parallel task. DB2 executes a separate instance of the user-defined function for each parallel task. When you write your function program, you need to understand how the following parameter values interact with ALLOW PARALLEL so that you can avoid unexpected results:
v SCRATCHPAD
  When an SQL statement invokes a user-defined function that is defined with the ALLOW PARALLEL parameter, DB2 allocates one scratchpad for each parallel task of each reference to the function. This can lead to unpredictable or incorrect results. For example, suppose that the user-defined function uses the scratchpad to count the number of times it is invoked. If a scratchpad is allocated for each parallel task, this count is the number of invocations done by the parallel task and not for the entire SQL statement, which is not the result that is wanted.
v FINAL CALL
  If a user-defined function performs an external action, such as sending a note, for each final call to the function, one note is sent for each parallel task instead of once for the function invocation.
v EXTERNAL ACTION
  Some user-defined functions with external actions can receive incorrect results if the function is executed by parallel tasks. For example, if the function sends a note for each initial call to the function, one note is sent for each parallel task instead of once for the function invocation.
v NOT DETERMINISTIC
  A user-defined function that is non-deterministic can generate incorrect results if it is run under a parallel task. For example, suppose that you execute the following query under parallel tasks:
SELECT * FROM T1 WHERE C1 = COUNTER();
COUNTER is a user-defined function that increments a variable in the scratchpad every time it is invoked. COUNTER is non-deterministic because the same input does not always produce the same output. Table T1 contains one column, C1, that has the following values:
1 2 3 4 5 6 7 8 9 10
When the query is executed with no parallelism, DB2 invokes COUNTER once for each row of table T1, and there is one scratchpad for counter, which DB2 initializes the first time that COUNTER executes. COUNTER returns 1 the first time it executes, 2 the second time, and so on. The result table for the query has the following values:
1 2 3 4 5 6 7 8 9 10
Now suppose that the query is run with parallelism, and DB2 creates three parallel tasks. DB2 executes the predicate WHERE C1 = COUNTER() for each parallel task. This means that each parallel task invokes its own instance of the
user-defined function and has its own scratchpad. DB2 initializes the scratchpad to zero on the first call to the user-defined function for each parallel task. If parallel task 1 processes rows 1 to 3, parallel task 2 processes rows 4 to 6, and parallel task 3 processes rows 7 to 10, the following results occur:
v When parallel task 1 executes, C1 has values 1, 2, and 3, and COUNTER returns values 1, 2, and 3, so the query returns values 1, 2, and 3.
v When parallel task 2 executes, C1 has values 4, 5, and 6, but COUNTER returns values 1, 2, and 3, so the query returns no rows.
v When parallel task 3 executes, C1 has values 7, 8, 9, and 10, but COUNTER returns values 1, 2, 3, and 4, so the query returns no rows.
Thus, instead of returning the 10 rows that you might expect from the query, DB2 returns only 3 rows.
Related concepts:
Java stored procedures and user-defined functions (DB2 Application Programming for Java)
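One way to avoid this problem is to define the function so that DB2 does not run it under parallel tasks. The following sketch assumes that COUNTER is an external C function; the external program name is hypothetical:

CREATE FUNCTION COUNTER ()
  RETURNS INTEGER
  EXTERNAL NAME 'CNTR'
  LANGUAGE C
  PARAMETER STYLE SQL
  NO SQL
  NOT DETERMINISTIC
  NO EXTERNAL ACTION
  SCRATCHPAD
  FENCED
  DISALLOW PARALLEL;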
Figure 26. Parameter conventions for a user-defined function. Register 1 points to the parameter list, which contains the results and result indicators, the SQLSTATE, the function name, the specific name, the message text area, and, if specified, the scratchpad, call type, and DBINFO.

Notes to Figure 26:
1. For a user-defined scalar function, only one result and one result indicator are passed.
2. Passed if the SCRATCHPAD option is specified in the user-defined function definition.
3. Passed if the FINAL CALL option is specified in a user-defined scalar function definition; always passed for a user-defined table function.
4. For PL/I, this value is the address of a pointer to the DBINFO data.
5. Passed if the DBINFO option is specified in the user-defined function definition.
For all data types except LOBs, ROWIDs, locators, and VARCHAR (with C language), see the tables listed in the following table for the host data types that are compatible with the data types in the user-defined function definition.
Table 85. Listing of tables of compatible data types

Language     Compatible data types table
Assembler    Compatibility of SQL and language data types on page 144
C            Compatibility of SQL and language data types on page 144
COBOL        Compatibility of SQL and language data types on page 144
PL/I         Compatibility of SQL and language data types on page 144
For LOBs, ROWIDs, and locators, see the following table for the assembler data types that are compatible with the data types in the user-defined function definition.
Table 86. Compatible assembler language declarations for LOBs, ROWIDs, and locators

TABLE LOCATOR, BLOB LOCATOR, CLOB LOCATOR, DBCLOB LOCATOR:
  DS FL4

BLOB(n):
  If n <= 65535:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CLn
  If n > 65535:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CL65535
                ORG var_data+(n-65535)

CLOB(n):
  If n <= 65535:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CLn
  If n > 65535:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CL65535
                ORG var_data+(n-65535)

DBCLOB(n):
  If m (=2*n) <= 65534:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CLm
  If m > 65534:
    var         DS 0FL4
    var_length  DS FL4
    var_data    DS CL65534
                ORG var_data+(m-65534)
ROWID:
  DS HL2,CL40
For LOBs, ROWIDs, VARCHARs, and locators see the following table for the C data types that are compatible with the data types in the user-defined function definition.
Table 87. Compatible C language declarations for LOBs, ROWIDs, VARCHARs, and locators

TABLE LOCATOR, BLOB LOCATOR, CLOB LOCATOR, DBCLOB LOCATOR (see note 1):
  unsigned long

BLOB(n):
  struct {unsigned long length;
          char data[n];
         } var;

CLOB(n):
  struct {unsigned long length;
          char var_data[n];
         } var;

DBCLOB(n):
  struct {unsigned long length;
          sqldbchar data[n];
         } var;

ROWID:
  struct {short int length;
          char data[40];
         } var;

VARCHAR(n) (see note 2):
  If PARAMETER VARCHAR NULTERM is specified or implied:
    char data[n+1];
  If PARAMETER VARCHAR STRUCTURE is specified:
    struct {short len;
            char data[n];
           } var;

Note:
1. The SQLUDF file, which is in data set DSN910.SDSNC.H, includes the typedef sqldbchar. Using sqldbchar lets you manipulate DBCS and Unicode UTF-16 data in the same format in which it is stored in DB2. sqldbchar also makes applications easier to port to other DB2 platforms.
2. This row does not apply to VARCHAR(n) FOR BIT DATA. BIT DATA is always passed in a structured representation.
For LOBs, ROWIDs, and locators, see the following table for the COBOL data types that are compatible with the data types in the user-defined function definition.
Table 88. Compatible COBOL declarations for LOBs, ROWIDs, and locators

TABLE LOCATOR, BLOB LOCATOR, CLOB LOCATOR, DBCLOB LOCATOR:
  01 var PIC S9(9) USAGE IS BINARY.

BLOB(n):
  If n <= 32767:
    01 var.
       49 var-LENGTH PIC 9(9) USAGE COMP.
       49 var-DATA PIC X(n).
  If length > 32767:
    01 var.
       02 var-LENGTH PIC S9(9) USAGE COMP.
       02 var-DATA.
          49 FILLER PIC X(32767).
          49 FILLER PIC X(32767).
          . . .
          49 FILLER PIC X(mod(n,32767)).

CLOB(n):
  If n <= 32767:
    01 var.
       49 var-LENGTH PIC 9(9) USAGE COMP.
       49 var-DATA PIC X(n).
  If length > 32767:
    01 var.
       02 var-LENGTH PIC S9(9) USAGE COMP.
       02 var-DATA.
          49 FILLER PIC X(32767).
          49 FILLER PIC X(32767).
          . . .
          49 FILLER PIC X(mod(n,32767)).
DBCLOB(n):
  If n <= 32767:
    01 var.
       49 var-LENGTH PIC 9(9) USAGE COMP.
       49 var-DATA PIC G(n) USAGE DISPLAY-1.
  If length > 32767:
    01 var.
       02 var-LENGTH PIC S9(9) USAGE COMP.
       02 var-DATA.
          49 FILLER PIC G(32767) USAGE DISPLAY-1.
          49 FILLER PIC G(32767) USAGE DISPLAY-1.
          . . .
          49 FILLER PIC G(mod(n,32767)) USAGE DISPLAY-1.

ROWID:
  01 var.
     49 var-LEN PIC 9(4) USAGE COMP.
     49 var-DATA PIC X(40).
For LOBs, ROWIDs, and locators, see the following table for the PL/I data types that are compatible with the data types in the user-defined function definition.
Table 89. Compatible PL/I declarations for LOBs, ROWIDs, and locators

TABLE LOCATOR, BLOB LOCATOR, CLOB LOCATOR, DBCLOB LOCATOR
   BIN FIXED(31)

BLOB(n)
   If n <= 32767:
   01 var,
      03 var_LENGTH BIN FIXED(31),
      03 var_DATA CHAR(n);
   If n > 32767:
   01 var,
      02 var_LENGTH BIN FIXED(31),
      02 var_DATA,
         03 var_DATA1(n) CHAR(32767),
         03 var_DATA2 CHAR(mod(n,32767));

CLOB(n)
   The same declarations as for BLOB(n).

DBCLOB(n)
   If n <= 16383:
   01 var,
      03 var_LENGTH BIN FIXED(31),
      03 var_DATA GRAPHIC(n);
   If n > 16383:
   01 var,
      02 var_LENGTH BIN FIXED(31),
      02 var_DATA,
         03 var_DATA1(n) GRAPHIC(16383),
         03 var_DATA2 GRAPHIC(mod(n,16383));

ROWID
   CHAR(40) VAR;
Result parameters: Set these values in your user-defined function before exiting. For a user-defined scalar function, you return one result parameter. For a user-defined table function, you return the same number of parameters as columns in the RETURNS TABLE clause of the CREATE FUNCTION statement. DB2 allocates a buffer for each result parameter value and passes the buffer address to the user-defined function. Your user-defined function places each result parameter value in its buffer. You must ensure that the length of the value you place in each output buffer does not exceed the buffer length. Use the SQL data type and length in the CREATE FUNCTION statement to determine the buffer length. See Parameters for external user-defined functions on page 503 to determine the host data type to use for each result parameter value. If the CREATE FUNCTION statement contains a CAST FROM clause, use a data type that corresponds to the SQL data type in the CAST FROM clause. Otherwise, use a data type that corresponds to the SQL data type in the RETURNS or RETURNS TABLE clause. To improve performance for user-defined table functions that return many columns, you can pass values for a subset of columns to the invoker. For example, a user-defined table function might be defined to return 100 columns, but the invoker needs values for only two columns. Use the DBINFO parameter to indicate
to DB2 the columns for which you will return values. Then return values for only those columns. See DBINFO for information about how to indicate the columns of interest.

Input parameter indicators: These are SMALLINT values, which DB2 sets before it passes control to the user-defined function. You use the indicators to determine whether the corresponding input parameters are null. The number and order of the indicators are the same as the number and order of the input parameters. On entry to the user-defined function, each indicator contains one of these values:
0        The input parameter value is not null.
negative The input parameter value is null.
Code the user-defined function to check all indicators for null values unless the user-defined function is defined with RETURNS NULL ON NULL INPUT. A user-defined function that is defined with RETURNS NULL ON NULL INPUT executes only if all input parameters are not null.

Result indicators: These are SMALLINT values, which you must set before the user-defined function ends to indicate to the invoking program whether each result parameter value is null. A user-defined scalar function has one result indicator. A user-defined table function has the same number of result indicators as the number of result parameters. The order of the result indicators is the same as the order of the result parameters. Set each result indicator to one of these values:
0 or positive The result parameter is not null.
negative      The result parameter is null.

SQLSTATE value: This CHAR(5) value represents the SQLSTATE that is passed in to the program from the database manager. The initial value is '00000'. Although the SQLSTATE is usually not set by the program, it can be set as the result SQLSTATE that is used to return an error or a warning. Returned values that start with anything other than '00', '01', or '02' are error conditions.

User-defined function name: DB2 sets this value in the parameter list before the user-defined function executes. This value is VARCHAR(257): 128 bytes for the schema name, 1 byte for a period, and 128 bytes for the user-defined function name. If you use the same code to implement multiple versions of a user-defined function, you can use this parameter to determine which version of the function the invoker wants to execute.

Specific name: DB2 sets this value in the parameter list before the user-defined function executes. This value is VARCHAR(128) and is either the specific name from the CREATE FUNCTION statement or a specific name that DB2 generated. If you use the same code to implement multiple versions of a user-defined function, you can use this parameter to determine which version of the function the invoker wants to execute.

Diagnostic message: Your user-defined function can set this CHAR or VARCHAR value to a character string of up to 1000 bytes before exiting. Use this area to pass descriptive information about an error or warning to the invoker.
DB2 allocates a buffer for this area and passes you the buffer address in the parameter list. At least the first 17 bytes of the value you put in the buffer appear in the SQLERRMC field of the SQLCA that is returned to the invoker. The exact number of bytes depends on the number of other tokens in SQLERRMC. Do not use X'FF' in your diagnostic message. DB2 uses this value to delimit tokens.

Scratchpad: If the definer specified SCRATCHPAD in the CREATE FUNCTION statement, DB2 allocates a buffer for the scratchpad area and passes its address to the user-defined function. Before the user-defined function is invoked for the first time in an SQL statement, DB2 sets the length of the scratchpad in the first 4 bytes of the buffer and then sets the scratchpad area to X'00'. DB2 does not reinitialize the scratchpad between invocations of a correlated subquery. You must ensure that your user-defined function does not write more bytes to the scratchpad than the scratchpad length.

Call type: For a user-defined scalar function, if the definer specified FINAL CALL in the CREATE FUNCTION statement, DB2 passes this parameter to the user-defined function. For a user-defined table function, DB2 always passes this parameter to the user-defined function.

On entry to a user-defined scalar function, the call type parameter has one of the following values:
-1   This is the first call to the user-defined function for the SQL statement. For a first call, all input parameters are passed to the user-defined function. In addition, the scratchpad, if allocated, is set to binary zeros.
0    This is a normal call. For a normal call, all the input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it.
1    This is a final call. For a final call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it. This type of final call occurs when the invoking application explicitly closes a cursor. When a value of 1 is passed to a user-defined function, the user-defined function can execute SQL statements.
255  This is a final call. For a final call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it. This type of final call occurs when the invoking application executes a COMMIT or ROLLBACK statement, or when the invoking application abnormally terminates. When a value of 255 is passed to the user-defined function, the user-defined function cannot execute any SQL statements, except for CLOSE CURSOR. If the user-defined function executes any close cursor statements during this type of final call, the user-defined function should tolerate SQLCODE -501 because DB2 might have already closed cursors before the final call.

During the first call, your user-defined scalar function should acquire any system resources it needs. During the final call, the user-defined scalar function should release any resources it acquired during the first call. The user-defined scalar function should return a result value only during normal calls. DB2 ignores any results that are returned during a final call. However, the user-defined scalar function can set the SQLSTATE and diagnostic message area during the final call.
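For example, a user-defined scalar function that is written in C and defined with SCRATCHPAD and FINAL CALL (but without DBINFO) might dispatch on the call type as in the following sketch. This sketch is not one of the shipped samples: the function name, the scratchpad layout, and the computation are illustrative only, and the function is assumed to take one INTEGER input parameter and return an INTEGER result. The parameter order follows the conventions that are shown in the C examples later in this section.

#include <string.h>

struct spad {                        /* assumed layout of the default 100-byte scratchpad */
  long len;                          /* scratchpad length, set by DB2                     */
  long calls;                        /* counter that this function keeps between calls    */
  char unused[96];
};

void dispatch(long *in1,             /* input parameter                                   */
              long *out,             /* result parameter                                  */
              short *in1_ind,        /* input null indicator                              */
              short *out_ind,        /* result null indicator                             */
              char sqlstate[6],      /* SQLSTATE returned to DB2                          */
              char fname[138],       /* qualified function name                           */
              char specname[129],    /* specific function name                            */
              char msgtext[71],      /* diagnostic message area                           */
              struct spad *scratch,  /* scratchpad                                        */
              long *call_type)       /* call type                                         */
{
  switch (*call_type) {
    case -1:                         /* first call: DB2 set the scratchpad to zeros       */
      /* acquire any system resources that the function needs here                        */
      break;
    case 0:                          /* normal call: produce a result                     */
      scratch->calls++;              /* state in the scratchpad survives between calls    */
      if (*in1_ind < 0) {            /* null input                                        */
        *out_ind = -1;               /* return a null result                              */
      } else {
        *out = *in1 + scratch->calls;  /* illustrative computation only                   */
        *out_ind = 0;
      }
      break;
    case 1:                          /* final call: the cursor was closed                 */
    case 255:                        /* final call: COMMIT, ROLLBACK, or abnormal end     */
      /* release resources; DB2 ignores any result that is set here                       */
      break;
    default:
      strcpy(sqlstate, "38999");     /* unexpected call type: return an error SQLSTATE    */
      break;
  }
  return;
}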
If an invoking SQL statement contains more than one user-defined scalar function, and one of those user-defined functions returns an error SQLSTATE, DB2 invokes all of the user-defined functions for a final call, and the invoking SQL statement receives the SQLSTATE of the first user-defined function with an error.

On entry to a user-defined table function, the call type parameter has one of the following values:
-2   This is the first call to the user-defined function for the SQL statement. A first call occurs only if the FINAL CALL keyword is specified in the user-defined function definition. For a first call, all input parameters are passed to the user-defined function. In addition, the scratchpad, if allocated, is set to binary zeros.
-1   This is the open call to the user-defined function by an SQL statement. If FINAL CALL is not specified in the user-defined function definition, all input parameters are passed to the user-defined function, and the scratchpad, if allocated, is set to binary zeros during the open call. If FINAL CALL is specified for the user-defined function, DB2 does not modify the scratchpad.
0    This is a fetch call to the user-defined function by an SQL statement. For a fetch call, all input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it.
1    This is a close call. For a close call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it.
2    This is a final call. This type of final call occurs only if FINAL CALL is specified in the user-defined function definition. For a final call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it. This type of final call occurs when the invoking application executes a CLOSE CURSOR statement.
255  This is a final call. For a final call, no input parameters are passed to the user-defined function. If a scratchpad is also passed, DB2 does not modify it. This type of final call occurs when the invoking application executes a COMMIT or ROLLBACK statement, or when the invoking application abnormally terminates. When a value of 255 is passed to the user-defined function, the user-defined function cannot execute any SQL statements, except for CLOSE CURSOR. If the user-defined function executes any close cursor statements during this type of final call, the user-defined function should tolerate SQLCODE -501 because DB2 might have already closed cursors before the final call.

If a user-defined table function is defined with FINAL CALL, the user-defined function should allocate any resources it needs during the first call and release those resources during the final call that sets a value of 2. If a user-defined table function is defined with NO FINAL CALL, the user-defined function should allocate any resources it needs during the open call and release those resources during the close call.
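A corresponding sketch for a user-defined table function that is written in C follows. Again, this is not one of the shipped samples: the names are illustrative, and the function is assumed to be defined with SCRATCHPAD and FINAL CALL, to take one INTEGER input parameter, and to return a single VARCHAR(10) result column. The fetch-call and close-call behavior that the sketch relies on is described after it.

#include <string.h>

struct tfspad {                      /* assumed layout of the default 100-byte scratchpad */
  long len;                          /* scratchpad length, set by DB2                     */
  long rows_returned;                /* position that is kept across fetch calls          */
  char unused[92];
};

void tf_dispatch(long *in1,          /* input parameter: number of rows to generate       */
                 char result1[11],   /* result column, VARCHAR(10), null-terminated       */
                 short *in1_ind,     /* input null indicator                              */
                 short *res1_ind,    /* result column null indicator                      */
                 char sqlstate[6],   /* SQLSTATE returned to DB2                          */
                 char fname[138],    /* qualified function name                           */
                 char specname[129], /* specific function name                            */
                 char msgtext[71],   /* diagnostic message area                           */
                 struct tfspad *scratch,
                 long *call_type)
{
  long limit;
  switch (*call_type) {
    case -2:                         /* first call: acquire resources                     */
      break;
    case -1:                         /* open call: position at the start of the result    */
      scratch->rows_returned = 0;
      break;
    case 0:                          /* fetch call: return one row per call               */
      limit = (*in1_ind < 0) ? 0 : *in1;
      if (scratch->rows_returned >= limit) {
        strcpy(sqlstate, "02000");   /* no more rows to return                            */
      } else {
        strcpy(result1, "ROW");      /* illustrative column value                         */
        *res1_ind = 0;
        scratch->rows_returned++;
      }
      break;
    case 1:                          /* close call                                        */
      break;
    case 2:                          /* final call: the cursor was closed                 */
    case 255:                        /* final call: COMMIT, ROLLBACK, or abnormal end     */
      /* release any resources that were acquired on the first call                       */
      break;
  }
  return;
}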
During a fetch call, the user-defined table function should return a row. If the user-defined function has no more rows to return, it should set the SQLSTATE to 02000.

During the close call, a user-defined table function can set the SQLSTATE and diagnostic message area.

If a user-defined table function is invoked from a subquery, the user-defined table function receives a CLOSE call for each invocation of the subquery within the higher level query, and a subsequent OPEN call for the next invocation of the subquery within the higher level query.

DBINFO: If the definer specified DBINFO in the CREATE FUNCTION statement, DB2 passes the DBINFO structure to the user-defined function. DBINFO contains information about the environment of the user-defined function caller. It contains the following fields, in the order shown:

Location name length
   An unsigned 2-byte integer field. It contains the length of the location name in the next field.
Location name
   A 128-byte character field. It contains the name of the location to which the invoker is currently connected.
Authorization ID length
   An unsigned 2-byte integer field. It contains the length of the authorization ID in the next field.
Authorization ID
   A 128-byte character field. It contains the authorization ID of the application from which the user-defined function is invoked, padded on the right with blanks. If this user-defined function is nested within other user-defined functions, this value is the authorization ID of the application that invoked the highest-level user-defined function.
Subsystem code page
   A 48-byte structure that consists of 10 integer fields and an eight-byte reserved area. These fields provide information about the CCSIDs of the subsystem from which the user-defined function is invoked.
Table qualifier length
   An unsigned 2-byte integer field. It contains the length of the table qualifier in the next field. If the table name field is not used, this field contains 0.
Table qualifier
   A 128-byte character field. It contains the qualifier of the table that is specified in the table name field.
Table name length
   An unsigned 2-byte integer field. It contains the length of the table name in the next field. If the table name field is not used, this field contains 0.
Table name
   A 128-byte character field. This field contains the name of the table for the update or insert operation if the reference to the user-defined function in the invoking SQL statement is in one of the following places:
   v The right side of a SET clause in an update operation
   v In the VALUES list of an insert operation
   Otherwise, this field is blank.
Column name length
   An unsigned 2-byte integer field. It contains the length of the column name in the next field. If no column name is passed to the user-defined function, this field contains 0.
Column name
   A 128-byte character field. This field contains the name of the column that the update or insert operation modifies if the reference to the user-defined function in the invoking SQL statement is in one of the following places:
   v The right side of a SET clause in an update operation
   v In the VALUES list of an insert operation
   Otherwise, this field is blank.
Product information
   An 8-byte character field that identifies the product on which the user-defined function executes. This field has the form pppvvrrm, where:
   v ppp is a 3-byte product code:
     ARI   DB2 Server for VSE & VM
     DSN   DB2 for z/OS
     QSQ   DB2 for i
     SQL   DB2 for Linux, UNIX, and Windows
   v vv is a 2-digit version identifier.
   v rr is a 2-digit release identifier.
   v m is a 1-digit maintenance level identifier.
Reserved area
   2 bytes.
Operating system
   A 4-byte integer field. It identifies the operating system on which the program that invokes the user-defined function runs. The value is one of these:
   0     Unknown
   1     OS/2
   3     Windows
   4     AIX
   5     Windows NT
   6     HP-UX
   7     Solaris
   8     z/OS
   13    Siemens Nixdorf
   15    Windows 95
   16    SCO UNIX
   18    Linux
   19    DYNIX/ptx
   24    Linux for S/390
   25    Linux for System z
   26, 27, 28, 29, 400   Other platforms
Number of entries in table function column list
   An unsigned 2-byte integer field.
Reserved area
   26 bytes.
Table function column list pointer
   If a table function is defined, this field is a pointer to an array that contains 1000 2-byte integers. DB2 dynamically allocates the array. If a table function is not defined, this pointer is null.
   Only the first n entries, where n is the value in the field entitled number of entries in table function column list, are of interest. n is greater than or equal to 0 and less than or equal to the number of result columns that are defined for the user-defined function in the RETURNS TABLE clause of the CREATE FUNCTION statement. The values correspond to the numbers of the columns that the invoking statement needs from the table function. A value of 1 means the first defined result column, 2 means the second defined result column, and so on. The values can be in any order. If n is equal to 0, the first array element is 0. This is the case for a statement like the following one, where the invoking statement needs no column values.
SELECT COUNT(*) FROM TABLE(TF(...)) AS QQ
This array represents an opportunity for optimization. The user-defined function does not need to return all values for all the result columns of the table function. Instead, the user-defined function can return only those columns that are needed in the particular context, which you identify by number in the array. However, if this optimization complicates the user-defined function logic enough to cancel the performance benefit, you might choose to return every defined column.

Unique application identifier
   This field is a pointer to a string that uniquely identifies the application's connection to DB2. The string is regenerated for each connection to DB2. The string is the LUWID, which consists of a fully-qualified LU network name followed by a period and an LUW instance number. The LU network name consists of a 1- to 8-character network ID, a period, and a 1- to 8-character network LU name. The LUW instance number consists of 12 hexadecimal characters that uniquely identify the unit of work.
Reserved area
   20 bytes.

If you write your user-defined function in C or C++, you can use the declarations in member SQLUDF of DSN910.SDSNC.H for many of the passed parameters. To include SQLUDF, make these changes to your program:
v Put this statement in your source code:
#include <sqludf.h>
v Include the DSN910.SDSNC.H data set in the SYSLIB concatenation for the compile step of your program preparation job.
v Specify the NOMARGINS and NOSEQUENCE options in the compile step of your program preparation job.

Examples of receiving parameters in a user-defined function: The following examples show how a user-defined function that is written in each of the supported host languages receives the parameter list that is passed by DB2. These examples assume that the user-defined function is defined with the SCRATCHPAD, FINAL CALL, and DBINFO parameters.

Assembler: The following figure shows the parameter conventions for a user-defined scalar function that is written as a main program that receives two parameters and returns one result. For an assembler language user-defined function that is a subprogram, the conventions are the same. In either case, you must include the CEEENTRY and CEEEXIT macros.
MYMAIN   CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
         USING PROGAREA,R13
         L     R7,0(R1)           GET POINTER TO PARM1
         MVC   PARM1(4),0(R7)     MOVE VALUE INTO LOCAL COPY OF PARM1
         L     R7,4(R1)           GET POINTER TO PARM2
         MVC   PARM2(4),0(R7)     MOVE VALUE INTO LOCAL COPY OF PARM2
         L     R7,12(R1)          GET POINTER TO INDICATOR 1
         MVC   F_IND1(2),0(R7)    MOVE PARM1 INDICATOR TO LOCAL STORAGE
         LH    R7,F_IND1          MOVE PARM1 INDICATOR INTO R7
         LTR   R7,R7              CHECK IF IT IS NEGATIVE
         BM    NULLIN             IF SO, PARM1 IS NULL
         L     R7,16(R1)          GET POINTER TO INDICATOR 2
         MVC   F_IND2(2),0(R7)    MOVE PARM2 INDICATOR TO LOCAL STORAGE
         LH    R7,F_IND2          MOVE PARM2 INDICATOR INTO R7
         LTR   R7,R7              CHECK IF IT IS NEGATIVE
         BM    NULLIN             IF SO, PARM2 IS NULL
         . . .
NULLIN   L     R7,8(R1)           GET ADDRESS OF AREA FOR RESULT
         MVC   0(9,R7),RESULT     MOVE A VALUE INTO RESULT AREA
         L     R7,20(R1)          GET ADDRESS OF AREA FOR RESULT IND
         MVC   0(2,R7),F_INDR     MOVE A VALUE INTO INDICATOR AREA
         . . .
         CEETERM RC=0
*******************************************************************
* VARIABLE DECLARATIONS AND EQUATES                               *
*******************************************************************
R1       EQU   1                  REGISTER 1
R7       EQU   7                  REGISTER 7
PPA      CEEPPA ,                 CONSTANTS DESCRIBING THE CODE BLOCK
         LTORG ,                  PLACE LITERAL POOL HERE
PROGAREA DSECT
         ORG   *+CEEDSASZ         LEAVE SPACE FOR DSA FIXED PART
PARM1    DS    F                  PARAMETER 1
PARM2    DS    F                  PARAMETER 2
RESULT   DS    CL9                RESULT
F_IND1   DS    H                  INDICATOR FOR PARAMETER 1
F_IND2   DS    H                  INDICATOR FOR PARAMETER 2
F_INDR   DS    H                  INDICATOR FOR RESULT
PROGSIZE EQU   *-PROGAREA
         CEEDSA ,                 MAPPING OF THE DYNAMIC SAVE AREA
         CEECAA ,                 MAPPING OF THE COMMON ANCHOR AREA
         END   MYMAIN
C or C++: For C or C++ user-defined functions, the conventions for passing parameters are different for main programs and subprograms. For subprograms, you pass the parameters directly. For main programs, you use the standard argc and argv variables to access the input and output parameters: v The argv variable contains an array of pointers to the parameters that are passed to the user-defined function. All string parameters that are passed back to DB2 must be null terminated. argv[0] contains the address of the load module name for the user-defined function. argv[1] through argv[n] contain the addresses of parameters 1 through n. v The argc variable contains the number of parameters that are passed to the external user-defined function, including argv[0]. The following figure shows the parameter conventions for a user-defined scalar function that is written as a main program that receives two parameters and returns one result.
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
main(argc,argv)
int argc;
char *argv[];
{
  /***************************************************/
  /* Assume that the user-defined function invocation*/
  /* included 2 input parameters in the parameter    */
  /* list. Also assume that the definition includes  */
  /* the SCRATCHPAD, FINAL CALL, and DBINFO options, */
  /* so DB2 passes the scratchpad, calltype, and     */
  /* dbinfo parameters.                              */
  /* The argv vector contains these entries:         */
  /*   argv[0]      1 load module name               */
  /*   argv[1-2]    2 input parms                    */
  /*   argv[3]      1 result parm                    */
  /*   argv[4-5]    2 null indicators                */
  /*   argv[6]      1 result null indicator          */
  /*   argv[7]      1 SQLSTATE variable              */
  /*   argv[8]      1 qualified func name            */
  /*   argv[9]      1 specific func name             */
  /*   argv[10]     1 diagnostic string              */
  /*   argv[11]     1 scratchpad                     */
  /*   argv[12]     1 call type                      */
  /*   argv[13]   + 1 dbinfo                         */
  /*              -----                              */
  /*               14 for the argc variable          */
  /***************************************************/
  if (argc != 14)
  {
    . . .
    /**********************************************************/
    /* This section would contain the code executed if the    */
    /* user-defined function is invoked with the wrong number */
    /* of parameters.                                         */
    /**********************************************************/
  }
  /***************************************************/
  /* Assume the first parameter is an integer.       */
  /* The following code shows how to copy the integer*/
  /* parameter into the application storage.         */
  /***************************************************/
  int parm1;
  parm1 = *(int *) argv[1];
  /***************************************************/
  /* Access the null indicator for the first         */
  /* parameter on the invoked user-defined function  */
  /* as follows:                                     */
  /***************************************************/
  short int ind1;
  ind1 = *(short int *) argv[4];
  /***************************************************/
  /* Use the following expression to assign          */
  /* xxxxx to the SQLSTATE returned to caller on     */
  /* the SQL statement that contains the invoked     */
  /* user-defined function.                          */
  /***************************************************/
  strcpy(argv[7],"xxxxx");
  /***************************************************/
  /* Obtain the value of the qualified function      */
  /* name with this expression.                      */
  /***************************************************/
  char f_func[28];
  strcpy(f_func,argv[8]);
  /***************************************************/
  /* Obtain the value of the specific function       */
  /* name with this expression.                      */
  /***************************************************/
  char f_spec[19];
  strcpy(f_spec,argv[9]);
  /***************************************************/
  /* Use the following expression to assign          */
  /* yyyyyyyy to the diagnostic string returned      */
  /* in the SQLCA associated with the invoked        */
  /* user-defined function.                          */
  /***************************************************/
  strcpy(argv[10],"yyyyyyyy");
  /***************************************************/
  /* Use the following expression to assign the      */
  /* result of the function.                         */
  /***************************************************/
  char l_result[11];
  strcpy(argv[3],l_result);
  . . .
}
The following figure shows the parameter conventions for a user-defined scalar function written as a C subprogram that receives two parameters and returns one result.
#pragma runopts(plist(os)) #include <stdlib.h> #include <stdio.h> #include <string.h> #include <sqludf.h> void myfunc(long *parm1, char parm2[11], char result[11], short *f_ind1, short *f_ind2, short *f_indr, char udf_sqlstate[6], char udf_fname[138], char udf_specname[129], char udf_msgtext[71], struct sqludf_scratchpad *udf_scratchpad, long *udf_call_type,
struct sql_dbinfo *udf_dbinfo); { /***************************************************/ /* Declare local copies of parameters */ /***************************************************/ int l_p1; char l_p2[11]; short int l_ind1; short int l_ind2; char ludf_sqlstate[6]; /* SQLSTATE */ char ludf_fname[138]; /* function name */ char ludf_specname[129]; /* specific function name */ char ludf_msgtext[71] /* diagnostic message text*/ sqludf_scratchpad *ludf_scratchpad; /* scratchpad */ long *ludf_call_type; /* call type */ sqludf_dbinfo *ludf_dbinfo /* dbinfo */ /***************************************************/ /* Copy each of the parameters in the parameter */ /* list into a local variable to demonstrate */ /* how the parameters can be referenced. */ /***************************************************/ l_p1 = *parm1; strcpy(l_p2,parm2); l_ind1 = *f_ind1; l_ind1 = *f_ind2; strcpy(ludf_sqlstate,udf_sqlstate); strcpy(ludf_fname,udf_fname); strcpy(ludf_specname,udf_specname); l_udf_call_type = *udf_call_type; strcpy(ludf_msgtext,udf_msgtext); memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad)); memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo)); . . . }
The following figure shows the parameter conventions for a user-defined scalar function that is written as a C++ subprogram that receives two parameters and returns one result. This example demonstrates that you must use an extern "C" modifier to indicate that you want the C++ subprogram to receive parameters according to the C linkage convention. This modifier is necessary because the CEEPIPI CALL_SUB interface, which DB2 uses to call the user-defined function, passes parameters using the C linkage convention.
#pragma runopts(plist(os)) #include <stdlib.h> #include <stdio.h> #include <sqludf.h> extern "C" void myfunc(long *parm1, char parm2[11], char result[11], short *f_ind1, short *f_ind2, short *f_indr, char udf_sqlstate[6], char udf_fname[138], char udf_specname[129], char udf_msgtext[71], struct sqludf_scratchpad *udf_scratchpad, long *udf_call_type, struct sql_dbinfo *udf_dbinfo); { /***************************************************/ /* Define local copies of parameters. */ /***************************************************/ int l_p1; char l_p2[11]; short int l_ind1; short int l_ind2;
char ludf_sqlstate[6]; /* SQLSTATE */ char ludf_fname[138]; /* function name */ char ludf_specname[129]; /* specific function name */ char ludf_msgtext[71] /* diagnostic message text*/ sqludf_scratchpad *ludf_scratchpad; /* scratchpad */ long *ludf_call_type; /* call type */ sqludf_dbinfo *ludf_dbinfo /* dbinfo */ /***************************************************/ /* Copy each of the parameters in the parameter */ /* list into a local variable to demonstrate */ /* how the parameters can be referenced. */ /***************************************************/ l_p1 = *parm1; strcpy(l_p2,parm2); l_ind1 = *f_ind1; l_ind1 = *f_ind2; strcpy(ludf_sqlstate,udf_sqlstate); strcpy(ludf_fname,udf_fname); strcpy(ludf_specname,udf_specname); l_udf_call_type = *udf_call_type; strcpy(ludf_msgtext,udf_msgtext); memcpy(&ludf_scratchpad,udf_scratchpad,sizeof(ludf_scratchpad)); memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo)); . . . }
COBOL: The following figure shows the parameter conventions for a user-defined table function that is written as a main program that receives two parameters and returns two results. For a COBOL user-defined function that is a subprogram, the conventions are the same.
CBL APOST,RES,RENT IDENTIFICATION DIVISION. . . . DATA DIVISION. . . . LINKAGE SECTION. ********************************************************* * Declare each of the parameters * ********************************************************* 01 UDFPARM1 PIC S9(9) USAGE COMP. 01 UDFPARM2 PIC X(10). . . . ********************************************************* * Declare these variables for result parameters * ********************************************************* 01 UDFRESULT1 PIC X(10). 01 UDFRESULT2 PIC X(10). . . . ********************************************************* * Declare a null indicator for each parameter * ********************************************************* 01 UDF-IND1 PIC S9(4) USAGE COMP. 01 UDF-IND2 PIC S9(4) USAGE COMP. . . . ********************************************************* * Declare a null indicator for result parameter * ********************************************************* 01 UDF-RIND1 PIC S9(4) USAGE COMP. 01 UDF-RIND2 PIC S9(4) USAGE COMP.
. . . ********************************************************* * Declare the SQLSTATE that can be set by the * * user-defined function * ********************************************************* 01 UDF-SQLSTATE PIC X(5). ********************************************************* * Declare the qualified function name * ********************************************************* 01 UDF-FUNC. 49 UDF-FUNC-LEN PIC 9(4) USAGE BINARY. 49 UDF-FUNC-TEXT PIC X(137). ********************************************************* * Declare the specific function name * ********************************************************* 01 UDF-SPEC. 49 UDF-SPEC-LEN PIC 9(4) USAGE BINARY. 49 UDF-SPEC-TEXT PIC X(128). ********************************************************* * Declare SQL diagnostic message token * ********************************************************* 01 UDF-DIAG. 49 UDF-DIAG-LEN PIC 9(4) USAGE BINARY. 49 UDF-DIAG-TEXT PIC X(1000). ********************************************************* * Declare the scratchpad * ********************************************************* 01 UDF-SCRATCHPAD. 49 UDF-SPAD-LEN PIC 9(9) USAGE BINARY. 49 UDF-SPAD-TEXT PIC X(100). ********************************************************* * Declare the call type * ********************************************************* 01 UDF-CALL-TYPE PIC 9(9) USAGE BINARY. ********************************************************* * CONSTANTS FOR DB2-EBCODING-SCHEME. * ********************************************************* 77 SQLUDF-ASCII PIC 9(9) VALUE 1. 77 SQLUDF-EBCDIC PIC 9(9) VALUE 2. 77 SQLUDF-UNICODE PIC 9(9) VALUE 3. ********************************************************* * Structure used for DBINFO * ********************************************************* 01 SQLUDF-DBINFO. * location name length 05 DBNAMELEN PIC 9(4) USAGE BINARY. * location name 05 DBNAME PIC X(128). * authorization ID length 05 AUTHIDLEN PIC 9(4) USAGE BINARY. * authorization ID 05 AUTHID PIC X(128). * environment CCSID information 05 CODEPG PIC X(48). 05 CDPG-DB2 REDEFINES CODEPG. 10 DB2-CCSIDS OCCURS 3 TIMES. 15 DB2-SBCS PIC 9(9) USAGE BINARY. 15 DB2-DBCS PIC 9(9) USAGE BINARY. 15 DB2-MIXED PIC 9(9) USAGE BINARY. 10 ENCODING-SCHEME PIC 9(9) USAGE BINARY. 10 RESERVED PIC X(8). * other platform-specific deprecated CCSID structures not included here * schema name length 05 TBSCHEMALEN PIC 9(4) USAGE BINARY. * schema name 05 TBSCHEMA PIC X(128).
table name length 05 TBNAMELEN PIC 9(4) USAGE BINARY. * table name 05 TBNAME PIC X(128). * column name length 05 COLNAMELEN PIC 9(4) USAGE BINARY. * column name 05 COLNAME PIC X(128). * product information 05 VER-REL PIC X(8). * reserved for expansion 05 RESD0 PIC X(2). * platform type 05 PLATFORM PIC 9(9) USAGE BINARY. * number of entries in tfcolumn list array (tfcolumn, below) 05 NUMTFCOL PIC 9(4) USAGE BINARY. * reserved for expansion 05 RESD1 PIC X(26). * tfcolumn will be allocated dynamically if TF is defined * otherwise this will be a null pointer 05 TFCOLUMN USAGE IS POINTER. * Application identifier 05 APPL-ID USAGE IS POINTER. * reserved for expansion 05 RESD2 PIC X(20). * PROCEDURE DIVISION USING UDFPARM1, UDFPARM2, UDFRESULT1, UDFRESULT2, UDF-IND1, UDF-IND2, UDF-RIND1, UDF-RIND2, UDF-SQLSTATE, UDF-FUNC, UDF-SPEC, UDF-DIAG, UDF-SCRATCHPAD, UDF-CALL-TYPE, SQLUDF-DBINFO.
PL/I: The following figure shows the parameter conventions for a user-defined scalar function that is written as a main program that receives two parameters and returns one result. For a PL/I user-defined function that is a subprogram, the conventions are the same.
*PROCESS SYSTEM(MVS);
MYMAIN: PROC(UDF_PARM1, UDF_PARM2, UDF_RESULT,
             UDF_IND1, UDF_IND2, UDF_INDR,
             UDF_SQLSTATE, UDF_NAME, UDF_SPEC_NAME, UDF_DIAG_MSG,
             UDF_SCRATCHPAD, UDF_CALL_TYPE, UDF_DBINFO)
        OPTIONS(MAIN NOEXECOPS REENTRANT);
DCL UDF_PARM1 BIN FIXED(31);           /* first parameter          */
DCL UDF_PARM2 CHAR(10);                /* second parameter         */
DCL UDF_RESULT CHAR(10);               /* result parameter         */
DCL UDF_IND1 BIN FIXED(15);            /* indicator for 1st parm   */
DCL UDF_IND2 BIN FIXED(15);            /* indicator for 2nd parm   */
DCL UDF_INDR BIN FIXED(15);            /* indicator for result     */
DCL UDF_SQLSTATE CHAR(5);              /* SQLSTATE returned to DB2 */
DCL UDF_NAME CHAR(137) VARYING;        /* Qualified function name  */
DCL UDF_SPEC_NAME CHAR(128) VARYING;   /* Specific function name   */
DCL UDF_DIAG_MSG CHAR(70) VARYING;     /* Diagnostic string        */
DCL 01 UDF_SCRATCHPAD,                 /* Scratchpad               */
      03 UDF_SPAD_LEN BIN FIXED(31),
      03 UDF_SPAD_TEXT CHAR(100);
DCL UDF_CALL_TYPE BIN FIXED(31);       /* Call Type                */
DCL DBINFO PTR;
/* CONSTANTS FOR DB2_ENCODING_SCHEME */
DCL SQLUDF_ASCII BIN FIXED(15) INIT(1);
DCL SQLUDF_EBCDIC BIN FIXED(15) INIT(2);
DCL SQLUDF_MIXED BIN FIXED(15) INIT(3);
DCL 01 UDF_DBINFO BASED(DBINFO),               /* Dbinfo                 */
      03 UDF_DBINFO_LLEN BIN FIXED(15),        /* location length        */
      03 UDF_DBINFO_LOC CHAR(128),             /* location name          */
      03 UDF_DBINFO_ALEN BIN FIXED(15),        /* auth ID length         */
      03 UDF_DBINFO_AUTH CHAR(128),            /* authorization ID       */
      03 UDF_DBINFO_CDPG,                      /* environment CCSID info */
         05 DB2_CCSIDS(3),
            07 R1 BIN FIXED(15),               /* Reserved               */
            07 DB2_SBCS BIN FIXED(15),         /* SBCS CCSID             */
            07 R2 BIN FIXED(15),               /* Reserved               */
            07 DB2_DBCS BIN FIXED(15),         /* DBCS CCSID             */
            07 R3 BIN FIXED(15),               /* Reserved               */
            07 DB2_MIXED BIN FIXED(15),        /* MIXED CCSID            */
         05 DB2_ENCODING_SCHEME BIN FIXED(31),
         05 DB2_CCSID_RESERVED CHAR(8),
      03 UDF_DBINFO_SLEN BIN FIXED(15),        /* schema length          */
      03 UDF_DBINFO_SCHEMA CHAR(128),          /* schema name            */
      03 UDF_DBINFO_TLEN BIN FIXED(15),        /* table length           */
      03 UDF_DBINFO_TABLE CHAR(128),           /* table name             */
      03 UDF_DBINFO_CLEN BIN FIXED(15),        /* column length          */
      03 UDF_DBINFO_COLUMN CHAR(128),          /* column name            */
      03 UDF_DBINFO_RELVER CHAR(8),            /* DB2 release level      */
      03 UDF_DBINFO_RESERV0 CHAR(2),           /* reserved               */
      03 UDF_DBINFO_PLATFORM BIN FIXED(31),    /* database platform      */
      03 UDF_DBINFO_NUMTFCOL BIN FIXED(15),    /* # of TF columns used   */
      03 UDF_DBINFO_RESERV1 CHAR(26),          /* reserved               */
      03 UDF_DBINFO_TFCOLUMN PTR,              /* -> TFcolumn list       */
      03 UDF_DBINFO_APPLID PTR,                /* -> application id      */
      03 UDF_DBINFO_RESERV2 CHAR(20);          /* reserved               */
   . . .
collections to search for the called program's package. The primary program can change this collection ID by executing the statement SET CURRENT PACKAGE PATH. If the value of CURRENT PACKAGE PATH is blank or an empty string, DB2 uses the CURRENT PACKAGESET special register to determine the collection to search for the called program's package. The primary program can change this value by executing the statement SET CURRENT PACKAGESET. If both special registers CURRENT PACKAGE PATH and CURRENT PACKAGESET contain a blank value, DB2 uses the method described in Binding an application plan on page 975 to search for the package.
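For example, before it calls a program whose package is bound into a different collection, the primary program might save, change, and restore the special register. The following fragment, in C with embedded SQL, is only a sketch; the collection ID COLLA is a hypothetical name.

EXEC SQL BEGIN DECLARE SECTION;
  char oldpkgset[129];                          /* previous CURRENT PACKAGESET value */
EXEC SQL END DECLARE SECTION;

EXEC SQL SET :oldpkgset = CURRENT PACKAGESET;   /* save the current value            */
EXEC SQL SET CURRENT PACKAGESET = 'COLLA';      /* search collection COLLA first     */
/* ... call the other program here ...                                               */
EXEC SQL SET CURRENT PACKAGESET = :oldpkgset;   /* restore the previous value        */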
CURRENT CLIENT_ACCTNG CURRENT CLIENT_APPLNAME CURRENT CLIENT_USERID CURRENT CLIENT_WRKSTNNAME CURRENT DATE
Not applicable5 Not applicable5 Not applicable5 Not applicable5 Not applicable5
Yes Yes
Table 90. Characteristics of special registers in a user-defined function or a stored procedure (continued) Initial value when INHERIT SPECIAL REGISTERS option is specified CURRENT DEGREE2 Initial value when DEFAULT SPECIAL REGISTERS option is specified The value of field CURRENT DEGREE on installation panel DSNTIP8 Routine can use SET statement to modify? Yes
CURRENT LOCALE LC_CTYPE Inherited from the invoking application CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION Inherited from the invoking application New value for each SET host-variable=CURRENT MEMBER statement The value of bind option OPTHINT for the user-defined function or stored procedure package or inherited from the invoking application6
The value of field CURRENT Yes LC_CTYPE on installation panel DSNTIPF System default value Yes
| CURRENT MEMBER | |
CURRENT OPTIMIZATION HINT
New value for each SET host-variable=CURRENT MEMBER statement The value of bind option OPTHINT for the user-defined function or stored procedure package
Not applicable5
Yes
An empty string, regardless of An empty string if the routine whether a COLLID value was was defined with a COLLID value; otherwise, inherited from specified for the routine4 the invoking application4 Inherited from the invoking application3 The value of bind option PATH for the user-defined function or stored procedure package or inherited from the invoking application6 Inherited from the invoking application Inherited from the invoking application Inherited from the invoking application Inherited from the invoking application3 The value of bind option PATH for the user-defined function or stored procedure package
Yes
Yes Yes
CURRENT PRECISION
The value of field DECIMAL ARITHMETIC on installation panel DSNTIP4 System default value System default value The empty string The value of bind option SQLRULES for the plan that invokes a user-defined function or stored procedure The value of CURRENT SCHEMA when the routine is entered Inherited from the invoking application
Yes
CURRENT SCHEMA
Inherited from the invoking application Inherited from the invoking application
Yes
CURRENT SERVER
Yes
Table 90. Characteristics of special registers in a user-defined function or a stored procedure (continued) Initial value when INHERIT SPECIAL REGISTERS option is specified Initial value when DEFAULT SPECIAL REGISTERS option is specified Routine can use SET statement to modify?
The primary authorization ID of The primary authorization ID of Yes8 the application process the application process or inherited from the invoking application7 New value for each SQL statement in the user-defined function or stored procedure package1 New value for each SQL statement in the user-defined function or stored procedure package1 Inherited from the invoking application Inherited from the invoking application Primary authorization ID of the application process New value for each SQL statement in the user-defined function or stored procedure package1 New value for each SQL statement in the user-defined function or stored procedure package1 Inherited from the invoking application Inherited from the invoking application Primary authorization ID of the application process Not applicable5
CURRENT TIME
CURRENT TIMESTAMP
Not applicable5
CURRENT TIMEZONE
1. If the user-defined function or stored procedure is invoked within the scope of a trigger, DB2 uses the timestamp for the triggering SQL statement as the timestamp for all SQL statements in the package. 2. DB2 allows parallelism at only one level of a nested SQL statement. If you set the value of the CURRENT DEGREE special register to ANY, and parallelism is disabled, DB2 ignores the CURRENT DEGREE value.
| 3. If the routine definition includes a specification for COLLID, DB2 sets CURRENT PACKAGESET to the value of COLLID. If both CURRENT PACKAGE PATH and COLLID are specified, the CURRENT PACKAGE PATH value | takes precedence and COLLID is ignored. | | 4. If the function definition includes a specification for PACKAGE PATH, DB2 sets CURRENT PACKAGE PATH to the value of PACKAGE PATH. |
5. Not applicable because no SET statement exists for the special register. 6. If a program within the scope of the invoking program issues a SET statement for the special register before the user-defined function or stored procedure is invoked, the special register inherits the value from the SET statement. Otherwise, the special register contains the value that is set by the bind option for the user-defined function or stored procedure package. 7. If a program within the scope of the invoking program issues a SET CURRENT SQLID statement before the user-defined function or stored procedure is invoked, the special register inherits the value from the SET statement. Otherwise, CURRENT SQLID contains the authorization ID of the application process. 8. If the user-defined function or stored procedure package uses a value other than RUN for the DYNAMICRULES bind option, the SET CURRENT SQLID statement can be executed. However, it does not affect the authorization ID that is used for the dynamic SQL statements in the package. The DYNAMICRULES value determines the authorization ID that is used for dynamic SQL statements.
Related concepts:
   DYNAMICRULES bind option (DB2 Application programming and SQL)
   Special registers (DB2 SQL)
Related reference:
   BIND and REBIND options (DB2 Commands)
Procedure
To access transition tables in a user-defined function or stored procedure: 1. Declare input parameters to receive table locators. You must define each parameter that receives a table locator as an unsigned 4-byte integer. 2. Declare table locators. You can declare table locators in assembler, C, C++, COBOL, PL/I, and in an SQL procedure compound statement. 3. Declare a cursor to access the rows in each transition table. 4. Assign the input parameter values to the table locators. 5. Access rows from the transition tables using the cursors that are declared for the transition tables.
Results
The following examples show how a user-defined function that is written in C, C++, COBOL, or PL/I accesses a transition table for a trigger. The transition table, NEWEMP, contains modified rows of the employee sample table. The trigger is defined like this:
CREATE TRIGGER EMPRAISE
  AFTER UPDATE ON EMP
  REFERENCING NEW TABLE AS NEWEMPS
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    VALUES (CHECKEMP(TABLE NEWEMPS));
  END;

The user-defined function CHECKEMP is defined like this:

CREATE FUNCTION CHECKEMP(TABLE LIKE EMP AS LOCATOR)
  RETURNS INTEGER
  EXTERNAL NAME CHECKEMP
  PARAMETER STYLE SQL
  LANGUAGE language;
Assembler: The following example shows how an assembler program accesses rows of transition table NEWEMPS.
CHECKEMP CSECT SAVE (14,12) LR R12,R15 USING CHECKEMP,R12 LR R7,R1 USING PARMAREA,R7 USING SQLDSECT,R8 L R6,PROGSIZE GETMAIN R,LV=(6) LR R10,R1 LR R2,R10 LR R3,R6 SR R4,R4 SR R5,R5 MVCL R2,R4 ST R13,FOUR(R10) ST R10,EIGHT(R13) LR R13,R10 USING PROGAREA,R13 ST R6,GETLENTH ANY SAVE SEQUENCE CODE ADDRESSABILITY TELL THE ASSEMBLER SAVE THE PARM POINTER SET ADDRESSABILITY FOR PARMS ESTABLISH ADDRESSIBILITY TO SQLDSECT GET SPACE FOR USER PROGRAM GET STORAGE FOR PROGRAM VARIABLES POINT TO THE ACQUIRED STORAGE POINT TO THE FIELD GET ITS LENGTH CLEAR THE INPUT ADDRESS CLEAR THE INPUT LENGTH CLEAR OUT THE FIELD CHAIN THE SAVEAREA PTRS CHAIN SAVEAREA FORWARD POINT TO THE SAVEAREA SET ADDRESSABILITY SAVE THE LENGTH OF THE GETMAIN
. . . ************************************************************ * Declare table locator host variable TRIGTBL * ************************************************************ TRIGTBL SQL TYPE IS TABLE LIKE EMP AS LOCATOR ************************************************************ * Declare a cursor to retrieve rows from the transition * * table * ************************************************************ EXEC SQL DECLARE C1 CURSOR FOR SELECT LASTNAME FROM TABLE(:TRIGTBL LIKE EMP) WHERE SALARY > 100000 ************************************************************ * Copy table locator for trigger transition table * ************************************************************ L R2,TABLOC GET ADDRESS OF LOCATOR L R2,0(0,R2) GET LOCATOR VALUE ST R2,TRIGTBL EXEC SQL OPEN C1 EXEC SQL FETCH C1 INTO :NAME . . . . . . PROGAREA SAVEAREA GETLENTH . . . NAME . . . EXEC SQL CLOSE C1 DSECT DS 18F DS A DS CL24 0D *-PROGAREA A CHECKEMP WORKING STORAGE FOR THE PROGRAM THIS ROUTINES SAVE AREA GETMAIN LENGTH FOR THIS AREA
X X
C or C++: The following example shows how a C or C++ program accesses rows of transition table NEWEMPS.
int CHECK_EMP(int trig_tbl_id) { . . . /**********************************************************/ /* Declare table locator host variable trig_tbl_id */ /**********************************************************/ EXEC SQL BEGIN DECLARE SECTION; SQL TYPE IS TABLE LIKE EMP AS LOCATOR trig_tbl_id; char name[25]; EXEC SQL END DECLARE SECTION; . . . /**********************************************************/ /* Declare a cursor to retrieve rows from the transition */ /* table */ /**********************************************************/ EXEC SQL DECLARE C1 CURSOR FOR SELECT NAME FROM TABLE(:trig_tbl_id LIKE EMPLOYEE) WHERE SALARY > 100000; /**********************************************************/ /* Fetch a row from transition table */ /**********************************************************/ EXEC SQL OPEN C1; EXEC SQL FETCH C1 INTO :name; . . . EXEC SQL CLOSE C1; . . . }
COBOL: The following example shows how a COBOL program accesses rows of transition table NEWEMPS.
IDENTIFICATION DIVISION. PROGRAM-ID. CHECKEMP. ENVIRONMENT DIVISION. INPUT-OUTPUT SECTION. DATA DIVISION. WORKING-STORAGE SECTION. 01 NAME PIC X(24). . . . LINKAGE SECTION. ********************************************************* * Declare table locator host variable TRIG-TBL-ID * ********************************************************* 01 TRIG-TBL-ID SQL TYPE IS TABLE LIKE EMP AS LOCATOR. . . . PROCEDURE DIVISION USING TRIG-TBL-ID. . . . ********************************************************* * Declare cursor to retrieve rows from transition table * ********************************************************* EXEC SQL DECLARE C1 CURSOR FOR SELECT NAME FROM TABLE(:TRIG-TBL-ID LIKE EMP) WHERE SALARY > 100000 END-EXEC. ********************************************************* * Fetch a row from transition table * *********************************************************
EXEC SQL OPEN C1 END-EXEC. EXEC SQL FETCH C1 INTO :NAME END-EXEC. . . . EXEC SQL CLOSE C1 END-EXEC. . . . PROG-END. GOBACK.
PL/I: The following example shows how a PL/I program accesses rows of transition table NEWEMPS.
CHECK_EMP: PROC(TRIG_TBL_ID) RETURNS(BIN FIXED(31)) OPTIONS(MAIN NOEXECOPS REENTRANT); /****************************************************/ /* Declare table locator host variable TRIG_TBL_ID */ /****************************************************/ DECLARE TRIG_TBL_ID SQL TYPE IS TABLE LIKE EMP AS LOCATOR; DECLARE NAME CHAR(24); . . . /****************************************************/ /* Declare a cursor to retrieve rows from the */ /* transition table */ /****************************************************/ EXEC SQL DECLARE C1 CURSOR FOR SELECT NAME FROM TABLE(:TRIG_TBL_ID LIKE EMP) WHERE SALARY > 100000; /****************************************************/ /* Retrieve rows from the transition table */ /****************************************************/ EXEC SQL OPEN C1; EXEC SQL FETCH C1 INTO :NAME; . . . EXEC SQL CLOSE C1; . . . END CHECK_EMP;
Procedure
To prepare an external user-defined function for execution: 1. Precompile the user-defined function program and bind the DBRM into a package. You need to do this only if your user-defined function contains SQL statements. You do not need to bind a plan for the user-defined function. 2. Compile the user-defined function program and link-edit it with Language Environment and RRSAF. You must compile the program with a compiler that supports Language Environment and link-edit the appropriate Language Environment components with the user-defined function. You must also link-edit the user-defined function with RRSAF. The program preparation JCL samples DSNHASM, DSNHC, DSNHCPP, DSNHICOB, and DSNHPLI show you how to precompile, compile, and
link-edit assembler, C, C++, COBOL, and PL/I DB2 programs. For object-oriented programs in C++, see JCL sample DSNHCPP2 for program preparation hints. 3. For a user-defined function that contains SQL statements, grant EXECUTE authority on the user-defined function package to the function definer.
NO SQL NO EXTERNAL ACTION LANGUAGE C PARAMETER STYLE SQL EXTERNAL NAME UDFCTR;
The scratchpad length is not specified, so the scratchpad has the default length of 100 bytes, plus 4 bytes for the length field. The user-defined function increments an integer value and stores it in the scratchpad on each execution.
#pragma linkage(ctr,fetchable) #include <stdlib.h> #include <stdio.h> /* Structure scr defines the passed scratchpad for function ctr */ struct scr { long len; long countr; char not_used[96]; }; /***************************************************************/ /* Function ctr: Increments a counter and reports the value */ /* from the scratchpad. */ /* */ /* Input: None */ /* Output: INTEGER out the value from the scratchpad */ /***************************************************************/ void ctr( long *out, /* Output answer (counter) */ short *outnull, /* Output null indicator */ char *sqlstate, /* SQLSTATE */ char *funcname, /* Function name */ char *specname, /* Specific function name */ char *mesgtext, /* Message text insert */ struct scr *scratchptr) /* Scratchpad */ { *out = ++scratchptr->countr; /* Increment counter and */ /* copy to output variable */ *outnull = 0; /* Set output null indicator*/ return; } /* end of user-defined function ctr */
Because no built-in function or user-defined function exists on which to build a sourced user-defined function, the function implementer must code an external user-defined function. The implementer performs the following steps:
v Writes the user-defined function, which is a COBOL program
v Precompiles, compiles, and links the program
v Binds a package if the user-defined function contains SQL statements
v Tests the program thoroughly
v Grants execute authority on the user-defined function package to the definer
The user-defined function definer executes this CREATE FUNCTION statement to register CALC_BONUS to DB2:
CREATE FUNCTION CALC_BONUS(DECIMAL(9,2),DECIMAL(9,2)) RETURNS DECIMAL(9,2) EXTERNAL NAME CBONUS PARAMETER STYLE SQL LANGUAGE COBOL;
The definer then grants execute authority on CALC_BONUS to all invokers. User-defined function invokers write and prepare application programs that invoke CALC_BONUS. An invoker might write a statement like this, which uses the user-defined function to update the BONUS field in the employee table:
UPDATE EMP SET BONUS = CALC_BONUS(SALARY,COMM);
Table 91. User-defined function samples shipped with DB2

Member that contains source code   Purpose
DSN8DUAD                           Converts the current date to a user-specified format
DSN8DUCD                           Converts a date from one format to another
DSN8DUAT                           Converts the current time to a user-specified format
DSN8DUCT                           Converts a time from one format to another
DSN8EUDN                           Returns the day of the week for a user-specified date
DSN8EUMN                           Returns the month for a user-specified date
DSN8DUCY                           Formats a floating-point number as a currency value
DSN8DUTI                           Returns the unqualified table name for a table, view, or alias
DSN8DUTI                           Returns the qualifier for a table, view, or alias
DSN8DUTI                           Returns the location for a table, view, or alias
DSN8DUWF                           Returns a table of weather information from an EBCDIC data set
Table 91. User-defined function samples shipped with DB2 (continued)

Notes:
1. This version of ALTDATE has one input parameter, of type VARCHAR(13).
2. This version of ALTDATE has three input parameters, of type VARCHAR(17), VARCHAR(13), and VARCHAR(13).
3. This version of ALTTIME has one input parameter, of type VARCHAR(14).
4. This version of ALTTIME has three input parameters, of type VARCHAR(11), VARCHAR(14), and VARCHAR(14).
Member DSN8DUWC contains a client program that shows you how to invoke the WEATHER user-defined table function. Member DSNTEJ2U shows you how to define and prepare the sample user-defined functions and the client program.
Determining the authorization cache size for stored procedures and user-defined functions
DB2 provides one routine-authorization cache for the subsystem. This cache stores authorization IDs that have the EXECUTE privilege on routines after DB2 has retrieved those IDs from the DB2 catalog and validated them. The size of this cache is set by a subsystem parameter.
Procedure
To determine the authorization cache size: 1. Consider how many routines you plan to run concurrently. If you run a large number of routines concurrently, the default value for CACHERAC is likely too small. The default value of 100 KB is enough to hold about 690 routines. If your cache is too small, entries in the cache are overwritten, and DB2 must read them again from the DB2 catalog. 2. Use IBM Tivoli OMEGAMON XE for DB2 Performance Monitor on z/OS to determine if your routine-authorization cache is being used effectively. If the
report from this tool shows that an authorization ID in the cache was overwritten many times, consider increasing the value of CACHERAC.
Related reference:
   Protection panel: DSNTIPP (DB2 Installation and Migration)
   ROUTINE AUTH CACHE field (CACHERAC subsystem parameter) (DB2 Installation and Migration)
   IBM Tivoli OMEGAMON XE for DB2 Performance Monitor on z/OS
Procedure
To create a stored procedure, perform one of the following actions:
v Creating a native SQL procedure on page 550
v Creating an external SQL procedure on page 579
v Creating an external stored procedure on page 595
Related concepts:
   External stored procedures on page 546
   SQL procedures on page 542
Related tasks:
   Implementing DB2 stored procedures (DB2 Administration Guide)
Related reference:
   DB2 for z/OS Exchange
Stored procedures
A stored procedure is a compiled program that can execute SQL statements and is stored at a local or remote DB2 server. You can invoke a stored procedure from an application program or from the command line processor. A single call to a stored procedure from a client application can access the database at the server several times. A typical stored procedure contains two or more SQL statements and some manipulative or logical processing in a host language or SQL procedure statements. You can call stored procedures from other applications or from the command line. DB2 provides some stored procedures, but you can also create your own. A stored procedure provides a common piece of code that is written only once and is maintained in a single instance that can be called from several different applications. Host languages can easily call procedures that exist on a local system, and SQL can call stored procedures that exist on remote systems. In fact, a major benefit of procedures in SQL is that they can be used to enhance the performance characteristics of distributed applications. With stored procedures, you can avoid network transfer of large amounts of data obtained as part of intermediate results in a long sequence of queries. The following diagram illustrates the processing for an application that does not use stored procedures. The client application embeds SQL statements and communicates with the server separately for each statement. This application design results in increased network traffic and processor costs.
Figure: An application that does not use stored procedures. For each SQL statement that the client issues (EXEC SQL SELECT, EXEC SQL UPDATE, EXEC SQL INSERT), the DB2 server performs SQL processing separately.
The following diagram illustrates the processing for an application that uses stored procedures. Because a stored procedure is used on the server, a series of SQL statements can be executed with a single send and receive operation, reducing network traffic.
Stored procedures are useful for client/server applications that do at least one of the following things:
v Execute multiple remote SQL statements. Remote SQL statements can create many network send and receive operations, which results in increased processor costs. Stored procedures can encapsulate many of your application's SQL statements into a single message to the DB2 server, reducing network traffic to a single send and receive operation for a series of SQL statements. Locks on DB2 tables are not held across network transmissions, which reduces contention for resources at the server.
v Access tables from a dynamic SQL environment where table privileges for the application that is running are undesirable. Stored procedures allow static SQL authorization from a dynamic environment.
v Access host variables for which you want to guarantee security and integrity. Stored procedures remove SQL applications from the workstation, which prevents workstation users from manipulating the contents of sensitive SQL statements and host variables.
v Create a result set of rows to return to the client application.
Stored procedures that are written in embedded static SQL provide the following additional advantages:
v Better performance because static SQL is prepared at precompile time and has no run time overhead for access plan (package) generation.
v Encapsulation enables programmers to write applications that access data without knowing the details of database objects.
v Improved security because access privileges are encapsulated within the packages that are associated with the stored procedures. You can grant access to run a stored procedure that selects data from tables, without granting SELECT privilege to the user.
You can create one of the following types of stored procedures:
External stored procedures
   A procedure that is written in a host language.
External SQL procedures
   A procedure whose body is written entirely in SQL, but is created, implemented, and executed like other external stored procedures.
Native SQL procedures
   A procedure with a procedural body that is written entirely in SQL and is created by issuing a single SQL statement, CREATE PROCEDURE. Native SQL procedures do not have an associated external application program.
DB2 also provides a set of stored procedures that you can call in your application programs to perform a number of utility, application programming, and performance management functions. These procedures are called DB2-supplied stored procedures. Typically, you create these procedures during installation or migration.
v You cannot pass parameters with the type XML to stored procedures. You can specify tables or views that contain XML columns as table locator parameters. However, you cannot reference the XML columns in the body of the stored procedure. Related tasks: Chapter 14, Calling a stored procedure from your application, on page 775 Passing large output parameters to stored procedures by using indicator variables on page 780 Related reference: CALL (DB2 SQL) CREATE PROCEDURE (DB2 SQL)
The following figure illustrates the steps that are involved in executing this stored procedure.
[Figure: Stored procedure processing. The figure shows the flow between the user workstation, the DB2 system, and the DB2 stored procedures address space for stored procedure A. The workstation application issues EXEC SQL CONNECT TO LOCA and EXEC SQL CALL A(:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE). DB2 creates a thread, gets information from SYSIBM.SYSROUTINES, prepares the parameter list, and passes control to the stored procedure, which declares cursor C1 WITH RETURN FOR SELECT * FROM EMPPROJACT, uses SQL UPDATE to update EMPPROJACT with the input parameter values (or, if SQLCODE=+100, uses SQL INSERT to add a row with the values in the parameter list), and opens C1. The procedure returns the output parameters :TYPE and :CODE and a result set that contains all rows in EMPPROJACT, control returns to the application, and the application issues EXEC SQL COMMIT (or ROLLBACK). The numbered steps are described in the notes that follow.]
Notes: 1. The workstation application uses the SQL CONNECT statement to create a conversation with DB2.
2. DB2 creates a DB2 thread to process SQL requests.
3. The SQL statement CALL tells the DB2 server that the application is going to run a stored procedure. The calling application provides the necessary parameters.
4. The plan for the client application contains information from catalog table SYSIBM.SYSROUTINES about stored procedure A.
5. DB2 passes information about the request to the stored procedures address space, and the stored procedure begins execution.
6. The stored procedure executes SQL statements. DB2 verifies that the owner of the package or plan containing the SQL statement CALL has EXECUTE authority for the package associated with the DB2 stored procedure. One of the SQL statements opens a cursor that has been declared WITH RETURN. This causes a result set to be returned to the workstation application when the procedure ends. Any SQLCODE that is issued within an external stored procedure is not returned to the workstation application in the SQLCA (as the result of the CALL statement).
7. If an error is not encountered, the stored procedure assigns values to the output parameters and exits. Control returns to the DB2 stored procedures address space, and from there to the DB2 system. If the stored procedure definition contains COMMIT ON RETURN NO, DB2 does not commit or roll back any changes from the SQL in the stored procedure until the calling program executes an explicit COMMIT or ROLLBACK statement. If the stored procedure definition contains COMMIT ON RETURN YES, and the stored procedure executed successfully, DB2 commits all changes. The COMMIT statement closes the cursor unless it is declared with the WITH HOLD option.
8. Control returns to the calling application, which receives the output parameters and the result set. DB2 then:
   v Closes all cursors that the stored procedure opened, except those that the stored procedure opened to return result sets.
   v Discards all SQL statements that the stored procedure prepared.
   v Reclaims the working storage that the stored procedure used.
   The application can call more stored procedures, or it can execute more SQL statements. DB2 receives and processes the COMMIT or ROLLBACK request. The COMMIT or ROLLBACK operation covers all SQL operations, whether executed by the application or by stored procedures, for that unit of work. If the application involves IMS or CICS, similar processing occurs based on the IMS or CICS sync point rather than on an SQL COMMIT or ROLLBACK statement.
9. DB2 returns a reply message to the application describing the outcome of the COMMIT or ROLLBACK operation.
10. The workstation application executes the following steps to retrieve the contents of table EMPPROJACT, which the stored procedure has returned in a result set:
   a. Declares a result set locator for the result set being returned.
   b. Executes the ASSOCIATE LOCATORS statement to associate the result set locator with the result set.
   c. Executes the ALLOCATE CURSOR statement to associate a cursor with the result set.
   d. Executes the FETCH statement with the allocated cursor multiple times to retrieve the rows in the result set.
   e. Executes the CLOSE statement to close the cursor.
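Example: The following sketch shows the SQL statements that correspond to steps a through e for stored procedure A. The host variable names and the cursor name are illustrative, and the host-language declarations and loop logic are omitted:
   EXEC SQL ASSOCIATE LOCATORS (:RS_LOC) WITH PROCEDURE A;      -- step b
   EXEC SQL ALLOCATE RSCUR CURSOR FOR RESULT SET :RS_LOC;       -- step c
   EXEC SQL FETCH RSCUR INTO :EMPNO, :PROJNO, :ACTNO,
                             :EMPTIME, :EMSTDATE, :EMENDATE;    -- step d (repeat until SQLCODE = +100)
   EXEC SQL CLOSE RSCUR;                                        -- step e
In this sketch, :RS_LOC is a host variable that was declared as a result set locator (step a), and the FETCH statement is repeated until no more rows are returned.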
SQL procedures
An SQL procedure is a stored procedure that contains only SQL statements. The source code for these procedures (the SQL statements) is specified in an SQL CREATE PROCEDURE statement. The part of the CREATE PROCEDURE statement that contains SQL statements is called the procedure body.
DB2 for z/OS supports the following two types of SQL procedures:
Native SQL procedures
   A procedure with a procedural body that is written entirely in SQL and is created by issuing a single SQL statement, CREATE PROCEDURE. Native SQL procedures do not have an associated external application program.
External SQL procedures
   A procedure whose body is written entirely in SQL, but is created, implemented, and executed like other external stored procedures.
All SQL procedures that were created prior to Version 9.1 are external SQL procedures. Starting in Version 9.1, you can create an external SQL procedure by specifying FENCED or EXTERNAL in the CREATE PROCEDURE statement.
SQL procedure body:
The body of an SQL procedure contains one or more SQL statements. In the SQL procedure body, you can also declare variables, condition handlers, reference parameters, and reference variables.
Statements that you can include in an SQL procedure body
An SQL procedure consists of a single SQL procedure statement. That procedure statement can be either an SQL control statement or another SQL statement. If the SQL control statement is a compound statement or a CASE statement, the procedure body can contain multiple statements. For native SQL procedures, you can use nested compound statements.
How to code multiple statements in an SQL procedure
Use a semicolon character to separate SQL statements within an SQL procedure. The procedure body has no terminating character. Therefore, if the procedure contains only one statement, you do not need to put a semicolon after that statement. If the procedure consists of a set of nested statements, you do not need to put a semicolon after the outermost statement.
Variables in an SQL procedure
To store data that you use only within an SQL procedure, you can declare SQL variables. SQL variables are the equivalent of host variables in external stored procedures. SQL variables can have the same data types and lengths as SQL procedure parameters. An SQL variable declaration has the following form:
DECLARE SQL-variable-name data-type;
The declaration for an SQL variable for which you use a result locator has the following form:
DECLARE SQL-variable-name RESULT_SET_LOCATOR VARYING;
SQL variables in SQL procedures are subject to the following rules:
v SQL variable names, condition names, and label names must be less than or equal to 128 bytes in length. The names can include alphanumeric characters and the underscore character.
v SQL variable names must be unique. You cannot declare two SQL variables that have the same name, regardless of case. For example, you cannot declare two SQL variables named varx and VARX. (DB2 treats all SQL variable names as uppercase.)
v SQL parameters, SQL variables, and SQL conditions should not include SQL reserved words. Although doing so is not recommended, you can specify an SQL reserved word as the name of an SQL parameter, SQL variable, or SQL condition in some contexts. If you specify a reserved word as the name of an SQL parameter, SQL variable, or SQL condition in a context where its use could be ambiguous, specify the name as a delimited identifier.
v When you use an SQL variable in an SQL statement, do not precede the variable with a colon.
You can perform any operations on SQL variables that you can perform on host variables in SQL statements.
Object references in an SQL procedure
To avoid ambiguity, qualify SQL variable names and other object names. Use the following guidelines to determine when to qualify object names:
v Qualify column names with the associated table names or view names.
v When you use an SQL procedure parameter in the procedure body, qualify the parameter name with the procedure name.
v Specify a label for each compound statement, and qualify all SQL variables with the label name of the compound statement that declared them.
Calls to user-defined functions from an SQL procedure
When you call a user-defined function from an SQL procedure, ensure that you pass parameters of the appropriate data type. The data type should be the same data type or a data type that can be promoted to the data type of the function definition. For example, DB2 can promote the data type CHAR to VARCHAR or SMALLINT to BIGINT.
Related concepts: Nested compound statements in native SQL procedures on page 553 Stored procedure parameters on page 538 Promotion of data types (DB2 SQL) SQL control statements for external SQL procedures (DB2 SQL) SQL control statements for native SQL procedures (DB2 SQL) Related reference: SQL-procedure-statement (DB2 SQL) SQL statements allowed in SQL procedures (DB2 SQL)
Examples of SQL procedures:
You can use CASE statements, compound statements, and nested statements within an SQL procedure body.
Example: CASE statement: The following SQL procedure demonstrates how to use a CASE statement. The procedure receives an employee's ID number and rating as input parameters. The CASE statement modifies the employee's salary and bonus, using a different UPDATE statement for each of the possible ratings.
CREATE PROCEDURE UPDATESALARY2
 (IN EMPNUMBR CHAR(6),
  IN RATING INT)
 LANGUAGE SQL
 MODIFIES SQL DATA
 CASE RATING
   WHEN 1 THEN
     UPDATE CORPDATA.EMPLOYEE
       SET SALARY = SALARY * 1.10, BONUS = 1000
       WHERE EMPNO = EMPNUMBR;
   WHEN 2 THEN
     UPDATE CORPDATA.EMPLOYEE
       SET SALARY = SALARY * 1.05, BONUS = 500
       WHERE EMPNO = EMPNUMBR;
   ELSE
     UPDATE CORPDATA.EMPLOYEE
       SET SALARY = SALARY * 1.03, BONUS = 0
       WHERE EMPNO = EMPNUMBR;
 END CASE
Example: Compound statement with nested IF and WHILE statements: The following example shows a compound statement that includes an IF statement, a WHILE statement, and assignment statements. The example also shows how to declare SQL variables, cursors, and handlers for classes of error codes.
The procedure receives a department number as an input parameter. A WHILE statement in the procedure body fetches the salary and bonus for each employee in the department, and uses an SQL variable to calculate a running total of employee salaries for the department. An IF statement within the WHILE statement tests for positive bonuses and increments an SQL variable that counts the number of bonuses in the department. When all employee records in the department have been processed, a NOT FOUND condition occurs. A NOT FOUND condition handler makes the search condition for the WHILE statement false, so execution of the WHILE statement ends. Assignment statements then assign the total employee salaries and the number of bonuses for the department to the output parameters for the stored procedure.
If any SQL statement in the compound statement P1 receives an error, the SQLEXCEPTION handler receives control. The handler action sets the output parameter DEPTSALARY to NULL. After the handler action has completed successfully, the original error condition is resolved (SQLSTATE '00000', SQLCODE 0). Because this handler is an EXIT handler, execution passes to the end of the compound statement, and the SQL procedure ends.
CREATE PROCEDURE RETURNDEPTSALARY
 (IN DEPTNUMBER CHAR(3),
  OUT DEPTSALARY DECIMAL(15,2),
  OUT DEPTBONUSCNT INT)
 LANGUAGE SQL
 READS SQL DATA
 P1: BEGIN
   DECLARE EMPLOYEE_SALARY DECIMAL(9,2);
   DECLARE EMPLOYEE_BONUS DECIMAL(9,2);
   DECLARE TOTAL_SALARY DECIMAL(15,2) DEFAULT 0;
   DECLARE BONUS_CNT INT DEFAULT 0;
   DECLARE END_TABLE INT DEFAULT 0;
   DECLARE C1 CURSOR FOR
     SELECT SALARY, BONUS FROM CORPDATA.EMPLOYEE
       WHERE WORKDEPT = DEPTNUMBER;
   DECLARE CONTINUE HANDLER FOR NOT FOUND
     SET END_TABLE = 1;
   DECLARE EXIT HANDLER FOR SQLEXCEPTION
     SET DEPTSALARY = NULL;
   OPEN C1;
   FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
   WHILE END_TABLE = 0 DO
     SET TOTAL_SALARY = TOTAL_SALARY + EMPLOYEE_SALARY + EMPLOYEE_BONUS;
     IF EMPLOYEE_BONUS > 0 THEN
       SET BONUS_CNT = BONUS_CNT + 1;
     END IF;
     FETCH C1 INTO EMPLOYEE_SALARY, EMPLOYEE_BONUS;
   END WHILE;
   CLOSE C1;
   SET DEPTSALARY = TOTAL_SALARY;
   SET DEPTBONUSCNT = BONUS_CNT;
 END P1
Example: Compound statement with dynamic SQL statements: The following example shows a compound statement that includes dynamic SQL statements. The procedure receives a department number (P_DEPT) as an input parameter. In the compound statement, three statement strings are built, prepared, and executed: v The first statement string executes a DROP statement to ensure that the table to be created does not already exist. This table is named DEPT_deptno_T, where deptno is the value of input parameter P_DEPT. v The next statement string executes a CREATE statement to create DEPT_deptno_T. v The third statement string inserts rows for employees in department deptno into DEPT_deptno_T. Just as statement strings that are prepared in host language programs cannot contain host variables, statement strings in SQL procedures cannot contain SQL variables or stored procedure parameters. Therefore, the third statement string contains a parameter marker that represents P_DEPT. When the prepared statement is executed, parameter P_DEPT is substituted for the parameter marker.
CREATE PROCEDURE CREATEDEPTTABLE (IN P_DEPT CHAR(3))
 LANGUAGE SQL
 BEGIN
   DECLARE STMT CHAR(1000);
   DECLARE MESSAGE CHAR(20);
   DECLARE TABLE_NAME CHAR(30);
   DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
     SET MESSAGE = 'ok';
   SET TABLE_NAME = 'DEPT_'||P_DEPT||'_T';
   SET STMT = 'DROP TABLE '||TABLE_NAME;
   PREPARE S1 FROM STMT;
   EXECUTE S1;
   SET STMT = 'CREATE TABLE '||TABLE_NAME||
              '( EMPNO CHAR(6) NOT NULL, '||
              ' FIRSTNME VARCHAR(6) NOT NULL, '||
              ' MIDINIT CHAR(1) NOT NULL, '||
              ' LASTNAME CHAR(15) NOT NULL, '||
              ' SALARY DECIMAL(9,2))';
   PREPARE S2 FROM STMT;
   EXECUTE S2;
   SET STMT = 'INSERT INTO '||TABLE_NAME||
              ' SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY '||
              ' FROM EMPLOYEE '||
              ' WHERE WORKDEPT = ?';
   PREPARE S3 FROM STMT;
   EXECUTE S3 USING P_DEPT;
 END
by using the CREATE PROCEDURE statement. Thus, the source code for an external stored procedure is separate from the definition for the stored procedure.
Language requirements for the external stored procedure and its caller
You can write an external stored procedure in Assembler, C, C++, COBOL, Java, REXX, or PL/I. All programs must be designed to run using Language Environment. Your COBOL and C++ stored procedures can contain object-oriented extensions. The program that calls the stored procedure can be in any language that supports the SQL CALL statement. ODBC applications can use an escape clause to pass a stored procedure call to DB2. Related concepts: Object-oriented extensions in COBOL on page 340 REXX stored procedures on page 632 Java stored procedures and user-defined functions (DB2 Application Programming for Java)
v How they specify the code for the stored procedure. An SQL procedure definition contains the source code for the stored procedure. An external stored procedure definition specifies the name of the stored procedure program.
v How you define the stored procedure. For native SQL procedures and external procedures, you define the stored procedure to DB2 by executing the CREATE PROCEDURE statement. For external SQL procedures, you define the stored procedure to DB2 by preprocessing a CREATE PROCEDURE statement, then executing the CREATE PROCEDURE statement dynamically. For all procedures, you change the definition by executing the ALTER PROCEDURE statement. See Creating an external SQL procedure on page 579 for more information about defining an SQL procedure to DB2.
Example: The following example shows a definition for an SQL procedure.
CREATE PROCEDURE UPDATESALARY1          -- 1
 (IN EMPNUMBR CHAR(10),                 -- 2
  IN RATE DECIMAL(6,2))
 LANGUAGE SQL                           -- 3
 UPDATE EMP                             -- 4
   SET SALARY = SALARY * RATE
   WHERE EMPNO = EMPNUMBR
Notes:
1. The stored procedure name is UPDATESALARY1.
2. The two parameters have data types of CHAR(10) and DECIMAL(6,2). Both are input parameters.
3. LANGUAGE SQL indicates that this is an SQL procedure, so a procedure body follows the other parameters.
4. The procedure body consists of a single SQL UPDATE statement, which updates rows in the employee table.
Example: The following example shows a definition for an equivalent external stored procedure that is written in COBOL. The stored procedure program, which updates employee salaries, is called UPDSAL.
CREATE PROCEDURE UPDATESALARY1          -- 1
 (IN EMPNUMBR CHAR(10),                 -- 2
  IN RATE DECIMAL(6,2))
 LANGUAGE COBOL                         -- 3
 EXTERNAL NAME UPDSAL;                  -- 4
Notes:
1. The stored procedure name is UPDATESALARY1.
2. The two parameters have data types of CHAR(10) and DECIMAL(6,2). Both are input parameters.
3. LANGUAGE COBOL indicates that this is an external procedure, so the code for the stored procedure is in a separate, COBOL program.
4. The name of the load module that contains the executable stored procedure program is UPDSAL.
If your stored procedure includes COMMIT or ROLLBACK statements, define it with one of the following clauses:
v CONTAINS SQL
v READS SQL DATA
v MODIFIES SQL DATA
The COMMIT ON RETURN clause in a stored procedure definition has no effect on the COMMIT or ROLLBACK statements in the stored procedure code. If you specify COMMIT ON RETURN YES when you define the stored procedure, DB2 issues a COMMIT statement when control returns from the stored procedure. This action occurs regardless of whether the stored procedure contains COMMIT or ROLLBACK statements.
A ROLLBACK statement has the same effect on cursors in a stored procedure as it has on cursors in stand-alone programs. A ROLLBACK statement closes all open cursors. A COMMIT statement in a stored procedure closes cursors that are not declared WITH HOLD and leaves open those cursors that are declared WITH HOLD. The effect of COMMIT or ROLLBACK on cursors applies to cursors that are declared in the calling application and to cursors that are declared in the stored procedure.
Restriction: You cannot include COMMIT or ROLLBACK statements in a stored procedure if any of the following conditions are true:
v The stored procedure is nested within a trigger or user-defined function.
v The stored procedure is called by a client that uses two-phase commit processing.
v The client program uses a type 2 connection to connect to the remote server that contains the stored procedure.
v DB2 is not the commit coordinator.
If a COMMIT or ROLLBACK statement in a stored procedure violates any of these conditions, DB2 puts the transaction in a must-rollback state. Also, in this case, the CALL statement fails.
Related reference: CALL (DB2 SQL) COMMIT (DB2 SQL) ROLLBACK (DB2 SQL)
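Example: The following sketch shows a procedure definition that specifies MODIFIES SQL DATA and COMMIT ON RETURN YES. The schema, procedure, and table names are illustrative:
   CREATE PROCEDURE MYSCHEMA.LOG_CHANGE
    (IN P_EMPNO CHAR(6))
    LANGUAGE SQL
    MODIFIES SQL DATA
    COMMIT ON RETURN YES
    INSERT INTO MYSCHEMA.CHANGE_LOG (EMPNO, CHANGETS)
      VALUES (P_EMPNO, CURRENT TIMESTAMP)
Because COMMIT ON RETURN YES is specified, DB2 issues a COMMIT when control returns from the procedure, which commits all changes in the unit of work, including any changes that the calling application made before the CALL statement.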
Procedure
To move stored procedures to a WLM-established environment:
1. Define JCL procedures for the stored procedures address spaces. Use the sample JCL in member DSNTIJMV of data set DSN910.SDSNSAMP.
2. Define WLM application environments for groups of stored procedures, and associate a JCL startup procedure with each application environment.
3. For each stored procedure, issue ALTER PROCEDURE statements with the WLM ENVIRONMENT parameter to specify the name of the application environment, as shown in the sketch after the linkage editor statements below.
4. Relink all of your existing stored procedures with DSNRLI, the language interface module for the Resource Recovery Services attachment facility (RRSAF). Use JCL and linkage editor control statements that are similar to the following statements.
//LINKRRS  EXEC PGM=IEWL,
//         PARM='LIST,XREF,MAP'
//SYSPRINT DD SYSOUT=*
//SYSLIB   DD DISP=SHR,DSN=USER.RUNLIB.LOAD
//         DD DISP=SHR,DSN=DSN810.SDSNLOAD
//SYSLMOD  DD DISP=SHR,DSN=USER.RUNLIB.LOAD
//SYSUT1   DD SPACE=(1024,(50,50)),UNIT=SYSDA
//SYSLIN   DD *
  ENTRY STORPROC
  REPLACE DSNALI
  INCLUDE SYSLIB(DSNRLI)
  INCLUDE SYSLMOD(STORPROC)
  NAME STORPROC(R)
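The following sketch shows the ALTER PROCEDURE statement for step 3. The procedure name and the application environment name are illustrative:
   ALTER PROCEDURE MYSCHEMA.MYPROC
     WLM ENVIRONMENT WLMENV1;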
The address spaces start automatically. Related tasks: Invoking the Resource Recovery Services attachment facility on page 72 Setting up a WLM application environment for stored procedures (DB2 Installation and Migration) Related reference: ALTER PROCEDURE (external) (DB2 SQL) ALTER PROCEDURE (SQL - external) (DB2 SQL)
Procedure
To create a native SQL procedure, perform one of the following actions:
v Use IBM Optim Development Studio to specify the source statements for the SQL procedure and deploy the SQL procedure to DB2. IBM Optim Development Studio also allows you to create copies of the procedure package as needed and to deploy the procedure to remote servers.
v Manually deploy the native SQL procedure by completing the following steps (a sketch of steps 1 and 4 follows this list):
1. Issue the CREATE PROCEDURE statement. Include the procedure body, which is written entirely in SQL, in the SQL procedural language. For more information about what you can do within the procedure body, see the following information:
   - Controlling the scope of variables in an SQL procedure on page 552
   - Declaring cursors in an SQL procedure with nested compound statements on page 556
   - Handling SQL conditions in an SQL procedure on page 556
   - Raising a condition within an SQL procedure by using the SIGNAL or RESIGNAL statements on page 566
   Do not include the FENCED or EXTERNAL keywords. When you issue this CREATE PROCEDURE statement, the first version of this procedure is defined to DB2, and a package is implicitly bound with the options that you specify on the CREATE PROCEDURE statement.
2. If the native SQL procedure contains one or more of the following statements or references, make copies of the native SQL procedure package, as needed:
   - CONNECT
   - SET CURRENT PACKAGESET
   - SET CURRENT PACKAGE PATH
   - A table reference with a three-part name
3. If you plan to call the native SQL procedure at another DB2 server, deploy the procedure to another DB2 for z/OS server. You can customize the bind options at the same time.
4. Authorize the appropriate users to call the stored procedure.
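Example: The following sketch illustrates steps 1 and 4 of the manual approach, issued as two separate statements. The procedure name and authorization ID are illustrative; the definition omits the FENCED and EXTERNAL keywords, so DB2 creates a native SQL procedure and implicitly binds its package:
   CREATE PROCEDURE MYSCHEMA.SET_BONUS
    (IN P_EMPNO CHAR(6),
     IN P_BONUS DECIMAL(9,2))
    LANGUAGE SQL
    MODIFIES SQL DATA
    BEGIN
      UPDATE EMP
        SET BONUS = P_BONUS
        WHERE EMPNO = P_EMPNO;
    END

   GRANT EXECUTE ON PROCEDURE MYSCHEMA.SET_BONUS TO APPUSER1;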
What to do next
After you create a native SQL procedure, you can create one or more versions of it as needed.
Related concepts: SQL procedures on page 542 SQL procedure body on page 543 Related tasks: Implementing DB2 stored procedures (DB2 Administration Guide) Developing database routines (IBM Data Studio, IBM Optim Database Administrator, IBM infoSphere Data Architect, IBM Optim Development Studio) Related reference: CREATE PROCEDURE (SQL - native) (DB2 SQL)
Procedure
To control the scope of a variable in an SQL procedure: 1. Declare the variable within the compound statement in which you want to reference it. Ensure that the variable name is unique within the compound statement, not including any nested statements. You can define variables with the same name in other compound statements in the same SQL procedure. 2. Reference the variable within that compound statement or any nested statements. Recommendation: If multiple variables with the same name exist within an SQL procedure, qualify the variable with the label from the compound statement in which it was declared. Otherwise, you might accidentally reference the wrong variable. If the variable name is unqualified and multiple variables with that name exist within the same scope, DB2 uses the variable in the innermost compound statement.
Results
Example: The following example contains three declarations of the variable A. One instance is declared in the outer compound statement, which has the label OUTER1. The other instances are declared in the inner compound statements with the labels INNER1 and INNER2. In the INNER1 compound statement, DB2 presumes that the unqualified references to A in the assignment statement and UPDATE statement refer to the instance of A that is declared in the INNER1 compound statement. To refer to the instance of A that is declared in the OUTER1 compound statement, qualify the variable as OUTER1.A.
CREATE PROCEDURE P2 ()
 LANGUAGE SQL
 -- Outermost compound statement ------------------------
 OUTER1: BEGIN                            -- 1
   DECLARE A INT DEFAULT 100;
   -- Inner compound statement with label INNER1 ---
   INNER1: BEGIN                          -- 2
     DECLARE A INT DEFAULT NULL;
     DECLARE W INT DEFAULT NULL;
     SET A = A + OUTER1.A;                -- 3
     UPDATE T1 SET T1.B = 5
       WHERE T1.B = A;                    -- 4
     SET OUTER1.A = 100;                  -- 5
     SET INNER1.A = 200;                  -- 6
   END INNER1;                            -- 7
   -- End of inner compound statement INNER1 ------
   -- Inner compound statement with label INNER2 ---
   INNER2: BEGIN                          -- 8
     DECLARE A INT DEFAULT NULL;
     DECLARE Z INT DEFAULT NULL;
     SET A = A + OUTER1.A;
   END INNER2;                            -- 9
   -- End of inner compound statement INNER2 -----
   SET OUTER1.A = 100;                    -- 10
 END OUTER1                               -- 11
The preceding example has the following parts: 1. The beginning of the outermost compound statement, which has the label OUTER1. 2. The beginning of the inner compound statement with the label INNER1. 3. The unqualified variable A refers to INNER1.A. 4. The unqualified variable A refers to INNER1.A. 5. OUTER1.A is a valid reference, because this variable is referenced in a nested compound statement. 6. INNER1.A is a valid reference, because this variable is referenced in the same compound statement in which it is declared. You cannot reference INNER2.A, because this variable is not in the scope of this compound statement. 7. 8. 9. 10. The end of the inner compound statement with the label INNER1. The beginning of the inner compound statement with the label INNER2. The end of the inner compound statement with the label INNER2. OUTER1.A is a valid reference, because this variable is referenced in the same compound statement in which it is declared. You cannot reference INNER1.A, because this variable is declared in a nested statement and cannot be referenced in the outer statement. 11. The end of the outermost compound statement, which has the label OUTER1. Nested compound statements in native SQL procedures: Nested compound statements are blocks of SQL statements that are contained by other blocks of SQL statements in native SQL procedures. Use nested compound statements to define condition handlers that execute more than one statement and to define different scopes for variables and condition handlers. The following pseudo code shows a basic structure of an SQL procedure with nested compound statements:
OUTERMOST: BEGIN
   ...
   INNER1: BEGIN
      ...
      INNERMOST: BEGIN
         ...
      END INNERMOST;
      ...
   END INNER1;
   INNER2: BEGIN
      ...
   END INNER2;
END OUTERMOST
In the preceding code, the OUTERMOST compound statement contains two nested compound statements: INNER1 and INNER2. INNER1 contains one nested compound statement: INNERMOST. Related concepts: Handlers in an SQL procedure on page 557 Related tasks: Defining condition handlers that execute more than one statement on page 558 Statement labels for nested compound statements in native SQL procedures: You can define a label for each compound statement in an SQL procedure. This label enables you to reference this block of statements in other statements such as the GOTO, LEAVE, and ITERATE SQL PL control statements. You can also use the label to qualify a variable when necessary. Labels are not required. A label name must meet the following criteria: v Be unique within the compound statement, including any compound statements that are nested within the compound statement. v Not be the same as the name of the SQL procedure. You can reference a label within the compound statement in which it is defined, including any compound statements that are nested within that compound statement. Example of statement labels: The following example shows several statement labels and their scope:
CREATE PROCEDURE P1 ()
 LANGUAGE SQL
 --Outermost compound statement ------------------------
 OUTER1: BEGIN                                  -- 1
   --Inner compound statement with label INNER1 ---
   INNER1: BEGIN                                -- 2
     IF ...
       ABC: LEAVE INNER1;                       -- 3
     ELSEIF
       XYZ: LEAVE OUTER1;                       -- 4
     END IF
   END INNER1;
   --End of inner compound statement INNER1 ------
   --Inner compound statement with label INNER2---
   INNER2: BEGIN                                -- 5
     XYZ: ...statement                          -- 6
   END INNER2;
   -- End of inner compound statement INNER2 -----
 END OUTER1                                     -- 7
The preceding example has the following parts: 1. The beginning of the outermost compound statement, which is labeled OUTER1 2. The beginning of an inner compound statement that is labeled INNER1 3. A LEAVE statement that is defined with the label ABC. This LEAVE statement specifies that DB2 is to terminate processing of the compound statement INNER1 and begin processing the next statement, which is INNER2. This LEAVE statement cannot specify INNER2, because that label is not within the scope of the INNER1 compound statement. 4. A LEAVE statement that is defined with the label XYZ. This LEAVE statement specifies that DB2 is to terminate processing of the compound statement OUTER1 and begin processing the next statement, if one exists. This example does not show the next statement. 5. The beginning of an inner compound statement that is labeled INNER2. 6. A statement that is defined with the label XYZ. This label is acceptable even though another statement in this procedure has the same label, because the two labels are in different scopes. Neither label is contained within the scope of the other. 7. The end of the outermost compound statement that is labeled OUTER1. The following examples show valid and invalid uses of labels: Invalid example of labels:
L1: BEGIN
  L2: SET A = B;
  L1: GOTO L2;   --This duplicate use of the label L1 causes an error, because
                 --the same label is already used in the same scope.
END L1;
Procedure
To declare a cursor in an SQL procedure with nested compound statements: Specify the DECLARE CURSOR statement within the compound statement in which you want to reference the cursor. Use a cursor name that is unique within the SQL procedure. You can reference the cursor within the compound statement in which it is declared and within any nested statements. If the cursor is declared as a result set cursor, even if the cursor is not declared in the outermost compound statement, any calling application can reference it.
Example
In the following example, cursor X is declared in the outer compound statement. This cursor can be referenced within the outer block in which it was declared and within any nested compound statements.
CREATE PROCEDURE SINGLE_CSR
 (INOUT IR1 INT, INOUT JR1 INT,
  INOUT IR2 INT, INOUT JR2 INT)
 LANGUAGE SQL
 DYNAMIC RESULT SETS 2
 BEGIN
   DECLARE I INT;
   DECLARE J INT;
   DECLARE X CURSOR WITH RETURN FOR   --outer declaration for X
     SELECT * FROM CSRT1;
   SUB: BEGIN
     OPEN X;                          --references X in outer block
     FETCH X INTO I,J;
     SET IR1 = I;
     SET JR1 = J;
   END;
   FETCH X INTO I,J;                  --references X in outer block
   SET IR2 = I;
   SET JR2 = J;
   CLOSE X;
 END
Related reference: CREATE PROCEDURE (SQL - native) (DB2 SQL) DECLARE CURSOR (DB2 SQL)
Procedure
To handle SQL conditions, use one of the following techniques:
v Include statements called handlers to tell the procedure to perform some other action when an error or warning occurs.
v Include a RETURN statement in an SQL procedure to return an integer status value to the caller.
v Include a SIGNAL statement or a RESIGNAL statement to raise a specific SQLSTATE and to define the message text for that SQLSTATE.
v Force a negative SQLCODE to be returned by a procedure if a trigger calls the procedure.
Handlers in an SQL procedure:
If an error occurs when an SQL procedure executes, the procedure ends unless you include statements to tell the procedure to perform some other action. These statements are called handlers.
Handlers are similar to WHENEVER statements in external SQL application programs. Handlers tell the SQL procedure what to do when an error or warning occurs, or when no more rows are returned from a query. In addition, you can declare handlers for specific SQLSTATEs. You can refer to an SQLSTATE by its number in a handler, or you can declare a name for the SQLSTATE and then use that name in the handler.
The general form of a handler declaration is:
DECLARE handler-type HANDLER FOR condition SQL-procedure-statement;
In general, the way that a handler works is that when an error occurs that matches condition, the SQL-procedure-statement executes. When the SQL-procedure-statement completes, DB2 performs the action that is indicated by handler-type. Types of handlers The handler type determines what happens after the completion of the SQL-procedure-statement. You can declare the handler type to be either CONTINUE or EXIT: CONTINUE Specifies that after SQL-procedure-statement completes, execution continues with the statement after the statement that caused the error. EXIT Specifies that after SQL-procedure-statement completes, execution continues at the end of the compound statement that contains the handler. Example: CONTINUE handler: This handler sets flag at_end when no more rows satisfy a query. The handler then causes execution to continue after the statement that returned no rows.
DECLARE CONTINUE HANDLER FOR NOT FOUND SET at_end=1;
Example: EXIT handler: This handler places the string 'Table does not exist' into output parameter OUT_BUFFER when condition NO_TABLE occurs. NO_TABLE is previously declared as SQLSTATE 42704 (name is an undefined name). The handler then causes the SQL procedure to exit the compound statement in which the handler is declared.
DECLARE NO_TABLE CONDITION FOR '42704';
 . . .
DECLARE EXIT HANDLER FOR NO_TABLE
  SET OUT_BUFFER = 'Table does not exist';
Defining condition handlers that execute more than one statement: A condition handler defines the action that an SQL procedure takes when a particular condition occurs. You must specify the action as a single SQL procedure statement. About this task To define a condition handler that executes more than one statement when the specified condition occurs, specify a compound statement within the declaration of that handler. Example: The following example shows a condition handler that captures the SQLSTATE value and sets a local flag to TRUE.
BEGIN
  DECLARE SQLSTATE CHAR(5);
  DECLARE PrvSQLState CHAR(5) DEFAULT '00000';
  DECLARE ExceptState INT;
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
    BEGIN
      SET PrvSQLState = SQLSTATE;
      SET ExceptState = TRUE;
    END;
  ...
END
Example: The following example declares a condition handler for SQLSTATE 72822. The subsequent SIGNAL statement is within the scope of this condition handler and thus activates this handler. The condition handler tests the value of the SQL variable VAR with an IF statement. Depending on the value of VAR, the SQLSTATE is changed and the message text is set.
DECLARE EXIT HANDLER FOR SQLSTATE '72822'
  IF ( VAR = 'OK' ) THEN
    RESIGNAL SQLSTATE '72623'
      SET MESSAGE_TEXT = 'Got SQLSTATE 72822';
  ELSE
    RESIGNAL SQLSTATE '72319'
      SET MESSAGE_TEXT = VAR;
  END IF;

SIGNAL SQLSTATE '72822';
Controlling how errors are handled within different scopes in an SQL procedure: You can use nested compound statements in an SQL procedure to specify that errors be handled differently within different scopes. You can also ensure that condition handlers are checked only with a particular compound statement.
Procedure To control how errors are handled within different scopes in an SQL procedure: 1. Optional: Declare a condition by specifying a DECLARE CONDITION statement within the compound statement in which you want to reference it. You can reference a condition in the declaration of a condition handler, a SIGNAL statement, or a RESIGNAL statement. Restriction: If multiple conditions with that name exist within the same scope, you cannot explicitly refer to a condition that is not the most local in scope. DB2 uses the condition in the innermost compound statement. 2. Declare a condition handler by specifying a DECLARE HANDLER statement within the compound statement to which you want the condition handler to apply. Within the declaration of the condition handler, you can specify a previously defined condition. Restriction: Condition handlers that are declared in the same compound statement cannot handle conditions encountered in each other or themselves. Results Example: In the following example, a condition with the name ABC is declared twice, and a condition named XYZ is declared once.
CREATE PROCEDURE...
  DECLARE ABC CONDITION...
  DECLARE XYZ CONDITION...
  BEGIN
    DECLARE ABC CONDITION...
    SIGNAL ABC;        -- 1
  END;
  SIGNAL ABC;          -- 2
The following notes refer to the preceding example: 1. ABC refers to the condition that is declared in the innermost block. If this statement were changed to SIGNAL XYZ, XYZ would refer to the XYZ condition that is declared in the outermost block. 2. ABC refers to the condition that is declared in the outermost block. Example: The following example contains multiple declarations of a condition with the name FOO, and a single declaration of a condition with the name GORP.
CREATE PROCEDURE MYTEST (INOUT A CHAR(1), INOUT B CHAR(1))
L1: BEGIN
  DECLARE GORP CONDITION FOR SQLSTATE '33333';
                    -- defines a condition with the name GORP for SQLSTATE 33333
  DECLARE EXIT HANDLER FOR GORP
                    --defines a condition handler for SQLSTATE 33333
  L2: BEGIN
    DECLARE FOO CONDITION FOR SQLSTATE '12345';
                    --defines a condition with the name FOO for SQLSTATE 12345
    DECLARE CONTINUE HANDLER FOR FOO
                    --defines a condition handler for SQLSTATE 12345
    L3: BEGIN
      SET A = A;
      ...more statements...
    END L3;
    SET B = B;
    IF...
      SIGNAL FOO;   --raises SQLSTATE 12345
    ELSEIF
      SIGNAL GORP;  --raises SQLSTATE 33333
    END IF;
  END L2;

  L4: BEGIN
    DECLARE FOO CONDITION
      FOR SQLSTATE '54321';    --defines a condition with the name FOO for SQLSTATE 54321
    DECLARE EXIT HANDLER FOR FOO...;  --defines a condition handler for SQLSTATE 54321

    SIGNAL FOO SET MESSAGE_TEXT = ...;  --raises SQLSTATE 54321

    L5: BEGIN
      DECLARE FOO CONDITION
        FOR SQLSTATE '99999';  --defines a condition with the name FOO for SQLSTATE 99999
      ...more statements...
    END L5;
  END L4;

  --At this point, the procedure cannot reference FOO, because this condition is not defined
  --in this outer scope

END L1
Example: In the following example, the compound statement with the label OUTER contains two other compound statements: INNER1A and INNER1B. The INNER1A compound statement contains another compound statement, which has the label INNER1A2, and the declaration for a condition handler HINNER1A. The body of the condition handler HINNER1A contains another compound statement, which defines another condition handler, HINNER1A_HANDLER.
OUTER: BEGIN
  -- Handler for OUTER
  DECLARE ... HANDLER                     -- HOUTER
    BEGIN
      :
    END;   -- End of handler
  :
  -- Level 1 - first compound statement
  INNER1A: BEGIN
    -- Handler for INNER1A
    DECLARE ... HANDLER                   -- HINNER1A
      BEGIN
        -- Handler for handler HINNER1A
        DECLARE ... HANDLER               -- HINNER1A_HANDLER
          BEGIN
            :
          END;   -- End of handler
        :        -- stmt that gets condition
        :        -- more statements in handler
      END;   -- End of HINNER1A handler
    INNER1A2: BEGIN
      DECLARE ... HANDLER                 -- HINNER1A2
        BEGIN
          :
        END;   -- End of handler
      :        -- statement that gets condition
      :        -- statement after statement
      :        -- that encountered condition
    END INNER1A2;
    :        -- statements in INNER1A
  END INNER1A;
  -- Level 1 - second compound statement
  INNER1B: BEGIN
    -- Handler for INNER1B
    DECLARE ... HANDLER                   -- HINNER1B
      BEGIN
        -- Handler for HINNER1B
        DECLARE ... HANDLER               -- HINNER1B_HANDLER
          BEGIN
            :
          END;   -- End of handler
        :        -- statements in handler
      END;   -- End of HINNER1B handler
    :        -- statements in INNER1B
  END INNER1B;
  :        -- statements in OUTER
END OUTER;
The following notes apply to the preceding example: 1. If an exception, warning, or NOT FOUND condition occurs within the INNER1A2 compound statement, the most appropriate handler within that compound statement is activated to handle the condition. Then, one of the following actions occurs depending on the type of condition handler: v If the condition handler (HINNER1A2) is an exit handler, control is returned to the end of the compound statement that contained the condition handler. v If the condition handler (HINNER1A2) is a continue handler, processing continues with the statement after the statement that encountered the condition. If no appropriate handler exists in the INNER1A2 compound statement, DB2 considers the following handlers in the specified order: a. The most appropriate handler within the INNER1A compound statement. b. The most appropriate handler within the OUTER compound statement. If no appropriate handler exists in the OUTER compound statement, the condition is an unhandled condition. If the condition is an exception condition, the procedure terminates and returns an unhandled condition to the invoking application. If the condition is a warning or NOT FOUND condition, the procedure returns the unhandled warning condition to the invoking application. 2. If an exception, warning, or NOT FOUND condition occurs within the body of the condition handler HINNER1A, and the condition handler HINNER1A_HANDLER is the most appropriate handler for the exception, that handler is activated. Otherwise, the most appropriate handler within the OUTER compound statement handles the condition. If no appropriate handler exists within the OUTER compound statement, the condition is treated as an unhandled condition.
Chapter 10. Creating and modifying DB2 objects
Example: In the following example, when statement2 results in a NOT FOUND condition, the appropriate condition handler is activated to handle the condition. When the condition handler completes, the compound statement that contains that condition handler terminates, because the condition handler is an EXIT handler. Processing then continues with statement4.
BEGIN
  DECLARE EXIT HANDLER FOR NOT FOUND
    SET OUT_OF_DATA_FLAG = ON;
  statement1...
  statement2... --assume that this statement results in a NOT FOUND condition
  statement3...
END;
statement4 ...
Example: In the following example, DB2 checks for SQLSTATE 22H11 only for statements inside the INNER compound statement. DB2 checks for SQLEXCEPTION for all statements in both the OUTER and INNER blocks.
OUTER: BEGIN
  DECLARE var1 INT;
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
    RETURN -3;
  INNER: BEGIN
    DECLARE EXIT HANDLER FOR SQLSTATE '22H11'
      RETURN -1;
    DECLARE C1 CURSOR FOR SELECT col1 FROM table1;
    OPEN C1;
    CLOSE C1;
    :
    : -- more statements
  END INNER;
  :
  : -- more statements
END OUTER;
Example: In the following example, DB2 checks for SQLSTATE 42704 only for statements inside the A compound statement.
CREATE PROCEDURE EXIT_TEST ()
 LANGUAGE SQL
 BEGIN
   DECLARE OUT_BUFFER VARCHAR(80);
   DECLARE NO_TABLE CONDITION FOR SQLSTATE '42704';
   A: BEGIN
     DECLARE EXIT HANDLER FOR NO_TABLE
       BEGIN
         SET OUT_BUFFER = 'Table does not exist';
       END;
     -- Drop potentially nonexistent table:
     DROP TABLE JAVELIN;
     B: SET OUT_BUFFER = 'Table dropped successfully';
   END;
   -- Copy OUT_BUFFER to some message table:
   C: INSERT INTO MESSAGES VALUES (OUT_BUFFER);
 END
The following notes describe a possible flow for the preceding example: 1. A nested compound statement with label A confines the scope of the NO_TABLE exit handler to the statements that are specified in the A compound statement.
2. If the table JAVELIN does not exist, the DROP statement raises the NO_TABLE condition. 3. The exit handler for NO_TABLE is activated. 4. The variable OUT_BUFFER is set to the string 'Table does not exist.' 5. Execution continues with the INSERT statement. No more statements in the A compound statement are processed. Example: The following example illustrates the scope of different condition handlers.
CREATE PROCEDURE ERROR_HANDLERS(IN PARAM INTEGER)
 LANGUAGE SQL
 OUTER: BEGIN
   DECLARE I INTEGER;
   DECLARE SQLSTATE CHAR(5) DEFAULT '00000';
   DECLARE EXIT HANDLER FOR SQLSTATE VALUE '38H02',
                            SQLSTATE VALUE '38H04',
                            SQLSTATE VALUE '38HI4',
                            SQLSTATE VALUE '38H06'
     OUTER_HANDLER: BEGIN
       DECLARE TEXT VARCHAR(70);
       SET TEXT = SQLSTATE || ' RECEIVED AND MANAGED BY OUTER ERROR HANDLER ';
       RESIGNAL SQLSTATE VALUE '38HE0' SET MESSAGE_TEXT = TEXT;
     END OUTER_HANDLER;
   INNER: BEGIN
     DECLARE EXIT HANDLER FOR SQLSTATE VALUE '38H03'
       RESIGNAL SQLSTATE VALUE '38HI3'
         SET MESSAGE_TEXT = '38H03 MANAGED BY INNER ERROR HANDLER';
     DECLARE EXIT HANDLER FOR SQLSTATE VALUE '38H04'
       RESIGNAL SQLSTATE VALUE '38HI4'
         SET MESSAGE_TEXT = '38H04 MANAGED BY INNER ERROR HANDLER';
     DECLARE EXIT HANDLER FOR SQLSTATE VALUE '38H05'
       RESIGNAL SQLSTATE VALUE '38HI5'
         SET MESSAGE_TEXT = '38H05 MANAGED BY INNER ERROR HANDLER';
     CASE PARAM
       WHEN 1 THEN  -- (1)
         SIGNAL SQLSTATE VALUE '38H01'
           SET MESSAGE_TEXT = 'EXAMPLE 1: ERROR SIGNALED FROM INNER COMPOUND STMT';
       WHEN 2 THEN  -- (2)
         SIGNAL SQLSTATE VALUE '38H02'
           SET MESSAGE_TEXT = 'EXAMPLE 2: ERROR SIGNALED FROM INNER COMPOUND STMT';
       WHEN 3 THEN  -- (3)
         SIGNAL SQLSTATE VALUE '38H03'
           SET MESSAGE_TEXT = 'EXAMPLE 3: ERROR SIGNALED FROM INNER COMPOUND STMT';
       WHEN 4 THEN  -- (4)
         SIGNAL SQLSTATE VALUE '38H04'
           SET MESSAGE_TEXT = 'EXAMPLE 4: ERROR SIGNALED FROM INNER COMPOUND STMT';
       ELSE
         SET I = 1;  /*Do not do anything */
     END CASE;
   END INNER;
   CASE PARAM
     WHEN 5 THEN  -- (5)
       SIGNAL SQLSTATE VALUE '38H05'
         SET MESSAGE_TEXT = 'EXAMPLE 5: ERROR SIGNALED FROM OUTER COMPOUND STMT';
     WHEN 6 THEN  -- (6)
       SIGNAL SQLSTATE VALUE '38H06'
         SET MESSAGE_TEXT = 'EXAMPLE 6: ERROR SIGNALED FROM OUTER COMPOUND STMT';
     ELSE  -- (7)
       SET I = 1;  /*Do not do anything */
   END CASE;
 END OUTER
Expected behavior:
v SQLSTATE 38H01 is signaled from the INNER compound statement. Because no appropriate handler exists, the procedure terminates and returns the unhandled exception condition, 38H01 with SQLCODE -438, to the calling application.
v SQLSTATE 38H02 is signaled from the INNER compound statement. The condition handler in the OUTER compound statement is activated. A RESIGNAL statement, with SQLSTATE 38HE0, is issued from within the body of the condition handler. This exception causes control to be returned to the end of the OUTER compound statement with exception condition 38HE0 and SQLCODE -438. The procedure terminates and returns the unhandled condition to the calling application.
v SQLSTATE 38H03 is signaled from the INNER compound statement. A condition handler within the INNER compound statement is activated. A RESIGNAL statement, with SQLSTATE 38HI3, is issued from within the body of the condition handler. Because no appropriate handler exists, the procedure terminates and returns the unhandled exception condition, 38HI3 with SQLCODE -438, to the calling application.
v SQLSTATE 38H04 is signaled from the INNER compound statement. A condition handler within the INNER compound statement is activated. A RESIGNAL statement, with SQLSTATE 38HI4, is issued from within the body of the condition handler. A condition handler in the OUTER compound statement is activated. A RESIGNAL statement, with SQLSTATE 38HE0, is issued from within the body of the condition handler. This exception causes control to be returned to the end of the OUTER compound statement with exception condition 38HE0 and SQLCODE -438. The procedure terminates and returns the unhandled condition to the calling application.
v SQLSTATE 38H05 is signaled from the OUTER compound statement. Because no appropriate handler exists, the procedure terminates and returns the unhandled exception condition, 38H05 with SQLCODE -438, to the calling application.
v SQLSTATE 38H06 is signaled from the OUTER compound statement. A condition handler in the OUTER compound statement is activated. A RESIGNAL statement, with SQLSTATE 38HE0, is issued from within the body of the condition handler. This exception causes control to be returned to the end of the OUTER compound statement with exception condition 38HE0 and SQLCODE -438. The procedure terminates and returns the unhandled condition to the calling application.
v The ELSE clause of the CASE statement executes and processes the SET statement. A successful completion code is returned to the calling application.
Example: In the following example SQL procedure, the condition handler for exception1 is not within the scope of the condition handler for exception0. If exception condition exception1 is raised in the body of the condition handler for exception0, no appropriate handler exists, and the procedure terminates with an unhandled exception.
CREATE PROCEDURE divide ( .....)
 LANGUAGE SQL
 CONTAINS SQL
 BEGIN
   DECLARE dn_too_long CHAR(5) DEFAULT 'abcde';
   -- Declare condition names --------------------------
   DECLARE exception0 CONDITION FOR SQLSTATE '22001';
   DECLARE exception1 CONDITION FOR SQLSTATE 'xxxxx';
   -- Declare cursors ----------------------------------
   DECLARE cursor1 CURSOR WITH RETURN FOR
     SELECT * FROM dept;
   -- Declare handlers ---------------------------------
   DECLARE CONTINUE HANDLER FOR exception0
     BEGIN
       some SQL statement that causes an error xxxxx
     END
   DECLARE CONTINUE HANDLER FOR exception1
     BEGIN
       ...
     END
   -- Mainline of procedure ----------------------------
   INSERT INTO DEPT (DEPTNO)
     VALUES (dn_too_long);  -- Assume that this statement results in SQLSTATE 22001
   OPEN CURSOR1;
 END
Retrieving diagnostic information by using GET DIAGNOSTICS in a handler: Handlers specify the action that an SQL procedure takes when a particular error or condition occurs. In some cases, you want to retrieve additional diagnostic information about the error or warning condition. About this task You can include a GET DIAGNOSTICS statement in a handler to retrieve error or warning information. If you include GET DIAGNOSTICS, it must be the first statement that is specified in the handler. Example: Using GET DIAGNOSTICS to retrieve message text: Suppose that you create an SQL procedure, named divide1, that computes the result of the division of two integers. You include GET DIAGNOSTICS to return the text of the division error message as an output parameter:
CREATE PROCEDURE divide1
 (IN numerator INTEGER, IN denominator INTEGER,
  OUT divide_result INTEGER, OUT divide_error VARCHAR(1000))
 LANGUAGE SQL
 BEGIN
   DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
     GET DIAGNOSTICS CONDITION 1 divide_error = MESSAGE_TEXT;
   SET divide_result = numerator / denominator;
 END
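For example, a caller might invoke divide1 as follows; the host variable names are illustrative:
   EXEC SQL CALL divide1(:num, :den, :result, :errmsg);
If :den is 0, the division raises an SQL error, the CONTINUE handler captures the message text with GET DIAGNOSTICS, and the text is returned to the caller in :errmsg.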
Ignoring a condition in an SQL procedure: You can specify that you want to ignore errors or warnings within a particular scope of statements in an SQL procedure. However, do so with caution. Procedure To ignore a condition in an SQL procedure: Declare a condition handler that contains an empty compound statement. Example The following example shows a condition handler that is declared as a way of ignoring a condition. Assume that your SQL procedure inserts rows into a table that has a unique column. If the value to be inserted for that column already exists in the table, the row is not inserted. However, in this case, you do not want DB2 to notify the application about this condition, which is indicated by SQLSTATE 23505.
DECLARE CONTINUE HANDLER FOR SQLSTATE '23505'
  BEGIN  -- ignore error for duplicate value
  END;
Related concepts: Handlers in an SQL procedure on page 557 Related reference: SQLSTATE values and common error codes (DB2 Codes)
Raising a condition within an SQL procedure by using the SIGNAL or RESIGNAL statements
Within an SQL procedure, you can force a particular condition to occur with a specific SQLSTATE and message text.
The following table summarizes the differences between issuing a RESIGNAL or SIGNAL statement within the body of a condition handler. For each row in the table, assume that the diagnostics area contains the following information when the RESIGNAL or SIGNAL statement is issued:
   RETURNED_SQLSTATE   xxxxx
   MESSAGE_TEXT        this is my message
Table 92. Example RESIGNAL and SIGNAL statements
Specify a new condition? | Specify message text? | Example statement                                   | Resulting diagnostics area
No                       | No                    | RESIGNAL (1)                                        | RETURNED_SQLSTATE xxxxx; MESSAGE_TEXT 'this is my message'
Yes                      | No                    | RESIGNAL SQLSTATE '98765' (2)                       | RETURNED_SQLSTATE 98765; MESSAGE_TEXT 'APPLICATION RAISED ERROR WITH DIAGNOSTIC TEXT: this is my message'
Yes                      | Yes                   | SIGNAL SQLSTATE '98765' SET MESSAGE_TEXT = 'xyz' (3) | RETURNED_SQLSTATE 98765; MESSAGE_TEXT 'APPLICATION RAISED ERROR WITH DIAGNOSTIC TEXT: xyz'
Note:
1. This statement raises the current condition with the existing SQLSTATE, SQLCODE, message text, and tokens.
2. This statement raises a new condition (SQLSTATE '98765'). Existing message text and tokens are reset. The SQLCODE is set to -438 for an error or 438 for a warning.
3. This statement raises a new condition (SQLSTATE '98765') with new message text ('xyz'). The SQLCODE is set to -438 for an error or 438 for a warning.

Example of the SIGNAL statement in an SQL procedure: You can use the SIGNAL statement anywhere within an SQL procedure to raise a particular condition. The following example uses an ORDERS table and a CUSTOMERS table that are defined in the following way:
CREATE TABLE ORDERS
  (ORDERNO    INTEGER NOT NULL,
   PARTNO     INTEGER NOT NULL,
   ORDER_DATE DATE DEFAULT,
   CUSTNO     INTEGER NOT NULL,
   QUANTITY   SMALLINT NOT NULL,
   CONSTRAINT REF_CUSTNO FOREIGN KEY (CUSTNO)
     REFERENCES CUSTOMERS (CUSTNO) ON DELETE RESTRICT,
   PRIMARY KEY (ORDERNO,PARTNO));
CREATE TABLE CUSTOMERS (CUSTNO INTEGER NOT NULL, CUSTNAME VARCHAR(30), CUSTADDR VARCHAR(80), PRIMARY KEY (CUSTNO));
Example: Using SIGNAL to set message text Suppose that you have an SQL procedure for an order system that signals an application error when a customer number is not known to the application. The ORDERS table has a foreign key to the CUSTOMERS table, which requires that the CUSTNO exist in the CUSTOMERS table before an order can be inserted:
CREATE PROCEDURE submit_order
  (IN ONUM INTEGER, IN PNUM INTEGER,
   IN CNUM INTEGER, IN QNUM INTEGER)
  LANGUAGE SQL
  MODIFIES SQL DATA
BEGIN
  DECLARE EXIT HANDLER FOR SQLSTATE VALUE '23503'
    SIGNAL SQLSTATE '75002'
      SET MESSAGE_TEXT = 'Customer number is not known';
  INSERT INTO ORDERS (ORDERNO, PARTNO, CUSTNO, QUANTITY)
    VALUES (ONUM, PNUM, CNUM, QNUM);
END
In this example, the SIGNAL statement is in the handler. However, you can also use the SIGNAL statement to invoke a handler when a condition occurs that will result in an error.
Related concepts: Example of the RESIGNAL statement in a handler

Example of the RESIGNAL statement in a handler: You can use the RESIGNAL statement in an SQL procedure to assign a different value to the condition that activated the handler.

Example: Using RESIGNAL to set an SQLSTATE value. Suppose that you create an SQL procedure, named divide2, that computes the result of the division of two integers. You include SIGNAL to invoke the handler with an overflow condition that is caused by a zero divisor, and you include RESIGNAL to set a different SQLSTATE value for that overflow condition:
CREATE PROCEDURE divide2
  (IN numerator INTEGER, IN denominator INTEGER,
   OUT divide_result INTEGER)
  LANGUAGE SQL
BEGIN
  DECLARE overflow CONDITION FOR SQLSTATE '22003';
  DECLARE CONTINUE HANDLER FOR overflow
    RESIGNAL SQLSTATE '22375';
  IF denominator = 0 THEN
    SIGNAL overflow;
  ELSE
    SET divide_result = numerator / denominator;
  END IF;
END
Example: RESIGNAL in a nested compound statement If the following SQL procedure is invoked with argument values 1, 0, and 0, the procedure returns a value of 2 for RC and sets the oparm1 parameter to 650.
CREATE PROCEDURE resig4
  (IN iparm1 INTEGER, INOUT oparm1 INTEGER, INOUT rc INTEGER)
  LANGUAGE SQL
A1: BEGIN
  DECLARE c1 INT DEFAULT 1;
  DECLARE CONTINUE HANDLER FOR SQLSTATE VALUE '01ABX'
    BEGIN
      .... some other statements
      SET RC = 3;                                    -- 6
    END;
A2: SET oparm1 = 5;                                  -- 1
A3: BEGIN
      DECLARE c1 INT DEFAULT 1;
      DECLARE CONTINUE HANDLER FOR SQLSTATE VALUE '01ABC'
        BEGIN
          SET RC = 1;                                -- 4
          RESIGNAL SQLSTATE VALUE '01ABX'
            SET MESSAGE_TEXT = 'get out of here';    -- 5
          SET RC = 2;                                -- 7
        END;
A7:   SET oparm1 = oparm1 + 110;                     -- 2
      SIGNAL SQLSTATE VALUE '01ABC'
        SET MESSAGE_TEXT = 'yikes';                  -- 3
      SET oparm1 = oparm1 + 215;                     -- 8
    END;
  SET oparm1 = oparm1 + 320;                         -- 9
END
The following notes refer to the numbered comments in the preceding example:
1. oparm1 is initially set to 5.
2. oparm1 is incremented by 110. The value of oparm1 is now 115.
3. The SIGNAL statement causes the condition handler that is contained in the A3 compound statement to be activated.
4. In this condition handler, RC is set to 1.
5. The RESIGNAL statement changes the SQLSTATE to '01ABX'. This value causes the continue handler in the A1 compound statement to be activated.
6. RC is set to 3 in this condition handler. Because this condition handler is a continue handler, when the handler action completes, control returns to the SET statement after the RESIGNAL statement.
7. RC is set to 2 in this condition handler. Because this condition handler is a continue handler, control returns to the SET statement that follows the SIGNAL statement that caused the condition handler to be activated.
8. oparm1 is incremented by 215. The value of oparm1 is now 330.
9. oparm1 is incremented by 320. The value of oparm1 is now 650.

How SIGNAL and RESIGNAL statements affect the diagnostics area: When you issue a SIGNAL statement, a new logical diagnostics area is created. When you issue a RESIGNAL statement, the current diagnostics area is updated.
When you issue a SIGNAL statement, a new diagnostics area is logically created. In that diagnostics area, RETURNED_SQLSTATE is set to the SQLSTATE or condition name specified. If you specified message text as part of the SIGNAL statement, MESSAGE_TEXT in the diagnostics area is also set to the specified value. When you issue a RESIGNAL statement with a SQLSTATE value, condition name, or message text, the current diagnostics area is updated with the specified information.
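For illustration, the following sketch (with hypothetical procedure, parameter, and SQLSTATE values) shows a SIGNAL statement creating a new diagnostics area and a handler reading that area with GET DIAGNOSTICS:

CREATE PROCEDURE CHECK_QTY (IN QTY INTEGER, OUT MSG VARCHAR(100))
  LANGUAGE SQL
BEGIN
  -- The handler retrieves the MESSAGE_TEXT that the SIGNAL statement
  -- placed in the newly created diagnostics area.
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
    GET DIAGNOSTICS CONDITION 1 MSG = MESSAGE_TEXT;
  IF QTY < 0 THEN
    SIGNAL SQLSTATE '75001' SET MESSAGE_TEXT = 'Quantity must not be negative';
  END IF;
END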
case is at location SAN_JOSE. The subsequent BIND command creates a copy of the package for version ABC of the procedure TEST.MYPROC. This package is created at location SAN_JOSE and is used by DB2 when this procedure is executed.
CREATE PROCEDURE TEST.MYPROC VERSION ABC LANGUAGE SQL ...
BEGIN
  ...
  CONNECT TO SAN_JOSE
  ...
END

BIND PACKAGE (SAN_JOSE.TEST) COPY(TEST.MYPROC) COPYVER(ABC) ACTION(ADD)
Example: The following native SQL procedure sets the CURRENT PACKAGESET special register to ensure that DB2 uses the package with the collection ID COLL2 for this version of the procedure. Consequently, you must create such a package. The subsequent BIND command creates this package with collection ID COLL2. This package is a copy of the package for version ABC of the procedure TEST.MYPROC. DB2 uses this package to process the SQL statements in this procedure.
CREATE PROCEDURE TEST.MYPROC VERSION ABC LANGUAGE SQL ...
BEGIN
  ...
  SET CURRENT PACKAGESET = 'COLL2'
  ...
END

BIND PACKAGE(COLL2) COPY(TEST.MYPROC) COPYVER(ABC) ACTION(ADD) QUALIFIER(XYZ)
Related tasks: Regenerating an existing version of a native SQL procedure on page 578; Replacing copies of a package for a version of a native SQL procedure
Related reference: ALTER PROCEDURE (SQL - native) (DB2 SQL)

Replacing copies of a package for a version of a native SQL procedure: When you change a version of a native SQL procedure and the ALTER PROCEDURE REPLACE statement contains certain options, you must replace any local or remote copies of the package that exist for that version of the procedure.

About this task

If you specify any of the following ALTER PROCEDURE options, you must replace copies of the package:
v REPLACE VERSION
v REGENERATE
v DISABLE DEBUG MODE
v QUALIFIER
v PACKAGE OWNER
v DEFER PREPARE
v NODEFER PREPARE
v CURRENT DATA
v DEGREE
v DYNAMICRULES
v APPLICATION ENCODING SCHEME
v WITH EXPLAIN
v WITHOUT EXPLAIN
v WITH IMMEDIATE WRITE
v WITHOUT IMMEDIATE WRITE
v ISOLATION LEVEL
v WITH KEEP DYNAMIC
v WITHOUT KEEP DYNAMIC
v OPTHINT
v SQL PATH
v RELEASE AT COMMIT
v RELEASE AT DEALLOCATE
v REOPT
v VALIDATE RUN
v VALIDATE BIND
v ROUNDING
v DATE FORMAT
v DECIMAL
v FOR UPDATE CLAUSE OPTIONAL
v FOR UPDATE CLAUSE REQUIRED
v TIME FORMAT

To replace copies of a package for a version of a native SQL procedure, specify the BIND COPY ACTION(REPLACE) command with the appropriate package name and version ID.
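For example, a remote copy of the package for version ABC of procedure TEST.MYPROC (the names come from the earlier examples in this section) might be replaced with a command along these lines; adjust the location, collection, and version to your own environment:

BIND PACKAGE(SAN_JOSE.TEST) COPY(TEST.MYPROC) COPYVER(ABC) ACTION(REPLACE)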
Procedure
To create a new version of a procedure: Issue the ALTER PROCEDURE statement with the following items: v The name of the native SQL procedure for which you want to create a new version. v The ADD VERSION clause with a name for the new version. v The parameter list of the procedure that you want to alter. This parameter list must be the same as the original procedure.
v Any procedure options. These options can be different than the options for other versions of this procedure. If you do not specify a value for a particular option, the default value is used, regardless of the value that is used by the current active version of this procedure. v A procedure body. This body can be different than the procedure body for other versions of this procedure.
Example
For example, the following CREATE PROCEDURE statement defines a new native SQL procedure called UPDATE_BALANCE. The version of the procedure is V1, and it is the active version.
CREATE PROCEDURE UPDATE_BALANCE
  (IN CUSTOMER_NO INTEGER,
   IN AMOUNT DECIMAL(9,2))
  VERSION V1
  LANGUAGE SQL
  READS SQL DATA
BEGIN
  DECLARE CUSTOMER_NAME CHAR(20);
  SELECT CUSTNAME INTO CUSTOMER_NAME
    FROM ACCOUNTS
    WHERE CUSTNO = CUSTOMER_NO;
END
The following ALTER PROCEDURE statement creates a new version of the UPDATE_BALANCE procedure. The version name of the new version is V2. This new version has a different procedure body.
ALTER PROCEDURE UPDATE_BALANCE
  ADD VERSION V2
  (IN CUSTOMER_NO INTEGER,
   IN AMOUNT DECIMAL(9,2))
  MODIFIES SQL DATA
BEGIN
  UPDATE ACCOUNTS
    SET BAL = BAL + AMOUNT
    WHERE CUSTNO = CUSTOMER_NO;
END
What to do next
After you create a new version, if you want that version to be invoked by all subsequent calls to this procedure, you need to make that version the active version. Multiple versions of native SQL procedures: You can define multiple versions of a native SQL procedure. DB2 maintains this version information for you. One or more versions of a procedure can exist at any point in time at the current server, but only one version of a procedure is considered the active version. When you first create a procedure, that initial version is considered the active version of the procedure. Using multiple versions of a native SQL procedure has the following advantages:
v You can keep the existing version of a procedure active while you create another version. When the other version is ready, you can make it the active one. v When you make another version of a procedure active, you do not need to change any existing calls to that procedure. v You can easily switch back to a previous version of a procedure if the version that you switched to does not work as planned. v You can drop an unneeded version of a procedure. A new version of a native SQL procedure can have different values for the following items: v Parameter names v Procedure options v Procedure body Restrictions: v A new version of a native SQL procedure cannot have different values for the following items: Number of parameters Parameter data types Parameter attributes for character data Parameter CCSIDs Whether a parameter is an input or output parameter, as defined by the IN, OUT, and INOUT options If you need to specify different values for any of the preceding items, create a new native SQL procedure, instead of a new version.
Procedure
To deploy a native SQL procedure to another DB2 for z/OS server: Issue the BIND PACKAGE command with the following options: DEPLOY Specify the name of the procedure whose logic you want to use on the target server. Tip: When specifying the parameters for the DEPLOY option, consider the following naming rules for native SQL procedures:
v The collection ID is the same as the schema name in the original CREATE PROCEDURE statement.
v The package ID is the same as the procedure name in the original CREATE PROCEDURE statement.
COPYVER Specify the version of the procedure whose logic you want to use on the target server.
ACTION(ADD) or ACTION(REPLACE) Specify whether you want DB2 to create a new version of the native SQL procedure and its associated package or to replace the specified version.
Optionally, you can also specify the bind options QUALIFIER or OWNER if you want to change them.
Example
Example of deploying the same version of a procedure at another location: The following BIND command creates a native SQL procedure with the name PRODUCTION.MYPROC at the CHICAGO location. This procedure is created from the procedure TEST.MYPROC at the current site. Both native SQL procedures have the same content and version, ABC. However, the package for the procedure CHICAGO.PRODUCTION.MYPROC has XYZ as its qualifier.
CREATE PROCEDURE TEST.MYPROC VERSION ABC LANGUAGE SQL ...
BEGIN
  ...
END

BIND PACKAGE(CHICAGO.PRODUCTION) DEPLOY(TEST.MYPROC) COPYVER(ABC) ACTION(ADD) QUALIFIER(XYZ)
Example of replacing a version of a procedure at another location: The following BIND command replaces version ABC of the procedure PRODUCTION.MYPROC at the CHICAGO location with version ABC of the procedure TEST.MYPROC at the current site.
BIND PACKAGE(CHICAGO.PRODUCTION) DEPLOY(TEST.MYPROC) COPYVER(ABC) ACTION(REPLACE) REPLVER(ABC)
Related concepts: Communications database for the server (Managing Security) Related reference: BIND and REBIND options (DB2 Commands) BIND PACKAGE (DSN) (DB2 Commands) Scenario for deployment (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
Procedure
To migrate an external SQL procedure to a native SQL procedure:
1. Find and save the existing CREATE PROCEDURE and GRANT EXECUTE statements for the existing external SQL procedure.
2. Drop the existing external SQL procedure by using the DROP PROCEDURE statement.
3. Re-create the procedure as a native SQL procedure by using the same CREATE PROCEDURE statement that you used to originally create the procedure, with the following changes:
v If the procedure was defined with the options FENCED or EXTERNAL, remove these keywords.
v Either remove the WLM ENVIRONMENT keyword, or add the FOR DEBUG MODE clause.
v If the procedure body contains statements with unqualified names that could refer to either a column or an SQL variable or parameter, qualify these names. Otherwise, you might need to change the statement. DB2 resolves these names differently depending on whether the procedure is an external SQL procedure or a native SQL procedure. For external SQL procedures, DB2 first treats the name as a variable or parameter if one exists with that name. For native SQL procedures, DB2 first treats the name as a column if a column exists with that name. For example, consider the following statement:
CREATE PROCEDURE P1 (INOUT C1 INT) ... SELECT C1 INTO xx FROM T1
In the preceding example, if P1 is an external SQL procedure, C1 is a parameter. For native SQL procedures, C1 is a column in table T1. If such a column does not exist, C1 is a parameter. 4. Issue the same GRANT EXECUTE statements that you used to originally grant privileges for this stored procedure. 5. Increase the value of the TIME parameter on the job statement for applications that call stored procedures. Important: This change is necessary because time for SQL external stored procedures is charged to the WLM address space, while time for native SQL stored procedures is charged to the address space of the task. 6. Test your new native SQL procedure.
Related tasks: Implementing DB2 stored procedures (DB2 Administration Guide) Related reference: CREATE PROCEDURE (SQL - external) (DB2 SQL) CREATE PROCEDURE (SQL - native) (DB2 SQL) DROP (DB2 SQL) GRANT (function or procedure privileges) (DB2 SQL)
Using the DB2 precompiler to assist you in converting an external SQL procedure to a native SQL procedure
The DB2 precompiler can be useful when considering any conversion of an external SQL procedure to a native SQL procedure.
Procedure
To inspect the quality of native SQL PL source coding using the DB2 precompiler: 1. Copy the original SQL PL source code to a FB80 data set. Reformat the source as needed to fit within the precompiler margins. 2. Precompile the SQL PL source by executing program DSNHPSM with the HOST(SQLPL) option. 3. Inspect the produced listing (SYSPRINT). Pay attention to error and warning messages. 4. Modify the SQL PL source to address coding problems that are identified by messages in the listing. 5. Repeat steps 1 through 4 until all error and warning messages are resolved. Address informational messages as needed. 6. Copy the modified SQL PL source file back to its original source format, reformatting as needed.
Results
Sample JCL DSNTEJ67 demonstrates this process for an external SQL procedure that was produced using the DB2 SQL procedure processor DSNTPSMP. Related reference: Sample programs to help you prepare and run external SQL procedures on page 594
Procedure
To change an existing version of a native SQL procedure: Issue the ALTER PROCEDURE statement with the REPLACE VERSION clause. Any option that you do not explicitly specify inherits the system default values. This inheritance occurs even if those options were explicitly specified for a prior version by using a CREATE PROCEDURE statement, ALTER PROCEDURE statement, or REBIND command.
Example
The following ALTER PROCEDURE statement updates version V2 of the UPDATE_BALANCE procedure.
ALTER PROCEDURE TEST.UPDATE_BALANCE
  REPLACE VERSION V2
  (IN CUSTOMER_NO INTEGER,
   IN AMOUNT DECIMAL(9,2))
  MODIFIES SQL DATA
  ASUTIME LIMIT 100
BEGIN
  UPDATE ACCOUNTS
    SET BAL = BAL + AMOUNT
    WHERE CUSTNO = CUSTOMER_NO AND CUSTSTAT = 'A';
END
Related tasks: Creating a new version of a native SQL procedure on page 572 Related reference: REBIND PACKAGE (DSN) (DB2 Commands) ALTER PROCEDURE (SQL - native) (DB2 SQL) CREATE PROCEDURE (SQL - native) (DB2 SQL)
Procedure
To regenerate an existing version of a native SQL procedure: 1. Issue the ALTER PROCEDURE statement with the REGENERATE clause and specify the version to be regenerated.
2. If copies of the package for the specified version of the procedure exist at remote sites, replace those packages. Issue the BIND PACKAGE command with the COPY option and appropriate location for each remote package. 3. If copies of the package for the specified version of the procedure exist locally with different schema names, replace those packages. Issue the BIND PACKAGE command with the COPY option and appropriate schema for each local package.
Example
The following ALTER PROCEDURE statement regenerates the active version of the UPDATE_SALARY_1 procedure.
ALTER PROCEDURE UPDATE_SALARY_1 REGENERATE ACTIVE VERSION
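If copies of the package for that version also exist at a remote site or under another schema (steps 2 and 3), each copy might be refreshed with a BIND PACKAGE command of this general form; the location, collection, and version names here are placeholders:

BIND PACKAGE(CHICAGO.TEST) COPY(TEST.UPDATE_SALARY_1) COPYVER(V1) ACTION(REPLACE)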
Procedure
To remove an existing version of a native SQL procedure: Issue the ALTER PROCEDURE statement with the DROP VERSION clause and the name of the version that you want to drop. If you instead want to drop all versions of the procedure, use the DROP statement.

Example of dropping a version that is not active: The following statement drops the OLD_PRODUCTION version of the P1 procedure.
ALTER PROCEDURE P1 DROP VERSION OLD_PRODUCTION
Example of dropping an active version: Assume that the OLD_PRODUCTION version of the P1 procedure is the active version. The following example first switches the active version to NEW_PRODUCTION and then drops the OLD_PRODUCTION version.
ALTER PROCEDURE P1 ACTIVATE VERSION NEW_PRODUCTION; ALTER PROCEDURE P1 DROP VERSION OLD_PRODUCTION;
Related tasks: Designating the active version of a native SQL procedure on page 789
Procedure
To create an external SQL procedure: 1. Use one of the following methods to create the external SQL procedure: v IBM Optim Development Studio. See Developing database routines (IBM Data Studio, IBM Optim Database Administrator, IBM infoSphere Data Architect, IBM Optim Development Studio). v JCL v The DB2 for z/OS SQL procedure processor (DSNTPSMP) The preceding methods that you use to create an external SQL procedure perform the following actions: v Convert the external SQL procedure source statements into a C language program by using the DB2 precompiler v Create an executable load module and a DB2 package from the C language program. v Define the external SQL procedure to DB2 by issuing a CREATE PROCEDURE statement either statically or dynamically. Restriction: If you plan to use the DB2 stored procedure debugger or the Unified Debugger, do not use JCL. Use either IBM Optim Development Studio or DSNTPSMP. If you plan to use IBM Optim Development Studio or DSNTPSMP, you must set up support for external SQL procedures. 2. Authorize the appropriate users to use the stored procedure by issuing the GRANT EXECUTE statement.
Example
For examples of how to prepare and run external SQL procedures, see Sample programs to help you prepare and run external SQL procedures on page 594. Related concepts: SQL procedures on page 542 Related tasks: Implementing DB2 stored procedures (DB2 Administration Guide) Related reference: CREATE PROCEDURE (SQL - external) (DB2 SQL) GRANT (function or procedure privileges) (DB2 SQL)
BIND ON PACKAGE collection-id.*
   Package privilege to use BIND or REBIND to bind packages in the specified collection.
BINDADD
   System privilege to use BIND with the ADD option to create packages and plans.
CREATEIN, ALTERIN, DROPIN ON SCHEMA schema-name
   Schema privileges to create, alter, or drop stored procedures in the specified schema. The BUILDOWNER authorization ID must have the CREATEIN privilege on the schema. You can use an asterisk (*) as the identifier for a schema.
SELECT ON TABLE SYSIBM.SYSROUTINES; SELECT ON TABLE SYSIBM.SYSPARMS; SELECT, INSERT, UPDATE, DELETE ON TABLE SYSIBM.SYSROUTINES_SRC; SELECT, INSERT, UPDATE, DELETE ON TABLE SYSIBM.SYSROUTINES_OPTS; ALL ON TABLE SYSIBM.SYSPSMOUT
   Table privileges to select or delete from, insert into, or update the specified catalog tables.
Privileges for the SQL procedure body (syntax varies depending on the SQL procedure body)
   Any privileges that are required for the SQL statements that are contained within the SQL procedure body. These privileges must be associated with the OWNER authorization-id that is specified in your bind options. The default owner is the user that is invoking DSNTPSMP.
Procedure
To create an external SQL procedure by using DSNTPSMP: 1. Write an application program that calls DSNTPSMP. Include the following items in your program: v A CLOB host variable that contains a CREATE PROCEDURE statement for the external SQL procedure. That statement should include the FENCED keyword or the EXTERNAL keyword, and the procedure body, which is written in SQL. Alternatively, instead of defining a host variable for the CREATE PROCEDURE statement, you can store the statement in a data set member.
v An SQL CALL statement with the BUILD function. The CALL statement should use the proper syntax for invoking DSNTPSMP. Pass the SQL procedure source to DSNTPSMP as one of the following input parameters: SQL-procedure-source Use this parameter if you defined a host variable in your application to contain the CREATE PROCEDURE statement. source-data-set-name Use this parameter if you stored the CREATE PROCEDURE statement in a data set. v Based on the return value from the CALL statement, issue either an SQL COMMIT or a ROLLBACK statement. If the return value is 0 or 4, issue a COMMIT statement. Otherwise, issue a ROLLBACK statement. You must process the result set before issuing the COMMIT or ROLLBACK statement. A QUERYLEVEL request must be followed by the COMMIT statement. 2. Precompile, compile, and link-edit the application program. 3. Bind a package for the application program. 4. Run the application program. Related concepts: SQL procedure body on page 543 Related reference: CREATE PROCEDURE (SQL - external) (DB2 SQL) DB2 for z/OS SQL procedure processor (DSNTPSMP): The SQL procedure processor, DSNTPSMP, is a REXX stored procedure that you can use to prepare an external SQL procedure for execution. You can also use DSNTPSMP to perform selected steps in the preparation process or delete an existing external SQL procedure. DSNTPSMP is the only preparation method for enabling external SQL procedures to be debugged with either the SQL Debugger or the Unified Debugger. DSNTPSMP requires that your system EBCDIC CCSID also be compatible with the C compiler. Using an incompatible CCSID results in compile-time errors. Examples of incompatible CCSIDs include 290, 930, 1026, and 1155. If your system EBCDIC CCSID is not compatible, do not just change it. Contact IBM Software Support for help. Sample startup procedure for a WLM address space for DSNTPSMP: You must run DSNTPSMP in a WLM-established stored procedures address space. You should run only DSNTPSMP in that address space, and you must limit the address space to run only one task concurrently. This example shows how to set up a WLM address space for DSNTPSMP. The following example shows sample JCL for a startup procedure for the address space in which DSNTPSMP runs.
//DSN8WLMP PROC DB2SSN=DSN,NUMTCB=1,APPLENV=WLMTPSMP           1
//*
//WLMTPSMP EXEC PGM=DSNX9WLM,TIME=1440,                        2
//         PARM='&DB2SSN,&NUMTCB,&APPLENV',
//         REGION=0M,DYNAMNBR=10
//STEPLIB  DD DISP=SHR,DSN=DSN810.SDSNEXIT                     3
//         DD DISP=SHR,DSN=DSN810.SDSNLOAD
//         DD DISP=SHR,DSN=CBC.SCCNCMP
//         DD DISP=SHR,DSN=CEE.SCEERUN
//SYSEXEC  DD DISP=SHR,DSN=DSN810.SDSNCLST                     4
//SYSTSPRT DD SYSOUT=A
//CEEDUMP  DD SYSOUT=A
//SYSABEND DD DUMMY
//*
//SQLDBRM  DD DISP=SHR,DSN=DSN910.DBRMLIB.DATA                 5
//SQLCSRC  DD DISP=SHR,DSN=USER.PSMLIB.DATA                    6
//SQLLMOD  DD DISP=SHR,DSN=DSN910.RUNLIB.LOAD                  7
//SQLLIBC  DD DISP=SHR,DSN=CEE.SCEEH.H                         8
//         DD DISP=SHR,DSN=CEE.SCEEH.SYS.H
//SQLLIBL  DD DISP=SHR,DSN=CEE.SCEELKED                        9
//         DD DISP=SHR,DSN=DSN910.SDSNLOAD
//SYSMSGS  DD DISP=SHR,DSN=CEE.SCEEMSGP(EDCPMSGE)              10
//*
//* DSNTPSMP Configuration File - CFGTPSMP (optional)          11
//* A site-provided sequential data set or member, used to
//* define customized operation of DSNTPSMP in this APPLENV
//*
//* CFGTPSMP DD DISP=SHR,DSN=
//*
//SQLSRC   DD UNIT=SYSALLDA,SPACE=(800,(20,20)),               12
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLPRINT DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
//SQLTERM  DD UNIT=SYSALLDA,SPACE=(4000,(20,20)),
//         DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
//SQLOUT   DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLCPRT  DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=VB,LRECL=137,BLKSIZE=882)
//SQLUT1   DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLUT2   DD UNIT=SYSALLDA,SPACE=(16000,(20,20)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLCIN   DD UNIT=SYSALLDA,SPACE=(8000,(20,20))
//SQLLIN   DD UNIT=SYSALLDA,SPACE=(3200,(30,30)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SYSMOD   DD UNIT=SYSALLDA,SPACE=(8000,(20,20)),
//         DCB=(RECFM=FB,LRECL=80,BLKSIZE=3200)
//SQLDUMMY DD DUMMY
Notes:
1. APPLENV specifies the application environment in which DSNTPSMP runs. To ensure that DSNTPSMP always uses the correct data sets and parameters for preparing each external SQL procedure, you can set up different application environments for preparing stored procedures with different program preparation requirements. For example, if all payroll applications use the same set of data sets during program preparation, you could set up an application environment called PAYROLL for preparing only payroll applications. The startup procedure for PAYROLL would point to the data sets that are used for payroll applications. DB2SSN specifies the DB2 subsystem name. NUMTCB specifies the number of programs that can run concurrently in the address space. You should always set NUMTCB to 1 to ensure that executions of DSNTPSMP occur serially.
2. WLMTPSMP specifies the address space in which DSNTPSMP runs. DYNAMNBR reserves space for dynamic allocation of data sets during the SQL procedure preparation process.
3. STEPLIB specifies the DB2 load libraries, the z/OS C/C++ compiler library, and the Language Environment run time library that DSNTPSMP uses when it runs.
4. SYSEXEC specifies the library that contains the REXX exec DSNTPSMP.
5. SQLDBRM is an output data set that specifies the library into which DSNTPSMP puts the DBRM that it generates when it precompiles your external SQL procedure.
6. SQLCSRC is an output data set that specifies the library into which DSNTPSMP puts the C source code that it generates from the external SQL procedure source code. This data set should have a logical record length of 80.
7. SQLLMOD is an output data set that specifies the library into which DSNTPSMP puts the load module that it generates when it compiles and link-edits your external SQL procedure.
8. SQLLIBC specifies the library that contains standard C header files. This library is used during compilation of the generated C program.
9. SQLLIBL specifies the following libraries, which DSNTPSMP uses when it link-edits the external SQL procedure:
v Language Environment link-edit library
v DB2 load library
10. SYSMSGS specifies the library that contains messages that are used by the C prelink-edit utility.
11. CFGTPSMP specifies an optional data set that you can use to customize DSNTPSMP, including specifying the compiler level. For details on all of the options that you can set in this file and how to set them, see the DSNTPSMP CLIST comments.
12. The DD statements that follow describe work file data sets that are used by DSNTPSMP.
CALL statement syntax for invoking DSNTPSMP: You can invoke the SQL procedure processor, DSNTPSMP, from an application program by using an SQL CALL statement. DSNTPSMP prepares an external SQL procedure. The following diagrams show the syntax of invoking DSNTPSMP through the SQL CALL statement:
CALL SYSPROC.DSNTPSMP ( function,
                        SQL-procedure-name,
                        SQL-procedure-source | empty-string,
                        bind-options | empty-string,
                        compiler-options | empty-string,
                        precompiler-options | empty-string,
                        prelink-options | empty-string,
                        link-options | empty-string,
                        alter-statement | empty-string,
                        source-data-set-name | empty-string,
                        build-owner | empty-string,
                        build-utility | empty-string,
                        return-code )
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
Note: You must specify: v The DSNTPSMP parameters in the order listed v The empty string if an optional parameter is not required for the function v The options in the order: bind, compiler, precompiler, prelink, and link The DSNTPSMP parameters are: function A VARCHAR(20) input parameter that identifies the task that you want DSNTPSMP to perform. The tasks are: BUILD Creates the following objects for an external SQL procedure: v A DBRM, in the data set that DD name SQLDBRM points to v A load module, in the data set that DD name SQLLMOD points to v The C language source code for the external SQL procedure, in the data set that DD name SQLCSRC points to v The stored procedure package v The stored procedure definition The following input parameters are required for the BUILD function: SQL-procedure name SQL-procedure-source or source-data-set-name If you choose the BUILD function, and an external SQL procedure with name SQL-procedure-name already exists, DSNTPSMP issues an error message and terminates. BUILD_DEBUG Creates the following objects for an external SQL procedure and includes the preparation necessary to debug the external SQL procedure with the SQL Debugger and the Unified Debugger: v A DBRM, in the data set that DD name SQLDBRM points to
v A load module, in the data set that DD name SQLLMOD points to
v The C language source code for the external SQL procedure, in the data set that DD name SQLCSRC points to
v The stored procedure package
v The stored procedure definition
The following input parameters are required for the BUILD_DEBUG function: SQL-procedure-name; SQL-procedure-source or source-data-set-name. If you choose the BUILD_DEBUG function, and an external SQL procedure with name SQL-procedure-name already exists, DSNTPSMP issues an error message and terminates.
REBUILD Replaces all objects that were created by the BUILD function for an external SQL procedure, if it exists; otherwise creates those objects. The following input parameters are required for the REBUILD function: SQL-procedure-name; SQL-procedure-source or source-data-set-name.
REBUILD_DEBUG Replaces all objects that were created by the BUILD_DEBUG function for an external SQL procedure, if it exists; otherwise creates those objects, and includes the preparation necessary to debug the external SQL procedure with the SQL Debugger and the Unified Debugger. The following input parameters are required for the REBUILD_DEBUG function: SQL-procedure-name; SQL-procedure-source or source-data-set-name.
REBIND Binds the external SQL procedure package for an existing external SQL procedure. The following input parameter is required for the REBIND function: SQL-procedure-name.
DESTROY Deletes the following objects for an existing external SQL procedure:
v The DBRM, from the data set that DD name SQLDBRM points to
v The load module, from the data set that DD name SQLLMOD points to
v The C language source code for the external SQL procedure, from the data set that DD name SQLCSRC points to
v The stored procedure package
v The stored procedure definition
The following input parameter is required for the DESTROY function: SQL-procedure-name.
ALTER Updates the registration for an existing external SQL procedure. The following input parameters are required for the ALTER function: SQL-procedure-name; alter-statement.
ALTER_REBUILD Updates an existing external SQL procedure. The following input parameters are required for the ALTER_REBUILD function: SQL-procedure name SQL-procedure-source or source-data-set-name ALTER_REBUILD_DEBUG Updates an existing external SQL procedure, and includes the preparation necessary to debug the external SQL procedure with the SQL Debugger and the Unified Debugger. The following input parameters are required for the ALTER_REBUILD_DEBUG function: SQL-procedure name SQL-procedure-source or source-data-set-name ALTER_REBIND Updates the registration and binds the SQL package for an existing external SQL procedure. The following input parameters are required for the ALTER_REBIND function: SQL-procedure name alter-statement QUERYLEVEL Obtains the interface level of the build utility invoked. No other input is required. SQL-procedure-name A VARCHAR(261) input parameter that specifies the external SQL procedure name. The name can be qualified or unqualified. The name must match the procedure name that is specified within the CREATE PROCEDURE statement that is provided in SQL-procedure-source or that is obtained from source-data-set-name. In addition, the name must match the procedure name that is specified within the ALTER PROCEDURE statement that is provided in alter-statement. Do not mix qualified and unqualified references. SQL-procedure-source A CLOB(2M) input parameter that contains the CREATE PROCEDURE statement for the external SQL procedure. If you specify an empty string for this parameter, you need to specify the name source-data-set-name of a data set that contains the external SQL procedure source code. bind-options A VARCHAR(1024) input parameter that contains the options that you want to specify for binding the external SQL procedure package. Do not specify the MEMBER or LIBRARY option for the DB2 BIND PACKAGE command. compiler-options A VARCHAR(255) input parameter that contains the options that you want to specify for compiling the C language program that DB2 generates for the external SQL procedure. precompiler-options A VARCHAR(255) input parameter that contains the options that you want to specify for precompiling the C language program that DB2 generates for the external SQL procedure. Do not specify the HOST option.
prelink-options A VARCHAR(255) input parameter that contains the options that you want to specify for prelinking the C language program that DB2 generates for the external SQL procedure.
link-options A VARCHAR(255) input parameter that contains the options that you want to specify for linking the C language program that DB2 generates for the external SQL procedure.
alter-statement A VARCHAR(32672) input parameter that contains the SQL ALTER PROCEDURE statement to process with the ALTER or ALTER_REBIND function.
source-data-set-name A VARCHAR(80) input parameter that contains the name of a z/OS sequential data set or partitioned data set member that contains the source code for the external SQL procedure. If you specify an empty string for this parameter, you need to provide the external SQL procedure source code in SQL-procedure-source.
build-owner A VARCHAR(130) input parameter that contains the SQL identifier to serve as the build owner for newly created SQL stored procedures. When this parameter is not specified, the value defaults to the value in the CURRENT SQLID special register when the build utility is invoked.
build-utility A VARCHAR(255) input parameter that contains the name of the build utility that is invoked. The qualified form of the name is suggested, for example, SYSPROC.DSNTPSMP.
return-code A VARCHAR(255) output parameter in which DB2 puts the return code from the DSNTPSMP invocation. The values are:
0    Successful invocation. The calling application can optionally retrieve the result set and then issue the required SQL COMMIT statement.
4    Successful invocation, but warnings occurred. The calling application should retrieve the warning messages in the result set and then issue the required SQL COMMIT statement.
8    Failed invocation. The calling application should retrieve the error messages in the result set and then issue the required SQL ROLLBACK statement.
99x  Where x is a digit between 0 and 9. Failed invocation with severe errors. The calling application should retrieve the error messages in the result set and then issue the required SQL ROLLBACK statement. To view error messages that are not in the result set, see the job log of the address space for the DSNTPSMP execution.
     999 Unknown severe internal error
     998 APF environment setup error
     997 DSNREXX setup error
     996 Global temporary table setup error
     995
1.2x Where x is a digit between 0 and 9. Level of DSNTPSMP when the request is QUERYLEVEL. The calling application can retrieve the result set for additional information about the release and service level and then issue the required SQL COMMIT statement.

Related reference: Descriptions of SQL processing options on page 959; BIND and REBIND options (DB2 Commands); Compiler Options (C/C++) (XL C/C++ User's Guide); Binder options reference (MVS Program Management: User's Guide and Reference)

Examples of invoking the SQL procedure processor (DSNTPSMP): You can invoke the BUILD, DESTROY, REBUILD, and REBIND functions of DSNTPSMP.

DSNTPSMP BUILD function: Call DSNTPSMP to build an external SQL procedure. The information that DSNTPSMP needs is listed in the following table:
Table 94. The information that DSNTPSMP needs to BUILD an SQL procedure
Function: BUILD
External SQL procedure name: MYSCHEMA.SQLPROC
Source location: String in CLOB host variable procsrc
Bind options: VALIDATE(BIND)
Compiler options: SOURCE, LIST, LONGNAME, RENT
Precompiler options: SOURCE, XREF, STDSQL(NO)
Prelink options: None specified
Link options: AMODE=31, RMODE=ANY, MAP, RENT
Build utility: SYSPROC.DSNTPSMP
Return value: String returned in varying-length host variable returnval
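A CALL statement that matches this table might look roughly like the following sketch. The host variables procsrc and returnval are the ones named in the table; the parameter positions follow the order described earlier for DSNTPSMP, and the empty strings stand for parameters that are not needed here:

EXEC SQL CALL SYSPROC.DSNTPSMP('BUILD','MYSCHEMA.SQLPROC',:procsrc,
     'VALIDATE(BIND)',
     'SOURCE,LIST,LONGNAME,RENT',
     'SOURCE,XREF,STDSQL(NO)',
     '',
     'AMODE=31,RMODE=ANY,MAP,RENT',
     '','','','SYSPROC.DSNTPSMP',
     :returnval);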
DSNTPSMP DESTROY function: Call DSNTPSMP to delete an external SQL procedure definition and the associated load module. The information that DSNTPSMP needs is listed in the following table:
Table 95. The information that DSNTPSMP needs to DESTROY an SQL procedure
Function: DESTROY
External SQL procedure name: MYSCHEMA.OLDPROC
Return value: String returned in varying-length host variable returnval
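The corresponding CALL statement might be coded roughly as follows; this is a sketch, and the empty strings stand for the parameters that the DESTROY function does not use:

EXEC SQL CALL SYSPROC.DSNTPSMP('DESTROY','MYSCHEMA.OLDPROC','',
     '','','','','','','','','',
     :returnval);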
DSNTPSMP REBUILD function: Call DSNTPSMP to recreate an existing external SQL procedure. The information that DSNTPSMP needs is listed in the following table:
Table 96. The information that DSNTPSMP needs to REBUILD an SQL procedure
Function: REBUILD
External SQL procedure name: MYSCHEMA.SQLPROC
Bind options: VALIDATE(BIND)
Compiler options: SOURCE, LIST, LONGNAME, RENT
Precompiler options: SOURCE, XREF, STDSQL(NO)
Prelink options: None specified
Link options: AMODE=31, RMODE=ANY, MAP, RENT
Source data set name: Member PROCSRC of partitioned data set DSN910.SDSNSAMP
Return value: String returned in varying-length host variable returnval
If you want to recreate an existing external SQL procedure for debugging with the SQL Debugger and the Unified Debugger, use the following CALL statement, which includes the REBUILD_DEBUG function:
EXEC SQL CALL SYSPROC.DSNTPSMP('REBUILD_DEBUG','MYSCHEMA.SQLPROC','',
     'VALIDATE(BIND)',
     'SOURCE,LIST,LONGNAME,RENT',
     'SOURCE,XREF,STDSQL(NO)',
     '',
     'AMODE=31,RMODE=ANY,MAP,RENT',
     '','DSN910.SDSNSAMP(PROCSRC)','','',
     :returnval);
DSNTPSMP REBIND function: Call DSNTPSMP to rebind the package for an existing external SQL procedure. The information that DSNTPSMP needs is listed in the following table:
Table 97. The information that DSNTPSMP needs to REBIND an SQL procedure
Function: REBIND
External SQL procedure name: MYSCHEMA.SQLPROC
Bind options: VALIDATE(RUN), ISOLATION(RR)
Return value: String returned in varying-length host variable returnval
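The corresponding CALL statement might be coded roughly as follows; this is a sketch, and the empty strings stand for the parameters that the REBIND function does not use:

EXEC SQL CALL SYSPROC.DSNTPSMP('REBIND','MYSCHEMA.SQLPROC','',
     'VALIDATE(RUN),ISOLATION(RR)',
     '','','','','','','','',
     :returnval);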
Result set that the SQL procedure processor (DSNTPSMP) returns: DSNTPSMP returns one result set that contains messages and listings. You can write your client program to retrieve information from this result set. Because DSNTPSMP is a stored procedure, use the same technique that you would use to write a program to receive result sets from any stored procedure. Each row of the result set contains the following information: Processing step The step in the DSNTPSMP function process to which the message applies. DD name The DD statement that identifies the data set that contains the message. Sequence number The sequence number of a line of message text within a message. Message A line of message text. Rows in the message result set are ordered by processing step, DD name, and sequence number. For an example of how to process a result set from DSNTPSMP, see the DB2 sample program DSNTEJ65. Related concepts: DB2 for z/OS SQL procedure processor (DSNTPSMP) on page 582 Job DSNTEJ65 (DB2 Installation and Migration) Related tasks: Writing a program to receive the result sets from a stored procedure on page 792
Procedure
To create an external SQL procedure by using JCL, include the following job steps in your JCL job: 1. Issue a CREATE PROCEDURE statement that includes either the FENCED keyword or the EXTERNAL keyword and the procedure body, which is written in SQL. Alternatively, you can issue the CREATE PROCEDURE statement dynamically by using an application such as SPUFI, DSNTEP2, DSNTIAD, or the command line processor. Tip: If the routine body of the CREATE PROCEDURE statement contains embedded semicolons, change the default SQL terminator character from a semicolon to some other special character, such as the percent sign (%). This statement defines the stored procedure to DB2. DB2 stores the definition in the DB2 catalog. 2. Run program DSNHPC with the HOST(SQL) option. This program converts the external SQL procedure source statements into a C language program. DSNHPC also writes a new CREATE PROCEDURE statement in the data set that is specified in the SYSUT1 DD statement. 3. Precompile, compile, and link-edit the generated C program by using one of the following techniques: v The DB2 precompiler and JCL instructions to compile and link-edit the program v The SQL statement coprocessor When you perform this step, specify the following settings: v Give the DBRM the same name as the name of the load module for the external SQL procedure. v Specify MARGINS(1,80) for the MARGINS SQL processing option. v Specify the NOSEQ compiler option. This process produces an executable C language program. 4. Bind the resulting DBRM into a package.
Example
Suppose that you define an external SQL procedure by issuing the following CREATE PROCEDURE statement dynamically:
CREATE PROCEDURE DEVL7083.EMPDTLSS
  ( IN  PEMPNO    CHAR(6)
   ,OUT PFIRSTNME VARCHAR(12)
   ,OUT PMIDINIT  CHAR(1)
   ,OUT PLASTNAME VARCHAR(15)
   ,OUT PWORKDEPT CHAR(3)
   ,OUT PHIREDATE DATE
   ,OUT PSALARY   DEC(9,2)
   ,OUT PSQLCODE  INTEGER
  )
  RESULT SETS 0
  MODIFIES SQL DATA
  FENCED
  NO DBINFO
  WLM ENVIRONMENT DB9AWLMR
  STAY RESIDENT NO
  COLLID DEVL7083
  PROGRAM TYPE MAIN
  RUN OPTIONS 'TRAP(OFF),RPTOPTS(OFF)'
  COMMIT ON RETURN NO
  LANGUAGE SQL
BEGIN
  DECLARE SQLCODE INTEGER;
  DECLARE SQLSTATE CHAR(5);
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
    SET PSQLCODE = SQLCODE;
  SELECT FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, HIREDATE, SALARY
    INTO PFIRSTNME, PMIDINIT, PLASTNAME, PWORKDEPT, PHIREDATE, PSALARY
    FROM EMP
    WHERE EMPNO = PEMPNO;
END
You can use JCL that is similar to the following JCL to prepare the procedure:
//ADMF001S JOB (999,POK),'SQL C/L/B/E',CLASS=A,MSGCLASS=T,
//         NOTIFY=ADMF001,TIME=1440,REGION=0M
/*JOBPARM SYSAFF=SC63,L=9999
//         JCLLIB ORDER=(DB9AU.PROCLIB)
//*
//JOBLIB   DD DSN=DB9A9.SDSNEXIT,DISP=SHR
//         DD DSN=DB9A9.SDSNLOAD,DISP=SHR
//         DD DSN=CEE.SCEERUN,DISP=SHR
//*---------------------------------------------------------
//* STEP 01: PRECOMP, COMP, LKED AN SQL PROCEDURE
//*---------------------------------------------------------
//SQL01    EXEC DSNHSQL,MEM=EMPDTLSS,
//         PARM.PC='HOST(SQL),SOURCE,XREF,MAR(1,80),STDSQL(NO)',
//         PARM.PCC='HOST(C),SOURCE,XREF,MAR(1,80),STDSQL(NO)',
//         PARM.C='SOURCE LIST MAR(1,80) NOSEQ LO RENT',
//         PARM.LKED='AMODE=31,RMODE=ANY,MAP,RENT'
//PC.SYSLIB    DD DUMMY
//PC.SYSUT2    DD DSN=&&SPDML,DISP=(,PASS),   <=MAKE IT PERMANENT, IF YOU
//         UNIT=SYSDA,SPACE=(TRK,1),            WANT TO USE IT LATER
//         DCB=(RECFM=FB,LRECL=80)
//PC.SYSIN     DD DISP=SHR,DSN=SG247083.PROD.DDL(&MEM.)
//PC.SYSCIN    DD DISP=SHR,DSN=SG247083.TEST.C.SOURCE(&MEM.)
//PCC.SYSIN    DD DISP=SHR,DSN=SG247083.TEST.C.SOURCE(&MEM.)
//PCC.SYSLIB   DD DUMMY
//PCC.DBRMLIB  DD DISP=SHR,DSN=SG247083.DEVL.DBRM(&MEM.)
//LKED.SYSLMOD DD DISP=SHR,DSN=SG247083.DEVL.LOAD(&MEM.)
//LKED.SYSIN   DD *
  INCLUDE SYSLIB(DSNRLI)
  NAME EMPDTLSS(R)
/*
//*---------------------------------------------------------
//* STEP 02: BIND THE PROGRAM
//*---------------------------------------------------------
//SQL02    EXEC PGM=IKJEFT01,DYNAMNBR=20,COND=(4,LT)
//DBRMLIB  DD DSN=SG247083.DEVL.DBRM,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//REPORT   DD SYSOUT=*
//SYSIN    DD *
//SYSTSIN  DD *
  DSN SYSTEM(DB9A)
  BIND PACKAGE(DEVL7083) MEMBER(EMPDTLSS) VALIDATE(BIND) OWNER(DEVL7083)
  END
//*
Related concepts: SQL procedure body on page 543 Command line processor (DB2 Commands) Related tasks: Changing SPUFI defaults on page 1074 Creating an external SQL procedure by using DSNTPSMP on page 580 Developing database routines (IBM Data Studio, IBM Optim Database Administrator, IBM infoSphere Data Architect, IBM Optim Development Studio) Related reference: Descriptions of SQL processing options on page 959 DSNTEP2 and DSNTEP4 on page 1142 DSNTIAD on page 1140 BIND PACKAGE (DSN) (DB2 Commands) CREATE PROCEDURE (SQL - external) (DB2 SQL)
Sample programs to help you prepare and run external SQL procedures
DB2 provides sample jobs to help you prepare and run external SQL procedures. All samples are in data set DSN910.SDSNSAMP. Before you can run the samples, you must customize them for your installation. See the prolog of each sample for specific instructions. The following table lists the sample jobs that DB2 provides for external SQL procedures.
Table 98. External SQL procedure samples shipped with DB2

DSNHSQL
   Precompiles, compiles, prelink-edits, and link-edits an external SQL procedure
DSNTEJ63
   Invokes JCL procedure DSNHSQL to prepare external SQL procedure DSN8ES1 for execution
DSN8ES1
   A stored procedure that accepts a department number as input and returns a result set that contains salary information for each employee in that department
DSNTEJ64
   Prepares client program DSN8ED3 for execution
DSN8ED3
   Calls SQL procedure DSN8ES1
DSN8ES2
   A stored procedure that accepts one input parameter and returns two output parameters. The input parameter specifies a bonus to be awarded to managers. The external SQL procedure updates the BONUS column of DSN910.SDSNSAMP. If no SQL error occurs when the external SQL procedure runs, the first output parameter contains the total of all bonuses awarded to managers and the second output parameter contains a null value. If an SQL error occurs, the second output parameter contains an SQLCODE.
DSN8ED4
   Calls the SQL procedure processor, DSNTPSMP, to prepare DSN8ES2 for execution
DSN8WLMP
   A sample startup procedure for the WLM-established stored procedures address space in which DSNTPSMP runs
DSN8ED5
   Calls external SQL procedure DSN8ES2
DSNTEJ65
   Prepares and executes programs DSN8ED4 and DSN8ED5. DSNTEJ65 uses DSNTPSMP, the SQL procedure processor, which requires that the default EBCDIC CCSID that is used by DB2 also be compatible with the C compiler. Do not run DSNTEJ65 if the default EBCDIC CCSID for DB2 is not compatible with the C compiler. Examples of incompatible CCSIDs include 290, 930, 1026, and 1155.
DSNTEJ67 (JCL job)
   Prepares an existing external SQL procedure (sample DSN8.DSN8ES2) for conversion to a native SQL procedure. DSNTEJ67 obtains the source of external SQL procedure DSN8.DSN8ES2 from the catalog and formats it into a data set. DSNTEJ67 executes DSNHPSM with HOST(SQLPL), obtains a listing for the source, and replaces the offending procedure options in the source data set.
DSNTIJSD (JCL job)
   Prepares a DB2 for z/OS server for operation with the SQL Debugger and the Unified Debugger
Procedure
To create an external stored procedure:
1. Write the external stored procedure body in assembler, C, C++, COBOL, REXX, or PL/I. Ensure that the procedure body that you write follows the guidelines for external stored procedures that are described in the following information:
v Accessing other sites in an external procedure on page 619
v Accessing non-DB2 resources in your stored procedure on page 619
v Writing an external procedure to access IMS databases on page 621
v Writing an external procedure to return result sets to a DRDA client on page 622
v Restrictions when calling other programs from an external stored procedure on page 623
v External stored procedures as main programs and subprograms on page 625
v Data types in stored procedures on page 627
v COMMIT and ROLLBACK statements in a stored procedure on page 548
Restrictions:
v Do not include explicit attachment facility calls. External stored procedures that run in a WLM-established address space use Resource Recovery Services attachment facility (RRSAF) calls implicitly. If an external stored procedure makes an explicit attachment facility call, DB2 rejects the call.
v Do not include SRRCMIT or SRRBACK service calls. If an external stored procedure invokes either SRRCMIT or SRRBACK, DB2 puts the transaction in a state where a rollback operation is required and the CALL statement fails.
For REXX procedures, continue with step 3 on page 597.
2. For assembler, C, C++, COBOL, or PL/I stored procedures, prepare the external procedure by completing the following tasks:
a. Precompile, compile, and link-edit the application by using one of the following techniques:
v The DB2 precompiler and JCL instructions to compile and link-edit the program
v The SQL statement coprocessor
Recommendation: Compile and link-edit code as reentrant. If you want to make the stored procedure reentrant, see Creating an external stored procedure as reentrant on page 624.
Link-edit the application by using DSNRLI, the language interface module for the Resource Recovery Services attachment facility. You must specify the parameter AMODE(31) when you link-edit the application. If you want to run your procedure as a z/OS-authorized program, you must also perform the following tasks when you link-edit the application: v Indicate that the load module can use restricted system services by specifying the parameter value AC=1. v Put the load module for the stored procedure in an APF-authorized library. You can compile COBOL stored procedures with either the DYNAM or NODYNAM COBOL compiler options. If you use DYNAM, ensure that the correct DB2 language interface module is loaded dynamically by performing one of the following actions: v Specify the ATTACH(RRSAF) SQL processing option. v Copy the DSNRLI module into a load library that is concatenated in front of the DB2 libraries. Use the member name DSNHLI. b. Bind the DBRM into a DB2 package by issuing the BIND PACKAGE command. If you want to control access to a stored procedure package, specify the ENABLE bind option with the system connection type of the calling application. Stored procedures require only a package. You do not need to bind a plan for the stored procedure or bind the stored procedure package to the plan for the calling application. For remote access scenarios, you need a package at both the requester and server sites. For more information about stored procedure packages, see Packages and plans for external stored procedures on page 617. The following example BIND PACKAGE command binds the DBRM EMPDTL1P to the collection DEVL7083.
BIND PACKAGE(DEVL7083) MEMBER(EMPDTL1P) ACT(REP) ISO(UR) ENCODING(EBCDIC) OWNER(DEVL7083) LIBRARY(SG247083.DEVL.DBRM)
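If you want to restrict which connection types can run the package, the ENABLE option might be added to the same command, for example (a sketch; choose the connection types that match your callers):

BIND PACKAGE(DEVL7083) MEMBER(EMPDTL1P) ACT(REP) ISO(UR) ENCODING(EBCDIC) OWNER(DEVL7083) LIBRARY(SG247083.DEVL.DBRM) ENABLE(BATCH,CICS)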
3. Define the stored procedure to DB2 by issuing the CREATE PROCEDURE statement with the EXTERNAL option. Use the EXTERNAL NAME clause to specify the name of the load module for the program that runs when this procedure is called. If you want to run your procedure as a z/OS-authorized program, specify an appropriate environment with the WLM ENVIRONMENT option. The stored procedure must run in an address space with a startup procedure in which all libraries in the STEPLIB concatenation are APF-authorized. If you want environment information to be passed to the stored procedure when it is invoked, specify the DBINFO and PARAMETER STYLE SQL options in the CREATE PROCEDURE statement. When the procedure is invoked, DB2 passes the DBINFO structure, which contains environment information, to the stored procedure. For more information about PARAMETER STYLE, see Defining the linkage convention for an external stored procedure on page 599. If you compiled the stored procedure as reentrant, specify the STAY RESIDENT YES option in the CREATE PROCEDURE statement. This option makes the procedure remain resident in storage. 4. Authorize the appropriate users to use the stored procedure by issuing the GRANT EXECUTE statement.
Example: The following statement allows an application that runs under the authorization ID JONES to call stored procedure SPSCHEMA.STORPRCA:
GRANT EXECUTE ON PROCEDURE SPSCHEMA.STORPRCA TO JONES;
For example, suppose that you want to define a C stored procedure that has the following characteristics:
v The parameters can have null values.
v The stored procedure is to be deleted from memory when it completes.
v The stored procedure needs the following Language Environment runtime options:
MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)
v The stored procedure is part of the WLM application environment that is named PAYROLL.
v The stored procedure runs as a main program.
v The stored procedure does not access non-DB2 resources, so it does not need a special RACF environment.
v The stored procedure can return at most 10 result sets.
v When control returns to the client program, DB2 does not commit updates automatically.
The following CREATE PROCEDURE statement defines the stored procedure to DB2:
CREATE PROCEDURE B(IN V1 INTEGER, OUT V2 CHAR(9))
    LANGUAGE C
    DETERMINISTIC
    NO SQL
    EXTERNAL NAME SUMMOD
    COLLID SUMCOLL
    ASUTIME LIMIT 900
    PARAMETER STYLE GENERAL WITH NULLS
    STAY RESIDENT NO
    RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
    WLM ENVIRONMENT PAYROLL
    PROGRAM TYPE MAIN
    SECURITY DB2
    DYNAMIC RESULT SETS 10
    COMMIT ON RETURN NO;
What to do next
You can now invoke the stored procedure from an application program or command line processor.
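For example, for procedure B, which the preceding CREATE PROCEDURE statement defines with an integer input parameter and a character output parameter, a command line processor invocation might look like the following statement. The input value 123 is an illustration only, and the question mark is a placeholder for the OUT parameter. An application program instead issues an SQL CALL statement and, because the procedure uses PARAMETER STYLE GENERAL WITH NULLS, follows each parameter with an indicator variable, as described in Defining the linkage convention for an external stored procedure on page 599.

CALL B(123, ?)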
Related concepts:
   Java stored procedures and user-defined functions (DB2 Application Programming for Java)
Related tasks:
   Implementing DB2 stored procedures (DB2 Administration Guide)
Related reference:
   BIND and REBIND options (DB2 Commands)
   CREATE PROCEDURE (external) (DB2 SQL)
   GRANT (function or procedure privileges) (DB2 SQL)
   C programming (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
   COBOL programming (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
   Four release levels: Sample scenario (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
   REXX programming (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
Procedure
To define the linkage convention for a stored procedure: When you define the stored procedure with the CREATE PROCEDURE statement, specify one of the following values for the PARAMETER STYLE option:
v GENERAL
v GENERAL WITH NULLS
v SQL
SQL is the default.
Linkage conventions for external stored procedures:
The linkage convention for a stored procedure can be either GENERAL, GENERAL WITH NULLS, or SQL. These linkage conventions apply only to external stored procedures.
GENERAL
    Specify the GENERAL linkage convention when you do not want the calling program to pass null values for input parameters (IN or INOUT) to the stored procedure. If you specify GENERAL, ensure that the stored procedure contains a variable declaration for each parameter that is passed in the CALL statement.
The following figure shows the structure of the parameter list for PARAMETER STYLE GENERAL.
Figure 32. Parameter convention GENERAL for a stored procedure (Register 1 points to a list of addresses, one for each parameter's data, for parameter 1 through parameter n)
GENERAL WITH NULLS
    Specify the GENERAL WITH NULLS linkage convention when you want to allow the calling program to supply a null value for any parameter that is passed to the stored procedure. If you specify GENERAL WITH NULLS, ensure that the stored procedure performs the following tasks:
    v Declares a variable for each parameter that is passed in the CALL statement.
    v Declares a null indicator structure that contains an indicator variable for each parameter.
    v On entry, examines all indicator variables that are associated with input parameters to determine which parameters contain null values.
    v On exit, assigns values to all indicator variables that are associated with output variables. If the output variable returns a null value to the caller, assign the associated indicator variable a negative number. Otherwise, assign a value of 0 to the indicator variable.
    In the CALL statement in the calling application, follow each parameter with its indicator variable. Use one of the following forms:
    v host-variable :indicator-variable
    v host-variable INDICATOR :indicator-variable
    The following figure shows the structure of the parameter list for PARAMETER STYLE GENERAL WITH NULLS.
Figure 33. Parameter convention GENERAL WITH NULLS for a stored procedure (Register 1 points to a list of addresses: one for each parameter's data, plus one for an indicator array that contains indicator 1 through indicator n)
SQL
    Specify the SQL linkage convention when you want both of the following conditions:
    v The calling program to be able to supply a null value for any parameter that is passed to the stored procedure.
    v DB2 to pass input and output parameters to the stored procedure that contain the following information:
      - The SQLSTATE that is to be returned to DB2. This value is a CHAR(5) parameter that represents the SQLSTATE that is passed into the program from the database manager. The initial value is set to '00000'. Although the SQLSTATE is usually not set by the program, it can be set as the result SQLSTATE that is used to return an error or a warning. Returned values that start with anything other than '00', '01', or '02' are error conditions.
      - The qualified name of the stored procedure. This is a VARCHAR(128) value.
      - The specific name of the stored procedure. The specific name is a VARCHAR(128) value that is the same as the unqualified name.
      - The SQL diagnostic string that is to be returned to DB2. This is a VARCHAR(1000) value. Use this area to pass descriptive information about an error or warning to the caller.
    Restriction: You cannot use the SQL linkage convention for a REXX language stored procedure.
    The following figure shows the structure of the parameter list for PARAMETER STYLE SQL.
Figure: Parameter convention SQL for a stored procedure. Register 1 points to a list that contains the addresses of the following items, in this order: parameter 1 through parameter n, indicator 1 through indicator n, the SQLSTATE, the procedure name, the specific name, the diagnostic data, and the DBINFO structure (see the notes).
Notes:
1. For PL/I, this value is the address of a pointer to the DBINFO data.
2. Passed if the DBINFO option is specified in the stored procedure definition.
Related concepts:
   Examples of programs that call stored procedures on page 227
Related reference:
   CREATE PROCEDURE (external) (DB2 SQL)
   SQLSTATE values and common error codes (DB2 Codes)
Example of GENERAL linkage convention:
Specify the GENERAL linkage convention when you do not want the calling program to pass null values for input parameters (IN or INOUT) to the stored procedure.
The following examples demonstrate how an assembler, C, COBOL, or PL/I stored procedure uses the GENERAL linkage convention to receive parameters. For these examples, assume that a COBOL application has the following parameter declarations and CALL statement:
************************************************************
* PARAMETERS FOR THE SQL STATEMENT CALL                    *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
. . .
    EXEC SQL CALL A (:V1, :V2) END-EXEC.
Assembler example: The following example shows how a stored procedure that is written in assembler language receives these parameters.
*******************************************************************
*  CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES      *
*  THE GENERAL LINKAGE CONVENTION.                                *
*******************************************************************
A        CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
         USING PROGAREA,R13
*******************************************************************
*  BRING UP THE LANGUAGE ENVIRONMENT.                             *
*******************************************************************
         . . .
*******************************************************************
*  GET THE PASSED PARAMETER VALUES.  THE GENERAL LINKAGE          *
*  CONVENTION FOLLOWS THE STANDARD ASSEMBLER LINKAGE CONVENTION:  *
*    ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS TO THE     *
*    PARAMETERS.                                                  *
*******************************************************************
         L     R7,0(R1)          GET POINTER TO V1
         MVC   LOCV1(4),0(R7)    MOVE VALUE INTO LOCAL COPY OF V1
         . . .
         L     R7,4(R1)          GET POINTER TO V2
         MVC   0(9,R7),LOCV2     MOVE A VALUE INTO OUTPUT VAR V2
         . . .
         CEETERM RC=0
*******************************************************************
*  VARIABLE DECLARATIONS AND EQUATES                              *
*******************************************************************
R1       EQU   1                 REGISTER 1
R7       EQU   7                 REGISTER 7
PPA      CEEPPA ,                CONSTANTS DESCRIBING THE CODE BLOCK
         LTORG ,                 PLACE LITERAL POOL HERE
PROGAREA DSECT
         ORG   *+CEEDSASZ        LEAVE SPACE FOR DSA FIXED PART
LOCV1    DS    F                 LOCAL COPY OF PARAMETER V1
LOCV2    DS    CL9               LOCAL COPY OF PARAMETER V2
         . . .
PROGSIZE EQU   *-PROGAREA
         CEEDSA ,                MAPPING OF THE DYNAMIC SAVE AREA
         CEECAA ,                MAPPING OF THE COMMON ANCHOR AREA
         END A
C example: The following figure shows how a stored procedure that is written in the C language receives these parameters.
#pragma runopts(PLIST(OS))
#pragma options(RENT)
#include <stdlib.h>
#include <stdio.h>
/*****************************************************************/
/* Code for a C language stored procedure that uses the          */
/* GENERAL linkage convention.                                   */
/*****************************************************************/
main(argc,argv)
int argc;                        /* Number of parameters passed */
char *argv[];                    /* Array of strings containing */
                                 /* the parameter values        */
{
  long int locv1;                /* Local copy of V1            */
  char locv2[10];                /* Local copy of V2            */
                                 /* (null-terminated)           */
  . . .
  /***************************************************************/
  /* Get the passed parameters.  The GENERAL linkage convention  */
  /* follows the standard C language parameter passing           */
  /* conventions:                                                */
  /*  - argc contains the number of parameters passed            */
  /*  - argv[0] is a pointer to the stored procedure name        */
  /*  - argv[1] to argv[n] are pointers to the n parameters      */
  /*    in the SQL statement CALL.                               */
  /***************************************************************/
  if(argc==3)                    /* Should get 3 parameters:    */
  {                              /*   procname, V1, V2          */
    locv1 = *(int *) argv[1];    /* Get local copy of V1        */
    . . .
    strcpy(argv[2],locv2);       /* Assign a value to V2        */
    . . .
  }
}
COBOL example: The following figure shows how a stored procedure that is written in the COBOL language receives these parameters.
CBL RENT
IDENTIFICATION DIVISION.
************************************************************
* CODE FOR A COBOL LANGUAGE STORED PROCEDURE THAT USES THE *
* GENERAL LINKAGE CONVENTION.                              *
************************************************************
PROGRAM-ID. A.
. . .
DATA DIVISION.
. . .
LINKAGE SECTION.
************************************************************
* DECLARE THE PARAMETERS PASSED BY THE SQL STATEMENT       *
* CALL HERE.                                               *
************************************************************
01 V1 PIC S9(9) USAGE COMP.
01 V2 PIC X(9).
. . .
PROCEDURE DIVISION USING V1, V2.
************************************************************
* THE USING PHRASE INDICATES THAT VARIABLES V1 AND V2      *
* WERE PASSED BY THE CALLING PROGRAM.                      *
************************************************************
. . .
****************************************
* ASSIGN A VALUE TO OUTPUT VARIABLE V2 *
****************************************
MOVE '123456789' TO V2.
PL/I example: The following figure shows how a stored procedure that is written in the PL/I language receives these parameters.
*PROCESS SYSTEM(MVS);
A: PROC(V1, V2) OPTIONS(MAIN NOEXECOPS REENTRANT);
/***************************************************************/
/* Code for a PL/I language stored procedure that uses the     */
/* GENERAL linkage convention.                                 */
/***************************************************************/
/***************************************************************/
/* Indicate on the PROCEDURE statement that two parameters     */
/* were passed by the SQL statement CALL.  Then declare the    */
/* parameters in the following section.                        */
/***************************************************************/
DCL V1 BIN FIXED(31),
    V2 CHAR(9);
. . .
V2 = '123456789';        /* Assign a value to output variable V2 */
Example of GENERAL WITH NULLS linkage convention: Specify the GENERAL WITH NULLS linkage convention when you want to allow the calling program to supply a null value for any parameter that is passed to the stored procedure. The following examples demonstrate how an assembler, C, COBOL, or PL/I stored procedure uses the GENERAL WITH NULLS linkage convention to receive parameters. For these examples, assume that a C application has the following parameter declarations and CALL statement:
/************************************************************/
/* Parameters for the SQL statement CALL                    */
/************************************************************/
long int v1;
char v2[10];                /* Allow an extra byte for     */
                            /* the null terminator         */
/************************************************************/
/* Indicator structure                                      */
/************************************************************/
struct indicators {
  short int ind1;
  short int ind2;
} indstruc;
. . .
indstruc.ind1 = 0;          /* Remember to initialize the  */
                            /* input parameter's indicator */
                            /* variable before executing   */
                            /* the CALL statement          */
EXEC SQL CALL B (:v1 :indstruc.ind1, :v2 :indstruc.ind2);
. . .
Assembler example: The following figure shows how a stored procedure that is written in assembler language receives these parameters.
*******************************************************************
*  CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES      *
*  THE GENERAL WITH NULLS LINKAGE CONVENTION.                     *
*******************************************************************
B        CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS
         USING PROGAREA,R13
*******************************************************************
*  BRING UP THE LANGUAGE ENVIRONMENT.                             *
*******************************************************************
         . . .
*******************************************************************
*  GET THE PASSED PARAMETER VALUES.  THE GENERAL WITH NULLS       *
*  LINKAGE CONVENTION IS AS FOLLOWS:                              *
*    ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS.  IF N     *
*    PARAMETERS ARE PASSED, THERE ARE N+1 POINTERS.  THE FIRST    *
*    N POINTERS ARE THE ADDRESSES OF THE N PARAMETERS, JUST AS    *
*    WITH THE GENERAL LINKAGE CONVENTION.  THE N+1ST POINTER IS   *
*    THE ADDRESS OF A LIST CONTAINING THE N INDICATOR VARIABLE    *
*    VALUES.                                                      *
*******************************************************************
         L     R7,0(R1)          GET POINTER TO V1
         MVC   LOCV1(4),0(R7)    MOVE VALUE INTO LOCAL COPY OF V1
         L     R7,8(R1)          GET POINTER TO INDICATOR ARRAY
         MVC   LOCIND(2*2),0(R7) MOVE VALUES INTO LOCAL STORAGE
         LH    R7,LOCIND         GET INDICATOR VARIABLE FOR V1
         LTR   R7,R7             CHECK IF IT IS NEGATIVE
         BM    NULLIN            IF SO, V1 IS NULL
         . . .
         L     R7,4(R1)          GET POINTER TO V2
         MVC   0(9,R7),LOCV2     MOVE A VALUE INTO OUTPUT VAR V2
         L     R7,8(R1)          GET POINTER TO INDICATOR ARRAY
         MVC   2(2,R7),=H'0'     MOVE ZERO TO V2'S INDICATOR VAR
         . . .
         CEETERM RC=0
*******************************************************************
*  VARIABLE DECLARATIONS AND EQUATES                              *
*******************************************************************
R1       EQU   1                 REGISTER 1
R7       EQU   7                 REGISTER 7
PPA      CEEPPA ,                CONSTANTS DESCRIBING THE CODE BLOCK
         LTORG ,                 PLACE LITERAL POOL HERE
PROGAREA DSECT
         ORG   *+CEEDSASZ        LEAVE SPACE FOR DSA FIXED PART
LOCV1    DS    F                 LOCAL COPY OF PARAMETER V1
LOCV2    DS    CL9               LOCAL COPY OF PARAMETER V2
LOCIND   DS    2H                LOCAL COPY OF INDICATOR ARRAY
         . . .
PROGSIZE EQU   *-PROGAREA
         CEEDSA ,                MAPPING OF THE DYNAMIC SAVE AREA
         CEECAA ,                MAPPING OF THE COMMON ANCHOR AREA
         END B
C example: The following figure shows how a stored procedure that is written in the C language receives these parameters.
#pragma options(RENT) #pragma runopts(PLIST(OS)) #include <stdlib.h> #include <stdio.h> /*****************************************************************/ /* Code for a C language stored procedure that uses the */ /* GENERAL WITH NULLS linkage convention. */
/*****************************************************************/ main(argc,argv) int argc; /* Number of parameters passed */ char *argv[]; /* Array of strings containing */ /* the parameter values */ { long int locv1; /* Local copy of V1 */ char locv2[10]; /* Local copy of V2 */ /* (null-terminated) */ short int locind[2]; /* Local copy of indicator */ /* variable array */ short int *tempint; /* Used for receiving the */ /* indicator variable array */ . . . /***************************************************************/ /* Get the passed parameters. The GENERAL WITH NULLS linkage */ /* convention is as follows: */ /* - argc contains the number of parameters passed */ /* - argv[0] is a pointer to the stored procedure name */ /* - argv[1] to argv[n] are pointers to the n parameters */ /* in the SQL statement CALL. */ /* - argv[n+1] is a pointer to the indicator variable array */ /***************************************************************/ if(argc==4) /* Should get 4 parameters: */ { /* procname, V1, V2, */ /* indicator variable array */ locv1 = *(int *) argv[1]; /* Get local copy of V1 */ tempint = argv[3]; /* Get pointer to indicator */ /* variable array */ locind[0] = *tempint; /* Get 1st indicator variable */ locind[1] = *(++tempint); /* Get 2nd indicator variable */ if(locind[0]<0) /* If 1st indicator variable */ { /* is negative, V1 is null */ . . . } . . . strcpy(argv[2],locv2); *(++tempint) = 0; } } /* Assign a value to V2 */ /* Assign 0 to V2s indicator */ /* variable */
COBOL example: The following figure shows how a stored procedure that is written in the COBOL language receives these parameters.
CBL RENT IDENTIFICATION DIVISION. ************************************************************ * CODE FOR A COBOL LANGUAGE STORED PROCEDURE THAT USES THE * * GENERAL WITH NULLS LINKAGE CONVENTION. * ************************************************************ PROGRAM-ID. B. . . . DATA DIVISION. . . . LINKAGE SECTION. ************************************************************
* DECLARE THE PARAMETERS AND THE INDICATOR ARRAY THAT * * WERE PASSED BY THE SQL STATEMENT CALL HERE. * ************************************************************ 01 V1 PIC S9(9) USAGE COMP. 01 V2 PIC X(9). * 01 INDARRAY. 10 INDVAR PIC S9(4) USAGE COMP OCCURS 2 TIMES. . . . PROCEDURE DIVISION USING V1, V2, INDARRAY. ************************************************************ * THE USING PHRASE INDICATES THAT VARIABLES V1, V2, AND * * INDARRAY WERE PASSED BY THE CALLING PROGRAM. * ************************************************************ . . . *************************** * TEST WHETHER V1 IS NULL * *************************** IF INDARRAY(1) < 0 PERFORM NULL-PROCESSING. . . . **************************************** * ASSIGN A VALUE TO OUTPUT VARIABLE V2 * * AND ITS INDICATOR VARIABLE * **************************************** MOVE 123456789 TO V2. MOVE ZERO TO INDARRAY(2).
PL/I example: The following figure shows how a stored procedure that is written in the PL/I language receives these parameters.
*PROCESS SYSTEM(MVS); A: PROC(V1, V2, INDSTRUC) OPTIONS(MAIN NOEXECOPS REENTRANT); /***************************************************************/ /* Code for a PL/I language stored procedure that uses the */ /* GENERAL WITH NULLS linkage convention. */ /***************************************************************/ /***************************************************************/ /* Indicate on the PROCEDURE statement that two parameters */ /* and an indicator variable structure were passed by the SQL */ /* statement CALL. Then declare them in the following section.*/ /* For PL/I, you must declare an indicator variable structure, */ /* not an array. */ /***************************************************************/ DCL V1 BIN FIXED(31), V2 CHAR(9); DCL 01 INDSTRUC, 02 IND1 BIN FIXED(15), 02 IND2 BIN FIXED(15); . . . IF IND1 < 0 THEN CALL NULLVAL; . . . V2 = 123456789; IND2 = 0; /* Assign a value to output variable V2 */ /* Assign 0 to V2s indicator variable */ /* If indicator variable is negative /* then V1 is null */ */
Example of SQL linkage convention: Specify the SQL linkage convention when you want diagnostic information to be passed in the parameters and allow null values. The following examples demonstrate how an assembler, C, COBOL, or PL/I stored procedure uses the SQL linkage convention to receive parameters. These examples also show how a stored procedure receives the DBINFO structure. For these examples, assume that a C application has the following parameter declarations and CALL statement:
/************************************************************/
/* Parameters for the SQL statement CALL                    */
/************************************************************/
long int v1;
char v2[10];                /* Allow an extra byte for     */
                            /* the null terminator         */
/************************************************************/
/* Indicator variables                                      */
/************************************************************/
short int ind1;
short int ind2;
. . .
ind1 = 0;                   /* Remember to initialize the  */
                            /* input parameter's indicator */
                            /* variable before executing   */
                            /* the CALL statement          */
EXEC SQL CALL B (:v1 :ind1, :v2 :ind2);
. . .
Assembler example: The following figure shows how a stored procedure that is written in assembler language receives these parameters.
******************************************************************* * CODE FOR AN ASSEMBLER LANGUAGE STORED PROCEDURE THAT USES * * THE SQL LINKAGE CONVENTION. * ******************************************************************* B CEEENTRY AUTO=PROGSIZE,MAIN=YES,PLIST=OS USING PROGAREA,R13 ******************************************************************* * BRING UP THE LANGUAGE ENVIRONMENT. * ******************************************************************* . . . ******************************************************************* * GET THE PASSED PARAMETER VALUES. THE SQL LINKAGE * * CONVENTION IS AS FOLLOWS: * * ON ENTRY, REGISTER 1 POINTS TO A LIST OF POINTERS. IF N * * PARAMETERS ARE PASSED, THERE ARE 2N+4 POINTERS. THE FIRST * * N POINTERS ARE THE ADDRESSES OF THE N PARAMETERS, JUST AS * * WITH THE GENERAL LINKAGE CONVENTION. THE NEXT N POINTERS ARE *
*    THE ADDRESSES OF THE INDICATOR VARIABLE VALUES.  THE LAST    *
*    4 POINTERS (5, IF DBINFO IS PASSED) ARE THE ADDRESSES OF     *
*    INFORMATION ABOUT THE STORED PROCEDURE ENVIRONMENT AND       *
*    EXECUTION RESULTS.                                           *
*******************************************************************
         L     R7,0(R1)          GET POINTER TO V1
         MVC   LOCV1(4),0(R7)    MOVE VALUE INTO LOCAL COPY OF V1
         L     R7,8(R1)          GET POINTER TO 1ST INDICATOR VARIABLE
         MVC   LOCI1(2),0(R7)    MOVE VALUE INTO LOCAL STORAGE
         L     R7,20(R1)         GET POINTER TO STORED PROCEDURE NAME
         MVC   LOCSPNM(20),0(R7) MOVE VALUE INTO LOCAL STORAGE
         L     R7,24(R1)         GET POINTER TO DBINFO
         MVC   LOCDBINF(DBINFLN),0(R7)
*                                MOVE VALUE INTO LOCAL STORAGE
         LH    R7,LOCI1          GET INDICATOR VARIABLE FOR V1
         LTR   R7,R7             CHECK IF IT IS NEGATIVE
         BM    NULLIN            IF SO, V1 IS NULL
         . . .
         L     R7,4(R1)          GET POINTER TO V2
         MVC   0(9,R7),LOCV2     MOVE A VALUE INTO OUTPUT VAR V2
         L     R7,12(R1)         GET POINTER TO INDICATOR VAR 2
         MVC   0(2,R7),=H'0'     MOVE ZERO TO V2'S INDICATOR VAR
         L     R7,16(R1)         GET POINTER TO SQLSTATE
         MVC   0(5,R7),=CL5'xxxxx'  MOVE 'xxxxx' TO SQLSTATE
         . . .
         CEETERM RC=0
******************************************************************* * VARIABLE DECLARATIONS AND EQUATES * ******************************************************************* R1 EQU 1 REGISTER 1 R7 EQU 7 REGISTER 7 PPA CEEPPA , CONSTANTS DESCRIBING THE CODE BLOCK LTORG , PLACE LITERAL POOL HERE PROGAREA DSECT ORG *+CEEDSASZ LEAVE SPACE FOR DSA FIXED PART LOCV1 DS F LOCAL COPY OF PARAMETER V1 LOCV2 DS CL9 LOCAL COPY OF PARAMETER V2 LOCI1 DS H LOCAL COPY OF INDICATOR 1 LOCI2 DS H LOCAL COPY OF INDICATOR 2 LOCSQST DS CL5 LOCAL COPY OF SQLSTATE LOCSPNM DS H,CL27 LOCAL COPY OF STORED PROC NAME LOCSPSNM DS H,CL18 LOCAL COPY OF SPECIFIC NAME LOCDIAG DS H,CL1000 LOCAL COPY OF DIAGNOSTIC DATA LOCDBINF DS 0H LOCAL COPY OF DBINFO DATA DBNAMELN DS H DATABASE NAME LENGTH DBNAME DS CL128 DATABASE NAME AUTHIDLN DS H APPL AUTH ID LENGTH AUTHID DS CL128 APPL AUTH ID ASC_SBCS DS F ASCII SBCS CCSID ASC_DBCS DS F ASCII DBCS CCSID ASC_MIXD DS F ASCII MIXED CCSID EBC_SBCS DS F EBCDIC SBCS CCSID EBC_DBCS DS F EBCDIC DBCS CCSID EBC_MIXD DS F EBCDIC MIXED CCSID UNI_SBCS DS F UNICODE SBCS CCSID UNI_DBCS DS F UNICODE DBCS CCSID UNI_MIXD DS F UNICODE MIXED CCSID ENCODE DS F PROCEDURE ENCODING SCHEME
RESERV0  DS    CL20              RESERVED
TBQUALLN DS    H                 TABLE QUALIFIER LENGTH
TBQUAL   DS    CL128             TABLE QUALIFIER
TBNAMELN DS    H                 TABLE NAME LENGTH
TBNAME   DS    CL128             TABLE NAME
CLNAMELN DS    H                 COLUMN NAME LENGTH
COLNAME  DS    CL128             COLUMN NAME
RELVER   DS    CL8               DBMS RELEASE AND VERSION
RESERV1  DS    CL2               RESERVED
PLATFORM DS    F                 DBMS OPERATING SYSTEM
NUMTFCOL DS    H                 NUMBER OF TABLE FUNCTION COLS USED
RESERV2  DS    CL26              RESERVED
TFCOLNUM DS    A                 POINTER TO TABLE FUNCTION COL LIST
APPLID   DS    A                 POINTER TO APPLICATION ID
RESERV3  DS    CL20              RESERVED
DBINFLN  EQU   *-LOCDBINF        LENGTH OF DBINFO
         . . .
PROGSIZE EQU   *-PROGAREA
         CEEDSA ,                MAPPING OF THE DYNAMIC SAVE AREA
         CEECAA ,                MAPPING OF THE COMMON ANCHOR AREA
         END B
C example: The following figure shows how a stored procedure that is written as a main program in the C language receives these parameters.
#pragma runopts(plist(os))
#include <stdlib.h>
#include <stdio.h>
main(argc,argv)
int argc;
char *argv[];
{
  int  parm1;
  short int ind1;
  char p_proc[28];
  char p_spec[19];
  /***************************************************/
  /* Assume that the SQL CALL statement included     */
  /* 3 input/output parameters in the parameter list.*/
  /* The argv vector will contain these entries:     */
  /*   argv[0]      1 contains load module           */
  /*   argv[1-3]    3 input/output parms             */
  /*   argv[4-6]    3 null indicators                */
  /*   argv[7]      1 SQLSTATE variable              */
  /*   argv[8]      1 qualified proc name            */
  /*   argv[9]      1 specific proc name             */
  /*   argv[10]     1 diagnostic string              */
  /*   argv[11]   + 1 dbinfo                         */
  /*              -----                              */
  /*               12 for the argc variable          */
  /***************************************************/
  if (argc != 12)
  {
    . . .
    /* We end up here when invoked with wrong number of parms */
  }
  /***************************************************/
  /* Assume the first parameter is an integer.       */
  /* The following code shows how to copy the integer*/
  /* parameter into the application storage.         */
  /***************************************************/
  parm1 = *(int *) argv[1];
  /***************************************************/
  /* We can access the null indicator for the first  */
  /* parameter on the SQL CALL as follows:           */
  /***************************************************/
  ind1 = *(short int *) argv[4];
  /***************************************************/
  /* We can use the following expression             */
  /* to assign xxxxx to the SQLSTATE returned to     */
  /* caller on the SQL CALL statement.               */
  /***************************************************/
  strcpy(argv[7],"xxxxx\0");
  /***************************************************/
  /* We obtain the value of the qualified procedure  */
  /* name with this expression.                      */
  /***************************************************/
  strcpy(p_proc,argv[8]);
  /***************************************************/
  /* We obtain the value of the specific procedure   */
  /* name with this expression.                      */
  /***************************************************/
  strcpy(p_spec,argv[9]);
  /***************************************************/
  /* We can use the following expression to assign   */
  /* yyyyyyyy to the diagnostic string returned      */
  /* in the SQLDA associated with the CALL statement.*/
  /***************************************************/
  strcpy(argv[10],"yyyyyyyy\0");
  . . .
}
The following figure shows how a stored procedure that is written as a subprogram in the C language receives these parameters.
#pragma linkage(myproc,fetchable)
#include <stdlib.h>
#include <stdio.h>
#include <sqludf.h>
void myproc(int *parm1,               /* assume INT for PARM1       */
            char parm2[11],           /* assume CHAR(10) parm2      */
            . . .
            short int *p_ind1,        /* null indicator for parm1   */
            short int *p_ind2,        /* null indicator for parm2   */
            . . .
            char p_sqlstate[6],       /* SQLSTATE returned to DB2   */
            char p_proc[28],          /* Qualified stored proc name */
            char p_spec[19],          /* Specific stored proc name  */
            char p_diag[1001],        /* Diagnostic string          */
            struct sqludf_dbinfo *udf_dbinfo)   /* DBINFO           */
{
  int   l_p1;
  char  l_p2[11];
  short int l_ind1;
  short int l_ind2;
  char  l_sqlstate[6];
  char  l_proc[28];
  char  l_spec[19];
  char  l_diag[71];
  struct sqludf_dbinfo *ludf_dbinfo;
  . . .
  /***************************************************/
  /* Copy each of the parameters in the parameter    */
  /* list into a local variable, just to demonstrate */
  /* how the parameters can be referenced.           */
  /***************************************************/
  l_p1 = *parm1;
  strcpy(l_p2,parm2);
  l_ind1 = *p_ind1;
  l_ind2 = *p_ind2;
  strcpy(l_sqlstate,p_sqlstate);
  strcpy(l_proc,p_proc);
  strcpy(l_spec,p_spec);
  strcpy(l_diag,p_diag);
  memcpy(&ludf_dbinfo,udf_dbinfo,sizeof(ludf_dbinfo));
  . . .
}
COBOL example: The following figure shows how a stored procedure that is written in the COBOL language receives these parameters.
CBL RENT IDENTIFICATION DIVISION. . . . DATA DIVISION. . . . LINKAGE SECTION. * Declare each of the parameters 01 PARM1 ... 01 PARM2 ... . . . * Declare a null indicator for each parameter 01 P-IND1 PIC S9(4) USAGE COMP. 01 P-IND2 PIC S9(4) USAGE COMP. . . . * Declare the SQLSTATE that can be set by stored proc 01 P-SQLSTATE PIC X(5). * Declare the qualified procedure name 01 P-PROC. 49 P-PROC-LEN PIC 9(4) USAGE BINARY. 49 P-PROC-TEXT PIC X(27). * Declare the specific procedure name 01 P-SPEC. 49 P-SPEC-LEN PIC 9(4) USAGE BINARY. 49 P-SPEC-TEXT PIC X(18). * Declare SQL diagnostic message token 01 P-DIAG. 49 P-DIAG-LEN PIC 9(4) USAGE BINARY. 49 P-DIAG-TEXT PIC X(1000). ********************************************************* * Structure used for DBINFO * ********************************************************* 01 SQLUDF-DBINFO. * Location name length 05 DBNAMELEN PIC 9(4) USAGE BINARY. * Location name 05 DBNAME PIC X(128). * authorization ID length 05 AUTHIDLEN PIC 9(4) USAGE BINARY.
*  authorization ID
   05 AUTHID PIC X(128).
*  environment CCSID information
   05 CODEPG PIC X(48).
   05 CDPG-DB2 REDEFINES CODEPG.
      10 DB2-CCSIDS OCCURS 3 TIMES.
         15 DB2-SBCS  PIC 9(9) USAGE BINARY.
         15 DB2-DBCS  PIC 9(9) USAGE BINARY.
         15 DB2-MIXED PIC 9(9) USAGE BINARY.
      10 ENCODING-SCHEME PIC 9(9) USAGE BINARY.
      10 RESERVED PIC X(20).
* other platform-specific deprecated CCSID structures not included here
*  schema name length
   05 TBSCHEMALEN PIC 9(4) USAGE BINARY.
*  schema name
   05 TBSCHEMA PIC X(128).
*  table name length
   05 TBNAMELEN PIC 9(4) USAGE BINARY.
*  table name
   05 TBNAME PIC X(128).
*  column name length
   05 COLNAMELEN PIC 9(4) USAGE BINARY.
*  column name
   05 COLNAME PIC X(128).
*  product information
   05 VER-REL PIC X(8).
*  reserved
   05 RESD0 PIC X(2).
*  platform type
   05 PLATFORM PIC 9(9) USAGE BINARY.
*  number of entries in the TF column list array (tfcolumn, below)
   05 NUMTFCOL PIC 9(4) USAGE BINARY.
*  reserved
   05 RESD1 PIC X(26).
*  tfcolumn will be allocated dynamically if it is defined;
*  otherwise this will be a null pointer
   05 TFCOLUMN USAGE IS POINTER.
*  application identifier
   05 APPL-ID USAGE IS POINTER.
*  reserved
   05 RESD2 PIC X(20).
*
. . .
PROCEDURE DIVISION USING PARM1, PARM2,
      P-IND1, P-IND2,
      P-SQLSTATE, P-PROC, P-SPEC, P-DIAG,
      SQLUDF-DBINFO.
. . .
PL/I example: The following figure shows how a stored procedure that is written in the PL/I language receives these parameters.
*PROCESS SYSTEM(MVS);
MYMAIN: PROC(PARM1, PARM2, ..., P_IND1, P_IND2, ...,
             P_SQLSTATE, P_PROC, P_SPEC, P_DIAG, DBINFO)
        OPTIONS(MAIN NOEXECOPS REENTRANT);
DCL PARM1 ...              /* first parameter          */
DCL PARM2 ...              /* second parameter         */
. . .
DCL P_IND1 BIN FIXED(15);  /* indicator for 1st parm   */
DCL P_IND2 BIN FIXED(15);  /* indicator for 2nd parm   */
. . .
DCL P_SQLSTATE CHAR(5);
DCL 01 P_PROC CHAR(27) VARYING;     /* Qualified procedure name */
DCL 01 P_SPEC CHAR(18) VARYING;     /* Specific stored proc     */
DCL 01 P_DIAG CHAR(1000) VARYING;   /* Diagnostic string        */
DCL DBINFO PTR;
DCL 01 SP_DBINFO BASED(DBINFO), /* Dbinfo */ 03 UDF_DBINFO_LLEN BIN FIXED(15), /* location length */ 03 UDF_DBINFO_LOC CHAR(128), /* location name */ 03 UDF_DBINFO_ALEN BIN FIXED(15), /* auth ID length */ 03 UDF_DBINFO_AUTH CHAR(128), /* authorization ID */ 03 UDF_DBINFO_CCSID, /* CCSIDs for DB2 for z/OS */ 05 R1 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_ASBCS BIN FIXED(15), /* ASCII SBCS CCSID */ 05 R2 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_ADBCS BIN FIXED(15), /* ASCII DBCS CCSID */ 05 R3 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_AMIXED BIN FIXED(15), /* ASCII MIXED CCSID */ 05 R4 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_ESBCS BIN FIXED(15), /* EBCDIC SBCS CCSID */ 05 R5 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_EDBCS BIN FIXED(15), /* EBCDIC DBCS CCSID */ 05 R6 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_EMIXED BIN FIXED(15), /* EBCDIC MIXED CCSID*/ 05 R7 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_USBCS BIN FIXED(15), /* Unicode SBCS CCSID */ 05 R8 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_UDBCS BIN FIXED(15), /* Unicode DBCS CCSID */ 05 R9 BIN FIXED(15), /* Reserved */ 05 UDF_DBINFO_UMIXED BIN FIXED(15), /* Unicode MIXED CCSID*/ 05 UDF_DBINFO_ENCODE BIN FIXED(31), /* SP encode scheme */ 05 UDF_DBINFO_RESERV0 CHAR(20), /* reserved */ 03 UDF_DBINFO_SLEN BIN FIXED(15), /* schema length */ 03 UDF_DBINFO_SCHEMA CHAR(128), /* schema name */ 03 UDF_DBINFO_TLEN BIN FIXED(15), /* table length */ 03 UDF_DBINFO_TABLE CHAR(128), /* table name */ 03 UDF_DBINFO_CLEN BIN FIXED(15), /* column length */ 03 UDF_DBINFO_COLUMN CHAR(128), /* column name */ 03 UDF_DBINFO_RELVER CHAR(8), /* DB2 release level */ 03 UDF_DBINFO_RESERV0 CHAR(2), /* reserved */ 03 UDF_DBINFO_PLATFORM BIN FIXED(31), /* database platform*/ 03 UDF_DBINFO_NUMTFCOL BIN FIXED(15), /* # of TF cols used*/ 03 UDF_DBINFO_RESERV1 CHAR(26), /* reserved */ 03 UDF_DBINFO_TFCOLUMN PTR, /* -> table fun col list */ 03 UDF_DBINFO_APPLID PTR, /* -> application id */ 03 UDF_DBINFO_RESERV2 CHAR(20); /* reserved */ . . .
DBINFO structure
Use the DBINFO structure to pass environment information to user-defined functions and stored procedures. Some fields in the structure are not used for stored procedures. DBINFO is a structure that contains information such as the name of the current server, the application runtime authorization ID and identification of the version and release of the database manager that invoked the procedure. The DBINFO structure includes the following information:
Location name length
    An unsigned 2-byte integer field. It contains the length of the location name in the next field.
Location name
    A 128-byte character field. It contains the name of the location to which the invoker is currently connected.
Authorization ID length
    An unsigned 2-byte integer field. It contains the length of the authorization ID in the next field.
Authorization ID
    A 128-byte character field. It contains the authorization ID of the application from which the stored procedure is invoked, padded on the right with blanks. If this stored procedure is nested within other routines (user-defined functions or stored procedures), this value is the authorization ID of the application that invoked the highest-level routine.
Subsystem code page
    A 48-byte structure that consists of 10 integer fields and an eight-byte reserved area. These fields provide information about the CCSIDs of the subsystem from which the stored procedure is invoked.
Table qualifier length
    An unsigned 2-byte integer field. This field contains 0.
Table qualifier
    A 128-byte character field. This field is not used for stored procedures.
Table name length
    An unsigned 2-byte integer field. This field contains 0.
Table name
    A 128-byte character field. This field is not used for stored procedures.
Column name length
    An unsigned 2-byte integer field. This field contains 0.
Column name
    A 128-byte character field. This field is not used for stored procedures.
Product information
    An 8-byte character field that identifies the product on which the stored procedure executes. This field has the form pppvvrrm, where:
    v ppp is a 3-byte product code:
        ARI   DB2 Server for VSE & VM
        DSN   DB2 for z/OS
        QSQ   DB2 for i
        SQL   DB2 for Linux, UNIX, and Windows
    v vv is a two-digit version identifier.
    v rr is a two-digit release identifier.
    v m is a one-digit maintenance level identifier.
Reserved area
    2 bytes.
Operating system
    A 4-byte integer field. It identifies the operating system on which the program that invokes the user-defined function runs. The value is one of these:
    0     Unknown
    1     OS/2
    3     Windows
    4     AIX
    5     Windows NT
    6     HP-UX
    7     Solaris
    8     z/OS
    13    Siemens Nixdorf
    15    Windows 95
    16    SCO UNIX
    18    Linux
    19    DYNIX/ptx
    24    Linux for S/390
    25    Linux for System z
    26    Linux/IA64
    27    Linux/PPC
    28    Linux/PPC64
    29    Linux/AMD64
    400   iSeries
Number of entries in table function column list
    An unsigned 2-byte integer field. This field contains 0.
Reserved area
    26 bytes.
Table function column list pointer
    This field is not used for stored procedures.
Unique application identifier
    This field is a pointer to a string that uniquely identifies the application's connection to DB2. The string is regenerated for each connection to DB2.
    The string is the LUWID, which consists of a fully-qualified LU network name followed by a period and an LUW instance number. The LU network name consists of a one- to eight-character network ID, a period, and a one- to eight-character network LU name. The LUW instance number consists of 12 hexadecimal characters that uniquely identify the unit of work.
Reserved area
    20 bytes.
As part of the process of creating an external stored procedure, you prepare the procedure, which means that you precompile, compile, link-edit, and bind the application. The result of this process is a DB2 package. You do not need to create a DB2 plan for an external procedure. The procedure runs under the caller's thread and uses the plan from the client program that calls it. The calling application can use a DB2 package or plan to execute the CALL statement. Both the stored procedure package and the calling application plan or package must exist on the server before you run the calling application. The following figure shows this relationship between a client program and a stored procedure. In the figure, the client program, which was bound into package A, issues a CALL statement to program B. Program B is an external stored procedure in a WLM address space. This external stored procedure was bound into package B.
Figure: Relationship between a client program and an external stored procedure. The client program (program A) runs under user ID yyyy; the stored procedure (program B, in package B) runs in a WLM-established address space under user ID xxxx.
You can control access to the stored procedure package by specifying the ENABLE bind option when you bind the package.
In the following situations, the stored procedure might use more than one package:
v You bind a DBRM several times into several versions of the same package, all of which have the same package name but reside in different package collections. Your stored procedure can switch from one version to another by using the SET CURRENT PACKAGESET statement, as shown in the example that follows this list.
v The stored procedure calls another program that contains SQL statements. This program has an associated package. This package must exist at the location where the stored procedure is defined and at the location where the SQL statements are executed.
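For example, the following statements are a minimal sketch of switching between two copies of the stored procedure package; the collection names COLLTEST and COLLPROD are hypothetical.

SET CURRENT PACKAGESET = 'COLLTEST';
SET CURRENT PACKAGESET = 'COLLPROD';

The first statement directs subsequent SQL statements to the package in collection COLLTEST; the second statement switches back to the package in collection COLLPROD.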
Related reference:
   BIND and REBIND options (DB2 Commands)
   BIND PACKAGE (DSN) (DB2 Commands)
   SET CURRENT PACKAGESET (DB2 SQL)
Procedure
To access non-DB2 resources in your stored procedure:
1. Consider serializing access to non-DB2 resources within your application. Not all non-DB2 resources can tolerate concurrent access by multiple TCBs in the same address space.
2. To access CICS, use one of the following methods:
   v Stored procedure DSNACICS
   v Message Queue Interface (MQI) for asynchronous execution of CICS transactions
   v External CICS interface (EXCI) for synchronous execution of CICS transactions
   v Advanced Program-to-Program Communication (APPC), using the Common Programming Interface Communications (CPI Communications) application programming interface
   If your system is running a release of CICS that uses z/OS RRS, z/OS RRS controls commitment of all resources.
3. To access IMS DL/I data, use one of the following methods:
   v Open Database Access interface (ODBA)
   v Stored procedures DSNAIMS and DSNAIMS2
   If your system is not running a release of IMS that uses z/OS RRS, take one of the following actions:
   v Use the CICS EXCI interface to run a CICS transaction synchronously. That CICS transaction can, in turn, access DL/I data.
   v Invoke IMS transactions asynchronously using the MQI.
   v Use APPC through the Common Programming Interface (CPI) Communications application programming interface.
4. Determine which of the following authorization IDs you want to use to access the non-DB2 resources.
Table 99. Authorization IDs for accessing non-DB2 resources from a stored procedure

ID that you want to use to access the          SECURITY value to specify in the
non-DB2 resources                              CREATE PROCEDURE statement

The authorization ID that is associated        SECURITY DB2
with the stored procedure's address space

The authorization ID under which the CALL      SECURITY USER
statement is executed

The authorization ID under which the           SECURITY DEFINER
CREATE PROCEDURE statement is executed
5. Issue the CREATE PROCEDURE statement with the appropriate SECURITY option that you determined in the previous step.
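For example, if you determined in step 4 that non-DB2 resources should be accessed under the authorization ID of the caller, the definition might look like the following sketch. The procedure name, parameters, and other options shown are hypothetical; only the SECURITY USER clause is the point of the example.

CREATE PROCEDURE SPSCHEMA.GETORDER(IN ORDERNO CHAR(8), OUT STATUS CHAR(1))
    LANGUAGE COBOL
    EXTERNAL NAME GETORDER
    PARAMETER STYLE GENERAL
    WLM ENVIRONMENT WLMENV1
    SECURITY USER;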
Results
When the stored procedure runs, DB2 establishes a RACF environment for accessing non-DB2 resources and uses the specified authorization ID to access protected z/OS resources.
Related tasks:
   Chapter 14, Calling a stored procedure from your application, on page 775
   Implementing RRS for stored procedures (DB2 Installation and Migration)
   Controlling stored procedure access to non-DB2 resources by using RACF (Managing Security)
Related reference:
   DSNACICS stored procedure (DB2 Administration Guide)
   DSNAIMS stored procedure (DB2 Administration Guide)
   DSNAIMS2 stored procedure (DB2 Administration Guide)
   CREATE PROCEDURE (SQL - external) (DB2 SQL)
   APPC/MVS Configuration (Multiplatform APPC Configuration Guide)
Related information:
   Accessing CICS and IMS (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
   External CICS interface (EXCI) (CICS Transaction Server for z/OS)
The startup procedure for a stored procedures address space in which stored procedures that use ODBA run must include a DFSRESLB DD statement and an extra data set in the STEPLIB concatenation. Related information: Application programming design
DB2 returns the result set and the name of the SQL cursor for the stored procedure to the client. Use meaningful cursor names for returning result sets: The name of the cursor that is used to return result sets is made available to the client application through extensions to the DESCRIBE statement. Use cursor names that are meaningful to the DRDA client application, especially when the stored procedure returns multiple result sets. Objects from which you can return result sets: You can use any of these objects in the SELECT statement that is associated with the cursor for a result set: v Tables, synonyms, views, created temporary tables, declared temporary tables, and aliases defined at the local DB2 subsystem
v Tables, synonyms, views, created temporary tables, and aliases defined at remote DB2 for z/OS systems that are accessible through DB2 private protocol access
Returning a subset of rows to the client: If you execute FETCH statements with a result set cursor, DB2 does not return the fetched rows to the client program. For example, if you declare a cursor WITH RETURN and then execute the statements OPEN, FETCH, and FETCH, the client receives data beginning with the third row in the result set. If the result set cursor is scrollable and you fetch rows with it, you need to position the cursor before the first row of the result table after you fetch the rows and before the stored procedure ends.
Using a temporary table to return result sets: You can use a created temporary table or declared temporary table to return result sets from a stored procedure. This capability can be used to return nonrelational data to a DRDA client. For example, you can access IMS data from a stored procedure in the following way:
v Use APPC/MVS to issue an IMS transaction.
v Receive the IMS reply message, which contains data that should be returned to the client.
v Insert the data from the reply message into a temporary table.
v Open a cursor against the temporary table. When the stored procedure ends, the rows from the temporary table are returned to the client. (A sketch of this cursor technique follows the related-task link below.)
Related tasks:
   Writing a program to receive the result sets from a stored procedure on page 792
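The following statements are a minimal sketch of that cursor technique inside the stored procedure. The declared temporary table name IMSREPLY, its columns, and the host variables SEQNO and REPLYTEXT are hypothetical; the essential points are the WITH RETURN clause on the cursor declaration and the OPEN statement that is left open when the procedure ends.

EXEC SQL DECLARE GLOBAL TEMPORARY TABLE IMSREPLY
         (SEQNO INTEGER, REPLYTEXT VARCHAR(254));
EXEC SQL INSERT INTO SESSION.IMSREPLY VALUES (:SEQNO, :REPLYTEXT);
EXEC SQL DECLARE C1 CURSOR WITH RETURN FOR
         SELECT SEQNO, REPLYTEXT FROM SESSION.IMSREPLY ORDER BY SEQNO;
EXEC SQL OPEN C1;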
PACKAGESET special register. The package of the called program comes from the collection that is specified in the CURRENT PACKAGESET special register.
v If both of the following conditions are true, DB2 uses the collection ID of the package that contains the SQL statement CALL:
  - The stored procedure does not execute SET CURRENT PACKAGE PATH or SET CURRENT PACKAGESET.
  - The stored procedure definition contains the NO COLLID option.
When control returns from the stored procedure, DB2 restores the value of the CURRENT PACKAGESET special register to the value that it contained before the client program executed the SQL statement CALL.
Procedure
To create an external stored procedure as reentrant:
1. Compile the procedure as reentrant and link-edit it as reentrant and reusable.
   For instructions on compiling programs to be reentrant, see the information for the programming language that you are using. For C and C++ procedures, you can use the z/OS binder to produce reentrant and reusable load modules.
   If your stored procedure cannot be reentrant, link-edit it as non-reentrant and non-reusable. The non-reusable attribute prevents multiple tasks from using a single copy of the stored procedure at the same time.
2. Specify STAY RESIDENT YES in the CREATE PROCEDURE or ALTER PROCEDURE statement for the stored procedure. This option makes a reentrant stored procedure remain in storage.
   A non-reentrant stored procedure must not remain in storage. You therefore need to specify STAY RESIDENT NO in the CREATE PROCEDURE or ALTER PROCEDURE statement for a non-reentrant stored procedure. STAY RESIDENT NO is the default.
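For example, if an existing procedure has already been compiled and link-edited as reentrant, a statement like the following sketch (the procedure name is hypothetical) makes it remain resident in storage:

ALTER PROCEDURE SPSCHEMA.STORPRCA STAY RESIDENT YES;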
Related concepts:
   Making programs reentrant (Enterprise COBOL for z/OS Programming Guide)
Related reference:
   Compiler options (COBOL) (Enterprise COBOL for z/OS Programming Guide)
   ALTER PROCEDURE (external) (DB2 SQL)
   CREATE PROCEDURE (external) (DB2 SQL)
   Binder options reference (MVS Program Management: User's Guide and Reference)
   OPTIONS(REENTRANT) (Enterprise PL/I for z/OS Compiler and Runtime Migration Guide)
   Compile-time option descriptions (PL/I) (Enterprise PL/I for z/OS Programming Guide)
   Reentrancy (XL C/C++ User's Guide)
Table 100. Characteristics of main programs and subprograms Language Assembler Main program MAIN=YES is specified in the invocation of the CEEENTRY macro. Contains a main() function. Pass parameters to it through argc and argv. A COBOL program that ends with GOBACK Subprogram MAIN=NO is specified in the invocation of the CEEENTRY macro. A fetchable function. Pass parameters to it explicitly. A dynamically loaded subprogram that ends with GOBACK
COBOL PL/I
The following code shows an example of coding a C++ stored procedure as a subprogram.
/******************************************************************/ /* This C++ subprogram is a stored procedure that uses linkage */ /* convention GENERAL and receives 3 parameters. */ /* The extern statement is required. */ /******************************************************************/ extern "C" void cppfunc(char p1[11],long *p2,short *p3); #pragma linkage(cppfunc,fetchable) #include <stdlib.h> EXEC SQL INCLUDE SQLCA; void cppfunc(char p1[11],long *p2,short *p3) { /****************************************************************/ /* Declare variables used for SQL operations. These variables */ /* are local to the subprogram and must be copied to and from */ /* the parameter list for the stored procedure call. */ /****************************************************************/ EXEC SQL BEGIN DECLARE SECTION; char parm1[11]; long int parm2; short int parm3; EXEC SQL END DECLARE SECTION; /*************************************************************/ /* Receive input parameter values into local variables. */ /*************************************************************/ strcpy(parm1,p1); parm2 = *p2; parm3 = *p3; /*************************************************************/ /* Perform operations on local variables. */ /*************************************************************/ . . . /*************************************************************/ /* Set values to be passed back to the caller. */ /*************************************************************/ strcpy(parm1,"SETBYSP"); parm2 = 100; parm3 = 200; /*************************************************************/ /* Copy values to output parameters. */ /*************************************************************/ strcpy(p1,parm1); *p2 = parm2; *p3 = parm3; }
Table 101. Compatible assembler language declarations for LOBs, ROWIDs, and locators SQL data type in definition TABLE LOCATOR BLOB LOCATOR CLOB LOCATOR DBCLOB LOCATOR BLOB(n) Assembler declaration DS FL4
If n <= 65535: var DS 0FL4 var_length DS FL4 var_data DS CLn If n > 65535: var DS 0FL4 var_length DS FL4 var_data DS CL65535 ORG var_data+(n-65535) If n <= 65535: var DS 0FL4 var_length DS FL4 var_data DS CLn If n > 65535: var DS 0FL4 var_length DS FL4 var_data DS CL65535 ORG var_data+(n-65535) If m (=2*n) <= 65534: var DS 0FL4 var_length DS FL4 var_data DS CLm If m > 65534: var DS 0FL4 var_length DS FL4 var_data DS CL65534 ORG var_data+(m-65534) DS HL2,CL40 If PARAMETER VARCHAR NULTERM is specified or implied: char data[n+1]; If PARAMETER VARCHAR STRUCTURE is specified: struct {short len; char data[n]; } var;
CLOB(n)
DBCLOB(n)
ROWID VARCHAR(n)
Note: 1. This row does not apply to VARCHAR(n) FOR BIT DATA. BIT DATA is always passed in a structured representation.
For LOBs, ROWIDs, and locators, the following table shows compatible declarations for the C language.
Table 102. Compatible C language declarations for LOBs, ROWIDs, and locators SQL data type in definition TABLE LOCATOR BLOB LOCATOR CLOB LOCATOR DBCLOB LOCATOR C declaration unsigned long
Table 102. Compatible C language declarations for LOBs, ROWIDs, and locators (continued) SQL data type in definition BLOB(n) C declaration struct {unsigned long length; char data[n]; } var; struct {unsigned long length; char var_data[n]; } var; struct {unsigned long length; sqldbchar data[n]; } var; struct {short int length; char data[40]; } var;
CLOB(n)
DBCLOB(n)
ROWID
For LOBs, ROWIDs, and locators, the following table shows compatible declarations for COBOL.
Table 103. Compatible COBOL declarations for LOBs, ROWIDs, and locators SQL data type in definition TABLE LOCATOR BLOB LOCATOR CLOB LOCATOR DBCLOB LOCATOR BLOB(n) COBOL declaration 01 var PIC S9(9) USAGE IS BINARY.
If n <= 32767: 01 var. 49 var-LENGTH PIC 9(9) USAGE COMP. 49 var-DATA PIC X(n). If n > 32767: 01 var. var-LENGTH PIC S9(9) USAGE COMP. 02 var-DATA. 49 FILLER PIC X(32767). 49 FILLER PIC X(32767). . . . 49 FILLER PIC X(mod(n,32767)). 02
Table 103. Compatible COBOL declarations for LOBs, ROWIDs, and locators (continued) SQL data type in definition CLOB(n) COBOL declaration If n <= 32767: 01 var. 49 var-LENGTH PIC 9(9) USAGE COMP. 49 var-DATA PIC X(n). If n > 32767: 01 var. var-LENGTH PIC S9(9) USAGE COMP. 02 var-DATA. 49 FILLER PIC X(32767). 49 FILLER PIC X(32767). . . . 49 FILLER PIC X(mod(n,32767)). 02 DBCLOB(n) If n <= 32767: 01 var. 49 var-LENGTH PIC 9(9) USAGE COMP. 49 var-DATA PIC G(n) USAGE DISPLAY-1. If n > 32767: 01 var. var-LENGTH PIC S9(9) USAGE COMP. 02 var-DATA. 49 FILLER PIC G(32767) USAGE DISPLAY-1. 49 FILLER PIC G(32767). USAGE DISPLAY-1. . . . 49 FILLER PIC G(mod(n,32767)) USAGE DISPLAY-1. 02 ROWID 01 var. 49 var-LEN PIC 9(4) USAGE COMP. 49 var-DATA PIC X(40).
For LOBs, ROWIDs, and locators, the following table shows compatible declarations for PL/I.
Table 104. Compatible PL/I declarations for LOBs, ROWIDs, and locators SQL data type in definition TABLE LOCATOR BLOB LOCATOR CLOB LOCATOR DBCLOB LOCATOR PL/I BIN FIXED(31)
Table 104. Compatible PL/I declarations for LOBs, ROWIDs, and locators (continued) SQL data type in definition BLOB(n) PL/I If n <= 32767: 01 var, 03 var_LENGTH BIN FIXED(31), 03 var_DATA CHAR(n); If n > 32767: 01 var, 02 var_LENGTH BIN FIXED(31), 02 var_DATA, 03 var_DATA1(n) CHAR(32767), 03 var_DATA2 CHAR(mod(n,32767)); CLOB(n) If n <= 32767: 01 var, 03 var_LENGTH BIN FIXED(31), 03 var_DATA CHAR(n); If n > 32767: 01 var, 02 var_LENGTH BIN FIXED(31), 02 var_DATA, 03 var_DATA1(n) CHAR(32767), 03 var_DATA2 CHAR(mod(n,32767)); DBCLOB(n) If n <= 16383: 01 var, 03 var_LENGTH BIN FIXED(31), 03 var_DATA GRAPHIC(n); If n > 16383: 01 var, 02 var_LENGTH BIN FIXED(31), 02 var_DATA, 03 var_DATA1(n) GRAPHIC(16383), 03 var_DATA2 GRAPHIC(mod(n,16383)); ROWID CHAR(40) VAR
Tables of results: Each high-level language definition for stored procedure parameters supports only a single instance (a scalar value) of the parameter. There is no support for structure, array, or vector parameters. Because of this, the SQL statement CALL limits the ability of an application to return some kinds of tables. For example, an application might need to return a table that represents multiple occurrences of one or more of the parameters passed to the stored procedure.
Because the SQL statement CALL cannot return more than one set of parameters, use one of the following techniques to return such a table: v Put the data that the application returns in a DB2 table. The calling program can receive the data in one of these ways: The calling program can fetch the rows from the table directly. Specify FOR FETCH ONLY or FOR READ ONLY on the SELECT statement that retrieves data from the table. A block fetch can retrieve the required data efficiently. The stored procedure can return the contents of the table as a result set. See Writing an external procedure to return result sets to a DRDA client on page 622 and Writing a program to receive the result sets from a stored procedure on page 792 for more information. v Convert tabular data to string format and return it as a character string parameter to the calling program. The calling program and the stored procedure can establish a convention for interpreting the content of the character string. For example, the SQL statement CALL can pass a 1920-byte character string parameter to a stored procedure, which enables the stored procedure to return a 24x80 screen image to the calling program. Related concepts: Compatibility of SQL and language data types on page 144
DSNREXCS
    Cursor stability (CS)
DSNREXUR
    Uncommitted read (UR)
This topic shows an example of a REXX stored procedure that executes DB2 commands. The stored procedure performs the following actions:
v Receives one input parameter, which contains a DB2 command.
v Calls the IFI COMMAND function to execute the command.
v Extracts the command result messages from the IFI return area and places the messages in a created temporary table. Each row of the temporary table contains a sequence number and the text of one message.
v Opens a cursor to return a result set that contains the command result messages.
v Returns the unformatted contents of the IFI return area in an output parameter.
The following example shows the definition of the stored procedure.
CREATE PROCEDURE COMMAND(IN CMDTEXT VARCHAR(254), OUT CMDRESULT VARCHAR(32704))
    LANGUAGE REXX
    EXTERNAL NAME COMMAND
    NO COLLID
    ASUTIME NO LIMIT
    PARAMETER STYLE GENERAL
    STAY RESIDENT NO
    RUN OPTIONS 'TRAP(ON)'
    WLM ENVIRONMENT WLMENV1
    SECURITY DB2
    DYNAMIC RESULT SETS 1
    COMMIT ON RETURN NO;
The following example shows the COMMAND stored procedure that executes DB2 commands.
/* REXX */
PARSE UPPER ARG CMD                 /* Get the DB2 command text         */
                                    /* Remove enclosing quotation marks */
IF LEFT(CMD,1) = "'" & RIGHT(CMD,1) = "'" THEN
  CMD = SUBSTR(CMD,2,LENGTH(CMD)-2)
ELSE
  IF LEFT(CMD,2) = "''" & RIGHT(CMD,2) = "''" THEN
    CMD = SUBSTR(CMD,3,LENGTH(CMD)-4)
COMMAND = SUBSTR("COMMAND",1,18," ")
/****************************************************************/
/* Set up the IFCA, return area, and output area for the        */
/* IFI COMMAND call.                                             */
/****************************************************************/
IFCA = SUBSTR('00'X,1,180,'00'X)
IFCA = OVERLAY(D2C(LENGTH(IFCA),2),IFCA,1+0)
IFCA = OVERLAY("IFCA",IFCA,4+1)
RTRNAREASIZE = 262144 /*1048572*/
RTRNAREA = D2C(RTRNAREASIZE+4,4)LEFT(' ',RTRNAREASIZE,' ')
OUTPUT = D2C(LENGTH(CMD)+4,2)||'0000'X||CMD
BUFFER = SUBSTR(" ",1,16," ")
/****************************************************************/
/* Make the IFI COMMAND call.                                    */
/****************************************************************/
ADDRESS LINKPGM "DSNWLIR COMMAND IFCA RTRNAREA OUTPUT"
WRC = RC
RTRN= SUBSTR(IFCA,12+1,4)
REAS= SUBSTR(IFCA,16+1,4)
TOTLEN = C2D(SUBSTR(IFCA,20+1,4))
/****************************************************************/
/* Set up the host command environment for SQL calls.            */
/****************************************************************/
"SUBCOM DSNREXX"                    /* Host cmd env available? */
IF RC THEN                          /* No--add host cmd env    */
  S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
/****************************************************************/
/* Set up SQL statements to insert command output messages      */
/* into a temporary table.                                       */
/****************************************************************/
SQLSTMT='INSERT INTO SYSIBM.SYSPRINT(SEQNO,TEXT) VALUES(?,?)'
ADDRESS DSNREXX "EXECSQL DECLARE C1 CURSOR FOR S1"
IF SQLCODE \= 0 THEN CALL SQLCA
ADDRESS DSNREXX "EXECSQL PREPARE S1 FROM :SQLSTMT"
IF SQLCODE \= 0 THEN CALL SQLCA
/****************************************************************/
/* Extract messages from the return area and insert them into   */
/* the temporary table.                                          */
/****************************************************************/
SEQNO = 0
OFFSET = 4+1
DO WHILE ( OFFSET < TOTLEN )
  LEN = C2D(SUBSTR(RTRNAREA,OFFSET,2))
  SEQNO = SEQNO + 1
  TEXT = SUBSTR(RTRNAREA,OFFSET+4,LEN-4-1)
  ADDRESS DSNREXX "EXECSQL EXECUTE S1 USING :SEQNO,:TEXT"
  IF SQLCODE \= 0 THEN CALL SQLCA
  OFFSET = OFFSET + LEN
END
/****************************************************************/
/* Set up a cursor for a result set that contains the command   */
/* output messages from the temporary table.                     */
/****************************************************************/
SQLSTMT='SELECT SEQNO,TEXT FROM SYSIBM.SYSPRINT ORDER BY SEQNO'
ADDRESS DSNREXX "EXECSQL DECLARE C2 CURSOR FOR S2"
IF SQLCODE \= 0 THEN CALL SQLCA
ADDRESS DSNREXX "EXECSQL PREPARE S2 FROM :SQLSTMT"
IF SQLCODE \= 0 THEN CALL SQLCA
/****************************************************************/
/* Open the cursor to return the message output result set to   */
/* the caller.                                                   */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL OPEN C2"
IF SQLCODE \= 0 THEN CALL SQLCA
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')   /* Remove cmd env */
EXIT SUBSTR(RTRNAREA,1,TOTLEN+4)
/****************************************************************/
/* Routine to display the SQLCA                                  */
/****************************************************************/
SQLCA:
SAY 'SQLCODE ='SQLCODE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1','SQLERRD.2','SQLERRD.3','SQLERRD.4','SQLERRD.5','SQLERRD.6
SAY 'SQLWARN ='SQLWARN.0','SQLWARN.1','SQLWARN.2','SQLWARN.3','SQLWARN.4',' ,
  || SQLWARN.5','SQLWARN.6','SQLWARN.7','SQLWARN.8','SQLWARN.9','SQLWARN.10
SAY 'SQLSTATE='SQLSTATE
SAY 'SQLCODE ='SQLCODE
EXIT 'SQLERRMC ='SQLERRMC';' ,
  || ' SQLERRP ='SQLERRP';' ,
  || ' SQLERRD ='SQLERRD.1','SQLERRD.2','SQLERRD.3','SQLERRD.4','SQLERRD.5','SQLERRD.6';' ,
  || ' SQLWARN ='SQLWARN.0','SQLWARN.1','SQLWARN.2','SQLWARN.3','SQLWARN.4',' ,
  || SQLWARN.5','SQLWARN.6','SQLWARN.7','SQLWARN.8','SQLWARN.9','SQLWARN.10';' ,
  || ' SQLSTATE='SQLSTATE';'
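A calling application can invoke this stored procedure with an SQL CALL statement. The following minimal sketch assumes host variables :CMDTEXT (containing a DB2 command string, for example -DISPLAY THREAD(*)) and :CMDRESULT; these names are illustrative and are not part of the sample itself:

EXEC SQL CALL COMMAND(:CMDTEXT, :CMDRESULT);

After the CALL, :CMDRESULT holds the unformatted IFI return area, and the application can process the result set of formatted command messages in the same way as any other stored procedure result set.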
Related reference:
Calling a stored procedure from a REXX procedure on page 781
TSO/E services available under IKJTSOEV (TSO/E Programming Services)
Procedure
To modify an external stored procedure definition:
1. Issue the ALTER PROCEDURE statement with the appropriate options. This new definition replaces the existing definition.
2. Prepare the external stored procedure again, as you did when you originally created the external stored procedure.
Example
Suppose that an existing C stored procedure was defined with the following statement:
CREATE PROCEDURE B(IN V1 INTEGER, OUT V2 CHAR(9))
  LANGUAGE C
  DETERMINISTIC
  NO SQL
  EXTERNAL NAME SUMMOD
  COLLID SUMCOLL
  ASUTIME LIMIT 900
  PARAMETER STYLE GENERAL WITH NULLS
  STAY RESIDENT NO
  RUN OPTIONS 'MSGFILE(OUTFILE),RPTSTG(ON),RPTOPTS(ON)'
  WLM ENVIRONMENT PAYROLL
  PROGRAM TYPE MAIN
  SECURITY DB2
  DYNAMIC RESULT SETS 10
  COMMIT ON RETURN NO;
Assume that you need to make the following changes to the stored procedure definition:
v The stored procedure selects data from DB2 tables but does not modify DB2 data.
v The parameters can have null values, and the stored procedure can return a diagnostic string.
v The length of time that the stored procedure runs is unlimited.
v If the stored procedure is called by another stored procedure or a user-defined function, the stored procedure uses the WLM environment of the caller.
The following ALTER PROCEDURE statement makes these changes:
ALTER PROCEDURE B
  READS SQL DATA
  ASUTIME NO LIMIT
  PARAMETER STYLE SQL
  WLM ENVIRONMENT (PAYROLL,*);
Procedure
To create multiple versions of external procedures and external SQL procedures, use one of the following techniques:
v Define multiple procedures with the same name in different schemas. You can subsequently use the SQL path to determine which version of the procedure is to be used by a calling program. (A sketch of this technique follows this list.)
v Define multiple versions of the executable code. You can subsequently use a particular version by specifying the name of the load module for the version that you want to use on the EXTERNAL clause of the CREATE PROCEDURE statement or ALTER PROCEDURE statement.
v Define multiple packages for a procedure. You can subsequently use the COLLID option, the CURRENT PACKAGESET special register, or the CURRENT PACKAGE PATH special register to specify which version of the procedure is to be used by the calling application.
v Set up multiple WLM environments to use different versions of a procedure.
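For example, the first technique in this list might be sketched as follows. The schema names (TEST and PROD), parameter list, and load-module names are illustrative assumptions, not part of the sample application:

-- Two versions of the same external procedure, one in each schema
CREATE PROCEDURE TEST.PAYCALC (IN EMPNO CHAR(6), OUT NETPAY DECIMAL(9,2))
  LANGUAGE C
  EXTERNAL NAME PAYCALCT      -- load module for the test version
  PARAMETER STYLE GENERAL
  WLM ENVIRONMENT WLMENV1;

CREATE PROCEDURE PROD.PAYCALC (IN EMPNO CHAR(6), OUT NETPAY DECIMAL(9,2))
  LANGUAGE C
  EXTERNAL NAME PAYCALCP      -- load module for the production version
  PARAMETER STYLE GENERAL
  WLM ENVIRONMENT WLMENV1;

A calling program that issues CALL PAYCALC with an unqualified name then resolves the name through the SQL path, for example through the PATH bind option or the CURRENT PATH special register.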
v The column has data type ROWID. ROWID columns always have default values.
v The column is an identity column. Identity columns always have default values.
v The column is a row change timestamp column.
The values that you can insert into a ROWID column, an identity column, or a row change timestamp column depend on whether the column is defined with GENERATED ALWAYS or GENERATED BY DEFAULT.

Inserting a single row:

You can use the VALUES clause of the INSERT statement to insert a single row of column values into a table. You can either name all of the columns for which you are providing values, or you can omit the list of column names. If you omit the column name list, you must specify values for all of the columns.
Recommendation: For static INSERT statements, name all of the columns for which you are providing values, for the following reasons:
v Your INSERT statement is independent of the table format. (For example, you do not need to change the statement when a column is added to the table.)
v You can verify that you are specifying the values in the intended order.
v Your source statements are more self-descriptive.
If you do not name the columns in a static INSERT statement and a column is later added to the table, an error can occur after any rebind of the INSERT statement unless you change the statement to include a value for the new column. This is true even if the new column has a default value.
When you list the column names, you must specify their corresponding values in the same order as in the list of column names.
Example: The following statement inserts information about a new department into the YDEPT table.
INSERT INTO YDEPT (DEPTNO, DEPTNAME, MGRNO, ADMRDEPT, LOCATION)
  VALUES ('E31', 'DOCUMENTATION', '000010', 'E01', ' ');
After inserting a new department row into your YDEPT table, you can use a SELECT statement to see what you have loaded into the table. The following SQL statement shows you all of the new department rows that you have inserted:
SELECT * FROM YDEPT
  WHERE DEPTNO LIKE 'E%'
  ORDER BY DEPTNO;
Example: The following statement inserts information about a new employee into the YEMP table. Because the WORKDEPT column is a foreign key, the value that is inserted for that column (E31) must be a value in the primary key column, which is DEPTNO in the YDEPT table.
INSERT INTO YEMP
  VALUES ('000400', 'RUTHERFORD', 'B', 'HAYES', 'E31', '5678', '1998-01-01',
          'MANAGER', 16, 'M', '1970-07-10', 24000, 500, 1900);
Example: The following statement also inserts a row into the YEMP table. Because the unspecified columns allow null values, DB2 inserts null values into the columns that you do not specify.
INSERT INTO YEMP (EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT, PHONENO, JOB)
  VALUES ('000410', 'MILLARD', 'K', 'FILLMORE', 'D11', '4888', 'MANAGER');
Related concepts: Rules for inserting data into an identity column on page 641 Rules for inserting data into a ROWID column on page 640 Related tasks: Inserting multiple rows of data from host variable arrays on page 157 Inserting rows into a table from another table Related reference: CREATE TABLE (DB2 SQL)
The following statement copies data from DSN8910.EMP into the newly created table:
INSERT INTO TELE
  SELECT LASTNAME, FIRSTNME, PHONENO
  FROM DSN8910.EMP
  WHERE WORKDEPT = 'D21';
The two previous statements create and fill a table, TELE, that looks similar to the following table:
NAME2            NAME1        PHONE
===============  ===========  =====
PULASKI          EVA          7831
JEFFERSON        JAMES        2094
MARINO           SALVATORE    3780
SMITH            DANIEL       0961
JOHNSON          SYBIL        8953
PEREZ            MARIA        9001
MONTEVERDE       ROBERT       3780
The CREATE TABLE statement example creates a table which, at first, is empty. The table has columns for last names, first names, and phone numbers, but does not have any rows. The INSERT statement fills the newly created table with data that is selected from the DSN8910.EMP table: the names and phone numbers of employees in department D21. Example: The following CREATE statement creates a table that contains an employee's department name and phone number. The fullselect within the INSERT statement fills the DLIST table with data from rows that are selected from two existing tables, DSN8910.DEPT and DSN8910.EMP.
CREATE TABLE DLIST
  (DEPT   CHAR(3)      NOT NULL,
   DNAME  VARCHAR(36),
   LNAME  VARCHAR(15)  NOT NULL,
   FNAME  VARCHAR(12)  NOT NULL,
   INIT   CHAR,
   PHONE  CHAR(4));

INSERT INTO DLIST
  SELECT DEPTNO, DEPTNAME, LASTNAME, FIRSTNME, MIDINIT, PHONENO
  FROM DSN8910.DEPT, DSN8910.EMP
  WHERE DEPTNO = WORKDEPT;
If ROWIDCOL2 is defined as GENERATED ALWAYS, you cannot insert the ROWID column data from T1 into T2, but you can insert the integer column data. To insert only the integer data, use one of the following methods: v Specify only the integer column in your INSERT statement, as in the following statement:
INSERT INTO T2 (INTCOL2) SELECT INTCOL1 FROM T1;
v Specify the OVERRIDING USER VALUE clause in your INSERT statement to tell DB2 to ignore any values that you supply for system-generated columns, as in the following statement:
INSERT INTO T2 (INTCOL2,ROWIDCOL2) OVERRIDING USER VALUE SELECT * FROM T1;
If IDENTCOL2 is defined as GENERATED ALWAYS, you cannot insert the identity column data from T1 into T2, but you can insert the character column data. To insert only the character data, use one of the following methods: v Specify only the character column in your INSERT statement, as in the following statement:
INSERT INTO T2 (CHARCOL2) SELECT CHARCOL1 FROM T1;
v Specify the OVERRIDING USER VALUE clause in your INSERT statement to tell DB2 to ignore any values that you supply for system-generated columns, as in the following statement:
INSERT INTO T2 (CHARCOL2,IDENTCOL2) OVERRIDING USER VALUE SELECT * FROM T1;
If you need to assign a value of one distinct type to a column of another distinct type, a function must exist that converts the value from one type to another. Because DB2 provides cast functions only between distinct types and their source types, you must write the function to convert from one distinct type to another.
You need to insert values from the TOTAL column in JAPAN_SALES into the TOTAL column of JAPAN_SALES_03. Because INSERT statements follow assignment rules, DB2 does not let you insert the values directly from one column to the other because the columns are of different distinct types. Suppose that a user-defined function called US_DOLLAR has been written that accepts values of type JAPANESE_YEN as input and returns values of type US_DOLLAR. You can then use this function to insert values into the JAPAN_SALES_03 table:
INSERT INTO JAPAN_SALES_03 SELECT PRODUCT_ITEM, US_DOLLAR(TOTAL) FROM JAPAN_SALES WHERE YEAR = 2003;
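The US_DOLLAR function is assumed to have been written separately. A minimal sketch of such a conversion function, assuming that both distinct types are sourced on DECIMAL(9,2) and that a fixed conversion rate is acceptable, might look like the following statement:

-- Hypothetical conversion function between two distinct types.
-- DECIMAL(AMOUNT) and US_DOLLAR(...) are the cast functions that DB2
-- generates for the distinct types; the 0.0093 rate is an assumption.
CREATE FUNCTION US_DOLLAR(AMOUNT JAPANESE_YEN)
  RETURNS US_DOLLAR
  LANGUAGE SQL
  RETURN US_DOLLAR(DECIMAL(DECIMAL(AMOUNT) * 0.0093, 9, 2));

An external function that looks up the current exchange rate would serve the same purpose.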
values of hv1 cannot be assigned to SIZECOL1 and SIZECOL2, because C data type double, which is equivalent to DB2 data type DOUBLE, is not promotable to data type INTEGER.
EXEC SQL BEGIN DECLARE SECTION;
  double hv1;
  short  hv2;
EXEC SQL END DECLARE SECTION;

CREATE DISTINCT TYPE SIZE AS INTEGER;
CREATE TABLE TABLE1 (SIZECOL1 SIZE, SIZECOL2 SIZE);
. . .
INSERT INTO TABLE1
  VALUES (:hv1,:hv1);   /* Invalid statement */
INSERT INTO TABLE1
  VALUES (:hv2,:hv2);   /* Valid statement */
The MERGE statement simplifies the update and the insert into a single statement:
MERGE INTO INVENTORY
  USING (VALUES (:hv_model, :hv_delta)) AS SOURCE(MODEL, DELTA)
  ON INVENTORY.MODEL = SOURCE.MODEL
  WHEN MATCHED THEN UPDATE SET QUANTITY = QUANTITY + SOURCE.DELTA
  WHEN NOT MATCHED THEN INSERT VALUES (SOURCE.MODEL, SOURCE.DELTA)
  NOT ATOMIC CONTINUE ON SQLEXCEPTION;
Now, suppose that :hv_symbol and :hv_price are host variable arrays that contain updated data that corresponds to the data that is shown in Table 105. Table 106 shows the host variable data for stock activity.
Table 106. Host variable arrays of stock activity
hv_symbol   hv_price
XCOM        97.00
Table 106. Host variable arrays of stock activity (continued)
hv_symbol   hv_price
NEWC        30.00
XCOM        107.00
NEWC is new to the STOCK table, so its symbol and price need to be inserted into the STOCK table. The rows for XCOM in Table 106 on page 644 represent changed stock prices, so these values need to be updated in the STOCK table. Also, the output needs to show the change in stock prices as a DELTA value. The following SELECT FROM MERGE statement updates the price of XCOM, inserts the symbol and price for NEWC, and returns an output that includes a DELTA value for the change in stock price.
SELECT SYMBOL, PRICE, DELTA FROM FINAL TABLE
  (MERGE INTO STOCK AS S
     INCLUDE (DELTA DECIMAL(5,2))
     USING (VALUES (:hv_symbol, :hv_price) FOR :hv_nrows ROWS) AS R (SYMBOL, PRICE)
     ON S.SYMBOL = R.SYMBOL
     WHEN MATCHED THEN UPDATE SET DELTA = R.PRICE - S.PRICE, PRICE = R.PRICE
     WHEN NOT MATCHED THEN INSERT (SYMBOL, PRICE, DELTA) VALUES (R.SYMBOL, R.PRICE, R.PRICE)
     NOT ATOMIC CONTINUE ON SQLEXCEPTION);
The INCLUDE clause specifies that an additional column, DELTA, can be returned in the output without adding a column to the STOCK table. The UPDATE portion of the MERGE statement sets the DELTA value to the difference between the new price and the previous stock price. The INSERT portion of the MERGE statement sets the DELTA value to the same value as the PRICE column. After the SELECT FROM MERGE statement is processed, the STOCK table contains the data that is shown in Table 107.
Table 107. STOCK table after SELECT FROM MERGE statement
SYMBOL   PRICE
XCOM     107.00
YCOM     24.50
NEWC     30.00
The following output of the SELECT FROM MERGE statement includes both updates to XCOM and a DELTA value for each output row.
SYMBOL   PRICE    DELTA
=============================
XCOM      97.00    2.00
NEWC      30.00   30.00
XCOM     107.00   10.00
v The value of an automatically generated column such as a ROWID or identity column
v Any default values for columns
v All values for an inserted row, without specifying individual column names
v All values that are inserted by a multiple-row INSERT operation
v Values that are changed by a BEFORE INSERT trigger
Example: In addition to examples that use the DB2 sample tables, the examples in this topic use an EMPSAMP table that has the following definition:
CREATE TABLE EMPSAMP
  (EMPNO     INTEGER GENERATED ALWAYS AS IDENTITY,
   NAME      CHAR(30),
   SALARY    DECIMAL(10,2),
   DEPTNO    SMALLINT,
   LEVEL     CHAR(30),
   HIRETYPE  VARCHAR(30) NOT NULL WITH DEFAULT 'New Hire',
   HIREDATE  DATE NOT NULL WITH DEFAULT);
Assume that you need to insert a row for a new employee into the EMPSAMP table. To find out the values for the generated EMPNO, HIRETYPE, and HIREDATE columns, use the following SELECT FROM INSERT statement:
SELECT EMPNO, HIRETYPE, HIREDATE
  FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)
                      VALUES('Mary Smith', 35000.00, 11, 'Associate'));
The SELECT statement returns the DB2-generated identity value for the EMPNO column, the default value 'New Hire' for the HIRETYPE column, and the value of the CURRENT DATE special register for the HIREDATE column.
Recommendation: Use the SELECT FROM INSERT statement to insert a row into a parent table and retrieve the value of a primary key that was generated by DB2 (a ROWID or identity column). In another INSERT statement, specify this generated value as a value for a foreign key in a dependent table.

Result table of the INSERT operation:

The rows that are inserted into the target table produce a result table whose columns can be referenced in the SELECT list of the query. The columns of the result table are affected by the columns, constraints, and triggers that are defined for the target table:
v The result table includes DB2-generated values for identity columns, ROWID columns, or row change timestamp columns.
v Before DB2 generates the result table, it enforces any constraints that affect the insert operation (that is, check constraints, unique index constraints, and referential integrity constraints).
v The result table includes any changes that result from a BEFORE trigger that is activated by the insert operation. An AFTER trigger does not affect the values in the result table.
Example: Suppose that a BEFORE INSERT trigger is created on table EMPSAMP to give all new employees at the Associate level a $5000 increase in salary. The trigger has the following definition:
CREATE TRIGGER NEW_ASSOC
  NO CASCADE BEFORE INSERT ON EMPSAMP
  REFERENCING NEW AS NEWSALARY
  FOR EACH ROW MODE DB2SQL
  WHEN (NEWSALARY.LEVEL = 'ASSOCIATE')
  BEGIN ATOMIC
    SET NEWSALARY.SALARY = NEWSALARY.SALARY + 5000.00;
  END;
The INSERT statement in the FROM clause of the following SELECT statement inserts a new employee into the EMPSAMP table:
SELECT NAME, SALARY
  FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, LEVEL)
                      VALUES('Mary Smith', 35000.00, 'Associate'));
The SELECT statement returns a salary of 40000.00 for Mary Smith instead of the initial salary of 35000.00 that was explicitly specified in the INSERT statement. Selecting values when you insert a single row: When you insert a new row into a table, you can retrieve any column in the result table of the SELECT FROM INSERT statement. When you embed this statement in an application, you retrieve the row into host variables by using the SELECT ... INTO form of the statement. Example: You can retrieve all the values for a row that is inserted into a structure:
EXEC SQL SELECT * INTO :empstruct
  FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)
                      VALUES('Mary Smith', 35000.00, 11, 'Associate'));
For this example, :empstruct is a host variable structure that is declared with variables for each of the columns in the EMPSAMP table. Selecting values when you insert data into a view: If the INSERT statement references a view that is defined with a search condition, that view must be defined with the WITH CASCADED CHECK OPTION option. When you insert data into the view, the result table of the SELECT FROM INSERT statement includes only rows that satisfy the view definition. Example: Because view V1 is defined with the WITH CASCADED CHECK OPTION option, you can reference V1 in the INSERT statement:
CREATE VIEW V1 AS
  SELECT C1, I1 FROM T1 WHERE I1 > 10
  WITH CASCADED CHECK OPTION;

SELECT C1 FROM FINAL TABLE (INSERT INTO V1 (I1) VALUES(12));
The value 12 satisfies the search condition of the view definition, and the result table consists of the value for C1 in the inserted row. If you use a value that does not satisfy the search condition of the view definition, the insert operation fails, and DB2 returns an error. Selecting values when you insert multiple rows: In an application program, to retrieve values from the insertion of multiple rows, declare a cursor so that the INSERT statement is in the FROM clause of the SELECT statement of the cursor.
Example: Inserting rows with ROWID values: To see the values of the ROWID columns that are inserted into the employee photo and resume table, you can declare the following cursor:
EXEC SQL DECLARE CS1 CURSOR FOR SELECT EMP_ROWID FROM FINAL TABLE (INSERT INTO DSN8910.EMP_PHOTO_RESUME (EMPNO) SELECT EMPNO FROM DSN8910.EMP);
Example: Using the FETCH FIRST clause: To see only the first five rows that are inserted into the employee photo and resume table, use the FETCH FIRST clause:
EXEC SQL DECLARE CS2 CURSOR FOR SELECT EMP_ROWID FROM FINAL TABLE (INSERT INTO DSN8910.EMP_PHOTO_RESUME (EMPNO) SELECT EMPNO FROM DSN8910.EMP) FETCH FIRST 5 ROWS ONLY;
Example: Using the INPUT SEQUENCE clause: To retrieve rows in the order in which they are inserted, use the INPUT SEQUENCE clause:
EXEC SQL DECLARE CS3 CURSOR FOR SELECT EMP_ROWID FROM FINAL TABLE (INSERT INTO DSN8910.EMP_PHOTO_RESUME (EMPNO) VALUES(:hva_empno) FOR 5 ROWS) ORDER BY INPUT SEQUENCE;
The INPUT SEQUENCE clause can be specified only if an INSERT statement is in the FROM clause of the SELECT statement. In this example, the rows are inserted from an array of employee numbers. Example: Inserting rows with multiple encoding CCSIDs: Suppose that you want to populate an ASCII table with values from an EBCDIC table and then see selected values from the ASCII table. You can use the following cursor to select the EBCDIC columns, populate the ASCII table, and then retrieve the ASCII values:
EXEC SQL DECLARE CS4 CURSOR FOR SELECT C1, C2 FROM FINAL TABLE (INSERT INTO ASCII_TABLE SELECT * FROM EBCDIC_TABLE);
Selecting an additional column when you insert data: You can use the INCLUDE clause to introduce a new column to the result table but not add a column to the target table. Example: Suppose that you need to insert department number data into the project table. Suppose also, that you want to retrieve the department number and the corresponding manager number for each department. Because MGRNO is not a column in the project table, you can use the INCLUDE clause to include the manager number in your result but not in the insert operation. The following SELECT FROM INSERT statement performs the insert operation and retrieves the data.
DECLARE CS1 CURSOR FOR SELECT manager_num, projname FROM FINAL TABLE (INSERT INTO PROJ (DEPTNO) INCLUDE(manager_num CHAR(6)) SELECT DEPTNO, MGRNO FROM DEPT);
In an application program, when you insert multiple rows into a table, you declare a cursor so that the INSERT statement is in the FROM clause of the SELECT statement of the cursor. The result table of the cursor is determined during OPEN cursor processing. The result table may or may not be affected by other processes in your application. Effect on cursor sensitivity: When you declare a scrollable cursor, the cursor must be declared with the INSENSITIVE keyword if an INSERT statement is in the FROM clause of the cursor specification. The result table is generated during OPEN cursor processing and does not reflect any future changes. You cannot declare the cursor with the SENSITIVE DYNAMIC or SENSITIVE STATIC keywords. Effect of searched updates and deletes: When you declare a non-scrollable cursor, any searched updates or deletes do not affect the result table of the cursor. The rows of the result table are determined during OPEN cursor processing. Example: Assume that your application declares a cursor, opens the cursor, performs a fetch, updates the table, and then fetches additional rows:
EXEC SQL DECLARE CS1 CURSOR FOR
  SELECT SALARY FROM FINAL TABLE
    (INSERT INTO EMPSAMP (NAME, SALARY, LEVEL)
     SELECT NAME, INCOME, BAND FROM OLD_EMPLOYEE);
EXEC SQL OPEN CS1;
EXEC SQL FETCH CS1 INTO :hv_salary;
/* print fetch result */
...
EXEC SQL UPDATE EMPSAMP SET SALARY = SALARY + 500;
while (SQLCODE == 0) {
  EXEC SQL FETCH CS1 INTO :hv_salary;
  /* print fetch result */
  ...
}
The fetches that occur after the updates return the rows that were generated when the cursor was opened. If you use a simple SELECT (with no INSERT statement in the FROM clause), the fetches might return the updated values, depending on the access path that DB2 uses. Effect of WITH HOLD: When you declare a cursor with the WITH HOLD option and open the cursor, all of the rows are inserted into the target table. The WITH HOLD option has no effect on the SELECT FROM INSERT statement of the cursor definition. After your application performs a commit, you can continue to retrieve all of the inserted rows. Example: Assume that the employee table in the DB2 sample application has five rows. Your application declares a WITH HOLD cursor, opens the cursor, fetches two rows, performs a commit, and then fetches the third row successfully:
EXEC SQL DECLARE CS2 CURSOR WITH HOLD FOR
  SELECT EMP_ROWID FROM FINAL TABLE
    (INSERT INTO DSN8910.EMP_PHOTO_RESUME (EMPNO)
     SELECT EMPNO FROM DSN8910.EMP);
EXEC SQL OPEN CS2;                   /* Inserts 5 rows              */
EXEC SQL FETCH CS2 INTO :hv_rowid;   /* Retrieves ROWID for 1st row */
EXEC SQL FETCH CS2 INTO :hv_rowid;   /* Retrieves ROWID for 2nd row */
EXEC SQL COMMIT;                     /* Commits 5 rows              */
EXEC SQL FETCH CS2 INTO :hv_rowid;   /* Retrieves ROWID for 3rd row */
Effect of SAVEPOINT and ROLLBACK: A savepoint is a point in time within a unit of recovery to which relational database changes can be rolled back. You can set a savepoint with the SAVEPOINT statement. When you set a savepoint prior to opening the cursor and then roll back to that savepoint, all of the insertions are undone. Example: Assume that your application declares a cursor, sets a savepoint, opens the cursor, sets another savepoint, rolls back to the second savepoint, and then rolls back to the first savepoint:
EXEC SQL DECLARE CS3 CURSOR FOR
  SELECT EMP_ROWID FROM FINAL TABLE
    (INSERT INTO DSN8910.EMP_PHOTO_RESUME (EMPNO)
     SELECT EMPNO FROM DSN8910.EMP);
EXEC SQL SAVEPOINT A ON ROLLBACK RETAIN CURSORS;  /* Sets 1st savepoint */
EXEC SQL OPEN CS3;
EXEC SQL SAVEPOINT B ON ROLLBACK RETAIN CURSORS;  /* Sets 2nd savepoint */
...
EXEC SQL ROLLBACK TO SAVEPOINT B;  /* Rows still in DSN8910.EMP_PHOTO_RESUME */
...
EXEC SQL ROLLBACK TO SAVEPOINT A;  /* All inserted rows are undone */
What happens if an error occurs: In an application program, when you insert one or more rows into a table by using the SELECT FROM INSERT statement, the result table of the insert operation may or may not be affected, depending on where the error occurred in the application processing. During SELECT INTO processing: If the insert processing or the select processing fails during a SELECT INTO statement, no rows are inserted into the target table, and no rows are returned from the result table of the insert operation. Example: Assume that the employee table of the DB2 sample application has one row, and that the SALARY column has a value of 9 999 000.00.
EXEC SQL SELECT EMPNO INTO :hv_empno
  FROM FINAL TABLE (INSERT INTO EMPSAMP (NAME, SALARY)
    SELECT FIRSTNME || MIDINIT || LASTNAME, SALARY + 10000.00
    FROM DSN8910.EMP)
The addition of 10000.00 causes a decimal overflow to occur, and no rows are inserted into the EMPSAMP table. During OPEN cursor processing: If the insertion of any row fails during the OPEN cursor processing, all previously successful insertions are undone. The result table of the insert is empty. During FETCH processing: If the FETCH statement fails while retrieving rows from the result table of the insert operation, a negative SQLCODE is returned to the application, but the result table still contains the original number of rows that was determined during the OPEN cursor processing. At this point, you can undo all of the inserts.
Example: Assume that the result table contains 100 rows and the 90th row that is being fetched from the cursor returns a negative SQLCODE:
EXEC SQL DECLARE CS1 CURSOR FOR
  SELECT EMPNO FROM FINAL TABLE
    (INSERT INTO EMPSAMP (NAME, SALARY)
     SELECT FIRSTNME || MIDINIT || LASTNAME, SALARY + 10000.00
     FROM DSN8910.EMP);
EXEC SQL OPEN CS1;                   /* Inserts 100 rows */
while (SQLCODE == 0)
  EXEC SQL FETCH CS1 INTO :hv_empno;
if (SQLCODE == -904)                 /* If SQLCODE is -904, undo all inserts */
  EXEC SQL ROLLBACK;
else                                 /* Else, commit inserts */
  EXEC SQL COMMIT;
Related concepts:
Held and non-held cursors on page 708
Rules for host variables in an SQL statement on page 147
Identity columns on page 441
Types of cursors on page 705
Related tasks:
Inserting multiple rows of data from host variable arrays on page 157
Retrieving a set of rows by using a cursor on page 704
Undoing selected changes within a unit of work by using savepoints on page 30
Related reference:
Command line processor BIND command on page 973
Example: The following example joins data from table T1 to the result table of a nested table expression. The nested table expression is ordered by the second column in table T2. The ORDER BY ORDER OF TEMP clause in the query specifies that the fullselect result rows are to be returned in the same order as the nested table expression.
SELECT T1.C1, T1.C2, TEMP.Cy, TEMP.Cx
  FROM T1,
       (SELECT T2.C1, T2.C2 FROM T2 ORDER BY 2) AS TEMP(Cx, Cy)
  WHERE Cy = T1.C1
  ORDER BY ORDER OF TEMP;
Alternatively, you can produce the same result by explicitly stating the ORDER BY column TEMP.Cy in the fullselect instead of using the ORDER OF syntax.
SELECT T1.C1, T1.C2, TEMP.Cy, TEMP.Cx
  FROM T1,
       (SELECT T2.C1, T2.C2 FROM T2 ORDER BY 2) AS TEMP(Cx, Cy)
  WHERE Cy = T1.C1
  ORDER BY TEMP.Cy;
You cannot update rows in a created temporary table, but you can update rows in a declared temporary table.
The SET clause names the columns that you want to update and provides the values that you want to assign to those columns. You can replace a column value in the SET clause with any of the following items:
v A null value
  The column to which you assign the null value must not be defined as NOT NULL.
v An expression, which can be any of the following items:
  - A column
  - A constant
  - A scalar fullselect
  - A host variable
  - A special register
v A default value
  If you specify DEFAULT, DB2 determines the value based on how the corresponding column is defined in the table. (An example follows the UPDATE examples below.)
In addition, you can replace one or more column values in the SET clause with the column values in a row that is returned by a fullselect.
Next, identify the rows to update:
v To update a single row, use a WHERE clause that locates one, and only one, row.
v To update several rows, use a WHERE clause that locates only the rows that you want to update.
If you omit the WHERE clause, DB2 updates every row in the table or view with the values that you supply.
If DB2 finds an error while executing your UPDATE statement (for example, an update value that is too large for the column), it stops updating and returns an error. No rows in the table change. Rows that were already changed, if any, are restored to their previous values. If the UPDATE statement is successful, SQLERRD(3) is set to the number of rows that are updated.
Example: The following statement supplies a missing middle initial and changes the job for employee 000200.
UPDATE YEMP
  SET MIDINIT = 'H', JOB = 'FIELDREP'
  WHERE EMPNO = '000200';
The following statement gives everyone in department D11 a raise of 400.00. The statement can update several rows.
UPDATE YEMP
  SET SALARY = SALARY + 400.00
  WHERE WORKDEPT = 'D11';
The following statement sets the salary for employee 000190 to the average salary and sets the bonus to the minimum bonus for all employees.
UPDATE YEMP
  SET (SALARY, BONUS) = (SELECT AVG(SALARY), MIN(BONUS) FROM EMP)
  WHERE EMPNO = '000190';
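The SET clause can also restore a column to its defined default value by using the DEFAULT keyword, as described earlier. A minimal sketch, assuming that the BONUS column of YEMP was defined with a default value:

UPDATE YEMP
  SET BONUS = DEFAULT
  WHERE EMPNO = '000200';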
To retrieve row-by-row output of updated data, use a cursor with a SELECT FROM UPDATE statement.
Example: Suppose that all clerks for a company are receiving a 30 percent increase in their bonus. You can use the following SELECT FROM UPDATE statement to increase the bonus of each clerk by 30 percent and to retrieve the bonus for each clerk.
DECLARE CS1 CURSOR FOR
  SELECT LASTNAME, BONUS FROM FINAL TABLE
    (UPDATE EMP
       SET BONUS = BONUS * 1.3
       WHERE JOB = 'CLERK');
FETCH CS1 INTO :lastname, :bonus;
You can use the INCLUDE clause to introduce a new column to the result table but not add the column to the target table.
Example: Suppose that sales representatives received a 20 percent increase in their commission. You need to update the commission (COMM) of sales representatives (SALESREP) in the EMP table, and you need to retrieve the old commission and the new commission for each sales representative. You can use the following SELECT FROM UPDATE statement to perform the update and to retrieve the required data.
DECLARE CS2 CURSOR FOR
  SELECT LASTNAME, COMM, old_comm FROM FINAL TABLE
    (UPDATE EMP INCLUDE(old_comm DECIMAL(7,2))
       SET COMM = COMM * 1.2, old_comm = COMM
       WHERE JOB = 'SALESREP');
Procedure
To delete one or more rows in a table:
v Use the DELETE statement with a WHERE clause to specify a search condition. The DELETE statement removes zero or more rows of a table, depending on how many rows satisfy the search condition that you specify in the WHERE clause. You can use DELETE with a WHERE clause to remove only selected rows from a declared temporary table, but not from a created temporary table.
The following DELETE statement deletes each row in the YEMP table that has an employee number '000060':
DELETE FROM YEMP
  WHERE EMPNO = '000060';
When this statement executes, DB2 deletes any row from the YEMP table that meets the search condition.
If DB2 finds an error while executing your DELETE statement, it stops deleting data and returns error codes in the SQLCODE and SQLSTATE variables or related fields in the SQLCA. The data in the table does not change.
If the DELETE is successful, SQLERRD(3) in the SQLCA contains the number of deleted rows. This number includes only the number of deleted rows in the table that is specified in the DELETE statement. Rows that are deleted (in other tables) according to the CASCADE rule are not included in SQLERRD(3).
To delete every row in a table:
v Use the DELETE statement without specifying a WHERE clause. With segmented table spaces, deleting all rows of a table is very fast. The following DELETE statement deletes every row in the YDEPT table:
DELETE FROM YDEPT;
If the statement executes, the table continues to exist (that is, you can insert rows into it), but it is empty. All existing views and authorizations on the table remain intact when you use DELETE.
v Use the TRUNCATE statement. The TRUNCATE statement can provide the following advantages over a DELETE statement:
  - The TRUNCATE statement can ignore delete triggers.
  - The TRUNCATE statement can perform an immediate commit.
  - The TRUNCATE statement can keep storage allocated for the table.
The TRUNCATE statement does not, however, reset the count for an automatically generated value for an identity column on the table. If 14872 was the next identity column value to be generated before a TRUNCATE statement, 14872 would be the next value generated after the TRUNCATE statement.
Suppose that you need to empty the data from an old inventory table, regardless of any existing delete triggers, and you need to make the space that is allocated for the table available for other uses. Use the following TRUNCATE statement.
TRUNCATE INVENTORY_TABLE
  DROP STORAGE
  IGNORE DELETE TRIGGERS;
Suppose that you need to empty the data from an old inventory table permanently, regardless of any existing delete triggers, and you need to preserve the space that is allocated for the table. You need the emptied data to be completely unavailable, so that a ROLLBACK statement cannot return the data. Use the following TRUNCATE statement.
TRUNCATE INVENTORY_TABLE REUSE STORAGE IGNORE DELETE TRIGGERS IMMEDIATE;
v Use the DROP TABLE statement. DROP TABLE drops the specified table and all related views and authorizations, which can invalidate plans and packages.
Related concepts: SQL communication area (SQLCA) (DB2 SQL) Related tasks: Dropping tables on page 460 Related reference: DROP (DB2 SQL) TRUNCATE (DB2 SQL)
To retrieve row-by-row output of deleted data, use a cursor with a SELECT FROM DELETE statement. Example: Suppose that a company is eliminating all analyst positions and that the company wants to know how many years of experience each analyst had with the company. You can use the following SELECT FROM DELETE statement to delete analysts from the EMP table and to retrieve the experience of each analyst.
DECLARE CS1 CURSOR FOR
  SELECT YEAR(CURRENT DATE - HIREDATE) FROM OLD TABLE
    (DELETE FROM EMP
       WHERE JOB = 'ANALYST');
FETCH CS1 INTO :years_of_service;
You can use the INCLUDE clause to retrieve calculated data that is based on the data that you delete, without adding that column to the target table.
Example: Suppose that you need to delete managers from the EMP table and that you need to retrieve the salary and the years of employment for each manager. You can use the following SELECT FROM DELETE statement to perform the delete operation and to retrieve the required data.
DECLARE CS2 CURSOR FOR
  SELECT LASTNAME, SALARY, years_employed FROM OLD TABLE
    (DELETE FROM EMP INCLUDE(years_employed INTEGER)
       SET years_employed = YEAR(CURRENT DATE - HIREDATE)
       WHERE JOB = 'MANAGER');
In this query, the predicate GRANTEETYPE = ' ' selects authorization IDs.
Exception: If your DB2 subsystem uses an exit routine for access control authorization, you cannot rely on catalog queries to tell you the tables that you can access. When such an exit routine is installed, both RACF and DB2 control table access.
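For reference, a catalog query of the general form that is discussed here might look like the following sketch. The catalog table and column names are real, but the exact predicates are illustrative; they list the tables on which your authorization ID (or PUBLIC) holds the SELECT privilege:

SELECT DISTINCT TCREATOR, TTNAME
  FROM SYSIBM.SYSTABAUTH
  WHERE GRANTEE IN (USER, 'PUBLIC')
    AND GRANTEETYPE = ' '
    AND (SELECTAUTH = 'Y' OR SELECTAUTH = 'G');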
Example: Suppose that you want to display information about table DSN8910.DEPT. If you have the SELECT privilege on SYSIBM.SYSCOLUMNS, you can use the following statement:
SELECT NAME, COLTYPE, SCALE, LENGTH
  FROM SYSIBM.SYSCOLUMNS
  WHERE TBNAME = 'DEPT'
    AND TBCREATOR = 'DSN8910';
If you display column information about a table that includes LOB or ROWID columns, the LENGTH field for those columns contains the number of bytes that those columns occupy in the base table. The LENGTH field does not contain the length of the LOB or ROWID data.
Example: To determine the maximum length of data for a LOB or ROWID column, include the LENGTH2 column in your query:
SELECT NAME, COLTYPE, LENGTH, LENGTH2
  FROM SYSIBM.SYSCOLUMNS
  WHERE TBNAME = 'EMP_PHOTO_RESUME'
    AND TBCREATOR = 'DSN8910';
Because the example does not specify a WHERE clause, the statement retrieves data from all rows. The dashes for MGRNO and LOCATION in the result table indicate null values.
SELECT * is recommended mostly for use with dynamic SQL and view definitions. You can use SELECT * in static SQL, but doing so is not recommended for host variable compatibility and performance reasons. Suppose that you add a column to the table to which SELECT * refers. If you have not defined a receiving host variable for that column, an error occurs.
If you list the column names in a static SELECT statement instead of using an asterisk, you can avoid the problem that sometimes occurs with SELECT *. You can also see the relationship between the receiving host variables and the columns in the result table.

Selecting some columns: SELECT column-name:

Select the column or columns that you want to retrieve by naming each column. All columns appear in the order that you specify, not in their order in the table.
Example: SELECT column-name: The following SQL statement retrieves only the MGRNO and DEPTNO columns from the department table:
SELECT MGRNO, DEPTNO FROM DSN8910.DEPT;
With a single SELECT statement, you can select data from one column or as many as 750 columns.
To SELECT data from implicitly hidden columns, such as ROWID and XML document ID, look up the column names in SYSIBM.SYSCOLUMNS and specify these names in the SELECT list. For example, suppose that you create and populate the following table:
CREATE TABLE MEMBERS (MEMBERID INTEGER, BIO XML, REPORT XML, RECOMMENDATIONS XML);
DB2 generates one additional implicitly hidden XML document ID column. To retrieve data in all columns, including the generated XML document ID column, first look up the name of the generated column in SYSIBM.SYSCOLUMNS. Suppose the name is DB2_GENERATED_DOCID_FOR_XML. Then, specify the following statement:
SELECT DB2_GENERATED_DOCID_FOR_XML, MEMBERID, BIO, REPORT, RECOMMENDATIONS FROM MEMBERS
Selecting rows using search conditions: WHERE: Use a WHERE clause to select the rows that meet certain conditions. A WHERE clause specifies a search condition. A search condition consists of one or more predicates. A predicate specifies a test that you want DB2 to apply to each table row. DB2 evaluates a predicate for each row as true, false, or unknown. Results are unknown only if an operand is null. If a search condition contains a column of a distinct type, the value to which that column is compared must be of the same distinct type, or you must cast the value to the distinct type. The following table lists the type of comparison, the comparison operators, and an example of each type of comparison that you can use in a predicate in a WHERE clause.
Table 108. Comparison operators used in conditions
Type of comparison                        Comparison operator   Example
Equal to                                  =                     DEPTNO = 'X01'
Not equal to                              <>                    DEPTNO <> 'X01'
Less than                                 <                     AVG(SALARY) < 30000
Less than or equal to                     <=                    AGE <= 25
Not less than                             >=                    AGE >= 21
Greater than                              >                     SALARY > 2000
Greater than or equal to                  >=                    SALARY >= 5000
Not greater than                          <=                    SALARY <= 5000
Equal to null                             IS NULL               PHONENO IS NULL
Not equal to another value or one value   IS DISTINCT FROM      PHONENO IS DISTINCT FROM :PHONEHV
is equal to null
Similar to another value                  LIKE                  NAME LIKE '%SMITH%' or STATUS LIKE 'N_'
At least one of two conditions            OR                    HIREDATE < '1965-01-01' OR SALARY < 16000
Both of two conditions                    AND                   HIREDATE < '1965-01-01' AND SALARY < 16000
Between two values                        BETWEEN               SALARY BETWEEN 20000 AND 40000
Equals a value in a set                   IN (X, Y, Z)          DEPTNO IN ('B01', 'C01', 'D01')

Note: SALARY BETWEEN 20000 AND 40000 is equivalent to SALARY >= 20000 AND SALARY <= 40000.
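As noted before the table, a predicate on a column of a distinct type needs a comparison value of the same distinct type. A minimal sketch, reusing the SIZE distinct type and TABLE1 from the earlier INSERT example, casts a constant to the distinct type:

SELECT * FROM TABLE1
  WHERE SIZECOL1 > CAST(100 AS SIZE);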
You can also search for rows that do not satisfy one of the preceding conditions by using the NOT keyword before the specified condition.
You can search for rows that do not satisfy the IS DISTINCT FROM predicate by using either of the following predicates:
v value 1 IS NOT DISTINCT FROM value 2
v NOT(value 1 IS DISTINCT FROM value 2)
Both of these forms of the predicate create an expression for which one value is equal to another value or both values are equal to null.
Related concepts:
Distinct types on page 489
Host variables on page 139
Remote servers and distributed data on page 34
Subqueries on page 693
Predicates (DB2 SQL)
Derived columns in a result table, such as (SALARY + BONUS + COMM), do not have names. You can use the AS clause to give a name to an unnamed column of the result table. For information about using the AS clause, see Naming result columns on page 665.
To order the rows in a result table by the values in a derived column, specify a name for the column by using the AS clause, and specify that name in the ORDER BY clause. For information about using the ORDER BY clause, see Ordering the result table rows on page 666.
To select a subset of data in an XML column, specify the XMLQUERY function in your SELECT statement with the following parameters:
v An XPath expression that is embedded in a character string constant. Specify an XPath expression that identifies which XML data to return.
v Any additional values to pass to the XPath expression, including the XML column name. Specify these values after the PASSING keyword.
Example: Suppose that you store purchase orders as XML documents in the POrder column in the PurchaseOrders table. You need to find in each purchase order the items whose product name is equal to a name in the Product table. You can use the following statement to find these values:
SELECT XMLQUERY('//item[productName = $n]'
                PASSING PO.POrder, P.name AS "n")
  FROM PurchaseOrders PO, Product P;
This statement returns the item elements in the POrder column that satisfy the criteria in the XPath expression. Related concepts: Overview of XPath (DB2 Programming for XML) Related reference: XMLQUERY (DB2 SQL)
Result tables
The data that is retrieved by an SQL statement is always in the form of a table, which is called a result table. Like the tables from which you retrieve the data, a result table has rows and columns. A program fetches this data one row at a time. Example result table: Assume that you issue the following SELECT statement, which retrieves the last name, first name, and phone number of employees in department D11 from the sample employee table:
SELECT LASTNAME, FIRSTNME, PHONENO
  FROM DSN8910.EMP
  WHERE WORKDEPT = 'D11'
  ORDER BY LASTNAME;
Restriction: You cannot use the DISTINCT keyword with LOB columns or XML columns.
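For reference, the DISTINCT keyword appears in a query as in the following minimal sketch (this query is illustrative, not this topic's own example):

SELECT DISTINCT WORKDEPT
  FROM DSN8910.EMP;

Each department number appears only once in the result table, no matter how many employees work in that department.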
Example: CREATE VIEW with AS clause: You can specify result column names in the select-clause of a CREATE VIEW statement. You do not need to supply the column list of CREATE VIEW, because the AS keyword names the derived column. The columns in the view EMP_SAL are EMPNO and TOTAL_SAL.
CREATE VIEW EMP_SAL AS SELECT EMPNO,SALARY+BONUS+COMM AS TOTAL_SAL FROM DSN8910.EMP;
Example: set operator with AS clause: You can use the AS clause with set operators, such as UNION. In this example, the AS clause is used to give the same name to corresponding columns of tables in a UNION. The third result column from the union of the two tables has the name TOTAL_VALUE, even though it contains data that is derived from columns with different names:
SELECT 'On hand' AS STATUS, PARTNO, QOH * COST AS TOTAL_VALUE
  FROM PART_ON_HAND
UNION ALL
SELECT 'Ordered' AS STATUS, PARTNO, QORDER * COST AS TOTAL_VALUE
  FROM ORDER_PART
ORDER BY PARTNO, TOTAL_VALUE;
The column STATUS and the derived column TOTAL_VALUE have the same name in the first and second result tables. They are combined in the union of the two result tables, which is similar to the following partial output:
STATUS    PARTNO   TOTAL_VALUE
=======   ======   ===========
On hand   00557        345.60
Ordered   00557        150.50
  .
  .
  .
Example: GROUP BY derived column: You can use the AS clause in a FROM clause to assign a name to a derived column that you want to refer to in a GROUP BY clause. This SQL statement names HIREYEAR in the nested table expression, which lets you use the name of that result column in the GROUP BY clause:
SELECT HIREYEAR, AVG(SALARY)
  FROM (SELECT YEAR(HIREDATE) AS HIREYEAR, SALARY
          FROM DSN8910.EMP) AS NEWEMP
  GROUP BY HIREYEAR;
You cannot use GROUP BY with a name that is defined with an AS clause for the derived column YEAR(HIREDATE) in the outer SELECT, because that name does not exist when the GROUP BY runs. However, you can use GROUP BY with a name that is defined with an AS clause in the nested table expression, because the nested table expression runs before the GROUP BY that references the name. Related tasks: Combining result tables from multiple SELECT statements on page 670 Defining a view on page 460 Summarizing group values on page 675 Related reference: select-clause (DB2 SQL)
You can list the rows in ascending or descending order. Null values appear last in an ascending sort and first in a descending sort.
DB2 sorts strings in the collating sequence associated with the encoding scheme of the table. DB2 sorts numbers algebraically and sorts datetime values chronologically.
Restriction: You cannot use the ORDER BY clause with LOB or XML columns.
Example: ORDER BY clause with a column name as the sort key: Retrieve the employee numbers, last names, and hire dates of employees in department A00 in ascending order of hire dates:
SELECT EMPNO, LASTNAME, HIREDATE
  FROM DSN8910.EMP
  WHERE WORKDEPT = 'A00'
  ORDER BY HIREDATE ASC;
Example: ORDER BY clause with an expression as the sort key: The following subselect retrieves the employee numbers, salaries, commissions, and total compensation (salary plus commission) for employees with a total compensation greater than 40000. Order the results by total compensation:
SELECT EMPNO, SALARY, COMM, SALARY+COMM AS "TOTAL COMP"
  FROM DSN8910.EMP
  WHERE SALARY+COMM > 40000
  ORDER BY SALARY+COMM;
Referencing derived columns in the ORDER BY clause: If you use the AS clause to name an unnamed column in a SELECT statement, you can use that name in the ORDER BY clause. Example: ORDER BY clause that uses a derived column: The following SQL statement orders the selected information by total salary:
SELECT EMPNO, (SALARY + BONUS + COMM) AS TOTAL_SAL FROM DSN8910.EMP ORDER BY TOTAL_SAL;
Example
Suppose that you want a list of employees and salaries from department D11 in the sample EMP table. You can return a numbered list that is ordered by last name by submitting the following query:
SELECT ROW_NUMBER() OVER (ORDER BY LASTNAME) AS NUMBER,
       WORKDEPT, LASTNAME, SALARY
  FROM DSN8910.EMP
  WHERE WORKDEPT = 'D11'
When you use the RANK specification, DB2 returns the following rank numbers:
Table 109. Example of values returned when you specify RANK
Value     Rank number
2:31:57   1
2:34:52   2
2:34:52   2
2:37:26   4
2:38:01   5
DENSE_RANK
  Returns a rank number for each row value. Use this specification if you do not want rank numbers to be skipped when duplicate row values exist. For example, when you specify DENSE_RANK with the same times that are listed in the description of RANK, DB2 returns the following rank numbers:
Table 110. Example of values returned when you specify DENSE_RANK
Value     Rank number
2:31:57   1
2:34:52   2
2:34:52   2
2:37:26   3
2:38:01   4
Example: Suppose that you had the following values in the DATA column of table T1:
DATA
-------
100
35
23
8
8
6
Suppose that you use the following DENSE_RANK specification on the same data:
SELECT DATA, DENSE_RANK() OVER (ORDER BY DATA DESC) AS RANK_DATA
  FROM T1
  ORDER BY RANK_DATA;
In the example with the RANK specification, two equal values are both ranked as 4. The next rank number is 6. Number 5 is skipped. In the example with the DENSE_RANK option, those two equal values are also ranked as 4. However, the next rank number is 5. With DENSE_RANK, no gaps exist in the sequential rank numbering. Related reference: OLAP specification (DB2 SQL)
If the nth column of the first result table (R1) and the nth column of the second result table (R2) have the same result column name, the nth column of the result table has that same result column name. If the nth column of R1 and the nth column of R2 do not have the same names, the result column is unnamed.
Examples: Assume that you want to combine the results of two SELECT statements that return the following result tables:
R1 result table
COL1   COL2
a      a
a      b
a      c
R2 result table
COL1   COL2
a      b
a      c
a      d
A UNION operation combines the two result tables and returns four rows:
COL1   COL2
a      a
a      b
a      c
a      d
An EXCEPT operation combines the two result tables and returns one row. The result of the EXCEPT operation depends on which SELECT statement is included before the EXCEPT keyword in the SQL statement. If the SELECT statement that returns the R1 result table is listed first, the result is a single row:
COL1   COL2
a      a
If the SELECT statement that returns the R2 result table is listed first, the final result is a different row:
COL1   COL2
a      d
An INTERSECT operation combines the two result tables and returns two rows:
COL1   COL2
a      b
a      c
Eliminating redundant duplicate rows when combining result tables:

To eliminate redundant duplicate rows when combining result tables, specify one of the following keywords:
v UNION or UNION DISTINCT
v EXCEPT or EXCEPT DISTINCT
v INTERSECT or INTERSECT DISTINCT
To order the entire result table, specify the ORDER BY clause at the end.
Examples: Assume that you have the following tables to manage stock at two book stores.
Table 111. STOCKA
ISBN         TITLE                        AUTHOR       NOBELPRIZE
8778997709   For Whom the Bell Tolls      Hemmingway   N
4599877699   The Good Earth               Buck         Y
9228736278   A Tale of Two Cities         Dickens      N
1002387872   Beloved                      Morrison     Y
4599877699   The Good Earth               Buck         Y
0087873532   The Labyrinth of Solitude    Paz          Y
Table 112. STOCKB
ISBN         TITLE                  AUTHOR      NOBELPRIZE
6689038367   The Grapes of Wrath    Steinbeck   Y
2909788445   The Silent Cry         Oe          Y
1182983745   Light in August        Faulkner    Y
9228736278   A Tale of Two Cities   Dickens     N
1002387872   Beloved                Morrison    Y
Example 1: UNION clause: Suppose that you want a list of books whose authors have won the Nobel Prize and that are in stock at either store. The following SQL statement returns these books in order by author name without redundant duplicate rows:
SELECT TITLE, AUTHOR
  FROM STOCKA
  WHERE NOBELPRIZE = 'Y'
UNION
SELECT TITLE, AUTHOR
  FROM STOCKB
  WHERE NOBELPRIZE = 'Y'
ORDER BY AUTHOR
Example 2: EXCEPT: Suppose that you want a list of books that are only in STOCKA. The following SQL statement returns the book names that are in STOCKA only without any redundant duplicate rows:
SELECT TITLE FROM STOCKA EXCEPT SELECT TITLE FROM STOCKB ORDER BY TITLE;
Example 3: INTERSECT: Suppose that you want a list of books that are in both STOCKA and in STOCKB. The following statement returns a list of all books from both of these tables with redundant duplicate rows removed.
SELECT TITLE FROM STOCKA INTERSECT SELECT TITLE FROM STOCKB ORDER BY TITLE;
Keeping all duplicate rows when combining result tables:

To keep all duplicate rows when combining result tables, specify ALL with one of the following set operator keywords:
v UNION ALL
v EXCEPT ALL
v INTERSECT ALL
To order the entire result table, specify the ORDER BY clause at the end.
Examples: The following examples use the STOCKA and STOCKB tables.
Example: UNION ALL: The following SQL statement returns a list of books that won Nobel prizes and are in stock at either store, with duplicates included.
SELECT TITLE, AUTHOR
  FROM STOCKA
  WHERE NOBELPRIZE = 'Y'
UNION ALL
SELECT TITLE, AUTHOR
  FROM STOCKB
  WHERE NOBELPRIZE = 'Y'
ORDER BY AUTHOR
Example: EXCEPT ALL: Suppose that you want a list of books that are only in STOCKA. The following SQL statement returns the book names that are in STOCKA only with all duplicate rows:
SELECT TITLE FROM STOCKA EXCEPT ALL SELECT TITLE FROM STOCKB ORDER BY TITLE;
Example: INTERSECT ALL clause: Suppose that you want a list of books that are in both STOCKA and in STOCKB, including any duplicate matches. The following statement returns a list of titles that are in both stocks, including duplicate matches. In this case, one match exists for "A Tale of Two Cities" and one match exists for "Beloved."
SELECT TITLE FROM STOCKA INTERSECT ALL SELECT TITLE FROM STOCKB ORDER BY TITLE;
If a column that you specify in the GROUP BY clause contains null values, DB2 considers those null values to be equal. Thus, all nulls form a single group. When it is used, the GROUP BY clause follows the FROM clause and any WHERE clause, and it precedes the ORDER BY clause. You can group the rows by the values of more than one column. Example: GROUP BY clause using more than one column: The following statement finds the average salary for men and women in departments A00 and C01:
SELECT WORKDEPT, SEX, AVG(SALARY) AS AVG_SALARY
  FROM DSN8910.EMP
  WHERE WORKDEPT IN ('A00', 'C01')
  GROUP BY WORKDEPT, SEX;
DB2 groups the rows first by department number and then (within each department) by sex before it derives the average SALARY value for each group.
You can also group the rows by the results of an expression.
Example: GROUP BY clause using an expression: The following statement groups departments by their leading characters, and lists the lowest and highest education level for each group:
SELECT SUBSTR(WORKDEPT,1,1), MIN(EDLEVEL), MAX(EDLEVEL) FROM DSN8910.EMP GROUP BY SUBSTR(WORKDEPT,1,1);
Filtering groups
If you group rows in the result table, you can also specify a search condition that each retrieved group must satisfy. The search condition tests properties of each group rather than properties of individual rows in the group.
Compare the preceding example with the second example shown in Summarizing group values on page 675. The clause, HAVING COUNT(*) > 1, ensures that only departments with more than one member are displayed. In this case, departments B01 and E01 do not display because the HAVING clause tests a property of the group. Example: HAVING clause used with a GROUP BY clause: Use the HAVING clause to retrieve the average salary and minimum education level of women in each department for which all female employees have an education level greater than or equal to 16. Assuming that you want results from only departments A00 and D11, the following SQL statement tests the group property, MIN(EDLEVEL):
SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY, MIN(EDLEVEL) AS MIN_EDLEVEL
  FROM DSN8910.EMP
  WHERE SEX = 'F' AND WORKDEPT IN ('A00', 'D11')
  GROUP BY WORKDEPT
  HAVING MIN(EDLEVEL) >= 16;
When you specify both GROUP BY and HAVING, the HAVING clause must follow the GROUP BY clause. A function in a HAVING clause can include DISTINCT if you have not used DISTINCT anywhere else in the same SELECT statement. You can also connect multiple predicates in a HAVING clause with AND or OR.
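For example, the following sketch uses DISTINCT once, in the HAVING clause, and connects two group conditions with OR; the thresholds 2 and 50000 are arbitrary illustrations:

SELECT WORKDEPT, AVG(SALARY) AS AVG_SALARY
  FROM DSN8910.EMP
  GROUP BY WORKDEPT
  HAVING COUNT(DISTINCT JOB) > 2
      OR MAX(SALARY) > 50000;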
Example: Suppose that you want to return all of the records that have changed since 9:00 AM January 1, 2004. The following query returns all of those rows.
SELECT * FROM TAB
  WHERE ROW CHANGE TIMESTAMP FOR TAB >= '2004-01-01-09.00.00';
Related reference: ROW CHANGE expression (DB2 SQL) CREATE TABLE (DB2 SQL)
A join operation typically matches a row of one table with a row of another on the basis of a join condition. DB2 supports the following types of joins: inner join, left outer join, right outer join, and full outer join. You can specify joins in the FROM clause of a query. Nested table expressions and user-defined table functions in joins: An operand of a join can be more complex than the name of a single table. You can specify one of the following items as a join operand:
nested table expression
A fullselect that is enclosed in parentheses and followed by a correlation name. The correlation name lets you refer to the result of that expression. Using a nested table expression in a join can be helpful when you want to create a temporary table to use in a join. You can specify the nested table expression as either the right or left operand of a join, depending on which unmatched rows you want included.
user-defined table function
A user-defined function that returns a table. Using a user-defined table function in a join can be helpful when you want to perform some operation on the values in a table before you join them to another table.
Example of using correlated references: In the following SELECT statement, the correlation name that is used for the nested table expression is CHEAP_PARTS. You can use this correlation name to refer to the columns that are returned by the expression. In this case, those correlated references are CHEAP_PARTS.PROD# and CHEAP_PARTS.PRODUCT.
SELECT CHEAP_PARTS.PROD#, CHEAP_PARTS.PRODUCT FROM (SELECT PROD#, PRODUCT FROM PRODUCTS WHERE PRICE < 10) AS CHEAP_PARTS;
The correlated references are valid because they do not occur in the table expression where CHEAP_PARTS is defined. The correlated references are from a table specification at a higher level in the hierarchy of subqueries. Example of using a nested table expression as the right operand of a join: The following query contains a fullselect as the right operand of a left outer join with the PROJECTS table. The correlation name is TEMP. In this case, the unmatched rows from the PROJECTS table are included, but the unmatched rows from the nested table expression are not.
SELECT PROJECT, COALESCE(PROJECTS.PROD#, PRODNUM) AS PRODNUM,
       PRODUCT, PART, UNITS
  FROM PROJECTS LEFT JOIN
       (SELECT PART,
               COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM,
               PRODUCTS.PRODUCT
          FROM PARTS FULL OUTER JOIN PRODUCTS
            ON PARTS.PROD# = PRODUCTS.PROD#) AS TEMP
    ON PROJECTS.PROD# = PRODNUM;
Example of using a nested table expression as the left operand of a join: The following query contains a fullselect as the left operand of a left outer join with the PRODUCTS table. The correlation name is PARTX. In this case the unmatched rows from the nested table expression are included, but the unmatched rows from the PRODUCTS table are not.
SELECT PART, SUPPLIER, PRODNUM, PRODUCT
  FROM (SELECT PART, PROD# AS PRODNUM, SUPPLIER
          FROM PARTS
          WHERE PROD# < '200') AS PARTX
       LEFT OUTER JOIN PRODUCTS
    ON PRODNUM = PROD#;
Because PROD# is a character field, DB2 does a character comparison to determine the set of rows in the result. Therefore, because the characters '30' are greater than '200', the row in which PROD# is equal to '30' does not appear in the result. Example: Using a table function as an operand of a join: Suppose that CVTPRICE is a table function that converts the prices in the PRODUCTS table to the currency that you specify and returns the PRODUCTS table with the prices in those units. You can obtain a table of parts, suppliers, and product prices with the prices in your choice of currency by executing a query similar to the following query:
SELECT PART, SUPPLIER, PARTS.PROD#, Z.PRODUCT, Z.PRICE FROM PARTS, TABLE(CVTPRICE(:CURRENCY)) AS Z WHERE PARTS.PROD# = Z.PROD#;
Correlated references in table specifications in joins: Use correlation names to refer to the results of a nested table expression. After you specify the correlation name for an expression, any subsequent reference to this correlation name is called a correlated reference. You can include correlated references in nested table expressions or as arguments to table functions. The basic rule that applies for both of these cases is that the correlated reference must be from a table specification at a higher level in the hierarchy of subqueries. You can also use a correlated reference and the table specification to which it refers in the same FROM clause if the table specification appears to the left of the correlated reference and the correlated reference is in one of the following clauses: v A nested table expression that is preceded by the keyword TABLE v The argument of a table function For more information about correlated references, see Correlation names in references on page 698. A table function or a table expression that contains correlated references to other tables in the same FROM clause cannot participate in a full outer join or a right outer join. The following examples illustrate valid uses of correlated references in table specifications.
Example: In this example, the correlated reference T.C2 is valid because the table specification to which it refers, T, is to its left.
SELECT T.C1, Z.C5 FROM T, TABLE(TF3(T.C2)) AS Z WHERE T.C3 = Z.C4;
If you specify the join in the opposite order, with T following TABLE(TF3(T.C2)), T.C2 is invalid. Example: In this example, the correlated reference D.DEPTNO is valid because the nested table expression within which it appears is preceded by TABLE, and the table specification D appears to the left of the nested table expression in the FROM clause.
SELECT D.DEPTNO, D.DEPTNAME, EMPINFO.AVGSAL, EMPINFO.EMPCOUNT FROM DEPT D, TABLE(SELECT AVG(E.SALARY) AS AVGSAL, COUNT(*) AS EMPCOUNT FROM EMP E WHERE E.WORKDEPT=D.DEPTNO) AS EMPINFO;
DB2 determines the intermediate and final results of the previous query by performing the following logical steps:
1. Join the employee and project tables on the employee number, dropping the rows with no matching employee number in the project table.
2. Join the intermediate result table with the department table on matching department numbers.
3. Process the select list in the final result table, leaving only four columns.
Joining more than two tables by using more than one join type: When joining more than two tables, you do not have to use the same join type for every join. To join tables by using more than one join type, specify the join types in the FROM clause. Example: Suppose that you want a result table that shows the following items:
v employees whose last name begins with 'S' or a letter that comes after 'S' in the alphabet
v the department names for these employees
v any projects that these employees are responsible for
You can use the following SELECT statement:
SELECT EMPNO, LASTNAME, DEPTNAME, PROJNO
  FROM DSN8910.EMP INNER JOIN DSN8910.DEPT
    ON WORKDEPT = DSN8910.DEPT.DEPTNO
  LEFT OUTER JOIN DSN8910.PROJ
    ON EMPNO = RESPEMP
  WHERE LASTNAME > 'S';
DB2 determines the intermediate and final results of the previous query by performing the following logical steps:
1. Join the employee and department tables on matching department numbers, dropping the rows where the last name begins with a letter before 'S' in the alphabet.
2. Join the intermediate result table with the project table on the employee number, keeping the rows for which no matching employee number exists in the project table. 3. Process the select list in the final result table, leaving only four columns.
Inner joins
An inner join is a method of combining two tables that discards rows of either table that do not match any row of the other table. The matching is based on the join condition. To request an inner join, execute a SELECT statement in which you specify the tables that you want to join in the FROM clause, and specify a WHERE clause or an ON clause to indicate the join condition. The join condition can be any simple or compound search condition that does not contain a subquery reference. In the simplest type of inner join, the join condition is column1=column2.
Example
You can join the PARTS and PRODUCTS tables in Sample data for joins on page 688 on the PROD# column to get a table of parts with their suppliers and the products that use the parts. To do this, you can use either one of the following SELECT statements:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
  FROM PARTS, PRODUCTS
  WHERE PARTS.PROD# = PRODUCTS.PROD#;

SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
  FROM PARTS INNER JOIN PRODUCTS
    ON PARTS.PROD# = PRODUCTS.PROD#;
Notice three things about this example:
v A part in the parts table (OIL) has a product (#160) that is not in the products table. A product (SCREWDRIVER, #505) has no parts listed in the parts table. Neither OIL nor SCREWDRIVER appears in the result of the join. In contrast, an outer join includes rows in which the values in the joined columns do not match.
v You can explicitly specify that this join is an inner join (not an outer join). Use INNER JOIN in the FROM clause instead of the comma, and use ON to specify the join condition (rather than WHERE) when you explicitly join tables in the FROM clause.
v If you do not specify a WHERE clause in the first form of the query, the result table contains all possible combinations of rows for the tables that are identified in the FROM clause. You can obtain the same result by specifying a join condition that is always true in the second form of the query, as in the following statement:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT FROM PARTS INNER JOIN PRODUCTS ON 1=1;
Regardless of whether you omit the WHERE clause or specify a join condition that is always true, the number of rows in the result table is the product of the number of rows in each table. You can specify more complicated join conditions to obtain different sets of results. For example, to eliminate the suppliers that begin with the letter A from the table of parts, suppliers, product numbers, and products, write a query like the following query:
SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
  FROM PARTS INNER JOIN PRODUCTS
    ON PARTS.PROD# = PRODUCTS.PROD#
       AND SUPPLIER NOT LIKE 'A%';
The result of the query is all rows that do not have a supplier that begins with A. The result table looks like the following output:
PART      SUPPLIER       PROD#   PRODUCT
=======   ============   =====   ==========
MAGNETS   BATEMAN        10      GENERATOR
PLASTIC   PLASTIK_CORP   30      RELAY
In this example, the comma in the FROM clause implicitly specifies an inner join, and it acts the same as if the INNER JOIN keywords had been used. When you use the comma for an inner join, you must specify the join condition on the WHERE clause. When you use the INNER JOIN keywords, you must specify the join condition on the ON clause.
Related reference: Sample data for joins on page 688 from-clause (DB2 SQL)
Outer joins
An outer join is a method of combining two or more tables so that the result includes unmatched rows of one of the tables, or of both tables. The matching is based on the join condition. DB2 supports three types of outer joins:
full outer join
Includes unmatched rows from both tables. If any column of the result table does not have a value, that column has the null value in the result table.
left outer join
Includes rows from the table that is specified before LEFT OUTER JOIN that have no matching values in the table that is specified after LEFT OUTER JOIN.
right outer join
Includes rows from the table that is specified after RIGHT OUTER JOIN that have no matching values in the table that is specified before RIGHT OUTER JOIN.
The following table illustrates how the PARTS and PRODUCTS tables in Sample data for joins on page 688 can be combined using the three outer join functions.
PARTS                    PRODUCTS
PART      PROD#          PROD#   PRICE
WIRE      10             505     3.70
MAGNETS   10             10      45.75
BLADES    205            205     18.90
PLASTIC   30             30      7.55
OIL       160
(The PARTS row for OIL, product number 160, and the PRODUCTS row for product number 505 are the unmatched rows.)

LEFT OUTER JOIN
PART      PROD#   PRICE
WIRE      10      45.75
MAGNETS   10      45.75
BLADES    205     18.90
PLASTIC   30      7.55
OIL       160     (null)

FULL OUTER JOIN
PART      PROD#   PRICE
WIRE      10      45.75
MAGNETS   10      45.75
BLADES    205     18.90
PLASTIC   30      7.55
OIL       160     (null)
(null)    505     3.70

RIGHT OUTER JOIN
PART      PROD#   PRICE
WIRE      10      45.75
MAGNETS   10      45.75
BLADES    205     18.90
PLASTIC   30      7.55
(null)    505     3.70
Figure 36. Three outer joins from the PARTS and PRODUCTS tables
The result table contains data that is joined from all of the tables, for rows that satisfy the search conditions. The result columns of a join have names if the outermost SELECT list refers to base columns. However, if you use a function (such as COALESCE or VALUE) to build a column of the result, that column does not have a name unless you use the AS clause in the SELECT list.
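The query that produces this output is a full outer join of the sample PARTS and PRODUCTS tables, presumably along the lines of the following sketch:

SELECT PART, SUPPLIER, PARTS.PROD#, PRODUCT
  FROM PARTS FULL OUTER JOIN PRODUCTS
    ON PARTS.PROD# = PRODUCTS.PROD#;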
The result table from the query looks similar to the following output:
PART      SUPPLIER       PROD#   PRODUCT
=======   ============   =====   ===========
WIRE      ACWF           10      GENERATOR
MAGNETS   BATEMAN        10      GENERATOR
PLASTIC   PLASTIK_CORP   30      RELAY
BLADES    ACE_STEEL      205     SAW
OIL       WESTERN_CHEM   160     -----------
-------   ------------   ---     SCREWDRIVER
Example of using COALESCE or VALUE: COALESCE is the keyword that is specified by the SQL standard as a synonym for the VALUE function. This function, by either name, can be particularly useful in full outer join operations because it returns the first non-null value from the pair of join columns. The product number in the result of the example for Full outer join is null for SCREWDRIVER, even though the PRODUCTS table contains a product number for SCREWDRIVER. If you select PRODUCTS.PROD# instead, PROD# is null for OIL. If you select both PRODUCTS.PROD# and PARTS.PROD#, the result contains two columns, both of which contain some null values. You can merge data from both columns into a single column, eliminating the null values, by using the COALESCE function. With the same PARTS and PRODUCTS tables, the following example merges the non-null data from the PROD# columns:
SELECT PART, SUPPLIER, COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM, PRODUCT FROM PARTS FULL OUTER JOIN PRODUCTS ON PARTS.PROD# = PRODUCTS.PROD#;
The AS clause (AS PRODNUM) provides a name for the result of the COALESCE function.
A row from the PRODUCTS table is in the result table only if its product number matches the product number of a row in the PARTS table and the price is greater than 10.00 for that row. Rows in which the PRICE value does not exceed 10.00 are included in the result of the join, but the PRICE value is set to null. In this result table, the row for PROD# 30 has null values on the right two columns because the price of PROD# 30 is less than 10.00. PROD# 160 has null values on the right two columns because PROD# 160 does not match another product number.
As in an inner join, the join condition can be any simple or compound search condition that does not contain a subquery reference. Example: The following example uses the tables in Sample data for joins on page 688. To include rows from the PRODUCTS table that have no corresponding rows in the PARTS table, execute this query:
SELECT PART, SUPPLIER, PRODUCTS.PROD#, PRODUCT, PRICE FROM PARTS RIGHT OUTER JOIN PRODUCTS ON PARTS.PROD# = PRODUCTS.PROD# AND PRODUCTS.PRICE>10.00;
A row from the PARTS table is in the result table only if its product number matches the product number of a row in the PRODUCTS table and the price is greater than 10.00 for that row. Because the PRODUCTS table can have rows with nonmatching product numbers in the result table, and the PRICE column is in the PRODUCTS table, rows in which PRICE is less than or equal to 10.00 are included in the result. The PARTS columns contain null values for these rows in the result table.
DB2 performs the join operation first. The result of the join operation includes rows from one table that do not have corresponding rows from the other table. However, the WHERE clause then excludes the rows from both tables that have null values for the PROD# column. The following statement is a correct SELECT statement to produce the list:
SELECT PART, SUPPLIER, VALUE(X.PROD#, Y.PROD#) AS PRODNUM, PRODUCT
  FROM (SELECT PART, SUPPLIER, PROD#
          FROM PARTS
          WHERE PROD# <> '10') X
       FULL OUTER JOIN
       (SELECT PROD#, PRODUCT
          FROM PRODUCTS
          WHERE PROD# <> '10') Y
    ON X.PROD# = Y.PROD#;
For this statement, DB2 applies the WHERE clause to each table separately. DB2 then performs the full outer join operation, which includes rows in one table that do not have a corresponding row in the other table. The final result includes rows with the null value for the PROD# column and looks similar to the following output:
PART      SUPPLIER       PRODNUM   PRODUCT
=======   ============   =======   ===========
OIL       WESTERN_CHEM   160       -----------
BLADES    ACE_STEEL      205       SAW
PLASTIC   PLASTIK_CORP   30        RELAY
-------   ------------   505       SCREWDRIVER
DB2 usually optimizes queries to retrieve all rows that qualify. But sometimes you want to retrieve only the first few rows. For example, to retrieve the first row that is greater than or equal to a known value, code:
SELECT column list FROM table WHERE key >= value ORDER BY key ASC
Even with the ORDER BY clause, DB2 might fetch all the data first and sort it afterwards, which could be wasteful. Instead, you can write the query in one of the following ways:
SELECT * FROM table
  WHERE key >= value
  ORDER BY key ASC
  OPTIMIZE FOR 1 ROW

SELECT * FROM table
  WHERE key >= value
  ORDER BY key ASC
  FETCH FIRST n ROWS ONLY
Use OPTIMIZE FOR 1 ROW to influence the access path. OPTIMIZE FOR 1 ROW tells DB2 to select an access path that returns the first qualifying row quickly. Use FETCH FIRST n ROWS ONLY to limit the number of rows in the result table to n rows. FETCH FIRST n ROWS ONLY has the following benefits:
v When you use FETCH statements to retrieve data from a result table, FETCH FIRST n ROWS ONLY causes DB2 to retrieve only the number of rows that you need. This can have performance benefits, especially in distributed applications. If you try to execute a FETCH statement to retrieve the n+1st row, DB2 returns a +100 SQLCODE.
v When you use FETCH FIRST ROW ONLY in a SELECT INTO statement, you never retrieve more than one row. Using FETCH FIRST ROW ONLY in a SELECT INTO statement can prevent SQL errors that are caused by inadvertently selecting more than one value into a host variable.
When you specify FETCH FIRST n ROWS ONLY but not OPTIMIZE FOR n ROWS, OPTIMIZE FOR n ROWS is implied. When you specify FETCH FIRST n ROWS ONLY and OPTIMIZE FOR m ROWS, and m is less than n, DB2 optimizes the query for m rows. If m is greater than n, DB2 optimizes the query for n rows.
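As a sketch of the SELECT INTO case, the following embedded statement assigns at most one value even if several employees qualify; the host variable HV-SALARY is hypothetical:

EXEC SQL
  SELECT SALARY INTO :HV-SALARY
    FROM DSN8910.EMP
    WHERE WORKDEPT = 'A00'
    FETCH FIRST 1 ROW ONLY
END-EXEC.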
Related concepts: Optimization for large and small result sets (Introduction to DB2 for z/OS) Related tasks: Optimizing retrieval for a small set of rows (DB2 Application programming and SQL) Fetching a limited number of rows (DB2 Performance) Related reference: optimize-clause (DB2 SQL) fetch-first-clause (DB2 SQL)
See Examples of recursive common table expressions on page 464 for examples of bill-of-materials applications that use recursive common table expressions.
Either operand of the operation has a precision greater than 15 digits. The operation is in a dynamic SQL statement, and any of the following conditions is true:
- The current value of special register CURRENT PRECISION is DEC31 or D31.s, where s is a number between 1 and 9 and represents the minimum scale to be used for division operations.
- The installation option for DECIMAL ARITHMETIC on panel DSNTIP4 is DEC31, 31, or D31.s, where s is a number between 1 and 9; the installation option for USE FOR DYNAMICRULES on panel DSNTIP4 is YES; and the value of CURRENT PRECISION has not been set by the application.
- The SQL statement has bind, define, or invoke behavior; the statement is in an application that is precompiled with option DEC(31); the installation option for USE FOR DYNAMICRULES on panel DSNTIP4 is NO; and the value of CURRENT PRECISION has not been set by the application. See DYNAMICRULES bind option on page 987 for an explanation of bind, define, and invoke behavior.
The operation is in an embedded (static) SQL statement that you precompiled with the DEC(31), DEC31, or D31.s option, or with the default for that option when the installation option DECIMAL ARITHMETIC is DEC31 or 31. s is a number between 1 and 9 and represents the minimum scale to be used for division operations. See Processing SQL statements on page 944 for information about precompiling and for a list of all precompiler options. Recommendation: To reduce the chance of overflow, or when dealing with a precision greater than 15 digits, choose DEC31 or D31.s, where s is a number between 1 and 9 and represents the minimum scale to be used for division operations.
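For dynamic SQL, the application can also set the special register itself before it executes the arithmetic; in the following sketch, the scale value 5 is only an illustration:

SET CURRENT PRECISION = 'D31.5';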
Procedure
To control how DB2 rounds decimal floating point numbers: Set the CURRENT DECFLOAT ROUNDING MODE special register. Related concepts: CURRENT DECFLOAT ROUNDING MODE (DB2 SQL) Related reference: SET CURRENT DECFLOAT ROUNDING MODE (DB2 SQL)
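As a minimal example of this procedure, an application might issue the following statement; ROUND_HALF_EVEN is one of the supported rounding modes and is chosen here only as an illustration:

SET CURRENT DECFLOAT ROUNDING MODE = ROUND_HALF_EVEN;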
not returned when you specify SELECT *.) One alternative to selecting all columns is to use views defined with only the necessary columns, and use SELECT * to access the views. Avoid SELECT * if all the selected columns participate in a sort operation (SELECT DISTINCT and SELECT...UNION, for example).
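For example, in the following sketch the view name EMP_BASIC and its column list are arbitrary; a view that exposes only the needed columns lets a program use SELECT * without retrieving unused columns:

CREATE VIEW EMP_BASIC AS
  SELECT EMPNO, FIRSTNME, LASTNAME, WORKDEPT
    FROM DSN8910.EMP;

SELECT * FROM EMP_BASIC;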
Subqueries
When you need to narrow your search condition based on information in an interim table, you can use a subquery. For example, you might want to find all employee numbers in one table that also exist for a given project in a second table. Conceptual overview of subqueries: Suppose that you want a list of the employee numbers, names, and commissions of all employees who work on a particular project, whose project number is MA2111. The first part of the SELECT statement is easy to write:
SELECT EMPNO, LASTNAME, COMM FROM DSN8910.EMP WHERE EMPNO . . .
However, you cannot proceed because the DSN8910.EMP table does not include project number data. You do not know which employees are working on project MA2111 without issuing another SELECT statement against the DSN8910.EMPPROJACT table. You can use a subquery to solve this problem. A subquery is a subselect or a fullselect in a WHERE clause. The SELECT statement that surrounds the subquery is called the outer SELECT.
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8910.EMP
  WHERE EMPNO IN
    (SELECT EMPNO
       FROM DSN8910.EMPPROJACT
       WHERE PROJNO = 'MA2111');
To better understand the results of this SQL statement, imagine that DB2 goes through the following process: 1. DB2 evaluates the subquery to obtain a list of EMPNO values:
(SELECT EMPNO
   FROM DSN8910.EMPPROJACT
   WHERE PROJNO = 'MA2111');
The result is in an interim result table, similar to the one in the following output:
EMPNO
======
000200
000200
000220
2. The interim result table then serves as a list in the search condition of the outer SELECT. Effectively, DB2 executes this statement:
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8910.EMP
  WHERE EMPNO IN ('000200', '000220');
Correlated and uncorrelated subqueries: Subqueries supply information that is needed to qualify a row (in a WHERE clause) or a group of rows (in a HAVING clause). The subquery produces a result table that is used to qualify the row or group of selected rows. If the subquery is the same for every row or group, it executes only once; this kind of subquery is uncorrelated. For example, in the following statement, the content of the subquery is the same for every row of the table DSN8910.EMP:
SELECT EMPNO, LASTNAME, COMM
  FROM DSN8910.EMP
  WHERE EMPNO IN
    (SELECT EMPNO
       FROM DSN8910.EMPPROJACT
       WHERE PROJNO = 'MA2111');
Subqueries that vary in content from row to row or group to group are correlated subqueries. For information about correlated subqueries, see Correlated subqueries on page 697. Subqueries and predicates: A predicate is an element of a search condition that specifies a condition that is true, false, or unknown about a given row or group. A subquery, which is a SELECT statement within the WHERE or HAVING clause of another SQL statement, is always part of a predicate. The predicate is of the form:
operand operator (subquery)
A WHERE or HAVING clause can include predicates that contain subqueries. A predicate that contains a subquery, like any other search predicate, can be enclosed in parentheses, can be preceded by the keyword NOT, and can be linked to other predicates through the keywords AND and OR. For example, the WHERE clause of a query can look something like the following clause:
WHERE X IN (subquery1) AND (Y > SOME (subquery2) OR Z IS NULL)
Subqueries can also appear in the predicates of other subqueries. Such subqueries are nested subqueries at some level of nesting. For example, a subquery within a subquery within an outer SELECT has a nesting level of 2. DB2 allows nesting down to a level of 15, but few queries require a nesting level greater than 1. The relationship of a subquery to its outer SELECT is the same as the relationship of a nested subquery to a subquery, and the same rules apply, except where otherwise noted. The subquery result table: A subquery must produce a result table that has the same number of columns as the number of columns on the left side of the comparison operator. For example, both of the following SELECT statements are acceptable:
SELECT EMPNO, LASTNAME
  FROM DSN8910.EMP
  WHERE SALARY = (SELECT AVG(SALARY)
                    FROM DSN8910.EMP);

SELECT EMPNO, LASTNAME
  FROM DSN8910.EMP
  WHERE (SALARY, BONUS) IN (SELECT AVG(SALARY), AVG(BONUS)
                              FROM DSN8910.EMP);
Except for a subquery of a basic predicate, the result table can contain more than one row. For more information, see Places where you can include a subquery.
Quantified predicate in a subquery: ALL, ANY, or SOME: You can use a subquery after a comparison operator, followed by the keyword ALL, ANY, or SOME. The number of columns and rows that the subquery can return for a quantified predicate depends on the type of quantified predicate:
v For = SOME, = ANY, or <> ALL, the subquery can return one or many rows and one or many columns. The number of columns in the result table must match the number of columns on the left side of the operator.
v For all other quantified predicates, the subquery can return one or many rows, but no more than one column.
See the information about quantified predicates, including what to do if a subquery that returns one or more null values gives you unexpected results. ALL predicate: Use ALL to indicate that the operands on the left side of the comparison must compare in the same way with all of the values that the subquery returns. For example, suppose that you use the greater-than comparison operator with ALL:
WHERE column > ALL (subquery)
To satisfy this WHERE clause, the column value must be greater than all of the values that the subquery returns. A subquery that returns an empty result table satisfies the predicate. Now suppose that you use the <> operator with ALL in a WHERE clause like this:
WHERE (column1, column2, ... columnn) <> ALL (subquery)
To satisfy this WHERE clause, each column value must be unequal to all of the values in the corresponding column of the result table that the subquery returns. A subquery that returns an empty result table satisfies the predicate. ANY or SOME predicate: Use ANY or SOME to indicate that the values on the left side of the operator must compare in the indicated way to at least one of the values that the subquery returns. For example, suppose that you use the greater-than comparison operator with ANY:
WHERE expression > ANY (subquery)
To satisfy this WHERE clause, the value in the expression must be greater than at least one of the values (that is, greater than the lowest value) that the subquery returns. A subquery that returns an empty result table does not satisfy the predicate. Now suppose that you use the = operator with SOME in a WHERE clause like this:
WHERE (column1, column2, ... columnn) = SOME (subquery)
To satisfy this WHERE clause, each column value must be equal to at least one of the values in the corresponding column of the result table that the subquery returns. A subquery that returns an empty result table does not satisfy the predicate. IN predicate in a subquery: You can use IN to say that the value or values on the left side of the IN operator must be among the values that are returned by the subquery. Using IN is equivalent to using = ANY or = SOME. Example: The following query returns the names of department managers:
SELECT EMPNO,LASTNAME FROM DSN8910.EMP WHERE EMPNO IN (SELECT DISTINCT MGRNO FROM DSN8910.DEPT);
EXISTS predicate in a subquery: When you use the keyword EXISTS, DB2 checks whether the subquery returns one or more rows. Returning one or more rows satisfies the condition; returning no rows does not satisfy the condition. Example: The search condition in the following query is satisfied if any project that is represented in the project table has an estimated start date that is later than 1 January 2005:
SELECT EMPNO, LASTNAME
  FROM DSN8910.EMP
  WHERE EXISTS
    (SELECT *
       FROM DSN8910.PROJ
       WHERE PRSTDATE > '2005-01-01');
The result of the subquery is always the same for every row that is examined for the outer SELECT. Therefore, either every row appears in the result of the outer SELECT or none appears. A correlated subquery is more powerful than the uncorrelated subquery that is used in this example because the result of a correlated subquery is evaluated for each row of the outer SELECT. As shown in the example, you do not need to specify column names in the subquery of an EXISTS clause. Instead, you can code SELECT *. You can also use the EXISTS keyword with the NOT keyword in order to select rows when the data or condition that you specify does not exist; that is, you can code the following clause:
WHERE NOT EXISTS (SELECT ...);
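For example, the following sketch (which anticipates the correlated subqueries described next and assumes the sample project table with its RESPEMP column) lists employees who are not responsible for any project:

SELECT EMPNO, LASTNAME
  FROM DSN8910.EMP X
  WHERE NOT EXISTS
    (SELECT *
       FROM DSN8910.PROJ
       WHERE RESPEMP = X.EMPNO);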
Correlated subqueries
A correlated subquery is a subquery that DB2 reevaluates when it examines a new row (in a WHERE clause) or a group of rows (in a HAVING clause) as it executes the outer SELECT statement. In an uncorrelated subquery, DB2 executes the subquery once, substitutes the result of the subquery in the right side of the search condition, and evaluates the outer SELECT based on the value of the search condition. User-defined functions in correlated subqueries: Use care when you invoke a user-defined function that uses a scratchpad in a correlated subquery. DB2 does not refresh the scratchpad between invocations of the subquery. This can cause undesirable results because the scratchpad keeps values across the invocations of the subquery.
An example of a correlated subquery: Suppose that you want a list of all the employees whose education levels are higher than the average education levels in their respective departments. To get this information, DB2 must search the DSN8910.EMP table. For each employee in the table, DB2 needs to compare the employee's education level to the average education level for that employee's department. For this example, you need to use a correlated subquery, which differs from an uncorrelated subquery. An uncorrelated subquery compares the employee's education level to the average of the entire company, which requires looking at the entire table. A correlated subquery evaluates only the department that corresponds to the particular employee. In the subquery, you tell DB2 to compute the average education level for the department number in the current row. The following query performs this action:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL FROM DSN8910.EMP X WHERE EDLEVEL > (SELECT AVG(EDLEVEL) FROM DSN8910.EMP WHERE WORKDEPT = X.WORKDEPT);
A correlated subquery looks like an uncorrelated one, except for the presence of one or more correlated references. In the example, the single correlated reference is the occurrence of X.WORKDEPT in the WHERE clause of the subselect. In this clause, the qualifier X is the correlation name that is defined in the FROM clause of the outer SELECT statement. X designates rows of the first instance of DSN8910.EMP. At any time during the execution of the query, X designates the row of DSN8910.EMP to which the WHERE clause is being applied. Consider what happens when the subquery executes for a given row of DSN8910.EMP. Before it executes, X.WORKDEPT receives the value of the WORKDEPT column for that row. Suppose, for example, that the row is for Christine Haas. Her work department is A00, which is the value of WORKDEPT for that row. Therefore, the following is the subquery that is executed for that row:
(SELECT AVG(EDLEVEL)
   FROM DSN8910.EMP
   WHERE WORKDEPT = 'A00');
The subquery produces the average education level of Christine's department. The outer SELECT then compares this average to Christine's own education level. For some other row for which WORKDEPT has a different value, that value appears in the subquery in place of A00. For example, in the row for Michael L Thompson, this value is B01, and the subquery for his row delivers the average education level for department B01. The result table that is produced by the query is similar to the following output:
EMPNO    LASTNAME    WORKDEPT   EDLEVEL
======   =========   ========   =======
000010   HAAS        A00        18
000030   KWAN        C01        20
000070   PULASKI     D21        16
000090   HENDERSON   E11        16
Suppose, for example, that a query contains subqueries A, B, and C, and that A contains B and B contains C. The subquery C can use a correlation reference that is defined in B, A, or the outer SELECT. You can define a correlation name for each table name in a FROM clause. Specify the correlation name after its table name. Leave one or more blanks between a table name and its correlation name. You can include the word AS between the table name and the correlation name to increase the readability of the SQL statement. The following example demonstrates the use of a correlated reference in the search condition of a subquery:
SELECT EMPNO, LASTNAME, WORKDEPT, EDLEVEL FROM DSN8910.EMP AS X WHERE EDLEVEL > (SELECT AVG(EDLEVEL) FROM DSN8910.EMP WHERE WORKDEPT = X.WORKDEPT);
The following example demonstrates the use of a correlated reference in the select list of a subquery:
UPDATE BP1TBL T1
  SET (KEY1, CHAR1, VCHAR1) =
      (SELECT VALUE(T2.KEY1,T1.KEY1),
              VALUE(T2.CHAR1,T1.CHAR1),
              VALUE(T2.VCHAR1,T1.VCHAR1)
         FROM BP2TBL T2
         WHERE (T2.KEY1 = T1.KEY1))
  WHERE KEY1 IN (SELECT KEY1
                   FROM BP2TBL T3
                   WHERE KEY2 > 0);
Using correlated subqueries in an UPDATE statement: Use correlation names in an UPDATE statement to refer to the rows that you are updating. The subquery for which you specified a correlation name is called a correlated subquery. For example, when all activities of a project must complete before September 2006, your department considers that project to be a priority project. Assume that you have added the PRIORITY column to DSN8910.PROJ. You can use the following SQL statement to evaluate the projects in the DSN8910.PROJ table, and write a 1 (a flag to indicate PRIORITY) in the PRIORITY column for each priority project:
UPDATE DSN8910.PROJ X
  SET PRIORITY = 1
  WHERE DATE('2006-09-01') >
        (SELECT MAX(ACENDATE)
           FROM DSN8910.PROJACT
           WHERE PROJNO = X.PROJNO);
As DB2 examines each row in the DSN8910.PROJ table, it determines the maximum activity end date (the ACENDATE column) for all activities of the project (from the DSN8910.PROJACT table). If the end date of each activity that is associated with the project is before September 2006, the current row in the DSN8910.PROJ table qualifies, and DB2 updates it. Using correlated subqueries in a DELETE statement:
Use correlation names in a DELETE statement to refer to the rows that you are deleting. The subquery for which you specified a correlation name is called a correlated subquery. DB2 evaluates the correlated subquery once for each row in the table that is named in the DELETE statement to decide whether to delete the row. Using tables with no referential constraints: Suppose that a department considers a project to be complete when the combined amount of time currently spent on it is less than or equal to half of a person's time. The department then deletes the rows for that project from the DSN8910.PROJ table. In the examples in this topic, PROJ and PROJACT are independent tables; that is, they are separate tables with no referential constraints defined on them.
DELETE FROM DSN8910.PROJ X WHERE .5 > (SELECT SUM(ACSTAFF) FROM DSN8910.PROJACT WHERE PROJNO = X.PROJNO);
To process this statement, DB2 determines for each project (represented by a row in the DSN8910.PROJ table) whether the combined staffing for that project is less than 0.5. If it is, DB2 deletes that row from the DSN8910.PROJ table. To continue this example, suppose that DB2 deletes a row in the DSN8910.PROJ table. You must also delete rows that are related to the deleted project in the DSN8910.PROJACT table. To do this, use a statement similar to this statement:
DELETE FROM DSN8910.PROJACT X WHERE NOT EXISTS (SELECT * FROM DSN8910.PROJ WHERE PROJNO = X.PROJNO);
DB2 determines, for each row in the DSN8910.PROJACT table, whether a row with the same project number exists in the DSN8910.PROJ table. If not, DB2 deletes the row from DSN8910.PROJACT. Using a single table: A subquery of a searched DELETE statement (a DELETE statement that does not use a cursor) can reference the same table from which rows are deleted. In the following statement, which deletes the employee with the highest salary from each department, the employee table appears in the outer DELETE and in the subselect:
DELETE FROM YEMP X
  WHERE SALARY = (SELECT MAX(SALARY)
                    FROM YEMP Y
                    WHERE X.WORKDEPT = Y.WORKDEPT);
This example uses a copy of the employee table for the subquery. The following statement, without a correlated subquery, yields equivalent results:
DELETE FROM YEMP WHERE (SALARY, WORKDEPT) IN (SELECT MAX(SALARY), WORKDEPT FROM YEMP GROUP BY WORKDEPT);
Using tables with referential constraints: DB2 restricts delete operations for dependent tables that are involved in referential constraints. If a DELETE statement has a subquery that references a table that is
involved in the deletion, make the last delete rule in the path to that table RESTRICT or NO ACTION. This action ensures that the result of the subquery is not materialized before the deletion occurs. However, if the result of the subquery is materialized before the deletion, the delete rule can also be CASCADE or SET NULL. Example: Without referential constraints, the following statement deletes departments from the department table whose managers are not listed correctly in the employee table:
DELETE FROM DSN8910.DEPT THIS WHERE NOT DEPTNO = (SELECT WORKDEPT FROM DSN8910.EMP WHERE EMPNO = THIS.MGRNO);
With the referential constraints that are defined for the sample tables, this statement causes an error because the result table for the subquery is not materialized before the deletion occurs. Because DSN8910.EMP is a dependent table of DSN8910.DEPT, the deletion involves the table that is referred to in the subquery, and the last delete rule in the path to EMP is SET NULL, not RESTRICT or NO ACTION. If the statement could execute, its results would depend on the order in which DB2 accesses the rows. Therefore, DB2 prohibits the deletion.
Restrictions when using distinct types with UNION, EXCEPT, and INTERSECT
DB2 enforces strong typing of distinct types with UNION, EXCEPT, and INTERSECT. When you use these keywords to combine column values from several tables, the combined columns must be of the same types. If a column is a distinct type, the corresponding column must be the same distinct type. Example: Suppose that you create a view that combines the values of the US_SALES, EUROPEAN_SALES, and JAPAN_SALES tables. The TOTAL columns in the three tables are of different distinct types. Before you combine the table values, you must convert the types of two of the TOTAL columns to the type of the third TOTAL column. Assume that the US_DOLLAR type has been chosen as the common distinct type. Because DB2 does not generate cast functions to convert from one distinct type to another, two user-defined functions must exist:
v A function called EURO_TO_US that converts values of type EURO to type US_DOLLAR
v A function called YEN_TO_US that converts values of type JAPANESE_YEN to type US_DOLLAR
Then you can execute a query like this to display a table of combined sales:
SELECT PRODUCT_ITEM, MONTH, YEAR, TOTAL
  FROM US_SALES
UNION
SELECT PRODUCT_ITEM, MONTH, YEAR, EURO_TO_US(TOTAL)
  FROM EUROPEAN_SALES
UNION
SELECT PRODUCT_ITEM, MONTH, YEAR, YEN_TO_US(TOTAL)
  FROM JAPAN_SALES;
Because the result type of both the YEN_TO_US function and the EURO_TO_US function is US_DOLLAR, you have satisfied the requirement that the distinct types of the combined columns are the same.
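The definitions of these conversion functions are not shown here. As a rough sketch only: if EURO and US_DOLLAR were both sourced on DECIMAL(9,2), an inline SQL scalar function might look like the following, where the exchange rate 1.05 is purely an illustrative constant and DECIMAL() and US_DOLLAR() are the cast functions that DB2 generates for the distinct types:

CREATE FUNCTION EURO_TO_US(AMOUNT EURO)
  RETURNS US_DOLLAR
  LANGUAGE SQL
  RETURN US_DOLLAR(CAST(DECIMAL(AMOUNT) * 1.05 AS DECIMAL(9,2)));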
The casting satisfies the requirement that the compared data types are identical. You cannot use host variables in statements that you prepare for dynamic execution. As explained in Dynamically executing an SQL statement by using PREPARE and EXECUTE on page 185, you can substitute parameter markers for host variables when you prepare a statement, and then use host variables when you execute the statement. If you use a parameter marker in a predicate of a query, and the column to which you compare the value represented by the parameter marker is of a distinct type, you must cast the parameter marker to the distinct type, or cast the column to its source type. For example, suppose that distinct type CNUM is defined like this:
CREATE DISTINCT TYPE CNUM AS INTEGER;
CREATE TABLE CUSTOMER (CUST_NUM CNUM NOT NULL, FIRST_NAME CHAR(30) NOT NULL, LAST_NAME CHAR(30) NOT NULL, PHONE_NUM CHAR(20) WITH DEFAULT, PRIMARY KEY (CUST_NUM));
In an application program, you prepare a SELECT statement that compares the CUST_NUM column to a parameter marker. Because CUST_NUM is of a distinct type, you must cast the distinct type to its source type:
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER WHERE CAST(CUST_NUM AS INTEGER) = ?
Alternatively, you can cast the parameter marker to the distinct type:
SELECT FIRST_NAME, LAST_NAME, PHONE_NUM FROM CUSTOMER WHERE CUST_NUM = CAST (? AS CNUM)
Be aware of the following DB2 restrictions on nested SQL statements:
v Restrictions for SELECT statements: When you execute a SELECT statement on a table, you cannot execute INSERT, UPDATE, MERGE, or DELETE statements on the same table at a lower level of nesting. For example, suppose that you execute this SQL statement at level 1 of nesting:
SELECT UDF1(C1) FROM T1;
v Restrictions for SELECT FROM FINAL TABLE statements that specify INSERT, UPDATE, or DELETE statements to change data: When you execute this type of statement, an error occurs if both of the following conditions exist:
- The SELECT statement that modifies data (by specifying INSERT, UPDATE, or DELETE) activates an AFTER TRIGGER.
- The AFTER TRIGGER results in additional nested SQL operations that modify the table that is the target of the original SELECT statement that modifies data.
v Restrictions for INSERT, UPDATE, MERGE, and DELETE statements: When you execute an INSERT, UPDATE, MERGE, or DELETE statement on a table, you cannot access that table from a user-defined function or stored procedure that is at a lower level of nesting. For example, suppose that you execute this SQL statement at level 1 of nesting:
DELETE FROM T1 WHERE UDF3(T1.C1) = 3;
If the AFTER trigger is not activated by an INSERT, UPDATE, or DELETE data change statement that is specified in a data-change-table-reference SELECT FROM FINAL TABLE, the preceding list of restrictions does not apply to SQL statements that are executed at a lower level of nesting as a result of an after trigger. For example, suppose that an UPDATE statement at nesting level 1 activates an after update trigger, which calls a stored procedure. The stored procedure executes two SQL statements that reference the triggering table: one SELECT statement and one INSERT statement. In this situation, both the SELECT and the INSERT statements can be executed even though they are at nesting level 3. Although trigger activations count in the levels of SQL statement nesting, the previous restrictions on SQL statements do not apply to SQL statements that are executed in the trigger body. Example: Suppose that trigger TR1 is defined on table T1:
CREATE TRIGGER TR1
  AFTER INSERT ON T1
  FOR EACH STATEMENT MODE DB2SQL
  BEGIN ATOMIC
    UPDATE T1 SET C1=1;
  END
Now suppose that you execute this SQL statement at level 1 of nesting:
INSERT INTO T1 VALUES(...);
Although the UPDATE statement in the trigger body is at level 2 of nesting and modifies the same table that the triggering statement updates, DB2 can execute the INSERT statement successfully.
most a single row. For information about how to use a row-positioned cursor, see Accessing data by using a row-positioned cursor on page 709. v A rowset-positioned cursor retrieves zero, one, or more rows at a time, as a rowset, from the result table into host variable arrays. At any point in time, the cursor can be positioned on a rowset. You can reference all of the rows in the rowset, or only one row in the rowset, when you use a positioned DELETE or positioned UPDATE statement. For information about how to use a rowset-positioned cursor, see Accessing data by using a rowset-positioned cursor on page 714.
Cursors
A cursor is a mechanism that points to one or more rows in a set of rows. The rows are retrieved from a table or in a result set that is returned by a stored procedure. Your application program can use a cursor to retrieve rows from a table.
Types of cursors
You can declare row-positioned or rowset-positioned cursors in a number of ways. These cursors can be scrollable or not scrollable, held or not held, or returnable or not returnable. In addition, you can declare a returnable cursor in a stored procedure by including the WITH RETURN clause; the cursor can return result sets to a caller of the stored procedure. Scrollable and non-scrollable cursors: When you declare a cursor, you tell DB2 whether you want the cursor to be scrollable or non-scrollable by including or omitting the SCROLL clause. This clause determines whether the cursor moves sequentially forward through the result table or can move randomly through the result table. Using a non-scrollable cursor: The simplest type of cursor is a non-scrollable cursor. A non-scrollable cursor can be either row-positioned or rowset-positioned. A row-positioned non-scrollable cursor moves forward through its result table one row at a time. Similarly, a rowset-positioned non-scrollable cursor moves forward through its result table one rowset at a time.
A non-scrollable cursor always moves sequentially forward in the result table. When the application opens the cursor, the cursor is positioned before the first row (or first rowset) in the result table. When the application executes the first FETCH, the cursor is positioned on the first row (or first rowset). When the application executes subsequent FETCH statements, the cursor moves one row ahead (or one rowset ahead) for each FETCH. After each FETCH statement, the cursor is positioned on the row (or rowset) that was fetched. After the application executes a positioned UPDATE or positioned DELETE statement, the cursor stays at the current row (or rowset) of the result table. You cannot retrieve rows (or rowsets) backward or move to a specific position in a result table with a non-scrollable cursor. Using a scrollable cursor: To make a cursor scrollable, you declare it as scrollable. A scrollable cursor can be either row-positioned or rowset-positioned. To use a scrollable cursor, you execute FETCH statements that indicate where you want to position the cursor. If you want to order the rows of the cursor's result set, and you also want the cursor to be updatable, you need to declare the cursor as scrollable, even if you use it only to retrieve rows (or rowsets) sequentially. You can use the ORDER BY clause in the declaration of an updatable cursor only if you declare the cursor as scrollable. Declaring a scrollable cursor: To indicate that a cursor is scrollable, you declare it with the SCROLL keyword. The following examples show a characteristic of scrollable cursors: the sensitivity. The following figure shows a declaration for an insensitive scrollable cursor.
EXEC SQL DECLARE C1 INSENSITIVE SCROLL CURSOR FOR SELECT DEPTNO, DEPTNAME, MGRNO FROM DSN8910.DEPT ORDER BY DEPTNO END-EXEC.
Declaring a scrollable cursor with the INSENSITIVE keyword has the following effects: v The size, the order of the rows, and the values for each row of the result table do not change after the application opens the cursor. v The result table is read-only. Therefore, you cannot declare the cursor with the FOR UPDATE clause, and you cannot use the cursor for positioned update or delete operations. The following figure shows a declaration for a sensitive static scrollable cursor.
EXEC SQL DECLARE C2 SENSITIVE STATIC SCROLL CURSOR FOR SELECT DEPTNO, DEPTNAME, MGRNO FROM DSN8910.DEPT ORDER BY DEPTNO END-EXEC.
Declaring a cursor as SENSITIVE STATIC has the following effects: v When the application executes positioned UPDATE and DELETE statements with the cursor, those changes are visible in the result table.
v When the current value of a row no longer satisfies the SELECT statement that was used in the cursor declaration, that row is no longer visible in the result table. v When a row of the result table is deleted from the underlying table, that row is no longer visible in the result table. v Changes that are made to the underlying table by other cursors or other application processes can be visible in the result table, depending on whether the FETCH statements that you use with the cursor are FETCH INSENSITIVE or FETCH SENSITIVE statements. The following figure shows a declaration for a sensitive dynamic scrollable cursor.
EXEC SQL DECLARE C2 SENSITIVE DYNAMIC SCROLL CURSOR FOR SELECT DEPTNO, DEPTNAME, MGRNO FROM DSN8910.DEPT ORDER BY DEPTNO END-EXEC.
Declaring a cursor as SENSITIVE DYNAMIC has the following effects: v When the application executes positioned UPDATE and DELETE statements with the cursor, those changes are visible. In addition, when the application executes insert, update, or delete operations (within the application but outside the cursor), those changes are visible. v All committed inserts, updates, and deletes by other application processes are visible. v Because the FETCH statement executes against the base table, the cursor needs no temporary result table. When you define a cursor as SENSITIVE DYNAMIC, you cannot specify the INSENSITIVE keyword in a FETCH statement for that cursor. v If you specify an ORDER BY clause for a SENSITIVE DYNAMIC cursor, DB2 might choose an index access path if the ORDER BY is fully satisfied by an existing index. However, a dynamic scrollable cursor that is declared with an ORDER BY clause is not updatable. Static scrollable cursor: Both the INSENSITIVE cursor and the SENSITIVE STATIC cursor follow the static cursor model: v The size of the result table does not grow after the application opens the cursor. Rows that are inserted into the underlying table are not added to the result table. v The order of the rows does not change after the application opens the cursor. If the cursor declaration contains an ORDER BY clause, and the columns that are in the ORDER BY clause are updated after the cursor is opened, the order of the rows in the result table does not change. Dynamic scrollable cursor: When you declare a cursor as SENSITIVE, you can declare it either STATIC or DYNAMIC. The SENSITIVE DYNAMIC cursor follows the dynamic cursor model: v The size and contents of the result table can change with every fetch. The base table can change while the cursor is scrolling on it. If another application process changes the data, the cursor sees the newly changed data
when it is committed. If the application process of the cursor changes the data, the cursor sees the newly changed data immediately. v The order of the rows can change after the application opens the cursor. If the cursor declaration contains an ORDER BY clause, and columns that are in the ORDER BY clause are updated after the cursor is opened, the order of the rows in the result table changes. Related concepts: FETCH statement interaction between row and rowset positioning on page 732
IMS
You cannot use DECLARE CURSOR...WITH HOLD in message processing programs (MPP) or message-driven batch message processing (BMP) programs. Each message is a new user for DB2; whether or not you declare cursors by using WITH HOLD, no cursors continue for new users. You can use WITH HOLD in non-message-driven BMP and DL/I batch programs.
CICS
In CICS applications, you can use DECLARE CURSOR...WITH HOLD to indicate that a cursor should not close at a commit or sync point. However, SYNCPOINT
ROLLBACK closes all cursors, and end-of-task (EOT) closes all cursors before DB2 reuses or terminates the thread. Because pseudo-conversational transactions usually have multiple EXEC CICS RETURN statements and thus span multiple EOTs, the scope of a held cursor is limited. Across EOTs, you must reopen and reposition a cursor declared WITH HOLD, as if you had not specified WITH HOLD. You should always close cursors that you no longer need. If you let DB2 close a CICS attachment cursor, the cursor might not close until the CICS attachment facility reuses or terminates the thread. If the CICS application is using a protected entry thread, this thread will continue to hold resources, even when the task that has used these resources ends. These resources will not be released until the protected thread terminates. The following cursor declaration causes the cursor to maintain its position in the DSN8910.EMP table after a commit point:
EXEC SQL
  DECLARE EMPLUPDT CURSOR WITH HOLD FOR
    SELECT EMPNO, LASTNAME, PHONENO, JOB, SALARY, WORKDEPT
      FROM DSN8910.EMP
      WHERE WORKDEPT < 'D11'
      ORDER BY EMPNO
END-EXEC.
Procedure
To access data by using a row-positioned cursor: 1. Execute a DECLARE CURSOR statement to define the result table on which the cursor operates. See Declaring a row cursor. 2. Execute an OPEN CURSOR to make the cursor available to the application. See Opening a row cursor on page 711. 3. Specify what the program is to do when all rows have been retrieved. See Specifying the action that the row cursor is to take when it reaches the end of the data on page 712. 4. Execute multiple SQL statements to retrieve data from the table or modify selected rows of the table. See Executing SQL statements by using a row cursor on page 712. 5. Execute a CLOSE CURSOR statement to make the cursor unavailable to the application. See Closing a row cursor on page 714.
Results
Your program can have several cursors, each of which performs the previous steps.
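Putting these steps together, a minimal sketch in a COBOL program might look like the following; the host variables HV-EMPNO and HV-LASTNAME and the PROCESS-ROW paragraph are hypothetical, and error handling is omitted:

EXEC SQL
  DECLARE C1 CURSOR FOR
    SELECT EMPNO, LASTNAME
      FROM DSN8910.EMP
END-EXEC.

EXEC SQL OPEN C1 END-EXEC.

PERFORM UNTIL SQLCODE = 100
  EXEC SQL
    FETCH C1 INTO :HV-EMPNO, :HV-LASTNAME
  END-EXEC
  IF SQLCODE = 0
    PERFORM PROCESS-ROW
  END-IF
END-PERFORM.

EXEC SQL CLOSE C1 END-EXEC.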
You can use this cursor to list select information about employees. More complicated cursors might include WHERE clauses or joins of several tables. For example, suppose that you want to use a cursor to list employees who work on a certain project. Declare a cursor like this to identify those employees:
EXEC SQL DECLARE C2 CURSOR FOR SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY FROM DSN8910.EMP X WHERE EXISTS (SELECT * FROM DSN8910.PROJ Y WHERE X.EMPNO=Y.RESPEMP AND Y.PROJNO=:GOODPROJ);
Declaring cursors for tables that use multilevel security: You can declare a cursor that retrieves rows from a table that uses multilevel security with row-level granularity. However, the result table for the cursor contains only those rows that have a security label value that is equivalent to or dominated by the security label value of your ID. Updating a column: You can update columns in the rows that you retrieve. Updating a row after you use a cursor to retrieve it is called a positioned update. If you intend to perform any positioned updates on the identified table, include the FOR UPDATE clause. The FOR UPDATE clause has two forms: v The first form is FOR UPDATE OF column-list. Use this form when you know in advance which columns you need to update. v The second form is FOR UPDATE, with no column list. Use this form when you might use the cursor to update any of the columns of the table. For example, you can use this cursor to update only the SALARY column of the employee table:
EXEC SQL DECLARE C1 CURSOR FOR SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY FROM DSN8910.EMP X WHERE EXISTS (SELECT * FROM DSN8910.PROJ Y WHERE X.EMPNO=Y.RESPEMP AND Y.PROJNO=:GOODPROJ) FOR UPDATE OF SALARY;
If you might use the cursor to update any column of the employee table, define the cursor like this:
EXEC SQL DECLARE C1 CURSOR FOR SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, SALARY FROM DSN8910.EMP X WHERE EXISTS (SELECT * FROM DSN8910.PROJ Y WHERE X.EMPNO=Y.RESPEMP AND Y.PROJNO=:GOODPROJ) FOR UPDATE;
DB2 must do more processing when you use the FOR UPDATE clause without a column list than when you use the FOR UPDATE clause with a column list. Therefore, if you intend to update only a few columns of a table, your program can run more efficiently if you include a column list.
The precompiler options NOFOR and STDSQL affect the use of the FOR UPDATE clause in static SQL statements. If you do not specify the FOR UPDATE clause in a DECLARE CURSOR statement, and you do not specify the STDSQL(YES) option or the NOFOR precompiler option, you receive an error if you execute a positioned UPDATE statement.
You can update a column of the identified table even though it is not part of the result table. In this case, you do not need to name the column in the SELECT statement. When the cursor retrieves a row (using FETCH) that contains a column value you want to update, you can use UPDATE ... WHERE CURRENT OF to identify the row that is to be updated.
Read-only result table: Some result tables cannot be updated; for example, the result of joining two or more tables.
Related concepts:
Multilevel security (Managing Security)
Related reference:
Descriptions of SQL processing options on page 959
DECLARE CURSOR (DB2 SQL)
select-statement (DB2 SQL)
If you use the CURRENT DATE, CURRENT TIME, or CURRENT TIMESTAMP special registers in a cursor, DB2 determines the values in those special registers only when it opens the cursor. DB2 uses the values that it obtained at OPEN time for all subsequent FETCH statements.
Two factors that influence the amount of time that DB2 requires to process the OPEN statement are:
v Whether DB2 must perform any sorts before it can retrieve rows
v Whether DB2 uses parallelism to process the SELECT statement of the cursor
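To illustrate the special-register behavior that is described above (a hypothetical cursor; any table could be used):

EXEC SQL DECLARE CSR_TODAY CURSOR FOR
  SELECT EMPNO, CURRENT DATE
  FROM DSN8910.EMP;
/* Every FETCH from CSR_TODAY returns the date that was current when */
/* the cursor was opened, even if the date changes while fetching.   */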
Specifying the action that the row cursor is to take when it reaches the end of the data
Your program must be coded to recognize and handle an end-of-data condition whenever you use a row cursor to fetch a row.
An alternative to this technique is to code the WHENEVER NOT FOUND statement. The WHENEVER NOT FOUND statement causes your program to branch to another part that then issues a CLOSE statement. For example, to branch to label DATA-NOT-FOUND when the FETCH statement does not return a row, use this statement:
EXEC SQL WHENEVER NOT FOUND GO TO DATA-NOT-FOUND END-EXEC.
For more information about the WHENEVER NOT FOUND statement, see Checking the execution of SQL statements on page 201.
The SELECT statement within DECLARE CURSOR statement identifies the result table from which you fetch rows, but DB2 does not retrieve any data until your application program executes a FETCH statement.
When your program executes the FETCH statement, DB2 positions the cursor on a row in the result table. That row is called the current row. DB2 then copies the current row contents into the program host variables that you specify on the INTO clause of FETCH. This sequence repeats each time you issue FETCH, until you process all rows in the result table. The row that DB2 points to when you execute a FETCH statement depends on whether the cursor is declared as a scrollable or non-scrollable. When you query a remote subsystem with FETCH, consider using block fetch for better performance. Block fetch processes rows ahead of the current row. You cannot use a block fetch when you perform a positioned update or delete operation. After your program has executed a FETCH statement to retrieve the current row, you can use a positioned UPDATE statement to modify the data in that row. An example of a positioned UPDATE statement is:
EXEC SQL UPDATE DSN8910.EMP SET SALARY = 50000 WHERE CURRENT OF C1 END-EXEC.
A positioned UPDATE statement updates the row on which the cursor is positioned. A positioned UPDATE statement is subject to these restrictions:
v You cannot update a row if your update violates any unique, check, or referential constraints.
v You cannot use an UPDATE statement to modify the rows of a created temporary table. However, you can use an UPDATE statement to modify the rows of a declared temporary table.
v If the right side of the SET clause in the UPDATE statement contains a fullselect, that fullselect cannot include a correlated name for a table that is being updated.
v You cannot use an SQL data change statement in the FROM clause of a SELECT statement that defines a cursor that is used in a positioned UPDATE statement.
v A positioned UPDATE statement will fail if the value of the security label column of the row where the cursor is positioned is not equivalent to the security label value of your user ID. If your user ID has write-down privilege, a positioned UPDATE statement will fail if the value of the security label column of the row where the cursor is positioned does not dominate the security label value of your user ID.
After your program has executed a FETCH statement to retrieve the current row, you can use a positioned DELETE statement to delete that row. An example of a positioned DELETE statement looks like this:
EXEC SQL DELETE FROM DSN8910.EMP WHERE CURRENT OF C1 END-EXEC.
A positioned DELETE statement deletes the row on which the cursor is positioned. A positioned DELETE statement is subject to these restrictions:
v You cannot use a DELETE statement with a cursor to delete rows from a created temporary table. However, you can use a DELETE statement with a cursor to delete rows from a declared temporary table. v After you have deleted a row, you cannot update or delete another row using that cursor until you execute a FETCH statement to position the cursor on another row. v You cannot delete a row if doing so violates any referential constraints. v You cannot use an SQL data change statement in the FROM clause of a SELECT statement that defines a cursor that is used in a positioned DELETE statement. v A positioned DELETE statement will fail if the value of the security label column of the row where the cursor is positioned is not equivalent to the security label value of your user id. If your user id has write down privilege, a positioned DELETE statement will fail if the value of the security label column of the row where the cursor is positioned does not dominate the security label value of your user id.
Procedure
To close a row cursor: Issue a CLOSE statement. An example of a CLOSE statement looks like this:
EXEC SQL CLOSE C1 END-EXEC.
Procedure
To access data by using a rowset-positioned cursor:
1. Execute a DECLARE CURSOR statement to define the result table on which the cursor operates. See Declaring a rowset cursor on page 715.
2. Execute an OPEN CURSOR to make the cursor available to the application. See Opening a rowset cursor on page 715.
3. Specify what the program is to do when all rows have been retrieved. See Specifying the action that the rowset cursor is to take when it reaches the end of the data on page 715.
4. Execute multiple SQL statements to retrieve data from the table or modify selected rows of the table. See Executing SQL statements by using a rowset cursor on page 716.
5. Execute a CLOSE CURSOR statement to make the cursor unavailable to the application. See Closing a rowset cursor on page 720.
Results
Your program can have several cursors, each of which performs the previous steps.
Procedure
To declare a rowset cursor: Use the WITH ROWSET POSITIONING clause in the DECLARE CURSOR statement. The following example shows how to declare a rowset cursor:
EXEC SQL DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR SELECT EMPNO, LASTNAME, SALARY FROM DSN8910.EMP END-EXEC.
Specifying the action that the rowset cursor is to take when it reaches the end of the data
Your program must be coded to recognize and handle an end-of-data condition whenever you use a rowset cursor to fetch rows.
With a rowset cursor, the end-of-data condition (SQLCODE +100) occurs when a fetch operation reaches the last row of the result table. For an example of end-of-data processing for a rowset cursor, see Examples of fetching rows by using cursors on page 733.
To determine the number of retrieved rows, use either of the following values:
v The contents of the SQLERRD(3) field in the SQLCA
v The contents of the ROW_COUNT item of GET DIAGNOSTICS
For information about GET DIAGNOSTICS, see Checking the execution of SQL statements by using the GET DIAGNOSTICS statement on page 208. If you declare the cursor as dynamic scrollable, and SQLCODE has the value +100, you can continue with a FETCH statement until no more rows are retrieved. Additional fetches might retrieve more rows because a dynamic scrollable cursor is sensitive to updates by other application processes. For information about dynamic cursors, see Types of cursors on page 705.
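For instance, immediately after a rowset fetch your program might retrieve the count like this (a sketch; :HV_ROWCNT is an assumed integer host variable):

EXEC SQL GET DIAGNOSTICS :HV_ROWCNT = ROW_COUNT;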
When your program executes a FETCH statement with the ROWSET keyword, the cursor is positioned on a rowset in the result table. That rowset is called the current rowset. The dimension of each of the host variable arrays must be greater than or equal to the number of rows to be retrieved.
Suppose that you want to dynamically allocate the storage needed for the arrays of column values that are to be retrieved from the employee table. You must:
1. Declare an SQLDA structure and the variables that reference the SQLDA.
2. Dynamically allocate the SQLDA and the arrays needed for the column values.
3. Set the fields in the SQLDA for the column values to be retrieved.
4. Open the cursor.
5. Fetch the rows.
You must first declare the SQLDA structure. The following SQL INCLUDE statement requests a standard SQLDA declaration:
EXEC SQL INCLUDE SQLDA;
Your program must also declare variables that reference the SQLDA structure, the SQLVAR structure within the SQLDA, and the DECLEN structure for the precision and scale if you are retrieving a DECIMAL column. For C programs, the code looks like this:
struct sqlda *sqldaptr;
struct sqlvar *varptr;
struct DECLEN {
  unsigned char precision;
  unsigned char scale;
};
Before you can set the fields in the SQLDA for the column values to be retrieved, you must dynamically allocate storage for the SQLDA structure. For C programs, the code looks like this:
sqldaptr = (struct sqlda *) malloc (3 * 44 + 16);
The size of the SQLDA is SQLN * 44 + 16, where the value of the SQLN field is the number of output columns. You must set the fields in the SQLDA structure for your FETCH statement. Suppose you want to retrieve the columns EMPNO, LASTNAME, and SALARY. The C code to set the SQLDA fields for these columns looks like this:
strcpy(sqldaptr->sqldaid,"SQLDA");
sqldaptr->sqldabc = 148;   /* number of bytes of storage allocated for the SQLDA */
sqldaptr->sqln = 3;        /* number of SQLVAR occurrences */
sqldaptr->sqld = 3;
varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]));      /* Point to first SQLVAR */
varptr->sqltype = 452;     /* data type CHAR(6) */
varptr->sqllen = 6;
varptr->sqldata = (char *) hva1;
varptr->sqlind = (short *) inda1;
varptr->sqlname.length = 8;
memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14", varptr->sqlname.length);
varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 1);  /* Point to next SQLVAR */
varptr->sqltype = 448;     /* data type VARCHAR(15) */
varptr->sqllen = 15;
varptr->sqldata = (char *) hva2;
varptr->sqlind = (short *) inda2;
varptr->sqlname.length = 8;
memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14", varptr->sqlname.length);
varptr = (struct sqlvar *) (&(sqldaptr->sqlvar[0]) + 2);  /* Point to next SQLVAR */
varptr->sqltype = 485;     /* data type DECIMAL(9,2) */
((struct DECLEN *) &(varptr->sqllen))->precision = 9;
((struct DECLEN *) &(varptr->sqllen))->scale = 2;
varptr->sqldata = (char *) hva3;
varptr->sqlind = (short *) inda3;
varptr->sqlname.length = 8;
memcpy(varptr->sqlname.data, "\x00\x00\x00\x00\x00\x01\x00\x14", varptr->sqlname.length);
The SQLDA structure has these fields:
v SQLDABC indicates the number of bytes of storage that are allocated for the SQLDA. The storage includes a 16-byte header and 44 bytes for each SQLVAR field. The value is SQLN x 44 + 16, or 148 for this example.
v SQLN is the number of SQLVAR occurrences (or the number of output columns).
v SQLD is the number of variables in the SQLDA that are used by DB2 when processing the FETCH statement.
v Each SQLVAR occurrence describes a host variable array or buffer into which the values for a column in the result table are to be returned. Within each SQLVAR:
  – SQLTYPE indicates the data type of the column.
  – SQLLEN indicates the length of the column. If the data type is DECIMAL, this field has two parts: the PRECISION and the SCALE.
  – SQLDATA points to the first element of the array for the column values. For this example, assume that your program allocates the dynamic variable arrays hva1, hva2, and hva3, and their indicator arrays inda1, inda2, and inda3.
  – SQLIND points to the first element of the array of indicator values for the column. If SQLTYPE is an odd number, this attribute is required. (If SQLTYPE is an odd number, null values are allowed for the column.)
  – SQLNAME has two parts: the LENGTH and the DATA. The LENGTH is 8. The first two bytes of the DATA field are X'0000'. Bytes 5 and 6 of the DATA field are a flag indicating whether the variable is an array or a FOR n ROWS value. Bytes 7 and 8 are a two-byte binary integer representation of the dimension of the array.
You can open the cursor only after all of the fields have been set in the output SQLDA:
EXEC SQL OPEN C1;
After the OPEN statement, the program fetches the next rowset:
EXEC SQL FETCH NEXT ROWSET FROM C1 FOR 20 ROWS USING DESCRIPTOR :*sqldaptr;
The USING clause of the FETCH statement names the SQLDA that describes the columns that are to be retrieved. After your program executes a FETCH statement to establish the current rowset, you can use a positioned UPDATE statement with either of the following clauses: v Use WHERE CURRENT OF to modify all of the rows in the current rowset v Use FOR ROW n OF ROWSET to modify row n in the current rowset An example of a positioned UPDATE statement that uses the WHERE CURRENT OF clause is:
EXEC SQL UPDATE DSN8910.EMP SET SALARY = 50000 WHERE CURRENT OF C1 END-EXEC.
When the UPDATE statement is executed, the cursor must be positioned on a row or rowset of the result table. If the cursor is positioned on a row, that row is updated. If the cursor is positioned on a rowset, all of the rows in the rowset are updated. An example of a positioned UPDATE statement that uses the FOR ROW n OF ROWSET clause is:
EXEC SQL UPDATE DSN8910.EMP SET SALARY = 50000 FOR CURSOR C1 FOR ROW 5 OF ROWSET END-EXEC.
When the UPDATE statement is executed, the cursor must be positioned on a rowset of the result table. The specified row (in the example, row 5) of the current rowset is updated. After your program executes a FETCH statement to establish the current rowset, you can use a positioned DELETE statement with either of the following clauses: v Use WHERE CURRENT OF to delete all of the rows in the current rowset v Use FOR ROW n OF ROWSET to delete row n in the current rowset An example of a positioned DELETE statement that uses the WHERE CURRENT OF clause is:
EXEC SQL DELETE FROM DSN8910.EMP WHERE CURRENT OF C1 END-EXEC.
When the DELETE statement is executed, the cursor must be positioned on a row or rowset of the result table. If the cursor is positioned on a row, that row is deleted, and the cursor is positioned before the next row of its result table. If the cursor is positioned on a rowset, all of the rows in the rowset are deleted, and the cursor is positioned before the next rowset of its result table. An example of a positioned DELETE statement that uses the FOR ROW n OF ROWSET clause is:
EXEC SQL DELETE FROM DSN8910.EMP FOR CURSOR C1 FOR ROW 5 OF ROWSET END-EXEC.
When the DELETE statement is executed, the cursor must be positioned on a rowset of the result table. The specified row of the current rowset is deleted, and the cursor remains positioned on that rowset. The deleted row (in the example, row 5 of the rowset) cannot be retrieved or updated.
Related concepts:
Dynamic SQL on page 158
Related tasks:
Executing SQL statements by using a row cursor on page 712
Related reference:
SQL descriptor area (SQLDA) (DB2 SQL)
Specifying the number of rows in a rowset: If you do not explicitly specify the number of rows in a rowset, DB2 implicitly determines the number of rows based on the last fetch request.
About this task
To explicitly set the size of a rowset, use the FOR n ROWS clause in the FETCH statement. If a FETCH statement specifies the ROWSET keyword, and not the FOR n ROWS clause, the size of the rowset is implicitly set to the size of the rowset that was most recently specified in a prior FETCH statement. If a prior FETCH statement did not specify the FOR n ROWS clause or the ROWSET keyword, the size of the current rowset is implicitly set to 1. For examples of rowset positioning, see Table 121 on page 732.
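For example, a fetch that explicitly requests a 10-row rowset might look like this sketch (cursor C1 and the host-variable arrays are assumed to be declared as shown earlier):

EXEC SQL FETCH NEXT ROWSET FROM C1 FOR 10 ROWS
  INTO :HVA_EMPNO, :HVA_LASTNAME, :HVA_SALARY;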
Procedure
To close a rowset cursor: Issue a CLOSE statement.
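For example (assuming the rowset cursor C1 that was declared earlier):

EXEC SQL CLOSE C1;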
Cursor position when FETCH is executed:
Before the first row
On the first row
On the last row
After the last row
On an absolute row number, from before the first row forward or from after the last row backward
On the row that is forward or backward a relative number of rows from the current row
On the current row
On the previous row
On the next row (default)
Example: To use the cursor that is declared in Types of cursors on page 705 to fetch the fifth row of the result table, use a FETCH statement like this:
EXEC SQL FETCH ABSOLUTE +5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
To fetch the fifth row from the end of the result table, use this FETCH statement:
EXEC SQL FETCH ABSOLUTE -5 C1 INTO :HVDEPTNO, :DEPTNAME, :MGRNO;
Related concepts: Types of cursors on page 705 Related reference: FETCH (DB2 SQL)
SENSITIVE STATIC
   All updates and deletes are visible in the result table. Inserts made by other processes are not visible in the result table.
SENSITIVE DYNAMIC
   All committed changes are visible in the result table, including updates, deletes, inserts, and changes in the order of the rows.
Determining the number of rows in the result table for a static scrollable cursor
You can determine how many rows are in the result table of an INSENSITIVE or SENSITIVE STATIC scrollable cursor.
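One way to do this, sketched below under the assumption that the cursor has just been opened or fetched, is to use the DB2_NUMBER_ROWS item of the GET DIAGNOSTICS statement (:HV_TOTROWS is an assumed integer host variable):

EXEC SQL GET DIAGNOSTICS :HV_TOTROWS = DB2_NUMBER_ROWS;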
A hole becomes visible to a cursor when a cursor operation returns a non-zero SQLCODE. The point at which a hole becomes visible depends on the following factors:
v Whether the scrollable cursor creates the hole
v Whether the FETCH statement is FETCH SENSITIVE or FETCH INSENSITIVE
If the scrollable cursor creates the hole, the hole is visible when you execute a FETCH statement for the row that contains the hole. The FETCH statement can be FETCH INSENSITIVE or FETCH SENSITIVE.
If an update or delete operation outside the scrollable cursor creates the hole, the hole is visible at the following times:
v If you execute a FETCH SENSITIVE statement for the row that contains the hole, the hole is visible when you execute the FETCH statement.
v If you execute a FETCH INSENSITIVE statement, the hole is not visible when you execute the FETCH statement. DB2 returns the row as it was before the update or delete operation occurred. However, if you follow the FETCH INSENSITIVE statement with a positioned UPDATE or DELETE statement, the hole becomes visible.
Holes in the result table of a scrollable cursor: A hole in the result table means that the result table does not shrink to fill the space of deleted rows. It also does not shrink to fill the space of rows that have been updated and no longer satisfy the search condition. You cannot access a delete or update hole. However, you can remove holes in specific situations.
In some situations, you might not be able to fetch a row from the result table of a scrollable cursor, depending on how the cursor is declared:
v Scrollable cursors that are declared as INSENSITIVE or SENSITIVE STATIC follow a static model, which means that DB2 determines the size of the result table and the order of the rows when you open the cursor. Deleting or updating rows after a static cursor is open can result in holes in the result table. See Removing a delete hole or update hole on page 723.
v Scrollable cursors that are declared as SENSITIVE DYNAMIC follow a dynamic model, which means that the size and contents of the result table, and the order of the rows, can change after you open the cursor. A dynamic cursor scrolls directly on the base table. If the current row of the cursor is deleted or if it is updated so that it no longer satisfies the search condition, and the next cursor operation is FETCH CURRENT, then DB2 issues an SQL warning.
The following examples demonstrate how delete and update holes can occur when you use a SENSITIVE STATIC scrollable cursor.
Creating a delete hole with a static scrollable cursor: Suppose that table A consists of one integer column, COL1, which has the values shown in the following figure.
Now suppose that you declare the following SENSITIVE STATIC scrollable cursor, which you use to delete rows from A:
EXEC SQL DECLARE C3 SENSITIVE STATIC SCROLL CURSOR FOR SELECT COL1 FROM A FOR UPDATE OF COL1;
The positioned delete statement creates a delete hole, as shown in the following figure.
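A positioned DELETE of the following form (an illustrative sketch; it assumes that C3 has been opened and fetched to the row that is to be deleted) creates such a hole:

EXEC SQL DELETE FROM A WHERE CURRENT OF C3;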
After you execute the positioned delete statement, the third row is deleted from the result table, but the result table does not shrink to fill the space that the deleted row creates.
Creating an update hole with a static scrollable cursor: Suppose that you declare the following SENSITIVE STATIC scrollable cursor, which you use to update rows in A:
EXEC SQL DECLARE C4 SENSITIVE STATIC SCROLL CURSOR FOR SELECT COL1 FROM A WHERE COL1<6;
The searched UPDATE statement creates an update hole, as shown in the following figure.
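A searched UPDATE along the following lines (an illustrative sketch; the values are arbitrary, chosen only so that the updated row no longer satisfies COL1<6) creates such a hole:

EXEC SQL UPDATE A SET COL1 = COL1 + 10 WHERE COL1 = 5;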
After you execute the searched UPDATE statement, the last row no longer qualifies for the result table, but the result table does not shrink to fill the space that the disqualified row creates.
Procedure
To use dynamic buffer allocation for LOB and XML data:
1. Use an initial FETCH WITH CONTINUE to fetch data into a pre-allocated buffer of a moderate size.
2. If the value is too large to fit in the buffer, use the length information that is returned by DB2 to allocate the appropriate amount of storage.
3. Use a single FETCH CURRENT CONTINUE statement to retrieve the remainder of the data.
Example
Suppose that table T1 was created with the following statement:
CREATE TABLE T1 (C1 INT, C2 CLOB(100M), C3 CLOB(32K), C4 XML);
A row exists in T1 where C1 contains a valid integer, C2 contains 10MB of data, C3 contains 32KB of data, and C4 contains 4MB of data. Now, suppose that you declare CURSOR1, prepare and describe statement DYNSQLSTMT1 with descriptor sqlda, and open CURSOR1 with the following statements:
EXEC SQL DECLARE CURSOR1 CURSOR FOR DYNSQLSTMT1;
EXEC SQL PREPARE DYNSQLSTMT1 FROM 'SELECT * FROM T1';
EXEC SQL DESCRIBE DYNSQLSTMT1 INTO DESCRIPTOR :SQLDA;
EXEC SQL OPEN CURSOR1;
Next, suppose that you allocate moderately sized buffers (32 KB for each CLOB or XML column) and set data pointers and lengths in SQLDA. Then, you use the following FETCH WITH CONTINUE statement:
EXEC SQL FETCH WITH CONTINUE CURSOR1 INTO DESCRIPTOR :SQLDA;
Because C2 and C4 contain data that do not fit in the buffer, some of the data is truncated. Your application can use the information that DB2 returns to allocate large enough buffers for the remaining data and reset the data pointers and length fields in SQLDA. At that point, you can resume the fetch and complete the process with the following FETCH CURRENT CONTINUE statement and CLOSE CURSOR statement:
EXEC SQL FETCH CURRENT CONTINUE CURSOR1 INTO DESCRIPTOR :SQLDA; EXEC SQL CLOSE CURSOR1;
The application needs to concatenate the two returned pieces of the data value. One technique is to move the first piece of data to the dynamically-allocated larger buffer before the FETCH CONTINUE. Set the SQLDATA pointer in the SQLDA structure to point immediately after the last byte of this truncated value. DB2 then writes the remaining data to this location and thus completes the concatenation.
Moving data through fixed-size buffers when fetching XML and LOB data
If you use the WITH CONTINUE clause, DB2 returns information about which data does not fit in the buffer. Your application can then use repeated FETCH CURRENT CONTINUE operations to effectively stream large XML and LOB data through a fixed-size buffer, one piece at a time.
Procedure
To use fixed buffer allocation for LOB and XML data, perform the following steps: 1. Use an initial FETCH WITH CONTINUE to fetch data into a pre-allocated buffer of a moderate size. 2. If the value is too large to fit in the buffer, use as many FETCH CONTINUE statements as necessary to process all of the data through a fixed buffer. After each FETCH operation, check whether a column was truncated by first examining the SQLWARN1 field in the returned SQLCA. If that field contains a 'W' value, at least one column in the returned row has been truncated. To then determine if a particular LOB or XML column was truncated, your application must compare the value that is returned in the length field with the declared length of the host variable. If a column is truncated, continue to use FETCH CONTINUE statements until all of the data has been retrieved. After you fetch each piece of the data, move it out of the buffer to make way for the next fetch. Your application can write the pieces to an output file or reconstruct the entire data value in a buffer above the 2-GB bar.
Results
Example: Suppose that table T1 was created with the following statement:
CREATE TABLE T1 (C1 INT, C2 CLOB(100M), C3 CLOB(32K), C4 XML);
A row exists in T1 where C2 contains 10 MB of data. Now, suppose that you declare a 32 KB CLOB host variable, CLOBHV:
EXEC SQL BEGIN DECLARE SECTION;
DECLARE CLOBHV SQL TYPE IS CLOB(32767);
EXEC SQL END DECLARE SECTION;
Next, suppose that you use the following statements to declare and open CURSOR1 and to FETCH WITH CONTINUE:
EXEC SQL DECLARE CURSOR1 CURSOR FOR SELECT C2 FROM T1; EXEC SQL OPEN CURSOR1; EXEC SQL FETCH WITH CONTINUE CURSOR1 INTO :CLOBHV;
As each piece of the data value is fetched, move it from the buffer to the output file. Because the 10 MB value in C2 does not fit into the 32 KB buffer, some of the data is truncated. Your application can loop through the following FETCH CURRENT CONTINUE:
EXEC SQL FETCH CURRENT CONTINUE CURSOR1 INTO :CLOBHV;
After each FETCH operation, you can determine if the data was truncated by first checking if the SQLWARN1 field in the returned SQLCA contains a 'W' value. If so, then check if the length value, which is returned in CLOBHV_LENGTH, is greater than the declared length of 32767. (CLOBHV_LENGTH is declared as part of the
precompiler expansion of the CLOBHV declaration.) If the value is greater, that value has been truncated and more data can be retrieved with the next FETCH CONTINUE operation. When all of the data has moved to the output file, you can close the cursor:
EXEC SQL CLOSE CURSOR1;
DB2_SQL_ATTR_CURSOR_ROWSET
   Indicates whether the cursor can use rowset positioning (Y or N)
DB2_SQL_ATTR_CURSOR_SCROLLABLE
   Indicates whether the cursor is scrollable (Y or N)
DB2_SQL_ATTR_CURSOR_SENSITIVITY
   Indicates whether the cursor is insensitive or sensitive to changes that are made by other processes (I or S)
DB2_SQL_ATTR_CURSOR_TYPE
   Indicates whether the cursor is forward (F), static (S for INSENSITIVE or SENSITIVE STATIC), or dynamic (D for SENSITIVE DYNAMIC)
For more information about the GET DIAGNOSTICS statement, see Checking the execution of SQL statements by using the GET DIAGNOSTICS statement on page 208.
/* result table.                                          */
/**********************************************************/
EXEC SQL OPEN C1;
EXEC SQL FETCH AFTER FROM C1;
/**********************************************************/
/* Fetch rows backward until all rows are fetched.        */
/**********************************************************/
while(SQLCODE==0) {
  EXEC SQL FETCH PRIOR FROM C1 INTO :hv_deptname;
  .
  .
  .
}
EXEC SQL CLOSE C1;
Using a ROWID or identity column: If your table contains a ROWID column or an identity column, you can use that column to rapidly retrieve the rows in reverse order. When you perform the original SELECT, you can store the ROWID or identity column value for each row you retrieve. Then, to retrieve the values in reverse order, you can execute SELECT statements with a WHERE clause that compares the ROWID or identity column value to each stored value. For example, suppose you add ROWID column DEPTROWID to table DSN8910.DEPT. You can use code like the following example to select all department names, then retrieve the names in reverse order:
/**************************/
/* Declare host variables */
/**************************/
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS ROWID hv_dept_rowid;
char hv_deptname[37];
EXEC SQL END DECLARE SECTION;
/***************************/
/* Declare other variables */
/***************************/
struct rowid_struct {
  short int length;
  char data[40];
};                                      /* ROWID variable structure          */
struct rowid_struct rowid_array[200];   /* Array to hold retrieved ROWIDs.   */
                                        /* Assume no more than 200 rows will */
                                        /* be retrieved.                     */
short int i,j,n;
/***********************************************/
/* Declare cursor to retrieve department names */
/***********************************************/
EXEC SQL DECLARE C1 CURSOR FOR
  SELECT DEPTNAME, DEPTROWID
  FROM DSN8910.DEPT;
.
.
.
/**********************************************************/
/* Retrieve the department name and ROWID from DEPT table */
/* and store the ROWID in an array.                       */
/**********************************************************/
EXEC SQL OPEN C1;
i=0;
while(SQLCODE==0) {
  EXEC SQL FETCH C1 INTO :hv_deptname, :hv_dept_rowid;
  rowid_array[i].length=hv_dept_rowid.length;
  for(j=0;j<hv_dept_rowid.length;j++)
    rowid_array[i].data[j]=hv_dept_rowid.data[j];
  i++;
}
EXEC SQL CLOSE C1;
n=i-1;   /* Get the number of array elements */
/**********************************************************/
/* Use the ROWID values to retrieve the department names  */
/* in reverse order.                                      */
/**********************************************************/
for(i=n;i>=0;i--) {
  hv_dept_rowid.length=rowid_array[i].length;
  for(j=0;j<hv_dept_rowid.length;j++)
    hv_dept_rowid.data[j]=rowid_array[i].data[j];
  EXEC SQL SELECT DEPTNAME INTO :hv_deptname
    FROM DSN8910.DEPT
    WHERE DEPTROWID=:hv_dept_rowid;
}
Procedure
To update previously retrieved data:
1. Declare the cursor with the SENSITIVE STATIC SCROLL keywords.
2. Open the cursor.
3. Execute a FETCH statement to position the cursor at the end of the result table.
4. Execute FETCH statements that move the cursor backward, until you reach the row that you want to update.
5. Execute the UPDATE WHERE CURRENT OF statement to update the current row.
6. Repeat steps 4 and 5 until you have updated all the rows that you need to.
7. When you have retrieved and updated all the data, close the cursor.
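A compressed sketch of these steps (the cursor name, column list, and host variables are illustrative):

EXEC SQL DECLARE C1 SENSITIVE STATIC SCROLL CURSOR FOR
  SELECT EMPNO, SALARY FROM DSN8910.EMP
  FOR UPDATE OF SALARY;
EXEC SQL OPEN C1;
EXEC SQL FETCH AFTER FROM C1;                 /* position after the last row          */
EXEC SQL FETCH SENSITIVE PRIOR FROM C1
  INTO :hv_empno, :hv_salary;                 /* repeat until you reach the target row */
EXEC SQL UPDATE DSN8910.EMP SET SALARY = :hv_new_salary
  WHERE CURRENT OF C1;
EXEC SQL CLOSE C1;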
Table 121. Interaction between row and rowset positioning for a scrollable cursor (continued)

Keywords in FETCH statement                   Cursor position when FETCH is executed
CURRENT ROWSET                                On a rowset of size 5, consisting of rows 1, 2, 3, 4, and 5
CURRENT                                       On row 1
NEXT (default)                                On row 2
NEXT ROWSET                                   On a rowset of size 1, consisting of row 3
NEXT ROWSET FOR 3 ROWS                        On a rowset of size 3, consisting of rows 4, 5, and 6
NEXT ROWSET                                   On a rowset of size 3, consisting of rows 7, 8, and 9
LAST                                          On row 15
LAST ROWSET FOR 2 ROWS                        On a rowset of size 2, consisting of rows 14 and 15
PRIOR ROWSET                                  On a rowset of size 2, consisting of rows 12 and 13
ABSOLUTE 2                                    On row 2
ROWSET STARTING AT ABSOLUTE 2 FOR 3 ROWS      On a rowset of size 3, consisting of rows 2, 3, and 4
RELATIVE 2                                    On row 4
ROWSET STARTING AT ABSOLUTE 2 FOR 4 ROWS      On a rowset of size 4, consisting of rows 2, 3, 4, and 5
RELATIVE -1                                   On row 1
ROWSET STARTING AT ABSOLUTE 3 FOR 2 ROWS      On a rowset of size 2, consisting of rows 3 and 4
ROWSET STARTING AT RELATIVE 4                 On a rowset of size 2, consisting of rows 7 and 8
PRIOR                                         On row 6
ROWSET STARTING AT ABSOLUTE 13 FOR 5 ROWS     On a rowset of size 3, consisting of rows 13, 14, and 15
FIRST ROWSET                                  On a rowset of size 5, consisting of rows 1, 2, 3, 4, and 5
WORKDEPT, JOB FROM DSN8910.EMP WHERE WORKDEPT = D11 FOR UPDATE OF JOB END-EXEC. ************************************************** * Open the cursor * ************************************************** EXEC SQL OPEN THISEMP END-EXEC. ************************************************** * Indicate what action to take when all rows * * in the result table have been fetched. * ************************************************** EXEC SQL WHENEVER NOT FOUND GO TO CLOSE-THISEMP END-EXEC. ************************************************** * Fetch a row to position the cursor. * ************************************************** EXEC SQL FETCH FROM THISEMP INTO :EMP-NUM, :NAME2, :DEPT, :JOB-NAME END-EXEC. ************************************************** * Update the row where the cursor is positioned. * ************************************************** EXEC SQL UPDATE DSN8910.EMP SET JOB = :NEW-JOB WHERE CURRENT OF THISEMP END-EXEC. . . . ************************************************** * Branch back to fetch and process the next row. * ************************************************** . . . ************************************************** * Close the cursor * ************************************************** CLOSE-THISEMP. EXEC SQL CLOSE THISEMP END-EXEC.
The following example shows how to retrieve data backward with a cursor.
************************************************** * Declare a cursor to retrieve the data backward * * from the EMP table. The cursor has access to * * changes by other processes. * ************************************************** EXEC SQL DECLARE THISEMP SENSITIVE STATIC SCROLL CURSOR FOR SELECT EMPNO, LASTNAME, WORKDEPT, JOB FROM DSN8910.EMP END-EXEC. ************************************************** * Open the cursor * ************************************************** EXEC SQL OPEN THISEMP END-EXEC. **************************************************
* Indicate what action to take when all rows * * in the result table have been fetched. * ************************************************** EXEC SQL WHENEVER NOT FOUND GO TO CLOSE-THISEMP END-EXEC. ************************************************** * Position the cursor after the last row of the * * result table. This FETCH statement cannot * * include the SENSITIVE or INSENSITIVE keyword * * and cannot contain an INTO clause. * ************************************************** EXEC SQL FETCH AFTER FROM THISEMP END-EXEC. ************************************************** * Fetch the previous row in the table. * ************************************************** EXEC SQL FETCH SENSITIVE PRIOR FROM THISEMP INTO :EMP-NUM, :NAME2, :DEPT, :JOB-NAME END-EXEC. ************************************************** * Check that the fetched row is not a hole * * (SQLCODE +222). If not, print the contents. * ************************************************** IF SQLCODE IS GREATER THAN OR EQUAL TO 0 AND SQLCODE IS NOT EQUAL TO +100 AND SQLCODE IS NOT EQUAL TO +222 THEN PERFORM PRINT-RESULTS. . . . ************************************************** * Branch back to fetch the previous row. * ************************************************** . . . ************************************************** * Close the cursor * ************************************************** CLOSE-THISEMP. EXEC SQL CLOSE THISEMP END-EXEC.
The following example shows how to update an entire rowset with a cursor.
************************************************** * Declare a rowset cursor to update the JOB * * column of the EMP table. * ************************************************** EXEC SQL DECLARE EMPSET CURSOR WITH ROWSET POSITIONING FOR SELECT EMPNO, LASTNAME, WORKDEPT, JOB FROM DSN8910.EMP WHERE WORKDEPT = D11 FOR UPDATE OF JOB END-EXEC. ************************************************** * Open the cursor. * ************************************************** EXEC SQL OPEN EMPSET END-EXEC. ************************************************** * Indicate what action to take when end-of-data * * occurs in the rowset being fetched. * **************************************************
EXEC SQL WHENEVER NOT FOUND GO TO CLOSE-EMPSET END-EXEC. ************************************************** * Fetch next rowset to position the cursor. * ************************************************** EXEC SQL FETCH NEXT ROWSET FROM EMPSET FOR :SIZE-ROWSET ROWS INTO :HVA-EMPNO, :HVA-LASTNAME, :HVA-WORKDEPT, :HVA-JOB END-EXEC. ************************************************** * Update rowset where the cursor is positioned. * ************************************************** UPDATE-ROWSET. EXEC SQL UPDATE DSN8910.EMP SET JOB = :NEW-JOB WHERE CURRENT OF EMPSET END-EXEC. END-UPDATE-ROWSET. . . . ************************************************** * Branch back to fetch the next rowset. * ************************************************** . . . ************************************************** * Update the remaining rows in the current * * rowset and close the cursor. * ************************************************** CLOSE-EMPSET. PERFORM UPDATE-ROWSET. EXEC SQL CLOSE EMPSET END-EXEC.
The following example shows how to update specific rows with a rowset cursor.
***************************************************** * Declare a static scrollable rowset cursor. * ***************************************************** EXEC SQL DECLARE EMPSET SENSITIVE STATIC SCROLL CURSOR WITH ROWSET POSITIONING FOR SELECT EMPNO, WORKDEPT, JOB FROM DSN8910.EMP FOR UPDATE OF JOB END-EXEC. ***************************************************** * Open the cursor. * ***************************************************** EXEC SQL OPEN EMPSET END-EXEC. ***************************************************** * Fetch next rowset to position the cursor. * ***************************************************** EXEC SQL FETCH SENSITIVE NEXT ROWSET FROM EMPSET FOR :SIZE-ROWSET ROWS INTO :HVA-EMPNO, :HVA-WORKDEPT :INDA-WORKDEPT, :HVA-JOB :INDA-JOB END-EXEC. *****************************************************
736
* Process fetch results if no error and no hole. * ***************************************************** IF SQLCODE >= 0 EXEC SQL GET DIAGNOSTICS :HV-ROWCNT = ROW_COUNT END-EXEC PERFORM VARYING N FROM 1 BY 1 UNTIL N > HV-ROWCNT IF INDA-WORKDEPT(N) NOT = -3 EVALUATE HVA-WORKDEPT(N) WHEN (D11) PERFORM UPDATE-ROW WHEN (E11) PERFORM DELETE-ROW END-EVALUATE END-IF END-PERFORM IF SQLCODE = 100 GO TO CLOSE-EMPSET END-IF ELSE EXEC SQL GET DIAGNOSTICS :HV-NUMCOND = NUMBER END-EXEC PERFORM VARYING N FROM 1 BY 1 UNTIL N > HV-NUMCOND EXEC SQL GET DIAGNOSTICS CONDITION :N :HV-SQLCODE = DB2_RETURNED_SQLCODE, :HV-ROWNUM = DB2_ROW_NUMBER END-EXEC DISPLAY "SQLCODE = " HV-SQLCODE DISPLAY "ROW NUMBER = " HV-ROWNUM END-PERFORM GO TO CLOSE-EMPSET END-IF. . . . ***************************************************** * Branch back to fetch and process * * the next rowset. * ***************************************************** . . . ***************************************************** * Update row N in current rowset. * ***************************************************** UPDATE-ROW. EXEC SQL UPDATE DSN8910.EMP SET JOB = :NEW-JOB FOR CURSOR EMPSET FOR ROW :N OF ROWSET END-EXEC. END-UPDATE-ROW. ***************************************************** * Delete row N in current rowset. * ***************************************************** DELETE-ROW. EXEC SQL DELETE FROM DSN8910.EMP WHERE CURRENT OF EMPSET FOR ROW :N OF ROWSET END-EXEC. END-DELETE-ROW. . . . ***************************************************** * Close the cursor. * ***************************************************** CLOSE-EMPSET. EXEC SQL CLOSE EMPSET END-EXEC.
The following code uses the SELECT from INSERT statement to retrieve the value of the ROWID column from a new row that is inserted into the EMPLOYEE table. This value is then used to reference that row for the update of the SALARY column.
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS ROWID hv_emp_rowid;
short hv_dept, hv_empno;
char hv_name[30];
decimal(7,2) hv_salary;
EXEC SQL END DECLARE SECTION;
...
EXEC SQL SELECT EMP_ROWID INTO :hv_emp_rowid
  FROM FINAL TABLE (INSERT INTO EMPLOYEE
    VALUES (DEFAULT, :hv_empno, :hv_name, :hv_salary, :hv_dept));
EXEC SQL UPDATE EMPLOYEE
  SET SALARY = SALARY + 1200
  WHERE EMP_ROWID = :hv_emp_rowid;
EXEC SQL COMMIT;
For DB2 to be able to use direct row access for the update operation, the SELECT from INSERT statement and the UPDATE statement must execute within the same unit of work. If these statements execute in different units of work, the ROWID value for the inserted row might change due to a REORG of the table space before the update operation. Alternatively, you can use a SELECT from MERGE statement. The MERGE statement performs INSERT and UPDATE operations as one coordinated statement. ROWID columns as keys: If you define a column in a table to have the ROWID data type, DB2 provides a unique value for each row in the table only if you define the column as GENERATED ALWAYS. The purpose of the value in the ROWID column is to uniquely identify rows in the table. You can use a ROWID column to write queries that navigate directly to a row, which can be useful in situations where high performance is a requirement. This direct navigation, without using an index or scanning the table space, is called
direct row access. In addition, a ROWID column is a requirement for tables that contain LOB columns. This topic discusses the use of a ROWID column in direct row access. Requirement: To use direct row access, you must use a retrieved ROWID value before you commit. When your application commits, it releases its claim on the table space. After the commit, a REORG on your table space might execute and change the physical location of the rows. Restriction: In general, you cannot use a ROWID column as a key that is to be used as a single column value across multiple tables. The ROWID value for a particular row in a table might change over time due to a REORG of the table space. In particular, you cannot use a ROWID column as part of a parent key or foreign key. The value that you retrieve from a ROWID column is a varying-length character value that is not monotonically ascending or descending (the value is not always increasing or not always decreasing). Therefore, a ROWID column does not provide suitable values for many types of entity keys, such as order numbers or employee numbers. Specifying direct row access by using RIDs: When you specify a particular row ID, or RID, DB2 can navigate directly to the specified row for those queries that qualify for direct row access. Before you begin this task, ensure that the query qualifies for direct row access. To qualify, the search condition must be a Boolean term, stage 1 predicate that fits one of the following criteria: v A simple Boolean term predicate of the following form:
RID (table designator) = noncolumn expression
Where the noncolumn expression contains a result of a RID function.
v A compound Boolean term that combines several simple predicates by using the AND operator, where one of the simple predicates fits the first criteria.
To specify direct row access by using RIDs, specify the RID function in the search condition of a SELECT, DELETE, or UPDATE statement. The RID function returns the RID of a row, which you can use to uniquely identify a row.
Restriction: Because DB2 might reuse RID numbers when the REORG utility is run, the RID function might return different values when invoked for a row multiple times. If you specify a RID and DB2 cannot locate the row through direct row access, DB2 does not switch to another access method. Instead, DB2 returns no rows.
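For instance, a query of the following form (a sketch; :hv_rid is an assumed host variable that holds a previously retrieved RID value, and the host variables and table are illustrative) qualifies for direct row access:

EXEC SQL SELECT LASTNAME, SALARY
  INTO :hv_lastname, :hv_salary
  FROM DSN8910.EMP
  WHERE RID(DSN8910.EMP) = :hv_rid;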
ROWID columns
A ROWID column uniquely identifies each row in a table. This column enables queries to be written that navigate directly to a row in the table because the column implicitly contains the location of the row.
You can define a ROWID column as either GENERATED BY DEFAULT or GENERATED ALWAYS: v If you define the column as GENERATED BY DEFAULT, you can insert a value. DB2 provides a default value if you do not supply one. However, to be able to insert an explicit value (by using the INSERT statement with the VALUES clause), you must create a unique index on that column. v If you define the column as GENERATED ALWAYS (which is the default), DB2 always generates a unique value for the column. You cannot insert data into that column. In this case, DB2 does not require an index to guarantee unique values. For more information, see Rules for inserting data into a ROWID column on page 640.
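For example, a table definition along these lines (hypothetical table and column names) gives DB2 full control of the ROWID values because the column is GENERATED ALWAYS:

CREATE TABLE DEPT_COPY
  (DEPTNO      CHAR(3)     NOT NULL,
   DEPTNAME    VARCHAR(36),
   DEPT_ROWID  ROWID       NOT NULL GENERATED ALWAYS);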
These statements use host variables of data type large object locator (LOB locator). LOB locators let you manipulate LOB data without moving the LOB data into host variables. By using LOB locators, you need much smaller amounts of memory for your programs.
You can also use LOB file reference variables when you are working with LOB data. You can use LOB file reference variables to insert LOB data from a file into a DB2 table or to retrieve LOB data from a DB2 table.
Sample LOB applications: The following table lists the sample programs that DB2 provides to assist you in writing applications to manipulate LOB data. All programs reside in data set DSN910.SDSNSAMP.
Table 122. LOB samples shipped with DB2 Member that contains source code DSNTEJ7
Language JCL
Function Demonstrates how to create a table with LOB columns, an auxiliary table, and an auxiliary index. Also demonstrates how to load LOB data that is 32 KB or less into a LOB table space. Demonstrates the use of LOB locators and UPDATE statements to move binary data into a column of type BLOB. Demonstrates how to use a locator to manipulate data of type CLOB. Demonstrates how to allocate an SQLDA for rows that include LOB data and use that SQLDA to describe an input statement and fetch data from LOB columns.
DSN8DLPL
DSN8DLRV DSNTEP2
C PL/I
Related concepts: LOB file reference variables on page 751 Phase 7: Accessing LOB data (DB2 Installation and Migration) Related tasks: Saving storage when manipulating LOBs by using LOB locators on page 747
LOB host variable, LOB locator, and LOB file reference variable declarations
When you write applications to manipulate LOB data, you need to declare host variables to hold the LOB data or LOB locator. Alternatively, you need to declare LOB file reference variables to point to the LOB data.
You can declare LOB host variables and LOB locators in assembler, C, C++, COBOL, Fortran, and PL/I. Additionally, you can declare LOB file reference variables in assembler, C, C++, COBOL, and PL/I. REXX does not support LOB host variables, LOB locators, or LOB file reference variables.
For each host variable, locator, or file reference variable of SQL type BLOB, CLOB, or DBCLOB that you declare, DB2 generates an equivalent declaration that uses host language data types. When you refer to a LOB host variable, LOB locator, or LOB file reference variable in an SQL statement, you must use the variable that you specified in the SQL type declaration. When you refer to the host variable in a host language statement, you must use the variable that DB2 generates.
DB2 supports host variable declarations for LOBs with lengths of up to 2 GB - 1. However, the size of a LOB host variable is limited by the restrictions of the host language and the amount of storage available to the program.
Declare LOB host variables that are referenced by the precompiler in SQL statements by using the SQL TYPE IS BLOB, SQL TYPE IS CLOB, or SQL TYPE IS DBCLOB keywords. LOB host variables that are referenced only by an SQL statement that uses a DESCRIPTOR should use the same form as declared by the precompiler. In this
form, the LOB host-variable-array consists of a 31-bit length, followed by the data, followed by another 31-bit length, followed by the data, and so on. The 31-bit length must be fullword aligned. Example: Suppose that you want to allocate a LOB array of 10 elements, each with a length of 5 bytes. You need to allocate the following bytes for each element, for a total of 120 bytes: v 4 bytes for the 31-bit integer v 5 bytes for the data v 3 bytes to force fullword alignment The following examples show you how to declare LOB host variables in each supported language. In each table, the left column contains the declaration that you code in your application program. The right column contains the declaration that DB2 generates.
clob_loc SQL TYPE IS CLOB_LOCATOR
dbclob_loc SQL TYPE IS DBCLOB_LOCATOR
blob_loc SQL TYPE IS BLOB_LOCATOR
clob_file SQL TYPE IS CLOB_FILE
dbclob_file SQL TYPE IS DBCLOB_FILE
blob_file SQL TYPE IS BLOB_FILE
Notes:
1. Because assembler language allows character declarations of no more than 65535 bytes, DB2 separates the host language declarations for BLOB and CLOB host variables that are longer than 65535 bytes into two parts. 2. Because assembler language allows graphic declarations of no more than 65534 bytes, DB2 separates the host language declarations for DBCLOB host variables that are longer than 65534 bytes into two parts.
Table 124. Examples of C language variable declarations You declare this variable SQL TYPE IS BLOB (1M) blob_var; DB2 generates this variable struct { unsigned long length; char data[1048576]; } blob_var; struct { unsigned long length; char data[409600]; } clob_var; struct { unsigned long length; sqldbchar data[4096000]; } dbclob_var; unsigned long blob_loc; unsigned long clob_loc; unsigned long dbclob_loc; #pragma pack(full) struct { unsigned long name_length; unsigned long data_length; unsigned long file_options; char name??(255??); } FBLOBhv ; #pragma pack(reset) #pragma pack(full) struct { unsigned long name_length; unsigned long data_length; unsigned long file_options; char name??(255??); } FCLOBhv ; #pragma pack(reset) #pragma pack(full) struct { unsigned long name_length; unsigned long data_length; unsigned long file_options; char name??(255??); } FDBCLOBhv ; #pragma pack(reset)
SQL TYPE IS BLOB_LOCATOR blob_loc; SQL TYPE IS CLOB_LOCATOR clob_loc; SQL TYPE IS DBCLOB_LOCATOR dbclob_loc; SQL TYPE IS BLOB_FILE FBLOBhv;
| Table 125. Examples of COBOL variable declarations by the DB2 precompiler | You declare this variable | 01 BLOB-VAR USAGE IS | | SQL TYPE IS BLOB(1M). | | | | | | | | | | | | 01 CLOB-VAR USAGE IS SQL TYPE IS CLOB(40000K). | | | | | | | | | | | | 01 DBCLOB-VAR USAGE IS SQL TYPE IS DBCLOB(4000K). | | | | | | | | | | | | | | | 01 BLOB-LOC USAGE IS SQL TYPE IS BLOB-LOCATOR. | | 01 CLOB-LOC USAGE IS SQL TYPE IS CLOB-LOCATOR. | | 01 DBCLOB-LOC USAGE IS SQL TYPE IS DBCLOB-LOCATOR. | | 01 BLOB-FILE USAGE IS SQL TYPE IS BLOB-FILE. | | | | | 01 CLOB-FILE USAGE IS SQL TYPE IS CLOB-FILE. | | | | | 01 DBCLOB-FILE USAGE IS SQL TYPE IS DBCLOB-FILE. | | | |
DB2 precompiler generates this variable BLOB-VAR. BLOB-VAR-LENGTH PIC 9(9) COMP. 02 BLOB-VAR-DATA. 49 FILLER PIC X(32767).1 49 FILLER PIC X(32767). Repeat 30 times . . . 49 FILLER PIC X(1048576-32*32767). 02 01 CLOB-VAR. CLOB-VAR-LENGTH PIC 9(9) COMP. 02 CLOB-VAR-DATA. 49 FILLER PIC X(32767).1 49 FILLER PIC X(32767). Repeat 1248 times . . . 49 FILLER PIC X(40960000-1250*32767). 02 01 DBCLOB-VAR. DBCLOB-VAR-LENGTH PIC 9(9) COMP. 02 DBCLOB-VAR-DATA. 49 FILLER PIC G(32767) USAGE DISPLAY-1.2 49 FILLER PIC G(32767) USAGE DISPLAY-1. Repeat 123 times . . . 49 FILLER PIC G(20480000-125*32767) USAGE DISPLAY-1. 02 01 01 01 01 49 49 49 49 01 49 49 49 49 01 49 49 49 49 BLOB-LOC PIC S9(9) COMP. CLOB-LOC PIC S9(9) COMP. DBCLOB-LOC PIC S9(9) COMP. BLOB-FILE. BLOB-NAME-LENGTH PIC S9(9) COMP-5 SYNC. BLOB-FILE-DATA-LENGTH PIC S9(9) COMP-5. BLOB-FILE-FILE-OPTION PIC S9(9) COMP-5. BLOB-FILE-NAME PIC X(255) . CLOB-FILE. CLOB-NAME-LENGTH PIC S9(9) COMP-5 SYNC. CLOB-FILE-DATA-LENGTH PIC S9(9) COMP-5. CLOB-FILE-FILE-OPTION PIC S9(9) COMP-5. CLOB-FILE-NAME PIC X(255) . DBCLOB-FILE. DBCLOB-NAME-LENGTH PIC S9(9) COMP-5 SYNC. DBCLOB-FILE-DATA-LENGTH PIC S9(9) COMP-5. DBCLOB-FILE-FILE-OPTION PIC S9(9) COMP-5. DBCLOB-FILE-NAME PIC X(255) . 01
| | | | | | | | | |
Table 125. Examples of COBOL variable declarations by the DB2 precompiler (continued) You declare this variable Notes: 1. Because the COBOL language allows character declarations of no more than 32767 bytes, for BLOB or CLOB host variables that are greater than 32767 bytes in length, DB2 creates multiple host language declarations of 32767 or fewer bytes. 2. Because the COBOL language allows graphic declarations of no more than 32767 double-byte characters, for DBCLOB host variables that are greater than 32767 double-byte characters in length, DB2 creates multiple host language declarations of 32767 or fewer double-byte characters. DB2 precompiler generates this variable
| Table 127. Examples of PL/I variable declarations by the DB2 precompiler (continued) | You declare this variable | DCL CLOB_VAR | SQL TYPE IS CLOB (40000K); | | | | | | DCL DBCLOB_VAR SQL TYPE IS DBCLOB (4000K); | | | | | | | DCL blob_loc SQL TYPE IS BLOB_LOCATOR; | | DCL clob_loc SQL TYPE IS CLOB_LOCATOR; | | DCL dbclob_loc SQL TYPE IS DBCLOB_LOCATOR; | | DCL blob_file SQL TYPE IS BLOB_FILE; | | | | | | DCL clob_file SQL TYPE IS CLOB_FILE; | | | | | | DCL dbclob_file SQL TYPE IS DBCLOB_FILE; | | | | |
DB2 precompiler generates this variable DCL 1 CLOB_VAR, 2 CLOB_VAR_LENGTH FIXED BINARY(31), 2 CLOB_VAR_DATA,1 3 CLOB_VAR_DATA1(1250) CHARACTER(32767), 3 CLOB_VAR_DATA2 CHARACTER(40960000-1250*32767); DCL 1 DBCLOB_VAR, 2 DBCLOB_VAR_LENGTH FIXED BINARY(31), 2 DBCLOB_VAR_DATA,2 3 DBCLOB_VAR_DATA1(250) GRAPHIC(16383), 3 DBCLOB_VAR_DATA2 GRAPHIC(4096000-250*16383); DCL blob_loc FIXED BINARY(31); DCL clob_loc FIXED BINARY(31); DCL dbclob_loc FIXED BINARY(31); DCL 1 blob_file, blob_file_NAME_LENGTH BIN FIXED(31) blob_file_DATA_LENGTH BIN FIXED(31), blob_file_FILE_OPTIONS BIN FIXED(31), blob_file_NAME CHAR(255) ; clob_file, clob_file_NAME_LENGTH BIN FIXED(31) clob_file_DATA_LENGTH BIN FIXED(31), clob_file_FILE_OPTIONS BIN FIXED(31), clob_file_NAME CHAR(255) ; dbclob_file, dbclob_file_NAME_LENGTH BIN FIXED(31) dbclob_file_DATA_LENGTH BIN FIXED(31), dbclob_file_FILE_OPTIONS BIN FIXED(31), dbclob_file_NAME CHAR(255) ;
| Notes: | 1. For BLOB or CLOB host variables that are greater than 32767 bytes in length, DB2 creates PL/I host language declarations in the following way: | v If the length of the LOB is greater than 32767 bytes and evenly divisible by 32767, DB2 creates an array of | 32767-byte strings. The dimension of the array is length/32767. | v If the length of the LOB is greater than 32767 bytes but not evenly divisible by 32767, DB2 creates two | declarations: The first is an array of 32767 byte strings, where the dimension of the array, n, is length/32767. | The second is a character string of length length-n*32767. | | 2. For DBCLOB host variables that are greater than 16383 double-byte characters in length, DB2 creates PL/I host language declarations in the following way: | v If the length of the LOB is greater than 16383 characters and evenly divisible by 16383, DB2 creates an array of | 16383-character strings. The dimension of the array is length/16383. | v If the length of the LOB is greater than 16383 characters but not evenly divisible by 16383, DB2 creates two | declarations: The first is an array of 16383 byte strings, where the dimension of the array, m, is length/16383. | The second is a character string of length length-m*16383. |
Related concepts:
  LOB file reference variables on page 751
Related tasks:
  Saving storage when manipulating LOBs by using LOB locators
LOB materialization
Materialization means that DB2 puts the data that is selected into a buffer for processing. This action can slow performance. Because LOB values can be very large, DB2 avoids materializing LOB data until absolutely necessary. DB2 stores LOB values in contiguous storage. DB2 must materialize LOBs when your application program performs the following actions: v Calls a user-defined function with a LOB as an argument v Moves a LOB into or out of a stored procedure v Assigns a LOB host variable to a LOB locator host variable v Converts a LOB from one CCSID to another The amount of storage that is used for LOB materialization depends on a number of factors including: v The size of the LOBs v The number of LOBs that need to be materialized in a statement DB2 loads LOBs into virtual pools above the bar. If insufficient space is available for LOB materialization, your application receives SQLCODE -904. Although you cannot completely avoid LOB materialization, you can minimize it by using LOB locators, rather than LOB host variables in your application programs. Related tasks: Saving storage when manipulating LOBs by using LOB locators
A LOB locator is associated with a LOB value or expression, not with a row in a DB2 table or a physical storage location in a table space. Therefore, after you select a LOB value using a locator, the value in the locator normally does not change until the current unit of work ends. However, the value of the LOB itself can change. If you want to remove the association between a LOB locator and its value before a unit of work ends, execute the FREE LOCATOR statement. To keep the association between a LOB locator and its value after the unit of work ends, execute the HOLD LOCATOR statement. After you execute a HOLD LOCATOR statement, the locator keeps the association with the corresponding value until you execute a FREE LOCATOR statement or the program ends. If you execute HOLD LOCATOR or FREE LOCATOR dynamically, you cannot use EXECUTE IMMEDIATE.
Related reference:
  FREE LOCATOR (DB2 SQL)
  HOLD LOCATOR (DB2 SQL)
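A minimal sketch of the HOLD LOCATOR and FREE LOCATOR statements described above, assuming a CLOB locator host variable resume_loc (declared with SQL TYPE IS CLOB_LOCATOR) and the sample EMP_RESUME table; declarations and error handling are omitted:

EXEC SQL SELECT RESUME INTO :resume_loc
  FROM EMP_RESUME WHERE EMPNO = :empno;    /* associate the locator with a LOB value    */
EXEC SQL HOLD LOCATOR :resume_loc;         /* keep the association past the next commit */
EXEC SQL COMMIT;
/* ... the locator can still be used here ... */
EXEC SQL FREE LOCATOR :resume_loc;         /* release the association explicitly        */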
Example
Suppose that the encoding scheme of the following statement is EBCDIC:
SET :unicode_hv = SUBSTR(:Unicode_lob_locator,X,Y);
DB2 must materialize the LOB that is specified by :Unicode_lob_locator and convert that entire LOB to EBCDIC before executing the statement. To avoid materialization and conversion, you can execute the following statement, which produces the same result but is processed by the Unicode encoding scheme of the table:
SELECT SUBSTR(:Unicode_lob_locator,X,Y) INTO :unicode_hv FROM SYSIBM.SYSDUMMYU;
Because the program in the following figure uses LOB locators, rather than placing the LOB data into host variables, no LOB data is moved until the INSERT statement executes. In addition, no LOB data moves between the client and the server.
EXEC SQL INCLUDE SQLCA;
/**************************/
/* Declare host variables */                                 (Note 1)
/**************************/
EXEC SQL BEGIN DECLARE SECTION;
  char userid[9];
  char passwd[19];
  long HV_START_DEPTINFO;
  long HV_START_EDUC;
  long HV_RETURN_CODE;
  SQL TYPE IS CLOB_LOCATOR HV_NEW_SECTION_LOCATOR;
  SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR1;
  SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR2;
  SQL TYPE IS CLOB_LOCATOR HV_DOC_LOCATOR3;
EXEC SQL END DECLARE SECTION;
/*************************************************/
/* Delete any instance of "A00130" from previous */
/* executions of this sample                     */
/*************************************************/
EXEC SQL DELETE FROM EMP_RESUME WHERE EMPNO = 'A00130';
/*************************************************/
/* Use a single row select to get the document   */          (Note 2)
/*************************************************/
EXEC SQL SELECT RESUME INTO :HV_DOC_LOCATOR1
  FROM EMP_RESUME
  WHERE EMPNO = '000130'
  AND RESUME_FORMAT = 'ascii';
/*****************************************************/
/* Use the POSSTR function to locate the start of    */
/* sections "Department Information" and "Education" */       (Note 3)
/*****************************************************/
EXEC SQL SET :HV_START_DEPTINFO =
  POSSTR(:HV_DOC_LOCATOR1, 'Department Information');
EXEC SQL SET :HV_START_EDUC =
  POSSTR(:HV_DOC_LOCATOR1, 'Education');
/*******************************************************/
/* Replace Department Information section with nothing */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR2 =
  SUBSTR(:HV_DOC_LOCATOR1, 1, :HV_START_DEPTINFO -1) ||
  SUBSTR(:HV_DOC_LOCATOR1, :HV_START_EDUC);
/*******************************************************/
/* Associate a new locator with the Department         */
/* Information section                                  */
/*******************************************************/
EXEC SQL SET :HV_NEW_SECTION_LOCATOR =
  SUBSTR(:HV_DOC_LOCATOR1, :HV_START_DEPTINFO,
         :HV_START_EDUC - :HV_START_DEPTINFO);
/*******************************************************/
/* Append the Department Information to the end        */
/* of the resume                                        */
/*******************************************************/
EXEC SQL SET :HV_DOC_LOCATOR3 =
  :HV_DOC_LOCATOR2 || :HV_NEW_SECTION_LOCATOR;
/*******************************************************/
/* Store the modified resume in the table. This is     */       (Note 4)
/* where the LOB data really moves.                     */
/*******************************************************/
EXEC SQL INSERT INTO EMP_RESUME
  VALUES ('A00130', 'ascii', :HV_DOC_LOCATOR3, DEFAULT);
/*********************/
/* Free the locators */                                         (Note 5)
/*********************/
EXEC SQL FREE LOCATOR :HV_DOC_LOCATOR1, :HV_DOC_LOCATOR2, :HV_DOC_LOCATOR3;
Notes:
1. Declare the LOB locators here.
2. This SELECT statement associates LOB locator HV_DOC_LOCATOR1 with the value of column RESUME for employee number 000130.
3. The next five SQL statements use LOB locators to manipulate the resume data without moving the data.
4. Evaluation of the LOB expressions in the previous statements has been deferred until execution of this INSERT statement.
5. Free all LOB locators to release them from their associated values.
Direction
  This property must be specified by the application program at run time as part of the file option property. The direction property can have the following values:
  Input
    Used as a data source on an EXECUTE, OPEN, UPDATE, INSERT, DELETE, SET, or MERGE statement.
  Output
    Used as the target of data on a FETCH statement or a SELECT INTO statement.
File name
  This property must be specified by the application program at run time. The file name property can have the following values:
  v The complete path name of the file. This is recommended.
File name length
  This property must be specified by the application program at run time.
File options
  An application must assign one of the file options to a file reference variable before the application can use that variable. File options are set by the INTEGER value in a field in the file reference variable construct. One of the following values must be specified for each file reference variable:
  v Input (from application to database):
    SQL_FILE_READ
      A regular file that can be opened, read, and closed.
  v Output (from database to application):
    SQL_FILE_CREATE
      If the file does not exist, a new file is created. If the file already exists, an error is returned.
    SQL_FILE_OVERWRITE
      If the file does not exist, a new file is created. If the file already exists, it is overwritten.
    SQL_FILE_APPEND
      If the file does not exist, a new file is created. If the file already exists, the output is appended to the existing file.
Data length
  The length, in bytes, of the new data written to the file.
   unsigned long data_length;   // Data length
   unsigned long file_options;  // File options
   char name[255];              // File name
} hv_text_file;
char hv_thesis_title[64];
With the DB2-generated construct, you can use the following code to select from a CLOB column in the database into a new file that is referenced by :hv_text_file. The file name must be an absolute path.
strcpy(hv_text_file.name, "/u/gainer/papers/sigmod.94");
hv_text_file.name_length = strlen("/u/gainer/papers/sigmod.94");
hv_text_file.file_options = SQL_FILE_CREATE;
EXEC SQL SELECT CONTENT INTO :hv_text_file
  FROM PAPERS
  WHERE TITLE = 'The Relational Theory Behind Juggling';
Similarly, you can use the following code to insert the data from a file that is referenced by :hv_text_file into a CLOB column. The file name must be an absolute path.
strcpy(hv_text_file.name, "/u/gainer/patents/chips.13");
hv_text_file.name_length = strlen("/u/gainer/patents/chips.13");
hv_text_file.file_options = SQL_FILE_READ;
strcpy(hv_patent_title, "A Method for Pipelining Chip Consumption");
EXEC SQL INSERT INTO PATENTS(TITLE, TEXT)
  VALUES(:hv_patent_title, :hv_text_file);
For examples of how to declare file reference variables for XML data in C, COBOL, and PL/I, see Host variable data types for XML data in embedded SQL applications on page 216.
Procedure
To determine when a row was changed: Issue a SELECT statement with the ROW CHANGE TIMESTAMP column in the column list. If a qualifying row does not have a value for the ROW CHANGE TIMESTAMP column, DB2 returns the time that the page in which that row resides was updated.
Example
Suppose that you issue the following statements to create, populate, and alter a table:
CREATE TABLE T1 (C1 INTEGER NOT NULL);
INSERT INTO T1 VALUES (1);
ALTER TABLE T1 ADD COLUMN C2 NOT NULL
  GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;
Because the ROW CHANGE TIMESTAMP column was added after the data was inserted, the following statement returns the time that the page was last modified:
SELECT T1.C2 FROM T1 WHERE T1.C1 = 1;
Suppose that you then insert a second row by issuing the following statement:

INSERT INTO T1 VALUES (2);

Assume that this row is added to the same page as the first row. The following statement returns the time that value "2" was inserted into the table:
SELECT T1.C2 FROM T1 WHERE T1.C1 = 2;
Because the row with value "1" still does not have a value for the ROW CHANGE TIMESTAMP column, the following statement still returns the time that the page was last modified, which in this case is the time that value "2" was inserted:
SELECT T1.C2 FROM T1 WHERE T1.C1 = 1;
Procedure
To check whether an XML column contains a certain value: Specify the XMLEXISTS predicate in the WHERE clause of your SQL statement. Include the following parameters for the XMLEXISTS predicate: v An XPath expression that is embedded in a character string literal. Specify an XPath expression that identifies the XML data that you are looking for. If the result of the XPath expression is an empty sequence, XMLEXISTS returns false. If the result is not empty, XMLEXISTS returns true. If the evaluation of the XPath expression returns an error, XMLEXISTS returns an error. v The XML column name. Specify this value after the PASSING keyword.
Example
Suppose that you want to return only purchase orders that have a billing address. Assume that column XMLPO stores the XML purchase order documents and that the billTo nodes within these documents contain any billing addresses. You can use the following SELECT statement with the XMLEXISTS predicate:
SELECT XMLPO FROM T1
  WHERE XMLEXISTS ('declare namespace ipo="http://www.example.com/IPO";
                    /ipo:purchaseOrder[billTo]' PASSING XMLPO);
Example: To return the value of an expression in a host variable, use the VALUES INTO statement:
EXEC SQL VALUES RAND(:hvrand) INTO :hvrandval;
Example: To select the expression from the DB2-provided EBCDIC table, named SYSIBM.SYSDUMMY1, which consists of one row, use the following statement:
EXEC SQL SELECT RAND(:hvrand) INTO :hvrandval FROM SYSIBM.SYSDUMMY1;
Procedure
To ensure that queries perform sufficiently: 1. Tune each query in your program by following the general tuning guidelines for how to write efficient queries. 2. If you suspect that a query is not as efficient as it could be, monitor its performance. You can use a number of different functions and techniques to monitor SQL performance, including the SQL EXPLAIN statement and SQL optimization tools.
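For example, a minimal sketch of monitoring one query with the EXPLAIN statement; the query number 100, the sample table DSN8910.EMP, and the selected PLAN_TABLE columns are assumptions for this sketch:

EXPLAIN PLAN SET QUERYNO = 100 FOR
  SELECT LASTNAME FROM DSN8910.EMP WHERE WORKDEPT = 'D11';

SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, ACCESSTYPE, ACCESSNAME, MATCHCOLS
  FROM PLAN_TABLE
  WHERE QUERYNO = 100
  ORDER BY QBLOCKNO, PLANNO;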
Related concepts: Investigating SQL performance by using EXPLAIN (DB2 Performance) Interpreting data access by using EXPLAIN (DB2 Performance) Related tasks: Programming applications for performance (DB2 Performance) Investigating access path problems (DB2 Performance) Generating visual representations of access plans Related reference: EXPLAIN (DB2 SQL) InfoSphere Optim Query Workload Tuner Related information: Tuning SQL with Optim Query Tuner, Part 1: Understanding access paths
v The DB2 IMS attachment facility, which handles the two-phase commit protocol and enables both systems to synchronize a unit of recovery during a restart after a failure v The IMS log, which is used to record the instant of commit In a data sharing environment, DL/I batch supports group attachment. You can specify a group attachment name instead of a subsystem name in the SSN parameter of the DDITV02 data set for the DL/I batch job.
that case, if the program abnormally terminates before completing, DB2 backs out any updates, and you can use the IMS batch backout utility to back out the DL/I changes. You can also have IMS dynamically back out the updates within the same job. You must specify the BKO parameter as 'Y' and allocate the IMS log to DASD.

You could have a problem if the system on which the job is run fails after the program terminates but before the job step ends. If you do not have a checkpoint call before the program ends, DB2 commits the unit of work without involving IMS. If the system fails before DL/I commits the data, the DB2 data is out of synchronization with the DL/I changes. If the system fails during DB2 commit processing, the DB2 data could be indoubt. When you restart the application program, use the XRST call to obtain checkpoint information and resolve any DB2 indoubt work units.

Recommendation: Always issue a symbolic checkpoint at the end of any update job to coordinate the commit of the outstanding unit of work for IMS and DB2.

Checkpoint and XRST considerations in DL/I batch: If you use an XRST call, DB2 assumes that any checkpoint that is issued is a symbolic checkpoint. The options of the symbolic checkpoint call differ from the options of a basic checkpoint call. Using the incorrect form of the checkpoint call can cause problems. If you do not use an XRST call, DB2 assumes that any checkpoint call that is issued is a basic checkpoint. To make restart easier, use EBCDIC characters for checkpoint IDs. When an application program needs to be restartable, you must use symbolic checkpoint and XRST calls. If you use an XRST call, it must be the first IMS call that is issued, and it must occur before any SQL statement. Also, you must use only one XRST call.

Synchronization call abends in DL/I batch: If the application program contains an incorrect IMS synchronization call (CHKP, ROLB, ROLL, or XRST), causing IMS to issue a bad status code in the PCB, DB2 abends the application program. Be sure to test these calls before placing the programs in production.

Related concepts:
  Input and output data sets for DL/I batch jobs on page 1004
  Multiple system consistency (DB2 Administration Guide)
Related tasks:
  Chapter 17, Preparing an application to run on DB2 for z/OS, on page 941
COUNTER is a user-defined function that increments a variable in the scratchpad each time it is invoked. DB2 invokes an instance of COUNTER in the predicate 3 times. Assume that COUNTER is invoked for row 1 first, for row 2 second, and for row 3 third. Then
COUNTER returns 1 for row 1, 2 for row 2, and 3 for row 3. Therefore, row 2 satisfies the predicate WHERE COUNTER()=2, so DB2 evaluates the SELECT list for row 2. DB2 uses a different instance of COUNTER in the select list from the instance in the predicate. Because the instance of COUNTER in the select list is invoked only once, it returns a value of 1. Therefore, the result of the query is:
COUNTER()  C1  C2
---------  --  --
        1   2  c
This is not the result you might expect. The results can differ even more, depending on the order in which DB2 retrieves the rows from the table. Suppose that an ascending index is defined on column C2. Then DB2 retrieves row 3 first, row 1 second, and row 2 third. This means that row 1 satisfies the predicate WHERE COUNTER()=2. The value of COUNTER in the select list is again 1, so the result of the query in this case is:
COUNTER()  C1  C2
---------  --  --
        1   1  b
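For reference, the query under discussion, which appears in full earlier in the original example, has roughly this form; T1, C1, C2, and COUNTER are the names used above:

SELECT COUNTER(), C1, C2 FROM T1
  WHERE COUNTER() = 2;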
Understand the interaction between scrollable cursors and nondeterministic user-defined functions or user-defined functions with external actions: When you use a scrollable cursor, you might retrieve the same row multiple times while the cursor is open. If the select list of the cursor's SELECT statement contains a user-defined function, that user-defined function is executed each time you retrieve a row. Therefore, if the user-defined function has an external action, and you retrieve the same row multiple times, the external action is executed multiple times for that row. A similar situation occurs with scrollable cursors and nondeterministic functions. The result of a nondeterministic user-defined function can be different each time you execute the user-defined function. If the select list of a scrollable cursor contains a nondeterministic user-defined function, and you use that cursor to retrieve the same row multiple times, the results can differ each time you retrieve the row. A nondeterministic user-defined function in the predicate of a scrollable cursor's SELECT statement does not change the result of the predicate while the cursor is open. DB2 evaluates a user-defined function in the predicate only once while the cursor is open.
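A minimal sketch of the scrollable-cursor situation described above; the nondeterministic user-defined function MYSCHEMA.RAND_SCORE, the cursor name CS1, and the host variables score and c1 are assumptions for this sketch:

EXEC SQL DECLARE CS1 SENSITIVE STATIC SCROLL CURSOR FOR
  SELECT MYSCHEMA.RAND_SCORE(C1), C1 FROM T1;
EXEC SQL OPEN CS1;
EXEC SQL FETCH ABSOLUTE +2 FROM CS1 INTO :score, :c1;
EXEC SQL FETCH ABSOLUTE +2 FROM CS1 INTO :score, :c1;  /* same row retrieved again; the UDF in the select list runs again */
EXEC SQL CLOSE CS1;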
Related concepts: Abnormal termination of an external user-defined function on page 531 Cases when DB2 casts arguments for a user-defined function on page 772 How DB2 resolves functions on page 764 Function invocation (DB2 SQL) Related reference: from-clause (DB2 SQL)
If the SQL statements in the user-defined function package execute statically, the authorization ID is the owner of the user-defined function package. If they execute dynamically, the authorization ID depends on the value of DYNAMICRULES with which the user-defined function package was bound.
The DYNAMICRULES bind parameter influences a number of characteristics of an application program. Related concepts: DYNAMICRULES bind option on page 987
The number of candidate functions is smaller, so DB2 takes less time for function resolution.
v Cast parameters in a user-defined function invocation to the types in the user-defined function definition. For example, if an input parameter for user-defined function FUNC is defined as DECIMAL(13,2), and the value you want to pass to the user-defined function is an integer value, cast the integer value to DECIMAL(13,2):
SELECT FUNC(CAST (INTCOL AS DECIMAL(13,2))) FROM T1;
v Use the data type BIGINT for numeric parameters in a user-defined function. If you use BIGINT as the parameter type, when you invoke the function, you can pass in SMALLINT, INTEGER, or BIGINT values. If you use SMALLINT or REAL as the parameter type, you must pass parameters of the same types. For example, if user-defined function FUNC is defined with a parameter of type SMALLINT, only an invocation with a parameter of type SMALLINT resolves correctly. The following call does not resolve to FUNC because the constant 123 is of type INTEGER, not SMALLINT:
SELECT FUNC(123) FROM T1;
v Avoid defining user-defined function string parameters with fixed-length string types. If you define a parameter with a fixed-length string type (CHAR, GRAPHIC, or BINARY), you can invoke the user-defined function only with a fixed-length string parameter. However, if you define the parameter with a varying-length string type (VARCHAR, VARGRAPHIC, or VARBINARY), you can invoke the user-defined function with either a fixed-length string parameter or a varying-length string parameter. If you must define parameters for a user-defined function as CHAR or BINARY, and you call the user-defined function from a C program or SQL procedure, you need to cast the corresponding parameter values in the user-defined function invocation to CHAR or BINARY to ensure that DB2 invokes the correct function. For example, suppose that a C program calls user-defined function CVRTNUM, which takes one input parameter of type CHAR(6). Also suppose that you declare host variable empnumbr as char empnumbr[6]. When you invoke CVRTNUM, cast empnumbr to CHAR:
UPDATE EMP SET EMPNO=CVRTNUM(CHAR(:empnumbr)) WHERE EMPNO = :empnumbr;
2. Compares the data types of the input parameters to determine which candidates fit the invocation best. DB2 does not compare data types for input parameters that are untyped parameter markers. For a qualified function invocation, if there are no parameter markers in the invocation, the result of the data type comparison is one best fit. That best fit is the choice for execution. If there are parameter markers in the invocation, there might be more than one best fit. DB2 issues an error if there is more than one best fit. For an unqualified function invocation, DB2 might find multiple best fits because the same function name with the same input parameters can exist in different schemas, or because there are parameter markers in the invocation. 3. If two or more candidates fit the unqualified function invocation equally well because the same function name with the same input parameters exists in different schemas, DB2 chooses the user-defined function whose schema name is earliest in the SQL path. For example, suppose functions SCHEMA1.X and SCHEMA2.X fit a function invocation equally well. Assume that the SQL path is:
"SCHEMA2", "SYSPROC", "SYSIBM", "SCHEMA1", "SYSFUN"
Then DB2 chooses function SCHEMA2.X. If two or more candidates fit the unqualified function invocation equally well because the function invocation contains parameter markers, DB2 issues an error.

The remainder of this section discusses details of the function resolution process and gives suggestions on how you can ensure that DB2 picks the right function.

How DB2 chooses candidate functions: An instance of a user-defined function is a candidate for execution only if it meets all of the following criteria:
v If the function name is qualified in the invocation, the schema of the function instance matches the schema in the function invocation. If the function name is unqualified in the invocation, the schema of the function instance matches a schema in the invoker's SQL path.
v The name of the function instance matches the name in the function invocation.
v The number of input parameters in the function instance matches the number of input parameters in the function invocation.
v The function invoker is authorized to execute the function instance.
v The type of each of the input parameters in the function invocation matches or is promotable to the type of the corresponding parameter in the function instance. If an input parameter in the function invocation is an untyped parameter marker, DB2 considers that parameter to be a match or promotable. For a function invocation that passes a transition table, the data type, length, precision, and scale of each column in the transition table must match exactly the data type, length, precision, and scale of each column of the table that is named in the function instance definition. For information about transition tables, see Creating triggers on page 468.
v The create timestamp for a user-defined function must be older than the BIND or REBIND timestamp for the package or plan in which the user-defined function is invoked.
If DB2 authorization checking is in effect, and DB2 performs an automatic rebind on a plan or package that contains a user-defined function invocation, any user-defined functions that were created after the original BIND or REBIND of the invoking plan or package are not candidates for execution. If you use an access control authorization exit routine, some user-defined functions that were not candidates for execution before the original BIND or REBIND of the invoking plan or package might become candidates for execution during the automatic rebind of the invoking plan or package. If a user-defined function is invoked during an automatic rebind, and that user-defined function is invoked from a trigger body and receives a transition table, then the form of the invoked function that DB2 uses for function selection includes only the columns of the transition table that existed at the time of the original BIND or REBIND of the package or plan for the invoking program. During an automatic rebind, DB2 does not consider built-in functions for function resolution if those built-in functions were introduced in a later release of DB2 than the release in which the BIND or REBIND of the invoking plan or package occurred. When you explicitly bind or rebind a plan or package, the plan or package receives a release dependency marker. When DB2 performs an automatic rebind of a query that contains a function invocation, a built-in function is a candidate for function resolution only if the release dependency marker of the built-in function is the same as or lower than the release dependency marker of the plan or package that contains the function invocation. Example: Suppose that in this statement, the data type of A is SMALLINT:
SELECT USER1.ADDTWO(A) FROM TABLEA;
Two instances of USER1.ADDTWO are defined: one with an input parameter of type INTEGER and one with an input parameter of type DECIMAL. Both function instances are candidates for execution because the SMALLINT type is promotable to either INTEGER or DECIMAL. However, the instance with the INTEGER type is a better fit because INTEGER is higher in the promotion precedence list than DECIMAL.

How DB2 chooses the best fit among candidate functions: More than one function instance might be a candidate for execution. In that case, DB2 determines which function instances are the best fit for the invocation by comparing parameter data types. If the data types of all parameters in a function instance are the same as those in the function invocation, that function instance is a best fit. If no exact match exists, DB2 compares data types in the parameter lists from left to right, using this method:
1. DB2 compares the data types of the first parameter in the function invocation to the data type of the first parameter in each function instance. If the first parameter in the invocation is an untyped parameter marker, DB2 does not do the comparison.
2. For the first parameter, if one function instance has a data type that fits the function invocation better than the data types in the other instances, that function is a best fit.
3. If the data types of the first parameter are the same for all function instances, or if the first parameter in the function invocation is an untyped parameter marker, DB2 repeats this process for the next parameter. DB2 continues this process for each parameter until it finds a best fit.
Example of function resolution: Suppose that a program contains the following statement:
SELECT FUNC(VCHARCOL,SMINTCOL,DECCOL) FROM T1;
In user-defined function FUNC, VCHARCOL has data type VARCHAR, SMINTCOL has data type SMALLINT, and DECCOL has data type DECIMAL. Also suppose that two function instances with the following definitions meet the appropriate criteria and are therefore candidates for execution.
Candidate 1:

CREATE FUNCTION FUNC(VARCHAR(20),INTEGER,DOUBLE)
  RETURNS DECIMAL(9,2)
  EXTERNAL NAME 'FUNC1'
  PARAMETER STYLE SQL
  LANGUAGE COBOL;

Candidate 2:

CREATE FUNCTION FUNC(VARCHAR(20),REAL,DOUBLE)
  RETURNS DECIMAL(9,2)
  EXTERNAL NAME 'FUNC2'
  PARAMETER STYLE SQL
  LANGUAGE COBOL;
DB2 compares the data type of the first parameter in the user-defined function invocation to the data types of the first parameters in the candidate functions. Because the first parameter in the invocation has data type VARCHAR, and both candidate functions also have data type VARCHAR, DB2 cannot determine the better candidate based on the first parameter. Therefore, DB2 compares the data types of the second parameters. The data type of the second parameter in the invocation is SMALLINT. INTEGER, which is the data type of candidate 1, is a better fit to SMALLINT than REAL, which is the data type of candidate 2. Therefore, candidate 1 is the DB2 choice for execution. Related concepts: Promotion of data types (DB2 SQL) Related tasks: Creating triggers on page 468 Related information: Exit routines (DB2 Administration Guide)
Procedure
To check how DB2 resolves a function by using DSN_FUNCTION_TABLE: 1. If your_userID.DSN_FUNCTION_TABLE does not already exist, create this table by following the instructions in DSN_FUNCTION_TABLE (DB2 Performance).
2. Populate your_userID.DSN_FUNCTION_TABLE with information about which functions are invoked by a particular SQL statement by performing one of the following actions: v Execute the EXPLAIN statement on the SQL statement. v Ensure that the program that contains the SQL statement is bound with EXPLAIN(YES) and run the program. DB2 puts a row in your_userID.DSN_FUNCTION_TABLE for each function that is referenced in each SQL statement. 3. Check the rows that were added to your_userID.DSN_FUNCTION_TABLE to ensure that the appropriate function was invoked. Use the following columns to help you find applicable rows: QUERYNO, APPLNAME, PROGNAME, COLLID, and EXPLAIN_TIME. Related reference: BIND and REBIND options (DB2 Commands) EXPLAIN (DB2 SQL)
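A minimal sketch of this procedure, reusing the FUNC and T1 names from the earlier function-resolution example; the query number 124 and the selected columns are assumptions for this sketch:

EXPLAIN ALL SET QUERYNO = 124 FOR
  SELECT FUNC(VCHARCOL, SMINTCOL, DECCOL) FROM T1;

SELECT QUERYNO, SCHEMA_NAME, FUNCTION_NAME, SPEC_FUNC_NAME, FUNCTION_TEXT
  FROM your_userID.DSN_FUNCTION_TABLE
  WHERE QUERYNO = 124;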
DSN_FUNCTION_TABLE
The function table, DSN_FUNCTION_TABLE, contains descriptions of functions that are used in specified SQL statements.
Recommendation: Do not manually insert data into system-maintained EXPLAIN tables, and use care when deleting obsolete EXPLAIN table data. The data is intended to be manipulated only by the DB2 EXPLAIN function and optimization tools. Certain optimization tools depend on instances of the various EXPLAIN tables. Be careful not to delete data from or drop instances of EXPLAIN tables that are created for these tools. Important: If mixed data strings are allowed on a DB2 subsystem, EXPLAIN tables must be created with CCSID UNICODE. This includes, but is not limited to, mixed data strings that are used for tokens, SQL statements, application names, program names, correlation names, and collection IDs. Important: EXPLAIN tables in any pre-Version 8 format or EXPLAIN tables that are encoded in EBCDIC are deprecated.
Qualifiers
Your subsystem or data sharing group can contain more than one of these tables:

SYSIBM
  One instance of each EXPLAIN table can be created with the SYSIBM qualifier. SQL optimization tools use these tables. You can find the SQL statement for creating these tables in member DSNTIJOS of the SDSNSAMP library.

userID
  You can create additional instances of EXPLAIN tables that are qualified by user ID. These tables are populated with statement cost information when you issue the EXPLAIN statement or bind. They are also populated when you specify EXPLAIN(YES) in a BIND or REBIND command. SQL optimization tools might also create EXPLAIN tables that are qualified by a
user ID. You can find the SQL statement for creating an instance of these tables in member DSNTESC of the SDSNSAMP library.

DB2OSC
  SQL optimization tools, such as Optimization Service Center for DB2 for z/OS, create EXPLAIN tables to collect information about SQL statements and workloads to enable analysis and tuning processes. You can find the SQL statements for creating EXPLAIN tables in member DSNTIJOS of the SDSNSAMP library. You can also create this table from the Optimization Service Center interface on your workstation.
Column descriptions
Table 128. Descriptions of columns in DSN_FUNCTION_TABLE (continued)

QUERYNO
  A number that identifies the statement that is being explained.

APPLNAME
  The name of the application plan for the row, or blank.

PROGNAME
  The name of the program or package that contains the statement that is being explained.

COLLID (VARCHAR(128) NOT NULL WITH DEFAULT)
  The collection ID for the package. Applies only to an embedded EXPLAIN statement that is executed from a package or to a statement that is explained when binding a package. A blank indicates that the column is not applicable. The value DSNDYNAMICSQLCACHE indicates that the row is for a cached statement.

GROUP_MEMBER (VARCHAR(24) NOT NULL WITH DEFAULT)
  The member name of the DB2 that executed EXPLAIN. The column is blank if the DB2 subsystem was not in a data sharing environment when EXPLAIN was executed.

EXPLAIN_TIME (TIMESTAMP NOT NULL WITH DEFAULT)
  The time when the EXPLAIN information was captured:
  All cached statements
    When the statement entered the cache, in the form of a full-precision timestamp value.
  Non-cached static statements
    When the statement was bound, in the form of a full-precision timestamp value.
  Non-cached dynamic statements
    When EXPLAIN was executed, in the form of a value equivalent to a CHAR(16) representation of the time appended by 4 zeros.

SCHEMA_NAME (VARCHAR(128) NOT NULL WITH DEFAULT)
  The schema name of the function invoked in the explained statement.

FUNCTION_NAME (VARCHAR(128) NOT NULL WITH DEFAULT)
  The name of the function invoked in the explained statement.

SPEC_FUNC_NAME (VARCHAR(128) NOT NULL WITH DEFAULT)
  The specific name of the function invoked in the explained statement.

FUNCTION_TYPE (CHAR(2) NOT NULL WITH DEFAULT)
  The type of function invoked in the explained statement. Possible values are:
  CU  Column function
  SU  Scalar function
  TU  Table function

VIEW_CREATOR (VARCHAR(128) NOT NULL WITH DEFAULT)
  If the function specified in the FUNCTION_NAME column is referenced in a view definition, the creator of the view. Otherwise, blank.

VIEW_NAME (VARCHAR(128) NOT NULL WITH DEFAULT)
  If the function specified in the FUNCTION_NAME column is referenced in a view definition, the name of the view. Otherwise, blank.

PATH (VARCHAR(2048) NOT NULL WITH DEFAULT)
  The value of the SQL path that was used to resolve the schema name of the function.

FUNCTION_TEXT (VARCHAR(1500) NOT NULL WITH DEFAULT)
  The text of the function reference (the function name and parameters). If the function reference is over 100 bytes, this column contains the first 100 bytes. For functions specified in infix notation, FUNCTION_TEXT contains only the function name. For example, for a function named /, which overloads the SQL divide operator, if the function reference is A/B, FUNCTION_TEXT contains only /.
Related tasks: Checking how DB2 resolves functions by using DSN_FUNCTION_TABLE (DB2 Application programming and SQL)
The HOUR function takes only the TIME or TIMESTAMP data type as an argument, so you need a sourced function that is based on the HOUR function that accepts the FLIGHT_TIME data type. You might declare a function like this:
CREATE FUNCTION HOUR(FLIGHT_TIME) RETURNS INTEGER SOURCE SYSIBM.HOUR(TIME);
Example: Casting function arguments to acceptable types: Another way you can invoke the HOUR function is to cast the argument of type FLIGHT_TIME to the TIME data type before you invoke the HOUR function. Suppose table FLIGHT_INFO contains column DEPARTURE_TIME, which has data type FLIGHT_TIME, and you want to use the HOUR function to extract the hour of departure from the departure time. You can cast DEPARTURE_TIME to the TIME data type, and then invoke the HOUR function:
SELECT HOUR(CAST(DEPARTURE_TIME AS TIME)) FROM FLIGHT_INFO;
Example: Using an infix operator with distinct type arguments: Suppose you want to add two values of type US_DOLLAR. Before you can do this, you must define a version of the + function that accepts values of type US_DOLLAR as operands:
CREATE FUNCTION "+"(US_DOLLAR,US_DOLLAR)
  RETURNS US_DOLLAR
  SOURCE SYSIBM."+"(DECIMAL(9,2),DECIMAL(9,2));
Because the US_DOLLAR type is based on the DECIMAL(9,2) type, the source function must be the version of + with arguments of type DECIMAL(9,2).

Example: Casting constants and host variables to distinct types to invoke a user-defined function: Suppose function EURO_TO_US is defined like this:

CREATE FUNCTION EURO_TO_US(EURO)
  RETURNS US_DOLLAR
  EXTERNAL NAME CDNCVT
  PARAMETER STYLE SQL
  LANGUAGE C;

This means that EURO_TO_US accepts only the EURO type as input. Therefore, if you want to call EURO_TO_US with a constant or host variable argument, you must cast that argument to distinct type EURO:
SELECT * FROM US_SALES WHERE TOTAL = EURO_TO_US(EURO(:H1)); SELECT * FROM US_SALES WHERE TOTAL = EURO_TO_US(EURO(10000));
Sourced user-defined function TAXFN2, which is sourced on TAXFN1, is defined like this:
CREATE FUNCTION TAXFN2(DEC(8,2)) RETURNS DEC(5,0) SOURCE TAXFN1;
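The UPDATE statement that the next paragraph refers to comes from the fuller example; as a sketch, with MYTABLE standing in for the actual table name, it has roughly this form:

UPDATE MYTABLE
  SET SALESTAX2 = TAXFN2(PRICE2);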
Now suppose that PRICE2 has the DECIMAL(9,2) value 0001234.56. DB2 must first assign this value to the data type of the input parameter in the definition of TAXFN2, which is DECIMAL(8,2). The input parameter value then becomes 001234.56. Next, DB2 casts the parameter value to a source function parameter, which is DECIMAL(6,0). The parameter value then becomes 001234. (When you cast a value, that value is truncated, rather than rounded.) Now, if TAXFN1 returns the DECIMAL(5,2) value 123.45, DB2 casts the value to DECIMAL(5,0), which is the result type for TAXFN2, and the value becomes 00123. This is the value that DB2 assigns to column SALESTAX2 in the UPDATE statement.
DB2 runs stored procedures under the DB2 thread of the calling application, which means that the stored procedures are part of the caller's unit of work. JDBC and ODBC applications: These instructions do not apply to JDBC and ODBC applications. Instead, see the following information for how to call stored procedures from those applications: v For ODBC applications, see Stored procedure calls in a DB2 ODBC application (DB2 Programming for ODBC). v For JDBC applications, see Calling stored procedures in JDBC applications (DB2 Application Programming for Java)
Procedure
To call a stored procedure from your application: 1. Assign values to the IN and INOUT parameters. 2. Optional: To improve application performance, initialize the length of LOB output parameters to zero. 3. If the stored procedure exists at a remote location, perform the following actions: a. Assign values to the OUT parameters. When you call a stored procedure at a remote location, the local DB2 server cannot determine whether the parameters are input (IN) or output (OUT or INOUT) parameters. Therefore, you must initialize the values of all output parameters before you call a stored procedure at a remote location. b. Optional: Issue an explicit CONNECT statement to connect to the remote server. If you do not issue this statement explicitly, you can implicitly connect to the server by using a three-part name to identify the stored procedure in the next step. The advantage of issuing an explicit CONNECT statement is that your CALL statement, which is described in the next step, is portable to other operating systems. The advantage of implicitly connecting is that you do not need to issue this extra CONNECT statement. Requirement: When deciding whether to implicitly or explicitly connect to the remote server, consider the requirement for programs that execute the ASSOCIATE LOCATORS or DESCRIBE PROCEDURE statements. You must use the same form of the procedure name on the CALL statement and on the ASSOCIATE LOCATORS or DESCRIBE PROCEDURE statement. 4. Invoke the stored procedure with the SQL CALL statement. Make sure that you pass parameter data types that are compatible. If the stored procedure exists on a remote server and you did not issue an explicit CONNECT statement, specify a three-part name to identify the stored procedure, and implicitly connect to the server where the stored procedure is located. For native SQL procedures, the active version of the stored procedure is invoked by default. Optionally, you can specify a version of the stored procedure other than the active version. To allow null values for parameters, use indicator variables. 5. Optional: Retrieve the status of the procedure.
6. Process any output, including the OUT and INOUT parameters. 7. If the stored procedure returns multiple result sets, retrieve those result sets. Recommendation: Close the result sets after you retrieve them, and issue frequent commits to prevent DB2 storage shortages and EDM POOL FULL conditions. 8. For PL/I applications, also perform the following actions: a. Include the runtime option NOEXECOPS in your source code. b. Specify the compile-time option SYSTEM(MVS). These additional steps ensure that the linkage conventions work correctly on z/OS. 9. For C applications, include the following line in your source code:
#pragma runopts(PLIST(OS))
This code ensures that the linkage conventions work correctly on z/OS. This option is not applicable to other operating systems. If you plan to use a C stored procedure on other platforms besides z/OS, use one of the forms of conditional compilation, as shown in the following example, to include this option only when you compile on z/OS. Form 1:
#ifdef MVS #pragma runopts(PLIST(OS)) #endif
Form 2:
#ifndef WKSTN #pragma runopts(PLIST(OS)) #endif
10. Prepare the application as you would any other application by precompiling, compiling, and link-editing the application and binding the DBRM. If the application calls a remote stored procedure, perform the following additional steps when you bind the DBRM:
    v Bind the DBRM into a package at the local DB2 server. Use the bind option DBPROTOCOL(DRDA). If the stored procedure name cannot be resolved until run time, also specify the bind option VALIDATE(RUN). The stored procedure name might not be resolved until run time if you use a variable for the stored procedure name or if the stored procedure exists on a remote server.
    v Bind the DBRM into a package at the remote DB2 server. If your client program accesses multiple servers, bind the program at each server.
    v Bind all packages into a plan at the local DB2 server. Use the bind option DBPROTOCOL(DRDA).
11. Ensure that the stored procedure completed successfully. If a stored procedure abnormally terminates, DB2 performs the following actions:
    v The calling program receives an SQL error as notification that the stored procedure failed.
    v DB2 places the calling program's unit of work in a must-rollback state.
    v DB2 stops the stored procedure, and subsequent calls fail, in either of the following conditions:
      - The number of abnormal terminations equals the STOP AFTER n FAILURES value for the stored procedure.
      - The number of abnormal terminations equals the default MAX ABEND COUNT value for the subsystem.
    v The stored procedure does not handle the abend condition, and DB2 refreshes the environment for Language Environment to recover the storage that the application uses. In most cases, the environment does not need to restart.
    v A data set is allocated in the DD statement CEEDUMP in the JCL procedure that starts the stored procedures address space. In this case, Language Environment writes a small diagnostic dump to this data set. Use the information in the dump to debug the stored procedure.
    v In a data sharing environment, the stored procedure is placed in STOPABN status only on the member where the abends occurred. A calling program can invoke the stored procedure from other members of the data sharing group. The status on all other members is STARTED.
Example
Example of simple CALL statement: The following example shows a simple CALL statement that you might use to invoke stored procedure A:
EXEC SQL CALL A (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE);
In this example, :EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, and :CODE are host variables that you have declared earlier in your application program. Example of using a host structure for multiple parameter values: Instead of passing each parameter separately, as shown in the example of a simple CALL statement, you can pass them together as a host structure. For example, assume that you defined the following host structure in your application:
struct {
   char EMP[7];
   char PRJ[7];
   short ACT;
   short EMT;
   char EMS[11];
   char EME[11];
} empstruc;
You can then issue the following CALL statement to invoke stored procedure A:
EXEC SQL CALL A (:empstruc, :TYPE, :CODE);
Examples of calling a remote stored procedure: Suppose that stored procedure A is in schema SCHEMAA at remote location LOCA. To invoke stored procedure A, you can explicitly or implicitly connect to the server: v The following example shows how to explicitly connect to LOCA and then issue a CALL statement:
EXEC SQL CONNECT TO LOCA;
EXEC SQL CALL SCHEMAA.A (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE);
v The following example shows how to implicitly connect to LOCA by specifying the three-part name for stored procedure A in the CALL statement:
EXEC SQL CALL LOCA.SCHEMAA.A (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE);
Example of passing parameters that can have null values: The preceding examples assume that none of the input parameters can have null values. The following example shows how to allow for null values for the parameters by passing indicator variables in the parameter list:
EXEC SQL CALL A (:EMP :IEMP, :PRJ :IPRJ, :ACT :IACT, :EMT :IEMT, :EMS :IEMS, :EME :IEME, :TYPE :ITYPE, :CODE :ICODE);
In this example, :IEMP, :IPRJ, :IACT, :IEMT, :IEMS, :IEME, :ITYPE, and :ICODE are indicator variables for the parameters. Example of passing string constants and null values: The following example CALL statement passes integer and character string constants, a null value, and several host variables:
EXEC SQL CALL A ('000130', 'IF1000', 90, 1.0, NULL, '2009-10-01', :TYPE, :CODE);
Example of using a host variable for the stored procedure name: The following example CALL statement uses a host variable for the name of the stored procedure:
EXEC SQL CALL :procnm (:EMP, :PRJ, :ACT, :EMT, :EMS, :EME, :TYPE, :CODE);
Assume that the stored procedure name is A. The host variable procnm is a character variable of length 255 or less that contains the value 'A'. Use this technique if you do not know in advance the name of the stored procedure, but you do know the parameter list convention. Example of using an SQLDA to pass parameters in a single structure: The following example CALL statement shows how to pass parameters in a single structure, the SQLDA, rather than as separate host variables:
EXEC SQL CALL A USING DESCRIPTOR :sqlda;
sqlda is the name of an SQLDA. One advantage of using an SQLDA is that you can change the encoding scheme of the stored procedure parameter values. For example, if the subsystem on which the stored procedure runs has an EBCDIC encoding scheme, and you want to retrieve data in ASCII CCSID 437, you can specify the CCSIDs for the output parameters in the SQLVAR fields of the SQLDA. This technique for overriding the CCSIDs of parameters is the same as the technique for overriding the CCSIDs of variables. This technique involves including dynamic SQL for varying-list SELECT statements in your program. When you use this technique, the defined encoding scheme of the parameter must be different from the encoding scheme that you specify in the SQLDA. Otherwise, no conversion occurs. The defined encoding scheme for the parameter is the encoding scheme that you specify in the CREATE PROCEDURE statement. If you do not specify an encoding scheme in this statement, the defined encoding scheme for the parameter is the default encoding scheme for the subsystem.
Example of a reusable CALL statement: Because the following example CALL statement uses a host variable name for the stored procedure and an SQLDA for the parameter list, it can be reused to call different stored procedures with different parameter lists:
EXEC SQL CALL :procnm USING DESCRIPTOR :sqlda;
Your client program must assign a stored procedure name to the host variable procnm and load the SQLDA with the parameter information before issuing the SQL CALL statement. Related concepts: Stored procedure parameters on page 538 Related tasks: Including dynamic SQL for varying-list SELECT statements in your program on page 166 Chapter 17, Preparing an application to run on DB2 for z/OS, on page 941 Managing authorization for stored procedures (Managing Security) Temporarily overriding the active version of a native SQL procedure (DB2 Application programming and SQL) Related reference: Statements (DB2 SQL) Sample scenarios of program preparations (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
Procedure
To pass large output parameters to stored procedures by using indicator variables: 1. Declare an indicator variable for every large output parameter in the stored procedure. If you are using the GENERAL WITH NULLS or SQL linkage convention, you must declare indicator variables for all of your parameters. In this case, you do not need to declare another indicator variable. 2. Assign a negative value to each indicator variable that is associated with a large output variable. 3. Include the indicator variables in the CALL statement.
Example
For example, suppose that a stored procedure that is defined with the GENERAL linkage convention takes one integer input parameter and one character output parameter of length 6000. You do not want to pass the 6000 byte storage area to the stored procedure. The following example PL/I program passes only 2 bytes to the stored procedure for the output variable and receives all 6000 bytes from the stored procedure:
DCL INTVAR BIN FIXED(31);   /* This is the input variable     */
DCL BIGVAR(6000);           /* This is the output variable    */
DCL I1 BIN FIXED(15);       /* This is an indicator variable  */
.
.
.
I1 = -1;                    /* Setting I1 to -1 causes only   */
                            /* a two byte area representing   */
                            /* I1 to be passed to the         */
                            /* stored procedure, instead of   */
                            /* the 6000 byte area for BIGVAR  */
EXEC SQL CALL PROCX(:INTVAR, :BIGVAR INDICATOR :I1);
Related reference: Linkage conventions for external stored procedures on page 599
Table 130. Parameter formats for a CALL statement in a REXX procedure

SMALLINT, INTEGER, BIGINT
  A string of numerics that does not contain a decimal point or exponent identifier. The first character can be a plus or minus sign. This format also applies to indicator variables that are passed as parameters.

DECIMAL(p,s), NUMERIC(p,s)
  A string of numerics that has a decimal point but no exponent identifier. The first character can be a plus or minus sign.

REAL, FLOAT(n), DOUBLE, DECFLOAT
  A string that represents a number in scientific notation. The string consists of a series of numerics followed by an exponent identifier (an E or e followed by an optional plus or minus sign and a series of numerics).

CHARACTER(n), VARCHAR(n), VARCHAR(n) FOR BIT DATA
  A string of length n, enclosed in single quotation marks.

GRAPHIC(n), VARGRAPHIC(n)
  The character G followed by a string enclosed in single quotation marks. The string within the quotation marks begins with a shift-out character (X'0E') and ends with a shift-in character (X'0F'). Between the shift-out character and shift-in character are n double-byte characters.

BINARY, VARBINARY
  Recommendation: Pass BINARY and VARBINARY values by using the SQLDA. If you specify an SQLDA when you call the stored procedure, set the SQLTYPE in the SQLDA. SQLDATA is a string of characters. If you use host variables, the REXX format of BINARY and VARBINARY data is BX followed by a string that is enclosed in single quotation marks.

DATE
  A string of length 10, enclosed in single quotation marks. The format of the string depends on the value of field DATE FORMAT that you specify when you install DB2.

TIME
  A string of length 8, enclosed in single quotation marks. The format of the string depends on the value of field TIME FORMAT that you specify when you install DB2.

TIMESTAMP
  A string of length 26, enclosed in single quotation marks. The string has the format yyyy-mm-dd-hh.mm.ss.nnnnnn.

XML
  No equivalent.
The following figure demonstrates how a REXX procedure calls the stored procedure in REXX stored procedures on page 632. The REXX procedure performs the following actions: v Connects to the DB2 subsystem that was specified by the REXX procedure invoker. v Calls the stored procedure to execute a DB2 command that was specified by the REXX procedure invoker. v Retrieves rows from a result set that contains the command output messages.
/* REXX */
PARSE ARG SSID COMMAND           /* Get the SSID to connect to  */
                                 /* and the DB2 command to be   */
                                 /* executed                    */
/****************************************************************/
/* Set up the host command environment for SQL calls.           */
/****************************************************************/
"SUBCOM DSNREXX"                 /* Host cmd env available?     */
IF RC THEN                       /* No--make one                */
  S_RC = RXSUBCOM('ADD','DSNREXX','DSNREXX')
/****************************************************************/
/* Connect to the DB2 subsystem.                                */
/****************************************************************/
ADDRESS DSNREXX "CONNECT" SSID
IF SQLCODE \= 0 THEN CALL SQLCA
PROC = 'COMMAND'
RESULTSIZE = 32703
RESULT = LEFT(' ',RESULTSIZE,' ')
/****************************************************************/
/* Call the stored procedure that executes the DB2 command.     */
/* The input variable (COMMAND) contains the DB2 command.       */
/* The output variable (RESULT) will contain the return area    */
/* from the IFI COMMAND call after the stored procedure         */
/* executes.                                                    */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL" ,
  "CALL" PROC "(:COMMAND, :RESULT)"
IF SQLCODE < 0 THEN CALL SQLCA
SAY 'RETCODE ='RETCODE
SAY 'SQLCODE ='SQLCODE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1',',
          || SQLERRD.2',',
          || SQLERRD.3',',
          || SQLERRD.4',',
          || SQLERRD.5',',
          || SQLERRD.6
SAY 'SQLWARN ='SQLWARN.0',',
          || SQLWARN.1',',
          || SQLWARN.2',',
          || SQLWARN.3',',
          || SQLWARN.4',',
          || SQLWARN.5',',
          || SQLWARN.6',',
          || SQLWARN.7',',
          || SQLWARN.8',',
          || SQLWARN.9',',
          || SQLWARN.10
SAY 'SQLSTATE='SQLSTATE
SAY C2X(RESULT) '"'||RESULT||'"'
/****************************************************************/
/* Display the IFI return area in hexadecimal.                  */
/****************************************************************/
OFFSET = 4+1
TOTLEN = LENGTH(RESULT)
DO WHILE ( OFFSET < TOTLEN )
  LEN = C2D(SUBSTR(RESULT,OFFSET,2))
  SAY SUBSTR(RESULT,OFFSET+4,LEN-4-1)
  OFFSET = OFFSET + LEN
END
/****************************************************************/
/* Get information about result sets returned by the            */
/* stored procedure.                                            */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL DESCRIBE PROCEDURE :PROC INTO :SQLDA"
IF SQLCODE \= 0 THEN CALL SQLCA
DO I = 1 TO SQLDA.SQLD
  SAY "SQLDA."I".SQLNAME ="SQLDA.I.SQLNAME";"
  SAY "SQLDA."I".SQLTYPE ="SQLDA.I.SQLTYPE";"
  SAY "SQLDA."I".SQLLOCATOR ="SQLDA.I.SQLLOCATOR";"
END I
/****************************************************************/
/* Set up a cursor to retrieve the rows from the result         */
/* set.                                                         */
/****************************************************************/
ADDRESS DSNREXX "EXECSQL ASSOCIATE LOCATOR (:RESULT) WITH PROCEDURE :PROC"
IF SQLCODE \= 0 THEN CALL SQLCA
SAY RESULT
ADDRESS DSNREXX "EXECSQL ALLOCATE C101 CURSOR FOR RESULT SET :RESULT"
IF SQLCODE \= 0 THEN CALL SQLCA
CURSOR = 'C101'
ADDRESS DSNREXX "EXECSQL DESCRIBE CURSOR :CURSOR INTO :SQLDA"
IF SQLCODE \= 0 THEN CALL SQLCA
/****************************************************************/
/* Retrieve and display the rows from the result set, which     */
/* contain the command output message text.                     */
/****************************************************************/
DO UNTIL(SQLCODE \= 0)
  ADDRESS DSNREXX "EXECSQL FETCH C101 INTO :SEQNO, :TEXT"
  IF SQLCODE = 0 THEN
    DO
      SAY TEXT
    END
END
IF SQLCODE \= 100 THEN CALL SQLCA
ADDRESS DSNREXX "EXECSQL CLOSE C101"
IF SQLCODE \= 0 THEN CALL SQLCA
ADDRESS DSNREXX "EXECSQL COMMIT"
IF SQLCODE \= 0 THEN CALL SQLCA
/****************************************************************/
/* Disconnect from the DB2 subsystem.                           */
/****************************************************************/
ADDRESS DSNREXX "DISCONNECT"
IF SQLCODE \= 0 THEN CALL SQLCA
/****************************************************************/
/* Delete the host command environment for SQL.                 */
/****************************************************************/
S_RC = RXSUBCOM('DELETE','DSNREXX','DSNREXX')   /* REMOVE CMD ENV */
RETURN
/****************************************************************/
/* Routine to display the SQLCA                                 */
/****************************************************************/
SQLCA:
TRACE O
SAY 'SQLCODE ='SQLCODE
SAY 'SQLERRMC ='SQLERRMC
SAY 'SQLERRP ='SQLERRP
SAY 'SQLERRD ='SQLERRD.1',',
          || SQLERRD.2',',
          || SQLERRD.3',',
          || SQLERRD.4',',
          || SQLERRD.5',',
          || SQLERRD.6
SAY 'SQLWARN ='SQLWARN.0',',
          || SQLWARN.1',',
          || SQLWARN.2',',
          || SQLWARN.3',',
          || SQLWARN.4',',
          || SQLWARN.5',',
          || SQLWARN.6',',
          || SQLWARN.7',',
          || SQLWARN.8',',
          || SQLWARN.9',',
          || SQLWARN.10
SAY 'SQLSTATE='SQLSTATE
EXIT
Procedure
To prepare a client program that calls a remote stored procedure:
1. Precompile, compile, and link-edit the client program on the local DB2 subsystem.
2. Bind the resulting DBRM into a package at the local DB2 subsystem by using the BIND PACKAGE command with the option DBPROTOCOL(DRDA).
   Recommendation: If you have packages that contain SQL CALL statements that you bound before DB2 Version 6, rebind them in DB2 Version 6 or later to get better performance from those packages. Rebinding lets DB2 obtain some information from the catalog at bind time that it obtained at run time before Version 6. Therefore, after you rebind your packages, they run more efficiently because DB2 can do fewer catalog searches at run time.
3. Bind the same DBRM, the one for the client program, into a package at the remote location by using the BIND PACKAGE command and specifying a location name. If your client program needs to access multiple servers, bind the program at each server.
   Example: Suppose that you want a client program to call a stored procedure at location LOCA. You precompile the program to produce DBRM A. Then you can use the following command to bind DBRM A into package collection COLLA at location LOCA:
BIND PACKAGE (LOCA.COLLA) MEMBER(A)
4. Bind all packages into a plan on the local DB2 subsystem. Specify the bind option DBPROTOCOL(DRDA).
5. Bind any stored procedures that run under DB2 ODBC on a remote DB2 database server as a package at the remote site. Those procedures do not need to be bound into the DB2 ODBC plan.
Related tasks: Binding DBRMs to create packages (DB2 Programming for ODBC) Related reference: BIND PACKAGE (DSN) (DB2 Commands)
DB2 uses schema names from the CURRENT PATH special register for CALL statements of the following form:
CALL host-variable
2. When DB2 finds a stored procedure definition, DB2 executes that stored procedure if the following conditions are true:
   v The caller is authorized to execute the stored procedure.
   v The stored procedure has the same number of parameters as in the CALL statement.
   If both conditions are not true, DB2 continues to go through the list of schemas until it finds a stored procedure that meets both conditions or reaches the end of the list.
3. If DB2 cannot find a suitable stored procedure, it returns an SQL error code for the CALL statement.
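For illustration only, the following statements sketch how this resolution applies to the CALL host-variable form. The schema names and the host variable PROCNAME are hypothetical and are not defined elsewhere in this information; DB2 searches the schemas in the CURRENT PATH special register, in order, until it finds a procedure definition that satisfies both conditions:

SET CURRENT PATH = 'PAYROLL', 'SYSPROC';
-- PROCNAME contains the unqualified procedure name, for example PAYCALC
CALL :PROCNAME (:PARM1, :PARM2);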
Procedure
To call different versions of a stored procedure from a single application: 1. When you define each version of the stored procedure, use the same stored procedure name but different schema names, different COLLID values, and different WLM environments. 2. In the program that invokes the stored procedure, specify the unqualified stored procedure name in the CALL statement.
3. Use the SQL path to indicate which version of the stored procedure that the client program should call. You can choose the SQL path in several ways:
   v If the client program is not an ODBC or JDBC application, use one of the following methods:
     - Use the CALL procedure-name form of the CALL statement. When you bind plans or packages for the program that calls the stored procedure, bind one plan or package for each version of the stored procedure that you want to call. In the PATH bind option for each plan or package, specify the schema name of the stored procedure that you want to call.
     - Use the CALL host-variable form of the CALL statement. In the client program, use the SET PATH statement to specify the schema name of the stored procedure that you want to call.
   v If the client program is an ODBC or JDBC application, choose one of the following methods:
     - Use the SET PATH statement to specify the schema name of the stored procedure that you want to call.
     - When you bind the stored procedure packages, specify a different collection for each stored procedure package. Use the COLLID value that you specified when defining the stored procedure to DB2.
4. When you run the client program, specify the plan or package with the PATH value that matches the schema name of the stored procedure that you want to call.
Results
For example, suppose that you want to write one program, PROGY, that calls one of two versions of a stored procedure named PROCX. The load module for both stored procedures is named SUMMOD. Each version of SUMMOD is in a different load library. The stored procedures run in different WLM environments, and the startup JCL for each WLM environment includes a STEPLIB concatenation that specifies the correct load library for the stored procedure module. First, define the two stored procedures in different schemas and different WLM environments:
CREATE PROCEDURE TEST.PROCX(IN V1 INTEGER, OUT V2 CHAR(9))
       LANGUAGE C
       EXTERNAL NAME SUMMOD
       WLM ENVIRONMENT TESTENV;
CREATE PROCEDURE PROD.PROCX(IN V1 INTEGER, OUT V2 CHAR(9))
       LANGUAGE C
       EXTERNAL NAME SUMMOD
       WLM ENVIRONMENT PRODENV;
When you write CALL statements for PROCX in program PROGY, use the unqualified form of the stored procedure name:
CALL PROCX(V1,V2);
Bind two plans for PROGY. In one BIND statement, specify PATH(TEST). In the other BIND statement, specify PATH(PROD). To call TEST.PROCX, execute PROGY with the plan that you bound with PATH(TEST). To call PROD.PROCX, execute PROGY with the plan that you bound with PATH(PROD).
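If the client program uses the CALL host-variable form, or is an ODBC or JDBC application, the version can instead be selected at run time rather than through the PATH bind option. The following statements are a minimal sketch of that approach, reusing the TEST and PROD schemas from this example; how the statements are issued (embedded, ODBC, or JDBC) depends on the client environment:

SET PATH = 'TEST';      -- or 'PROD', to call the other version
CALL PROCX(?, ?);       -- dynamic CALL, for example prepared through ODBC or JDBC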
Procedure
To invoke multiple instances of a stored procedure:
1. To optimize storage usage and prevent storage shortages, ensure that you specify appropriate values for the following two subsystem parameters:
   MAX_ST_PROC
      Controls the maximum number of stored procedure instances that you can call within the same thread.
   MAX_NUM_CUR
      Controls the maximum number of cursors that can be opened by the same thread.
   When either of the values from these subsystem parameters is exceeded while an application is running, the CALL statement or the OPEN statement receives SQLCODE -904.
2. In your application, issue CALL statements to the stored procedure.
3. In the calling application for the stored procedure, close the result sets and issue frequent commits, as shown in the sketch after this list. Even read-only applications should perform these actions. Applications that fail to close result sets or issue an adequate number of commits might terminate abnormally with DB2 storage shortage and EDM POOL FULL conditions.
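The following statements are a minimal sketch of step 3 for a hypothetical procedure SYSPROC.MYPROC that returns one result set; the procedure, locator, and cursor names are assumptions for illustration only:

CALL SYSPROC.MYPROC(:PARM1);
ASSOCIATE LOCATORS (:LOC1) WITH PROCEDURE SYSPROC.MYPROC;
ALLOCATE C1 CURSOR FOR RESULT SET :LOC1;
-- ... FETCH from C1 until SQLCODE +100 ...
CLOSE C1;
COMMIT;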
Related reference: MAX OPEN CURSORS field (MAX_NUM_CUR subsystem parameter) (DB2 Installation and Migration) MAX STORED PROCS field (MAX_ST_PROC subsystem parameter) (DB2 Installation and Migration) CALL (DB2 SQL)
Procedure
To temporarily override the active version of a native SQL procedure, specify the following statements in your program: 1. The SET CURRENT ROUTINE VERSION statement with the name of the version of the procedure that you want to use. If the specified version does not exist, the active version is used. 2. The CALL statement with the name of the procedure.
Example
The following CALL statement invokes version V1 of the UPDATE_BALANCE procedure, regardless of what the current active version of that procedure is.
SET CURRENT ROUTINE VERSION = V1;
SET procname = UPDATE_BALANCE;
CALL :procname USING DESCRIPTOR :x;
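When the procedure name is known at precompile time, the same override can be coded with a static CALL. The following statements are a minimal sketch; the parameter list for UPDATE_BALANCE is hypothetical:

SET CURRENT ROUTINE VERSION = V1;
CALL UPDATE_BALANCE(:ACCTNO, :AMOUNT);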
Procedure
You can override that value in the following ways:
v Edit the JCL procedures that start stored procedures address spaces, and modify the value of the NUMTCB parameter.
v Specify the following parameter in the Start Parameters field of the Create An Application Environment panel when you set up a WLM application environment:
NUMTCB=number-of-TCBs
Special cases: For REXX stored procedures, you must set the NUMTCB parameter to 1. Stored procedures that invoke utilities can invoke only one utility at a time in a single address space. Consequently, the value of the NUMTCB parameter is forced to 1 for those procedures. Related tasks: Maximizing the number of procedures or functions that run in an address space (DB2 Performance)
Procedure
To retrieve the procedure status, perform one of the following actions in the calling program:
v Issue the GET DIAGNOSTICS statement with the DB2_RETURN_STATUS item. The specified host variable in the GET DIAGNOSTICS statement is set to one of the following values:
  0   This value indicates that the procedure returned with an SQLCODE that is greater than or equal to zero. You can access the value directly from the SQLCA by retrieving the value of SQLERRD(1). For C applications, retrieve SQLERRD[0].
  -1  This value indicates that the procedure returned with an SQLCODE that is less than zero. In this case, the SQLERRD(1) value in the SQLCA is not set; DB2 returns -1 only.
  Any value other than 0 or -1 is the return value that was explicitly set in the procedure with the RETURN statement.
Example of using GET DIAGNOSTICS to retrieve the return status: The following SQL code creates an SQL procedure that is named TESTIT, which calls another SQL procedure that is named TRYIT. The TRYIT procedure returns a status value. The TESTIT procedure retrieves that value with the DB2_RETURN_STATUS item of the GET DIAGNOSTICS statement.
CREATE PROCEDURE TESTIT ()
 LANGUAGE SQL
 A1:BEGIN
   DECLARE RETVAL INTEGER DEFAULT 0;
   ...
   CALL TRYIT;
   GET DIAGNOSTICS RETVAL = DB2_RETURN_STATUS;
   IF RETVAL <> 0 THEN
     ...
     LEAVE A1;
   ELSE
     ...
   END IF;
 END A1
v Retrieve the value of SQLERRD(1) in the SQLCA. For C applications, retrieve SQLERRD[0]. This field contains the integer value that was set by the RETURN statement in the SQL procedure. This method is not applicable if the status was set by DB2.
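For orientation, the following CREATE PROCEDURE statement is a minimal sketch of what a called procedure such as TRYIT might look like; the variable, the condition, and the status value 4 are assumptions for illustration and are not part of the preceding example. The RETURN statement sets the value that the caller sees through DB2_RETURN_STATUS or SQLERRD(1):

CREATE PROCEDURE TRYIT ()
 LANGUAGE SQL
 B1:BEGIN
   DECLARE ROWS_PROCESSED INTEGER DEFAULT 0;
   -- ... application logic that sets ROWS_PROCESSED ...
   IF ROWS_PROCESSED = 0 THEN
     RETURN 4;          -- explicit nonzero status for the caller
   END IF;
   RETURN 0;            -- normal completion
 END B1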
Related concepts: SQL communication area (SQLCA) (DB2 SQL) Related reference: GET DIAGNOSTICS (DB2 SQL)
Procedure
To write a program to receive the result sets from a stored procedure: 1. Declare a locator variable for each result set that is to be returned. If you do not know how many result sets are to be returned, declare enough result set locators for the maximum number of result sets that might be returned. 2. Call the stored procedure and check the SQL return code. If the SQLCODE from the CALL statement is +466, the stored procedure has returned result sets. 3. Determine how many result sets the stored procedure is returning. If you already know how many result sets the stored procedure returns, skip this step. Use the SQL statement DESCRIBE PROCEDURE to determine the number of result sets. DESCRIBE PROCEDURE places information about the result sets in an SQLDA. Make this SQLDA large enough to hold the maximum number of result sets that the stored procedure might return. When the DESCRIBE PROCEDURE statement completes, the fields in the SQLDA contain the following values: v SQLD contains the number of result sets that are returned by the stored procedure. v Each SQLVAR entry gives the following information about a result set:
The SQLNAME field contains the name of the SQL cursor that is used by the stored procedure to return the result set. The SQLIND field contains the value -1, which indicates that no estimate of the number of rows in the result set is available. The SQLDATA field contains the value of the result set locator, which is the address of the result set. 4. Link result set locators to result sets by performing one of the following actions: v Use the ASSOCIATE LOCATORS statement. You must embed this statement in an application or SQL procedure. The ASSOCIATE LOCATORS statement assigns values to the result set locator variables. If you specify more locators than the number of result sets that are returned, DB2 ignores the extra locators. v If you executed the DESCRIBE PROCEDURE statement previously, the result set locator values are in the SQLDATA fields of the SQLDA. You can copy the values from the SQLDATA fields to the result set locators manually, or you can execute the ASSOCIATE LOCATORS statement to do it for you. The stored procedure name that you specify in an ASSOCIATE LOCATORS statement or DESCRIBE PROCEDURE statement must match the stored procedure name in the CALL statement as follows: v If the name is unqualified in the CALL statement, do not qualify it. v If the name is qualified with a schema name in the CALL statement, qualify it with the schema name. v If the name is qualified with a location name and schema name in the CALL statement, qualify it with a location name and schema name. 5. Allocate cursors for fetching rows from the result sets. Use the SQL statement ALLOCATE CURSOR to link each result set with a cursor. Execute one ALLOCATE CURSOR statement for each result set. The cursor names can differ from the cursor names in the stored procedure. To use the ALLOCATE CURSOR statement, you must embed it in an application or SQL procedure. 6. Determine the contents of the result sets. If you already know the format of the result set, skip this step. Use the SQL statement DESCRIBE CURSOR to determine the format of a result set and put this information in an SQLDA. For each result set, you need an SQLDA that is big enough to hold descriptions of all columns in the result set. You can use DESCRIBE CURSOR for only those cursors for which you executed ALLOCATE CURSOR previously. After you execute DESCRIBE CURSOR, if the cursor for the result set is declared WITH HOLD, the high-order bit of byte 8 of field SQLDAID in the SQLDA is set to 1. 7. Fetch rows from the result sets into host variables by using the cursors that you allocated with the ALLOCATE CURSOR statements. Fetching rows from a result set is the same as fetching rows from a table. If you executed the DESCRIBE CURSOR statement, perform the following steps before you fetch the rows: a. Allocate storage for host variables and indicator variables. Use the contents of the SQLDA from the DESCRIBE CURSOR statement to determine how much storage you need for each host variable. b. Put the address of the storage for each host variable in the appropriate SQLDATA field of the SQLDA.
c. Put the address of the storage for each indicator variable in the appropriate SQLIND field of the SQLDA.
Example
The following examples show C language code that accomplishes each of these steps. Coding for other languages is similar. The following example demonstrates how to receive result sets when you know how many result sets are returned and what is in each result set.
/*************************************************************/
/* Declare result set locators. For this example,            */
/* assume you know that two result sets will be returned.    */
/* Also, assume that you know the format of each result set. */
/*************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
static volatile SQL TYPE IS RESULT_SET_LOCATOR *loc1, *loc2;
EXEC SQL END DECLARE SECTION;
. . .
/*************************************************************/
/* Call stored procedure P1.                                 */
/* Check for SQLCODE +466, which indicates that result sets  */
/* were returned.                                            */
/*************************************************************/
EXEC SQL CALL P1(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
  /*************************************************************/
  /* Establish a link between each result set and its         */
  /* locator using the ASSOCIATE LOCATORS.                    */
  /*************************************************************/
  EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2) WITH PROCEDURE P1;
  . . .
  /*************************************************************/
  /* Associate a cursor with each result set.                 */
  /*************************************************************/
  EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
  EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc2;
  /*************************************************************/
  /* Fetch the result set rows into host variables.           */
  /*************************************************************/
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C1 INTO :order_no, :cust_no;
    . . .
  }
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C2 INTO :order_no, :item_no, :quantity;
    . . .
  }
}
The following example demonstrates how to receive result sets when you do not know how many result sets are returned or what is in each result set.
/*************************************************************/
/* Declare result set locators. For this example,            */
/* assume that no more than three result sets will be        */
/* returned, so declare three locators. Also, assume         */
/* that you do not know the format of the result sets.      */
/*************************************************************/
EXEC SQL BEGIN DECLARE SECTION;
static volatile SQL TYPE IS RESULT_SET_LOCATOR *loc1, *loc2, *loc3;
EXEC SQL END DECLARE SECTION;
. . .
/*************************************************************/
/* Call stored procedure P2.                                 */
/* Check for SQLCODE +466, which indicates that result sets  */
/* were returned.                                            */
/*************************************************************/
EXEC SQL CALL P2(:parm1, :parm2, ...);
if(SQLCODE==+466)
{
  /*************************************************************/
  /* Determine how many result sets P2 returned, using the    */
  /* statement DESCRIBE PROCEDURE. :proc_da is an SQLDA       */
  /* with enough storage to accommodate up to three SQLVAR    */
  /* entries.                                                 */
  /*************************************************************/
  EXEC SQL DESCRIBE PROCEDURE P2 INTO :proc_da;
  . . .
  /*************************************************************/
  /* Now that you know how many result sets were returned,    */
  /* establish a link between each result set and its         */
  /* locator using the ASSOCIATE LOCATORS. For this example,  */
  /* we assume that three result sets are returned.           */
  /*************************************************************/
  EXEC SQL ASSOCIATE LOCATORS (:loc1, :loc2, :loc3) WITH PROCEDURE P2;
  . . .
  /*************************************************************/
  /* Associate a cursor with each result set.                 */
  /*************************************************************/
  EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :loc1;
  EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :loc2;
  EXEC SQL ALLOCATE C3 CURSOR FOR RESULT SET :loc3;
  /*************************************************************/
  /* Use the statement DESCRIBE CURSOR to determine the       */
  /* format of each result set.                               */
  /*************************************************************/
  EXEC SQL DESCRIBE CURSOR C1 INTO :res_da1;
  EXEC SQL DESCRIBE CURSOR C2 INTO :res_da2;
  EXEC SQL DESCRIBE CURSOR C3 INTO :res_da3;
  . . .
  /*************************************************************/
  /* Assign values to the SQLDATA and SQLIND fields of the    */
  /* SQLDAs that you used in the DESCRIBE CURSOR statements.  */
  /* These values are the addresses of the host variables and */
  /* indicator variables into which DB2 will put result set   */
  /* rows.                                                    */
  /*************************************************************/
  . . .
  /*************************************************************/
  /* Fetch the result set rows into the storage areas         */
  /* that the SQLDAs point to.                                 */
  /*************************************************************/
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C1 USING DESCRIPTOR :res_da1;
    . . .
  }
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C2 USING DESCRIPTOR :res_da2;
    . . .
  }
  while(SQLCODE==0)
  {
    EXEC SQL FETCH C3 USING DESCRIPTOR :res_da3;
    . . .
  }
}
The following example demonstrates how you can use an SQL procedure to receive result sets. The logic assumes that no handler exists to intercept the +466 SQLCODE, such as DECLARE CONTINUE HANDLER FOR SQLWARNING ..... Such a handler causes SQLCODE to be reset to zero. Then the test for IF SQLCODE = 466 is never true and the statements in the IF body are never executed.
DECLARE RESULT1 RESULT_SET_LOCATOR VARYING;
DECLARE RESULT2 RESULT_SET_LOCATOR VARYING;
DECLARE AT_END, VAR1, VAR2 INT DEFAULT 0;
DECLARE SQLCODE INTEGER DEFAULT 0;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET AT_END = 99;
SET TOTAL1 = 0;
SET TOTAL2 = 0;
CALL TARGETPROCEDURE();
IF SQLCODE = 466 THEN
  ASSOCIATE RESULT SET LOCATORS(RESULT1,RESULT2) WITH PROCEDURE SPDG3091;
  ALLOCATE RSCUR1 CURSOR FOR RESULT SET RESULT1;
  ALLOCATE RSCUR2 CURSOR FOR RESULT SET RESULT2;
  WHILE AT_END = 0 DO
    FETCH RSCUR1 INTO VAR1;
    SET TOTAL1 = TOTAL1 + VAR1;
    SET VAR1 = 0; /* Reset so the last value fetched is not added after AT_END */
  END WHILE;
  SET AT_END = 0; /* Reset for next loop */
  WHILE AT_END = 0 DO
    FETCH RSCUR2 INTO VAR2;
    SET TOTAL2 = TOTAL2 + VAR2;
    SET VAR2 = 0; /* Reset so the last value fetched is not added after AT_END */
  END WHILE;
END IF;
Related concepts: Examples of programs that call stored procedures on page 227 Related reference: ALLOCATE CURSOR (DB2 SQL) ASSOCIATE LOCATORS (DB2 SQL) CALL (DB2 SQL) DESCRIBE CURSOR (DB2 SQL) DESCRIBE PROCEDURE (DB2 SQL) SQL descriptor area (SQLDA) (DB2 SQL)
ADMIN_COMMAND_DSN
ADMIN_COMMAND_UNIX
ADMIN_DS_BROWSE
Table 131. DB2-supplied stored procedures (continued)

ADMIN_DS_LIST
   The ADMIN_DS_LIST stored procedure is an administrative enablement routine. It returns a list of one of the following items:
   v data set names
   v generation data groups (GDG)
   v partitioned data set (PDS) members
   v partitioned data set extended (PDSE) members
   v generation data sets of a GDG
ADMIN_DS_RENAME
   The ADMIN_DS_RENAME stored procedure is an administrative enablement routine. It renames one of the following entities:
   v a physical sequential (PS) data set
   v a partitioned data set (PDS)
   v a partitioned data set extended (PDSE)
   v a member of a PDS or PDSE
ADMIN_DS_SEARCH
   The ADMIN_DS_SEARCH stored procedure is an administrative enablement routine. It determines if one of the following items is cataloged:
   v a physical sequential (PS) data set
   v a partitioned data set (PDS)
   v a partitioned data set extended (PDSE)
   v a generation data group (GDG)
   v a generation data set (GDS)
   Alternatively, ADMIN_DS_SEARCH determines if a library member of a cataloged PDS or PDSE exists.
ADMIN_DS_WRITE
   The ADMIN_DS_WRITE stored procedure is an administrative enablement routine. It writes either text or binary records that are passed in a global temporary table to one of the following entities:
   v a physical sequential (PS) data set
   v partitioned data set (PDS) member
   v partitioned data set extended (PDSE) member
   v generation data set (GDS)
   ADMIN_DS_WRITE can either append or replace an existing PS data set, PDS member, PDSE member, or GDS. ADMIN_DS_WRITE can create one of the following entities:
   v a PS data set
   v PDS data set or member
   v PDSE data set or member
   v GDS for an existing generation data group (GDG) as needed
   This stored procedure supports only data sets with LRECL=80 and RECFM=FB.
ADMIN_INFO_HOST
   The ADMIN_INFO_HOST stored procedure is an administrative enablement routine. It returns the host name of a connected DB2 subsystem or the host name of every member of a data sharing group.
ADMIN_INFO_SSID
   The ADMIN_INFO_SSID stored procedure is an administrative enablement routine. It returns the subsystem ID of the connected DB2 subsystem.
Table 131. DB2-supplied stored procedures (continued)

ADMIN_JOB_CANCEL
   The ADMIN_JOB_CANCEL stored procedure is an administrative enablement routine. It purges or cancels a job.
ADMIN_JOB_FETCH
   The ADMIN_JOB_FETCH stored procedure is an administrative enablement routine. It retrieves the output from the JES spool.
ADMIN_JOB_QUERY
   The ADMIN_JOB_QUERY stored procedure is an administrative enablement routine. It displays the status and completion information of a job.
ADMIN_JOB_SUBMIT
   The ADMIN_JOB_SUBMIT stored procedure is an administrative enablement routine. It submits a job to a JES2 or JES3 system.
ADMIN_TASK_ADD
   The ADMIN_TASK_ADD stored procedure is an administrative task scheduler routine. It adds a task to the task list of the administrative task scheduler.
ADMIN_TASK_REMOVE
   The ADMIN_TASK_REMOVE stored procedure is an administrative task scheduler routine. It removes a task from the task list of the administrative task scheduler.
ADMIN_UTL_SCHEDULE
   The ADMIN_UTL_SCHEDULE stored procedure is an administrative enablement routine. It executes utilities in parallel.
ADMIN_UTL_SORT
   The ADMIN_UTL_SORT stored procedure is an administrative enablement routine. It sorts database objects for parallel utility execution using JCL or the ADMIN_UTL_SCHEDULE stored procedure.
DSNACCOR
   The real-time statistics stored procedure, DSNACCOR, queries the DB2 real-time statistics tables. This information helps you determine when you should run COPY, REORG, or RUNSTATS utility jobs, or enlarge your DB2 data sets.
DSNACCOX
   The enhanced DB2 real-time statistics stored procedure, DSNACCOX, makes recommendations to help you maintain your DB2 databases. The DSNACCOX stored procedure replaces the previous DSNACCOR stored procedure, beginning in Version 9.
DSNACICS
   The CICS transaction invocation stored procedure, DSNACICS, invokes CICS transactions from a remote workstation.
DSNAEXP
   The DB2 EXPLAIN stored procedure, DSNAEXP, invokes the EXPLAIN function on an SQL statement without requiring you to have the authorization to execute that SQL statement. The DSNAEXP stored procedure replaces the previous DSN8EXP stored procedure, beginning in Version 8. DSN8EXP handles SQL statements of up to 32,700 bytes in length. DSNAEXP can handle longer statements.
DSNAHVPM
   The DSNAHVPM stored procedure is used by Optimization Service Center for DB2 for z/OS to convert host variables in a static SQL statement to typed parameter markers.
DSNAIMS
   The IMS transactions stored procedure, DSNAIMS, invokes IMS transactions and commands, without requiring a DB2 subsystem to maintain its own connection to IMS.
DSNAIMS2
   The IMS transactions stored procedure 2, DSNAIMS2, performs the same function as DSNAIMS, except that DSNAIMS2 also includes multi-segment input support for IMS transactions.
Table 131. DB2-supplied stored procedures (continued)

DSNLEUSR
   The SYSIBM.USERNAMES encryption stored procedure, DSNLEUSR, stores encrypted values in the NEWAUTHID and PASSWORD fields of the SYSIBM.USERNAMES catalog table.
DSNTBIND
   The DSNTBIND stored procedure binds Java stored procedures.
DSNTPSMP
   The DB2 for z/OS SQL procedure processor, DSNTPSMP, is a REXX stored procedure that prepares external SQL procedures for execution.
DSNUTILS
   The utilities stored procedure for EBCDIC input, DSNUTILS, invokes DB2 utilities from a local or remote client program. This stored procedure accepts utility control statements that are encoded in EBCDIC.
DSNUTILU
   The utilities stored procedure for Unicode input, DSNUTILU, invokes DB2 utilities from a local or remote client program. This stored procedure accepts utility control statements that are encoded in Unicode.
DSNWSPM
   The DSNWSPM stored procedure formats IFCID 148 records.
DSNWZP
   The subsystem parameter stored procedure, DSNWZP, is used by the DB2-supplied stored procedure WLM_REFRESH.
GET_CONFIG
   The GET_CONFIG stored procedure is a common SQL API stored procedure. It returns information about the data server configuration, including information about the following items:
   v the data sharing group
   v the DB2 subsystem parameters
   v the DDF status and configuration
   v the connected DB2 subsystem
   v the RLF tables
   v the active log data sets
   v the last DB2 restart
   This stored procedure is used primarily by DB2 tools.
GET_MESSAGE
   The GET_MESSAGE stored procedure is a common SQL API stored procedure. It returns the short message text for an SQL code. This stored procedure is used primarily by DB2 tools.
GET_SYSTEM_INFO
   The GET_SYSTEM_INFO stored procedure is a common SQL API stored procedure. It returns system information, including information about the following items:
   v operating system
   v product information
   v PTF level of each DB2 module
   v the SMP/E APPLY status of the requested SYSMOD
   v WLM classification rules that apply to the DB2 workload for subsystem types DB2 and DDF
   This stored procedure is used primarily by DB2 tools.
SQLJ.ALTER_JAVA_PATH
   The SQLJ.ALTER_JAVA_PATH stored procedure specifies the class resolution path that the JVM searches to resolve class references. This action is needed if a JAR that you have installed refers to classes in other installed JARs.
SQLJ.DB2_INSTALL_JAR
   The SQLJ.DB2_INSTALL_JAR stored procedure installs a set of Java classes into a local or remote catalog.
Table 131. DB2-supplied stored procedures (continued)

SQLJ.DB2_REMOVE_JAR
   The SQLJ.DB2_REMOVE_JAR stored procedure removes a Java JAR file and its classes from a local or remote catalog.
SQLJ.DB2_REPLACE_JAR
   The SQLJ.DB2_REPLACE_JAR stored procedure replaces a previously installed JAR file in a local or remote catalog.
SQLJ.DB2_UPDATEJARINFO
   The SQLJ.DB2_UPDATEJARINFO stored procedure inserts class, class source, and associated options for a previously installed JAR file in a local or remote catalog.
SQLJ.INSTALL_JAR
   The SQLJ.INSTALL_JAR stored procedure installs a set of Java classes into the current SQL catalog and schema.
SQLJ.REMOVE_JAR
   The SQLJ.REMOVE_JAR stored procedure removes a Java JAR file and its classes from a specified, local catalog.
SQLJ.REPLACE_JAR
   The SQLJ.REPLACE_JAR stored procedure replaces a previously installed JAR file in a local catalog.
WLM_REFRESH
   The WLM environment refresh stored procedure, WLM_REFRESH, refreshes a WLM environment from a remote workstation.
WLM_SET_CLIENT_INFO
   The WLM_SET_CLIENT_INFO stored procedure sets client information that is associated with the current connection at the DB2 server.
XDBDECOMPXML
   Deprecated: The XML decomposition stored procedure, XDBDECOMPXML, extracts values from serialized XML data and populates relational tables with the values. The XDBDECOMPXML stored procedure uses an XML schema, which contains annotations that indicate which columns and tables are to be used to store the decomposed XML values.
XSR_ADDSCHEMADOC
   The add XML schema document stored procedure, XSR_ADDSCHEMADOC, adds every XML schema other than the primary XML schema document to the XSR.
XSR_COMPLETE
   The XML schema registration completion stored procedure, XSR_COMPLETE, is the final stored procedure to be called as part of the XML schema registration process. The XML schema registration process registers XML schemas with the XSR.
XSR_REGISTER
   The XML schema registration stored procedure, XSR_REGISTER, is the first stored procedure to be called as part of the XML schema registration process. The XML schema registration process registers XML schemas with the XSR.
XSR_REMOVE
   The XML schema removal stored procedure, XSR_REMOVE, removes all components of an XML schema.
Table 132. Deprecated MQ XML stored procedures (continued)

DXXMQSHRED
   DXXMQSHRED performs the following actions:
   v returns a message that contains an XML document from an MQ message queue
   v decomposes the document
   v stores the data in DB2 tables that are specified in a document access definition (DAD) file
   DXXMQSHRED does not require an enabled XML collection.
DXXMQINSERTCLOB
   DXXMQINSERTCLOB performs the following actions:
   v returns a message that contains an XML document from an MQ message queue
   v decomposes the document
   v stores the data in DB2 tables that are specified by an enabled XML collection
   DXXMQINSERTCLOB is intended for an XML document with a length of up to 1 MB.
DXXMQSHREDCLOB
   DXXMQSHREDCLOB performs the following actions:
   v returns a message that contains an XML document from an MQ message queue
   v decomposes the document
   v stores the data in DB2 tables that are specified in a document access definition (DAD) file
   DXXMQSHREDCLOB does not require an enabled XML collection. DXXMQSHREDCLOB is intended for an XML document with a length of up to 1 MB.
DXXMQINSERTALL
   DXXMQINSERTALL performs the following actions:
   v returns messages that contain XML documents from an MQ message queue
   v decomposes the documents
   v stores the data in DB2 tables that are specified by an enabled XML collection
   DXXMQINSERTALL is intended for XML documents with a length of up to 3 KB.
DXXMQSHREDALL
   DXXMQSHREDALL performs the following actions:
   v returns messages that contain XML documents from an MQ message queue
   v decomposes the documents
   v stores the data in DB2 tables that are specified in a document access definition (DAD) file
   DXXMQSHREDALL does not require an enabled XML collection. DXXMQSHREDALL is intended for XML documents with a length of up to 3 KB.
Table 132. Deprecated MQ XML stored procedures (continued)

DXXMQSHREDALLCLOB
   DXXMQSHREDALLCLOB performs the following actions:
   v returns messages that contain XML documents from an MQ message queue
   v decomposes the documents
   v stores the data in DB2 tables that are specified in a document access definition (DAD) file
   DXXMQSHREDALLCLOB does not require an enabled XML collection. DXXMQSHREDALLCLOB is intended for XML documents with a length of up to 1 MB.
DXXMQINSERTALLCLOB
   DXXMQINSERTALLCLOB performs the following actions:
   v returns messages that contain XML documents from an MQ message queue
   v decomposes the documents
   v stores the data in DB2 tables that are specified by an enabled XML collection
   DXXMQINSERTALLCLOB is intended for XML documents with a length of up to 1 MB.
DXXMQGEN
   DXXMQGEN performs the following actions:
   v constructs XML documents from data that is stored in the DB2 tables that are specified in a document access definition (DAD) file
   v sends the XML documents to an MQ message queue
   DXXMQGEN is intended for XML documents with a length of up to 3 KB.
DXXMQRETRIEVE
   DXXMQRETRIEVE performs the following actions:
   v constructs XML documents from data that is stored in the DB2 tables that are specified in an enabled XML collection
   v sends the XML documents to an MQ message queue
   DXXMQRETRIEVE is intended for XML documents with a length of up to 3 KB.
DXXMQGENCLOB
   DXXMQGENCLOB performs the following actions:
   v constructs XML documents from data that is stored in the DB2 tables that are specified in a document access definition (DAD) file
   v sends the XML documents to an MQ message queue
   DXXMQGENCLOB is intended for XML documents with a length of up to 32 KB.
DXXMQRETRIEVECLOB
   DXXMQRETRIEVECLOB performs the following actions:
   v constructs XML documents from data that is stored in the DB2 tables that are specified in an enabled XML collection
   v sends the XML documents to an MQ message queue
   DXXMQRETRIEVECLOB is intended for XML documents with a length of up to 32 KB.
Related tasks: Enabling DB2-supplied routines (DB2 Installation and Migration) Related reference: Source code for activating DB2-supplied stored procedures (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond) Related information: Stored procedures for administration (DB2 Administration Guide)
998
   The extended MCS console for DSNTWR posted an alert. See message DSNT534I for more information.
999
   The operating system denied an authorized WLM_REFRESH request. See message DSNT545I for more information.
For a complete example of setting up access to an SAF profile and calling WLM_REFRESH, see job DSNTEJ6W, which is in data set prefix.SDSNSAMP.
Related information:
Controlling Extended MCS Consoles Using RACF (z/OS MVS Planning: Operations)
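For quick orientation, a call to WLM_REFRESH generally has the following shape. The environment name, subsystem ID, and host variables shown here are assumptions for illustration, and the parameter list is not reproduced from this information; job DSNTEJ6W remains the complete, authoritative example:

CALL SYSPROC.WLM_REFRESH('WLMENV1', 'DB2A', :status_message, :return_code);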
Environment
WLM_SET_CLIENT_INFO runs in a WLM-established stored procedures address space.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses: v The EXECUTE privilege on the package for DSNADMSI v Ownership of the package
Syntax
CALL WLM_SET_CLIENT_INFO ( client_userid, client_wrkstnname,
                           client_applname, client_acctstr )

NULL can be specified in place of a value for any of the parameters.
Procedure parameters
client_userid
   An input argument of type VARCHAR(255) that specifies the user ID for the client. If NULL is specified, the value remains unchanged. If an empty string ('') is specified, the user ID for the client is reset to the default value. If the value specified exceeds 16 bytes, it is truncated to 16 bytes. If the value specified is less than 16 bytes, it is padded on the right with blanks to a length of 16 bytes.
client_wrkstnname
   An input argument of type VARCHAR(255) that specifies the workstation name for the client. If NULL is specified, the value remains unchanged. If an empty string ('') is specified, the workstation name for the client is reset to the default value. If the value specified exceeds 18 bytes, it is truncated to 18 bytes. If the value specified is less than 18 bytes, it is padded on the right with blanks to a length of 18 bytes.
client_applname
   An input argument of type VARCHAR(255) that specifies the application name for the client. If NULL is specified, the value remains unchanged. If an empty string ('') is specified, the application name for the client is reset to the default value. If the value specified exceeds 32 bytes, it is truncated to 32 bytes. If the value specified is less than 32 bytes, it is padded on the right with blanks to a length of 32 bytes.
client_acctstr
   An input argument of type VARCHAR(255) that specifies the accounting string for the client. If NULL is specified, the value remains unchanged. If an empty string ('') is specified, the accounting string for the client is reset to the default value. If the requester is DB2 for z/OS, and the value that is specified exceeds 142 bytes, it is truncated to 142 bytes. Otherwise, if the value specified exceeds 200 bytes, it is truncated to 200 bytes.
Examples
Set the user ID, workstation name, application name, and accounting string for the client.
strcpy(user_id, "db2user");
strcpy(wkstn_name, "mywkstn");
strcpy(appl_name, "db2bp.exe");
strcpy(acct_str, "myacctstr");
iuser_id = 0;
iwkstn_name = 0;
iappl_name = 0;
iacct_str = 0;
EXEC SQL CALL SYSPROC.WLM_SET_CLIENT_INFO(:user_id:iuser_id,
              :wkstn_name:iwkstn_name, :appl_name:iappl_name,
              :acct_str:iacct_str);
Set the user ID to db2user for the client without setting the other client attributes.
strcpy(user_id, "db2user");
iuser_id = 0;
iwkstn_name = -1;
iappl_name = -1;
iacct_str = -1;
EXEC SQL CALL SYSPROC.WLM_SET_CLIENT_INFO(:user_id:iuser_id,
              :wkstn_name:iwkstn_name, :appl_name:iappl_name,
              :acct_str:iacct_str);
Reset the user ID for the client to blank without modifying the values of the other client attributes.
strcpy(user_id, "");
iuser_id = 0;
iwkstn_name = -1;
iappl_name = -1;
iacct_str = -1;
EXEC SQL CALL SYSPROC.WLM_SET_CLIENT_INFO(:user_id:iuser_id,
              :wkstn_name:iwkstn_name, :appl_name:iappl_name,
              :acct_str:iacct_str);
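The same procedure can also be invoked without host variables, for example from an interactive SQL interface. The following statements are a minimal sketch using literal values; as described above, passing NULL leaves the corresponding attribute unchanged:

CALL SYSPROC.WLM_SET_CLIENT_INFO('db2user', 'mywkstn', 'db2bp.exe', 'myacctstr');
CALL SYSPROC.WLM_SET_CLIENT_INFO('db2user', NULL, NULL, NULL);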
Environment
DSNACICS runs in a WLM-established stored procedure address space and uses the Resource Recovery Services attachment facility to connect to DB2. If you use CICS Transaction Server for OS/390 Version 1 Release 3 or later, you can register your CICS system as a resource manager with recoverable resource management services (RRMS). When you do that, changes to DB2 databases that are made by the program that calls DSNACICS and the CICS server program that DSNACICS invokes are in the same two-phase commit scope. This means that
when the calling program performs an SQL COMMIT or ROLLBACK, DB2 and RRS inform CICS about the COMMIT or ROLLBACK. If the CICS server program that DSNACICS invokes accesses DB2 resources, the server program runs under a separate unit of work from the original unit of work that calls the stored procedure. This means that the CICS server program might deadlock with locks that the client program acquires.
Authorization
To execute the CALL statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges: v The EXECUTE privilege on stored procedure DSNACICS v Ownership of the stored procedure v SYSADM authority The CICS server program that DSNACICS calls runs under the same user ID as DSNACICS. That user ID depends on the SECURITY parameter that you specify when you define DSNACICS. The DSNACICS caller also needs authorization from an external security system, such as RACF, to use CICS resources.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure. Because the linkage convention for DSNACICS is GENERAL WITH NULLS, if you pass parameters in host variables, you need to include a null indicator with every host variable. Null indicators for input host variables must be initialized before you execute the CALL statement.
CALL DSNACICS ( parm-level, pgm-name, CICS-applid, CICS-level,
                connect-type, netname, mirror-trans, COMMAREA,
                COMMAREA-total-len, sync-opts, return-code, msg-area )

NULL can be specified in place of a value for each of the input parameters.
Option descriptions
parm-level Specifies the level of the parameter list that is supplied to the stored procedure. This is an input parameter of type INTEGER. The value must be 1. pgm-name Specifies the name of the CICS program that DSNACICS invokes. This is the name of the program that the CICS mirror transaction calls, not the CICS transaction name.
This is an input parameter of type CHAR(8). CICS-applid Specifies the applid of the CICS system to which DSNACICS connects. This is an input parameter of type CHAR(8). CICS-level Specifies the level of the target CICS subsystem:
1   The CICS subsystem is CICS for MVS/ESA Version 4 Release 1, CICS Transaction Server for OS/390 Version 1 Release 1, or CICS Transaction Server for OS/390 Version 1 Release 2.
2   The CICS subsystem is CICS Transaction Server for OS/390 Version 1 Release 3 or later.
This is an input parameter of type INTEGER. connect-type Specifies whether the CICS connection is generic or specific. Possible values are GENERIC or SPECIFIC. This is an input parameter of type CHAR(8). netname If the value of connection-type is SPECIFIC, specifies the name of the specific connection that is to be used. This value is ignored if the value of connection-type is GENERIC. This is an input parameter of type CHAR(8). mirror-trans Specifies the name of the CICS mirror transaction to invoke. This mirror transaction calls the CICS server program that is specified in the pgm-name parameter. mirror-trans must be defined to the CICS server region, and the CICS resource definition for mirror-trans must specify DFHMIRS as the program that is associated with the transaction. If this parameter contains blanks, DSNACICS passes a mirror transaction parameter value of null to the CICS EXCI interface. This allows an installation to override the transaction name in various CICS user-replaceable modules. If a CICS user exit routine does not specify a value for the mirror transaction name, CICS invokes CICS-supplied default mirror transaction CSMI. This is an input parameter of type CHAR(4). COMMAREA Specifies the communication area (COMMAREA) that is used to pass data between the DSNACICS caller and the CICS server program that DSNACICS calls. This is an input/output parameter of type VARCHAR(32704). In the length field of this parameter, specify the number of bytes that DSNACICS sends to the CICS server program. commarea-total-len Specifies the total length of the COMMAREA that the server program needs. This is an input parameter of type INTEGER. This length must be greater than or equal to the value that you specify in the length field of the COMMAREA parameter and less than or equal to 32704. When the CICS server program completes, DSNACICS passes the server program's entire COMMAREA, which is commarea-total-len bytes in length, to the stored procedure caller.
sync-opts Specifies whether the calling program controls resource recovery, using two-phase commit protocols that are supported by RRS. Possible values are:
1   The client program controls commit processing. The CICS server region does not perform a syncpoint when the server program returns control to CICS. Also, the server program cannot take any explicit syncpoints. Doing so causes the server program to abnormally terminate.
2   The target CICS server region takes a syncpoint on successful completion of the server program. If this value is specified, the server program can take explicit syncpoints.
When CICS has been set up to be an RRS resource manager, the client application can control commit processing using SQL COMMIT requests. DB2 for z/OS ensures that CICS is notified to commit any resources that the CICS server program modifies during two-phase commit processing. When CICS has not been set up to be an RRS resource manager, CICS forces syncpoint processing of all CICS resources at completion of the CICS server program. This commit processing is not coordinated with the commit processing of the client program. This option is ignored when CICS-level is 1. This is an input parameter of type INTEGER. return-code Return code from the stored procedure. Possible values are:
0    The call completed successfully.
12   The request to run the CICS server program failed. The msg-area parameter contains messages that describe the error.
This is an output parameter of type INTEGER. msg-area Contains messages if an error occurs during stored procedure execution. The first messages in this area are generated by the stored procedure. Messages that are generated by CICS or the DSNACICX user exit routine might follow the first messages. The messages appear as a series of concatenated, viewable text strings. This is an output parameter of type VARCHAR(500).
Example
The following PL/I example shows the variable declarations and SQL CALL statement for invoking the CICS transaction that is associated with program CICSPGM1.
/***********************/
/* DSNACICS PARAMETERS */
/***********************/
DECLARE PARM_LEVEL          BIN FIXED(31);
DECLARE PGM_NAME            CHAR(8);
DECLARE CICS_APPLID         CHAR(8);
DECLARE CICS_LEVEL          BIN FIXED(31);
DECLARE CONNECT_TYPE        CHAR(8);
DECLARE NETNAME             CHAR(8);
DECLARE MIRROR_TRANS        CHAR(4);
DECLARE COMMAREA_TOTAL_LEN  BIN FIXED(31);
DECLARE SYNC_OPTS           BIN FIXED(31);
DECLARE RET_CODE            BIN FIXED(31);
DECLARE MSG_AREA            CHAR(500) VARYING;
DECLARE 1 COMMAREA BASED(P1),
          3 COMMAREA_LEN     BIN FIXED(15),
          3 COMMAREA_INPUT   CHAR(30),
          3 COMMAREA_OUTPUT  CHAR(100);
/***********************************************/
/* INDICATOR VARIABLES FOR DSNACICS PARAMETERS */
/***********************************************/
DECLARE 1 IND_VARS,
          3 IND_PARM_LEVEL          BIN FIXED(15),
          3 IND_PGM_NAME            BIN FIXED(15),
          3 IND_CICS_APPLID         BIN FIXED(15),
          3 IND_CICS_LEVEL          BIN FIXED(15),
          3 IND_CONNECT_TYPE        BIN FIXED(15),
          3 IND_NETNAME             BIN FIXED(15),
          3 IND_MIRROR_TRANS        BIN FIXED(15),
          3 IND_COMMAREA            BIN FIXED(15),
          3 IND_COMMAREA_TOTAL_LEN  BIN FIXED(15),
          3 IND_SYNC_OPTS           BIN FIXED(15),
          3 IND_RETCODE             BIN FIXED(15),
          3 IND_MSG_AREA            BIN FIXED(15);
/**************************/
/* LOCAL COPY OF COMMAREA */
/**************************/
DECLARE P1 POINTER;
DECLARE COMMAREA_STG CHAR(130) VARYING;
/**************************************************************/
/* ASSIGN VALUES TO INPUT PARAMETERS PARM_LEVEL, PGM_NAME,    */
/* MIRROR_TRANS, COMMAREA, COMMAREA_TOTAL_LEN, AND SYNC_OPTS. */
/* SET THE OTHER INPUT PARAMETERS TO NULL. THE DSNACICX       */
/* USER EXIT MUST ASSIGN VALUES FOR THOSE PARAMETERS.         */
/**************************************************************/
PARM_LEVEL = 1;
IND_PARM_LEVEL = 0;
PGM_NAME = 'CICSPGM1';
IND_PGM_NAME = 0;
MIRROR_TRANS = 'MIRT';
IND_MIRROR_TRANS = 0;
P1 = ADDR(COMMAREA_STG);
COMMAREA_INPUT = 'THIS IS THE INPUT FOR CICSPGM1';
COMMAREA_OUTPUT = ' ';
COMMAREA_LEN = LENGTH(COMMAREA_INPUT);
IND_COMMAREA = 0;
COMMAREA_TOTAL_LEN = COMMAREA_LEN + LENGTH(COMMAREA_OUTPUT);
IND_COMMAREA_TOTAL_LEN = 0;
SYNC_OPTS = 1;
IND_SYNC_OPTS = 0;
IND_CICS_APPLID = -1;
IND_CICS_LEVEL = -1;
IND_CONNECT_TYPE = -1;
IND_NETNAME = -1;
/*****************************************/
/* INITIALIZE OUTPUT PARAMETERS TO NULL. */
/*****************************************/
IND_RETCODE = -1;
IND_MSG_AREA = -1;
/*****************************************/
/* CALL DSNACICS TO INVOKE CICSPGM1.     */
/*****************************************/
EXEC SQL
 CALL SYSPROC.DSNACICS(:PARM_LEVEL         :IND_PARM_LEVEL,
                       :PGM_NAME           :IND_PGM_NAME,
                       :CICS_APPLID        :IND_CICS_APPLID,
                       :CICS_LEVEL         :IND_CICS_LEVEL,
                       :CONNECT_TYPE       :IND_CONNECT_TYPE,
                       :NETNAME            :IND_NETNAME,
                       :MIRROR_TRANS       :IND_MIRROR_TRANS,
                       :COMMAREA_STG       :IND_COMMAREA,
                       :COMMAREA_TOTAL_LEN :IND_COMMAREA_TOTAL_LEN,
                       :SYNC_OPTS          :IND_SYNC_OPTS,
                       :RET_CODE           :IND_RETCODE,
                       :MSG_AREA           :IND_MSG_AREA);
Output
DSNACICS places the return code from DSNACICS execution in the return-code parameter. If the value of the return code is non-zero, DSNACICS puts its own error messages and any error messages that are generated by CICS and the DSNACICX user exit routine in the msg-area parameter. The COMMAREA parameter contains the COMMAREA for the CICS server program that DSNACICS calls. The COMMAREA parameter has a VARCHAR type. Therefore, if the server program puts data other than character data in the COMMAREA, that data can become corrupted by code page translation as it is passed to the caller. To avoid code page translation, you can change the COMMAREA parameter in the CREATE PROCEDURE statement for DSNACICS to VARCHAR(32704) FOR BIT DATA. However, if you do so, the client program might need to do code page translation on any character data in the COMMAREA to make it readable.
Restrictions
Because DSNACICS uses the distributed program link (DPL) function to invoke CICS server programs, server programs that you invoke through DSNACICS can contain only the CICS API commands that the DPL function supports. DSNACICS does not propagate the transaction identifier (XID) of the thread. The stored procedure runs under a new private context rather than under the native context of the task that called it.
Debugging
If you receive errors when you call DSNACICS, ask your system administrator to add a DSNDUMP DD statement in the startup procedure for the address space in which DSNACICS runs. The DSNDUMP DD statement causes DB2 to generate an
SVC dump whenever DSNACICS issues an error message. Related information: Accessing CICS systems through stored procedure DSNACICS (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond) The API commands (CICS Transaction Server for z/OS)
General considerations
The DSNACICX exit routine must follow these rules:
v It can be written in assembler, COBOL, PL/I, or C.
v It must follow the Language Environment calling linkage when the caller is an assembler language program.
v The load module for DSNACICX must reside in an authorized program library that is in the STEPLIB concatenation of the stored procedure address space startup procedure.
  You can replace the default DSNACICX in the prefix.SDSNLOAD library, or you can put the DSNACICX load module in a library that is ahead of prefix.SDSNLOAD in the STEPLIB concatenation. It is recommended that you put DSNACICX in the prefix.SDSNEXIT library. Sample installation job DSNTIJEX contains JCL for assembling and link-editing the sample source code for DSNACICX into prefix.SDSNEXIT. You need to modify the JCL for the libraries and the compiler that you are using.
v The load module must be named DSNACICX.
v The exit routine must save and restore the caller's registers. Only the contents of register 15 can be modified.
v It must be written to be reentrant and link-edited as reentrant.
v It must be written and link-edited to execute as AMODE(31),RMODE(ANY).
v DSNACICX can contain SQL statements. However, if it does, you need to change the DSNACICS procedure definition to reflect the appropriate SQL access level for the types of SQL statements that you use in the user exit routine.
Parameter list
At invocation, registers are set as described in the following table.

Table 133. Registers at invocation of DSNACICX

Register   Contains
1          Address of pointer to the exit parameter list (XPL).
13         Address of the register save area.
14         Return address.
15         Address of entry point of exit routine.
The following table shows the contents of the DSNACICX exit parameter list, XPL. Member DSNDXPL in data set prefix.SDSNMACS contains an assembler language mapping macro for XPL. Sample exit routine DSNASCIO in data set prefix.SDSNSAMP includes a COBOL mapping macro for XPL.
Table 134. Contents of the XPL exit parameter list

Name              Hex offset   Data type           Description
XPL_EYEC          0            Character, 4 bytes  Eye-catcher: 'XPL '
XPL_LEN           4            Character, 4 bytes  Length of the exit parameter list
XPL_LEVEL         8            4-byte integer      Level of the parameter list
XPL_PGMNAME       C            Character, 8 bytes  Name of the CICS server program
XPL_CICSAPPLID    14           Character, 8 bytes  CICS VTAM applid
XPL_CICSLEVEL     1C           4-byte integer      Level of CICS code
XPL_CONNECTTYPE   20           Character, 8 bytes  Specific or generic connection to CICS
XPL_NETNAME       28           Character, 8 bytes  Name of the specific connection to CICS
XPL_MIRRORTRAN    30           Character, 8 bytes  Name of the mirror transaction that invokes the CICS server program
XPL_COMMAREAPTR   38           Address, 4 bytes    Address of the COMMAREA (see note 1)
Table 134. Contents of the XPL exit parameter list (continued)

Name              Hex offset   Data type           Description
XPL_COMMINLEN     3C           4-byte integer      Length of the COMMAREA that is passed to the server program (see note 2)
XPL_COMMTOTLEN    40           4-byte integer      Total length of the COMMAREA that is returned to the caller; corresponds to the commarea-total-len parameter

The fields at hex offsets 44, 48, 4C, and 50 hold the syncpoint control option, the return code from the exit routine, the length of the output message area, and the output message area (see note 3).

Notes:
1. The area that this field points to is specified by DSNACICS parameter COMMAREA. This area does not include the length bytes.
2. This is the same value that the DSNACICS caller specifies in the length bytes of the COMMAREA parameter.
3. Although the total length of msg-area is 500 bytes, DSNACICX can use only 256 bytes of that area.
Environment
DSNAIMS runs in a WLM-established stored procedures address space. DSNAIMS requires DB2 with RRSAF enabled and IMS version 7 or later with OTMA Callable Interface enabled. To use a two-phase commit process, you must have IMS Version 8 with UQ70789 or later.
Authorization
To set up and run DSNAIMS, you must be authorized to perform the following steps:
1. Use the job DSNTIJIM to issue the CREATE PROCEDURE statement for DSNAIMS and to grant the execution of DSNAIMS to PUBLIC. DSNTIJIM is provided in the SDSNSAMP data set. You need to customize DSNTIJIM to fit the parameters of your system. 2. Ensure that OTMA C/I is initialized.
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
CALL DSNAIMS ( dsnaims-function, dsnaims-2pc, xcf-group-name, xcf-ims-name,
               racf-userid, racf-groupid, ims-lterm, ims-modname,
               ims-tran-name, ims-data-in, ims-data-out, otma-tpipe-name,
               otma-dru-name, user-data-in, user-data-out,
               status-message, return-code )

NULL can be specified in place of a value for the optional input parameters.
Option descriptions
dsnaims-function
   A string that indicates whether the transaction is send-only, receive-only, or send-and-receive. Possible values are:
   SENDRECV
      Sends and receives IMS data. SENDRECV invokes an IMS transaction or command and returns the result to the caller. The transaction can be an IMS full function or a fast path. SENDRECV does not support multiple iterations of a conversational transaction.
   SEND
      Sends IMS data. SEND invokes an IMS transaction or command, but does not receive IMS data. If result data exists, it can be retrieved with the RECEIVE function. A send-only transaction cannot be an IMS fast path transaction or a conversational transaction.
   RECEIVE
      Receives IMS data. The data can be the result of a transaction or command initiated by the SEND function or an unsolicited output message from an IMS application. The RECEIVE function does not initiate an IMS transaction or command.
dsnaims-2pc
   Specifies whether to use a two-phase commit process to perform the transaction syncpoint service. Possible values are Y or N. For N, commits and rollbacks that are issued by the IMS transaction do not affect commit and rollback processing in the DB2 application that invokes DSNAIMS. Furthermore, IMS resources are not affected by commits and rollbacks that are issued by the calling DB2 application. If you specify Y, you must also specify SENDRECV. To use a two-phase commit process, you must set the IMS control region parameter (RRS) to Y.
This parameter is optional. The default is N. xcf-group-name Specifies the XCF group name that the IMS OTMA joins. You can obtain this name by viewing the GRNAME parameter in IMS PROCLIB member DFSPBxxx or by using the IMS command /DISPLAY OTMA. xcf-ims-name Specifies the XCF member name that IMS uses for the XCF group. If IMS is not using the XRF or RSR feature, you can obtain the XCF member name from the OTMANM parameter in IMS PROCLIB member DFSPBxxx. If IMS is using the XRF or RSR feature, you can obtain the XCF member name from the USERVAR parameter in IMS PROCLIB member DFSPBxxx. racf-userid Specifies the RACF user ID that is used for IMS to perform the transaction or command authorization checking. This parameter is required if DSNAIMS is running APF-authorized. If DSNAIMS is running unauthorized, this parameter is ignored and the EXTERNAL SECURITY setting for the DSNAIMS stored procedure definition determines the user ID that is used by IMS. racf-groupid Specifies the RACF group ID that is used for IMS to perform the transaction or command authorization checking. This field is used for stored procedures that are APF-authorized. It is ignored for other stored procedures. ims-lterm Specifies an IMS LTERM name that is used to override the LTERM name in the I/O program communication block of the IMS application program. This field is used as an input and an output field: v For SENDRECV, the value is sent to IMS on input and can be updated by IMS on output. v For SEND, the parameter is IN only. v For RECEIVE, the parameter is OUT only. An empty or NULL value tells IMS to ignore the parameter. ims-modname Specifies the formatting map name that is used by the server to map output data streams, such as 3270 streams. Although this invocation does not have IMS MFS support, the input MODNAME can be used as the map name to define the output data stream. This name is an 8-byte message output descriptor name that is placed in the I/O program communication block. When the message is inserted, IMS places this name in the message prefix with the map name in the program communication block of the IMS application program. For SENDRECV, the value is sent to IMS on input, and can be updated on output. For SEND, the parameter is IN only. For RECEIVE it is OUT only. IMS ignores the parameter when it is an empty or NULL value. ims-tran-name Specifies the name of an IMS transaction or command that is sent to IMS. If the IMS command is longer than eight characters, specify the first eight characters (including the "/" of the command). Specify the remaining characters of the command in the ims-tran-name parameter. If you use an empty or NULL value, you must specify the full transaction name or command in the ims-data-in parameter.
ims-data-in
   Specifies the data that is sent to IMS. This parameter is required in each of the following cases:
   v Input data is required for IMS
   v No transaction name or command is passed in ims-tran-name
   v The command is longer than eight characters
   This parameter is ignored for RECEIVE functions.
ims-data-out
   Data returned after successful completion of the transaction. This parameter is required for SENDRECV and RECEIVE functions. The parameter is ignored for SEND functions.
   The length of ims-data-out is 32,000 bytes. If the data that is returned from IMS is greater than the length of ims-data-out, the data will be truncated.
otma-tpipe-name
   Specifies an 8-byte user-defined communication session name that IMS uses for the input and output data for the transaction or the command in a SEND or a RECEIVE function. If the otma_tpipe_name parameter is used for a SEND function to generate an IMS output message, the same otma_tpipe_name must be used to retrieve output data for the subsequent RECEIVE function.
otma-dru-name
   Specifies the name of an IMS user-defined exit routine, OTMA destination resolution user exit routine, if it is used. This IMS exit routine can format part of the output prefix and can determine the output destination for an IMS ALT_PCB output. If an empty or null value is passed, IMS ignores this parameter.
user-data-in
   This optional parameter contains any data that is to be included in the IMS message prefix, so that the data can be accessed by IMS OTMA user exit routines (DFSYIOE0 and DFSYDRU0) and can be tracked by IMS log records. IMS applications that run in dependent regions do not access this data. The specified user data is not included in the output message prefix. You can use this parameter to store input and output correlator tokens or other information. This parameter is ignored for RECEIVE functions.
user-data-out
   On output, this field contains the user-data-in in the IMS output prefix. IMS user exit routines (DFSYIOE0 and DFSYDRU0) can also create user-data-out for SENDRECV and RECEIVE functions. The parameter is not updated for SEND functions.
   The length of user-data-out is 1,022 bytes. If the data that is returned from IMS is greater than the length of user-data-out, the data will be truncated.
status-message
   Indicates any error message that is returned from the transaction or command, OTMA, RRS, or DSNAIMS.
return-code
   Indicates the return code that is returned for the transaction or command, OTMA, RRS, or DSNAIMS.
Examples
The following examples show how to call DSNAIMS. Example 1: Sample parameters for executing an IMS command:
CALL SYSPROC.DSNAIMS("SENDRECV", "N", "IMS7GRP", "IMS7TMEM", "IMSCLNM", "", "", "", "", "", "/LOG Hello World.", ims_data_out, "", "", "", user_out, error_message, rc)
Related concepts:
OTMA C/I initialization
Related information:
Accessing IMS databases from DB2 stored procedures (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)

DSNAIMS2 stored procedure
Environment
DSNAIMS2 runs in a WLM-established stored procedures address space. DSNAIMS2 requires DB2 with RRSAF enabled and IMS version 7 or later with OTMA Callable Interface enabled. To use a two-phase commit process, you must have IMS Version 8 with UQ70789 or later.
Authorization
To set up and run DSNAIMS2, you must be authorized to perform the following steps:
1. Use the job DSNTIJI2 to issue the CREATE PROCEDURE statement for DSNAIMS2 and to grant the execution of DSNAIMS2 to PUBLIC. DSNTIJI2 is provided in the SDSNSAMP data set. You need to customize DSNTIJI2 to fit the parameters of your system.
2. Ensure that OTMA C/I is initialized.
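The grant that DSNTIJI2 issues is, in effect, a statement of the following form (a sketch only; the actual job also contains the full CREATE PROCEDURE definition and the system-specific parameters for your installation):

GRANT EXECUTE ON PROCEDURE SYSPROC.DSNAIMS2 TO PUBLIC;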
Syntax
The following syntax diagram shows the SQL CALL statement for invoking this stored procedure:
(The original syntax diagram cannot be reconstructed from the remaining fragments. The CALL SYSPROC.DSNAIMS2 statement takes the parameters that are described under "Option descriptions" below; the fragments show that racf-groupid, ims-modname, ims-data-out, otma-tpipe-name, and otma-data-inseg can be NULL. See Example 1 later in this section for a complete parameter list in calling order.)
Option descriptions
dsnaims-function
   A string that indicates whether the transaction is send-only, receive-only, or send-and-receive. Possible values are:
   SENDRECV
      Sends and receives IMS data. SENDRECV invokes an IMS transaction or command and returns the result to the caller. The transaction can be an IMS full function or a fast path. SENDRECV does not support multiple iterations of a conversational transaction.
   SEND
      Sends IMS data. SEND invokes an IMS transaction or command, but does not receive IMS data. If result data exists, it can be retrieved with the RECEIVE function. A send-only transaction cannot be an IMS fast path transaction or a conversational transaction.
   RECEIVE
      Receives IMS data. The data can be the result of a transaction or command initiated by the SEND function or an unsolicited output message from an IMS application. The RECEIVE function does not initiate an IMS transaction or command.
dsnaims-2pc
   Specifies whether to use a two-phase commit process to perform the transaction syncpoint service. Possible values are Y or N. For N, commits and rollbacks that are issued by the IMS transaction do not affect commit and rollback processing in the DB2 application that invokes DSNAIMS2. Furthermore, IMS resources are not affected by commits and rollbacks that are issued by the calling DB2 application. If you specify Y, you must also specify SENDRECV. To use a two-phase commit process, you must set the IMS control region parameter (RRS) to Y. This parameter is optional. The default is N.
xcf-group-name
   Specifies the XCF group name that the IMS OTMA joins. You can obtain this name by viewing the GRNAME parameter in IMS PROCLIB member DFSPBxxx or by using the IMS command /DISPLAY OTMA.
xcf-ims-name
   Specifies the XCF member name that IMS uses for the XCF group. If IMS is not using the XRF or RSR feature, you can obtain the XCF member name from the OTMANM parameter in IMS PROCLIB member DFSPBxxx. If IMS is using the
XRF or RSR feature, you can obtain the XCF member name from the USERVAR parameter in IMS PROCLIB member DFSPBxxx.
racf-userid
   Specifies the RACF user ID that is used for IMS to perform the transaction or command authorization checking. This parameter is required if DSNAIMS2 is running APF-authorized. If DSNAIMS2 is running unauthorized, this parameter is ignored and the EXTERNAL SECURITY setting for the DSNAIMS2 stored procedure definition determines the user ID that is used by IMS.
racf-groupid
   Specifies the RACF group ID that is used for IMS to perform the transaction or command authorization checking. This field is used for stored procedures that are APF-authorized. It is ignored for other stored procedures.
ims-lterm
   Specifies an IMS LTERM name that is used to override the LTERM name in the I/O program communication block of the IMS application program.
   This field is used as an input and an output field:
   v For SENDRECV, the value is sent to IMS on input and can be updated by IMS on output.
   v For SEND, the parameter is IN only.
   v For RECEIVE, the parameter is OUT only.
   An empty or NULL value tells IMS to ignore the parameter.
ims-modname
   Specifies the formatting map name that is used by the server to map output data streams, such as 3270 streams. Although this invocation does not have IMS MFS support, the input MODNAME can be used as the map name to define the output data stream. This name is an 8-byte message output descriptor name that is placed in the I/O program communication block. When the message is inserted, IMS places this name in the message prefix with the map name in the program communication block of the IMS application program.
   For SENDRECV, the value is sent to IMS on input, and can be updated on output. For SEND, the parameter is IN only. For RECEIVE, it is OUT only. IMS ignores the parameter when it is an empty or NULL value.
ims-tran-name
   Specifies the name of an IMS transaction or command that is sent to IMS. If the IMS command is longer than eight characters, specify the first eight characters (including the "/" of the command), and specify the remaining characters of the command in the ims-data-in parameter. If you use an empty or NULL value, you must specify the full transaction name or command in the ims-data-in parameter.
ims-data-in
   Specifies the data that is sent to IMS. This parameter is required in each of the following cases:
   v Input data is required for IMS
   v No transaction name or command is passed in ims-tran-name
   v The command is longer than eight characters
   This parameter is ignored for RECEIVE functions.
ims-data-out
   Data returned after successful completion of the transaction. This parameter is required for SENDRECV and RECEIVE functions. The parameter is ignored for SEND functions.
   The length of ims-data-out is 32,000 bytes. If the data that is returned from IMS is greater than the length of ims-data-out, the data will be truncated.
otma-tpipe-name
   Specifies an 8-byte user-defined communication session name that IMS uses for the input and output data for the transaction or the command in a SEND or a RECEIVE function. If the otma_tpipe_name parameter is used for a SEND function to generate an IMS output message, the same otma_tpipe_name must be used to retrieve output data for the subsequent RECEIVE function.
otma-dru-name
   Specifies the name of an IMS user-defined exit routine, OTMA destination resolution user exit routine, if it is used. This IMS exit routine can format part of the output prefix and can determine the output destination for an IMS ALT_PCB output. If an empty or null value is passed, IMS ignores this parameter.
user-data-in
   This optional parameter contains any data that is to be included in the IMS message prefix, so that the data can be accessed by IMS OTMA user exit routines (DFSYIOE0 and DFSYDRU0) and can be tracked by IMS log records. IMS applications that run in dependent regions do not access this data. The specified user data is not included in the output message prefix. You can use this parameter to store input and output correlator tokens or other information. This parameter is ignored for RECEIVE functions.
user-data-out
   On output, this field contains the user-data-in in the IMS output prefix. IMS user exit routines (DFSYIOE0 and DFSYDRU0) can also create user-data-out for SENDRECV and RECEIVE functions. The parameter is not updated for SEND functions.
   The length of user-data-out is 1,022 bytes. If the data that is returned from IMS is greater than the length of user-data-out, the data will be truncated.
status-message
   Indicates any error message that is returned from the transaction or command, OTMA, RRS, or DSNAIMS2.
otma-data-inseg
   Specifies the number of segments followed by the lengths of the segments to be sent to IMS. All values should be separated by semicolons. This field is required to send multi-segment input to IMS. For single-segment transactions and commands, set the field to NULL, "0" or "0;".
return-code
   Indicates the return code that is returned for the transaction or command, OTMA, RRS, or DSNAIMS2.
Examples
The following examples show how to call DSNAIMS2. Example 1: Sample parameters for executing a multi-segment IMS transaction:
CALL SYSPROC.DSNAIMS2("SEND","N","IMS7GRP","IMS7TMEM", "IMSCLNM","","","","","", "PART 1ST SEGMENT FROM CI 2ND SEGMENT FROM CI ", ims_data_out,"","","",user_out, error_message, "2;25;20",rc)
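In this example, the otma-data-inseg value "2;25;20" describes the multi-segment input that is passed in the input data parameter. Working through that value against the input string shown above:
2    the number of segments
25   the length of the first segment, "PART 1ST SEGMENT FROM CI " (25 bytes, including the trailing blank)
20   the length of the second segment, "2ND SEGMENT FROM CI " (20 bytes, including the trailing blank)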
Related concepts:
OTMA C/I initialization
Related information:
Accessing IMS databases from DB2 stored procedures (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
advantage of enhancements, including new fields, improved formulas, and the option to select the formula that is used for making recommendations.
PSPI
In particular, DSNACCOR performs the following actions:
v Recommends when you should reorganize, image copy, or update statistics for table spaces or index spaces
v Indicates when a data set has exceeded a specified threshold for the number of extents that it occupies
v Indicates whether objects are in a restricted state
DSNACCOR uses data from the SYSIBM.SYSTABLESPACESTATS and SYSIBM.SYSINDEXSPACESTATS real-time statistics tables to make its recommendations. DSNACCOR provides its recommendations in a result set.
DSNACCOR uses the set of criteria that are shown in "DSNACCOR formulas for recommending actions" on page 835 to evaluate table spaces and index spaces. By default, DSNACCOR evaluates all table spaces and index spaces in the subsystem that have entries in the real-time statistics tables. However, you can override this default through input parameters.
Important information about DSNACCOR recommendations:
v DSNACCOR makes recommendations based on general formulas that require input from the user about the maintenance policies for a subsystem. These recommendations might not be accurate for every installation.
v If the real-time statistics tables contain information for only a small percentage of your DB2 subsystem, the recommendations that DSNACCOR makes might not be accurate for the entire subsystem.
v Before you perform any action that DSNACCOR recommends, ensure that the object for which DSNACCOR makes the recommendation is available, and that the recommended action can be performed on that object. For example, before you can perform an image copy on an index, the index must have the COPY YES attribute.
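As the second point above suggests, the recommendations are only as useful as the real-time statistics coverage. A minimal sanity check, assuming only that you have SELECT authority on the real-time statistics tables, is to count the rows that they contain before you rely on DSNACCOR output:

SELECT COUNT(*) FROM SYSIBM.SYSTABLESPACESTATS;
SELECT COUNT(*) FROM SYSIBM.SYSINDEXSPACESTATS;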
Environment
DSNACCOR must run in a WLM-established stored procedure address space. You should bind the package for DSNACCOR with isolation UR to avoid lock contention. You can find the installation steps for DSNACCOR in job DSNTIJSG.
Authorization required
To execute the CALL DSNACCOR statement, the owner of the package or plan that contains the CALL statement must have one or more of the following privileges on each package that the stored procedure uses:
v The EXECUTE privilege on the package for DSNACCOR
v Ownership of the package
v PACKADM authority for the package collection
v SYSADM authority
The owner of the package or plan that contains the CALL statement must also have:
v SELECT authority on the real-time statistics tables
v SELECT authority on catalog tables
v The DISPLAY system privilege
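For example, the authorities in this list could be granted with statements of the following form; the authorization ID APPDBA is purely illustrative:

GRANT SELECT ON SYSIBM.SYSTABLESPACESTATS TO APPDBA;
GRANT SELECT ON SYSIBM.SYSINDEXSPACESTATS TO APPDBA;
GRANT DISPLAY TO APPDBA;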
Syntax diagram
The following syntax diagram shows the CALL statement for invoking DSNACCOR. Because the linkage convention for DSNACCOR is GENERAL WITH NULLS, if you pass parameters in host variables, you need to include a null indicator with every host variable. Null indicators for input host variables must be initialized before you execute the CALL statement.
(The original syntax diagram cannot be reconstructed from the remaining fragments. The CALL DSNACCOR statement takes the parameters that are described under "Option descriptions" below, in the order shown in the COBOL example later in this section; the fragments show that parameters such as ICType, CatlgSchema, Criteria, CRChangesPct, CRDaySncLastCopy, ICRUpdatedPagesPct, CRIndexSize, RRTInsDelUpdPct, RRTUnclustInsPct, RRTMassDelLimit, RRTIndRefLimit, and RRIMassDelLimit can be NULL.)
Option descriptions
In the following option descriptions, the default value for an input parameter is the value that DSNACCOR uses if you specify a null value.
QueryType
   Specifies the types of actions that DSNACCOR recommends. This field contains one or more of the following values. Each value is enclosed in single quotation marks and separated from other values by a space.
   ALL
      Makes recommendations for all of the following actions.
   RUNSTATS
      Makes a recommendation on whether to perform RUNSTATS.
   REORG
      Makes a recommendation on whether to perform REORG. Choosing this value causes DSNACCOR to process the EXTENTS value also.
   EXTENTS
      Indicates when data sets have exceeded a user-specified extents limit.
   RESTRICT
      Indicates which objects are in a restricted state.
   QueryType is an input parameter of type VARCHAR(40). The default is ALL.
ObjectType
   Specifies the types of objects for which DSNACCOR recommends actions:
   ALL
      Table spaces and index spaces.
   TS
      Table spaces only.
   IX
      Index spaces only.
   ObjectType is an input parameter of type VARCHAR(3). The default is ALL.
ICType
   Specifies the types of image copies for which DSNACCOR is to make recommendations:
   F
      Full image copy.
   I
      Incremental image copy. This value is valid for table spaces only.
   B
      Full image copy or incremental image copy.
   ICType is an input parameter of type VARCHAR(1). The default is B.
StatsSchema
   Specifies the qualifier for the real-time statistics table names. StatsSchema is an input parameter of type VARCHAR(128). The default is SYSIBM.
CatlgSchema
   Specifies the qualifier for DB2 catalog table names. CatlgSchema is an input parameter of type VARCHAR(128). The default is SYSIBM.
LocalSchema
   Specifies the qualifier for the names of tables that DSNACCOR creates. LocalSchema is an input parameter of type VARCHAR(128). The default is DSNACC.
ChkLvl
   Specifies the types of checking that DSNACCOR performs, and indicates whether to include objects that fail those checks in the DSNACCOR recommendations result set. This value is the sum of any combination of the following values:
   0  DSNACCOR performs none of the following actions.
   1  For objects that are listed in the recommendations result set, check the SYSTABLESPACE or SYSINDEXES catalog tables to ensure that those objects have not been deleted. If value 16 is not also chosen, exclude rows for the deleted objects from the recommendations result set.
      DSNACCOR excludes objects from the recommendations result set if those objects are not in the SYSTABLESPACE or SYSINDEXES catalog tables. When this setting is specified, DSNACCOR does not use EXTENTS>ExtentLimit to determine whether a LOB table space should be reorganized.
   2  For index spaces that are listed in the recommendations result set, check the SYSTABLES, SYSTABLESPACE, and SYSINDEXES catalog tables to determine the name of the table space that is associated with each index space. Choosing this value causes DSNACCOR to also check for rows in the recommendations result set for objects that have been deleted but have entries in the real-time statistics tables (value 1). This means that if value 16 is not also chosen, rows for deleted objects are excluded from the recommendations result set.
   4  Check whether rows that are in the DSNACCOR recommendations result set refer to objects that are in the exception table. For recommendations result set rows that have corresponding exception table rows, copy the contents of the QUERYTYPE column of the exception table to the INEXCEPTTABLE column of the recommendations result set.
   8  Check whether objects that have rows in the recommendations result set are restricted. Indicate the restricted status in the OBJECTSTATUS column of the result set. A row is added to the result set for each object that has a restricted state, even if a row for the same object is already included in the result set because utility operations are recommended. So, the result set might contain duplicate rows for the same object when you specify this option.
   16 For objects that are listed in the recommendations result set, check the SYSTABLESPACE or SYSINDEXES catalog tables to ensure that those objects have not been deleted (value 1). In result set rows for deleted objects, specify the word ORPHANED in the OBJECTSTATUS column.
   32 Exclude rows from the DSNACCOR recommendations result set for index spaces for which the related table spaces have been recommended for REORG. Choosing this value causes DSNACCOR to perform the actions for values 1 and 2.
   64 For index spaces that are listed in the DSNACCOR recommendations result set, check whether the related table spaces are listed in the exception table. For recommendations result set rows that have corresponding exception table rows, copy the contents of the QUERYTYPE column of the exception table to the INEXCEPTTABLE column of the recommendations result set.
   ChkLvl is an input parameter of type INTEGER. The default is 7 (values 1+2+4).
Criteria
   Narrows the set of objects for which DSNACCOR makes recommendations. This value is the search condition of an SQL WHERE clause. Criteria is an input parameter of type VARCHAR(4096). The default is that DSNACCOR makes recommendations for all table spaces and index spaces in the subsystem. The search condition can use any column in the result set and wildcards are allowed.
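For example, a Criteria value of the following form (illustrative only) limits the recommendations to table spaces in a single database; DBNAME and OBJECTTYPE are columns of the recommendations result set:

DBNAME = 'DSN8D91A' AND OBJECTTYPE = 'TS'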
Restricted
   A parameter that is reserved for future use. Specify the null value for this parameter. Restricted is an input parameter of type VARCHAR(80).
CRUpdatedPagesPct
   Specifies a criterion for recommending a full image copy on a table space or index space. If the following condition is true for a table space, DSNACCOR recommends an image copy: The total number of distinct updated pages, divided by the total number of preformatted pages (expressed as a percentage) is greater than CRUpdatedPagesPct. See item 2 in Figure 41 on page 836.
   If both of the following conditions are true for an index space, DSNACCOR recommends an image copy:
   v The total number of distinct updated pages, divided by the total number of preformatted pages (expressed as a percentage) is greater than CRUpdatedPagesPct.
   v The number of active pages in the index space or partition is greater than CRIndexSize. See items 2 and 3 in Figure 42 on page 836.
   CRUpdatedPagesPct is an input parameter of type INTEGER. The default is 20.
CRChangesPct
   Specifies a criterion for recommending a full image copy on a table space or index space. If the following condition is true for a table space, DSNACCOR recommends an image copy: The total number of insert, update, and delete operations since the last image copy, divided by the total number of rows or LOBs in a table space or partition (expressed as a percentage) is greater than CRChangesPct. See item 3 in Figure 41 on page 836.
   If both of the following conditions are true for an index space, DSNACCOR recommends an image copy:
   v The total number of insert and delete operations since the last image copy, divided by the total number of entries in the index space or partition (expressed as a percentage) is greater than CRChangesPct.
   v The number of active pages in the index space or partition is greater than CRIndexSize. See items 2 and 4 in Figure 42 on page 836.
   CRChangesPct is an input parameter of type INTEGER. The default is 10.
CRDaySncLastCopy
   Specifies a criterion for recommending a full image copy on a table space or index space. If the number of days since the last image copy is greater than this value, DSNACCOR recommends an image copy. (See item 1 in Figure 41 on page 836 and item 1 in Figure 42 on page 836.) CRDaySncLastCopy is an input parameter of type INTEGER. The default is 7.
ICRUpdatedPagesPct
   Specifies a criterion for recommending an incremental image copy on a table space. If the following condition is true, DSNACCOR recommends an incremental image copy: The number of distinct pages that were updated since the last image copy, divided by the total number of active pages in the table space or partition (expressed as a percentage) is greater than ICRUpdatedPagesPct. (See item 1 in Figure 43 on page 836.) ICRUpdatedPagesPct is an input parameter of type INTEGER. The default is 1.
ICRChangesPct Specifies a criterion for recommending an incremental image copy on a table space. If the following condition is true, DSNACCOR recommends an incremental image copy: The ratio of the number of insert, update, or delete operations since the last image copy, to the total number of rows or LOBs in a table space or partition (expressed as a percentage) is greater than ICRChangesPct. (See item 2 in Figure 43 on page 836.) ICRChangesPct is an input parameter of type INTEGER. The default is 1. CRIndexSize Specifies, when combined with CRUpdatedPagesPct or CRChangesPct, a criterion for recommending a full image copy on an index space. (See items 2, 3, and 4 in Figure 42 on page 836.) CRIndexSize is an input parameter of type INTEGER. The default is 50. RRTInsDelUpdPct Specifies a criterion for recommending that the REORG utility is to be run on a table space. If the following condition is true, DSNACCOR recommends running REORG: The sum of insert, update, and delete operations since the last REORG, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage) is greater than RRTInsDelUpdPct (See item 1 in Figure 44 on page 837.) RRTInsDelUpdPct is an input parameter of type INTEGER. The default is 20. RRTUnclustInsPct Specifies a criterion for recommending that the REORG utility is to be run on a table space. If the following condition is true, DSNACCOR recommends running REORG: The number of unclustered insert operations, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage) is greater than RRTUnclustInsPct. (See item 2 in Figure 44 on page 837.) RRTUnclustInsPct is an input parameter of type INTEGER. The default is 10. RRTDisorgLOBPct Specifies a criterion for recommending that the REORG utility is to be run on a table space. If the following condition is true, DSNACCOR recommends running REORG: The number of imperfectly chunked LOBs, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage) is greater than RRTDisorgLOBPct. (See item 3 in Figure 44 on page 837.) RRTDisorgLOBPct is an input parameter of type INTEGER. The default is 10. RRTMassDelLimit Specifies a criterion for recommending that the REORG utility is to be run on a table space. If one of the following values is greater than RRTMassDelLimit, DSNACCOR recommends running REORG: v The number of mass deletes from a segmented or LOB table space since the last REORG or LOAD REPLACE v The number of dropped tables from a nonsegmented table space since the last REORG or LOAD REPLACE
(See item 5 in Figure 44 on page 837.) RRTMassDelLimit is an input parameter of type INTEGER. The default is 0.
RRTIndRefLimit
   Specifies a criterion for recommending that the REORG utility is to be run on a table space. If the following value is greater than RRTIndRefLimit, DSNACCOR recommends running REORG: The total number of overflow records that were created since the last REORG or LOAD REPLACE, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage) (See item 4 in Figure 44 on page 837.)
   RRTIndRefLimit is an input parameter of type INTEGER. The default is 10.
RRIInsertDeletePct
   Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the following value is greater than RRIInsertDeletePct, DSNACCOR recommends running REORG: The sum of the number of index entries that were inserted and deleted since the last REORG, divided by the total number of index entries in the index space or partition (expressed as a percentage) (See item 1 in Figure 45 on page 837.)
   RRIInsertDeletePct is an input parameter of type INTEGER. The default is 20.
RRIAppendInsertPct
   Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the following value is greater than RRIAppendInsertPct, DSNACCOR recommends running REORG: The number of index entries that were inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE with a key value greater than the maximum key value in the index space or partition, divided by the number of index entries in the index space or partition (expressed as a percentage) (See item 2 in Figure 45 on page 837.)
   RRIAppendInsertPct is an input parameter of type INTEGER. The default is 10.
RRIPseudoDeletePct
   Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the following value is greater than RRIPseudoDeletePct, DSNACCOR recommends running REORG: The number of index entries that were pseudo-deleted since the last REORG, REBUILD INDEX, or LOAD REPLACE, divided by the number of index entries in the index space or partition (expressed as a percentage) (See item 3 in Figure 45 on page 837.)
   RRIPseudoDeletePct is an input parameter of type INTEGER. The default is 10.
RRIMassDelLimit
   Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the number of mass deletes from an index space or partition since the last REORG, REBUILD, or LOAD REPLACE is greater than this value, DSNACCOR recommends running REORG. (See item 4 in Figure 45 on page 837.)
   RRIMassDelLimit is an input parameter of type INTEGER. The default is 0.
RRILeafLimit
   Specifies a criterion for recommending that the REORG utility is to be run on
an index space. If the following value is greater than RRILeafLimit, DSNACCOR recommends running REORG: The number of index page splits that occurred since the last REORG, REBUILD INDEX, or LOAD REPLACE that resulted in a large separation between the parts of the original page, divided by the total number of active pages in the index space or partition (expressed as a percentage) (See item 5 in Figure 45 on page 837.)
   RRILeafLimit is an input parameter of type INTEGER. The default is 10.
RRINumLevelsLimit
   Specifies a criterion for recommending that the REORG utility is to be run on an index space. If the following value is greater than RRINumLevelsLimit, DSNACCOR recommends running REORG: The number of levels in the index tree that were added or removed since the last REORG, REBUILD INDEX, or LOAD REPLACE (See item 6 in Figure 45 on page 837.)
   RRINumLevelsLimit is an input parameter of type INTEGER. The default is 0.
SRTInsDelUpdPct
   Specifies, when combined with SRTInsDelUpdAbs, a criterion for recommending that the RUNSTATS utility is to be run on a table space. If both of the following conditions are true, DSNACCOR recommends running RUNSTATS:
   v The number of insert, update, or delete operations since the last RUNSTATS on a table space or partition, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage) is greater than SRTInsDelUpdPct.
   v The sum of the number of insert, update, and delete operations since the last RUNSTATS on the table space or partition is greater than SRTInsDelUpdAbs.
   (See items 1 and 2 in Figure 46 on page 837.) SRTInsDelUpdPct is an input parameter of type INTEGER. The default is 20.
SRTInsDelUpdAbs
   Specifies, when combined with SRTInsDelUpdPct, a criterion for recommending that the RUNSTATS utility is to be run on a table space. If both of the following conditions are true, DSNACCOR recommends running RUNSTATS:
   v The number of insert, update, and delete operations since the last RUNSTATS on a table space or partition, divided by the total number of rows or LOBs in the table space or partition (expressed as a percentage) is greater than SRTInsDelUpdPct.
   v The sum of the number of insert, update, and delete operations since the last RUNSTATS on the table space or partition is greater than SRTInsDelUpdAbs.
   (See items 1 and 2 in Figure 46 on page 837.) SRTInsDelUpdAbs is an input parameter of type INTEGER. The default is 0.
SRTMassDelLimit
   Specifies a criterion for recommending that the RUNSTATS utility is to be run on a table space. If the following condition is true, DSNACCOR recommends running RUNSTATS:
   v The number of mass deletes from a table space or partition since the last REORG or LOAD REPLACE is greater than SRTMassDelLimit.
   (See item 3 in Figure 46 on page 837.) SRTMassDelLimit is an input parameter of type INTEGER. The default is 0.
Chapter 14. Calling a stored procedure from your application
SRIInsDelUpdPct Specifies, when combined with SRIInsDelUpdAbs, a criterion for recommending that the RUNSTATS utility is to be run on an index space. If both of the following conditions are true, DSNACCOR recommends running RUNSTATS: v The number of inserted and deleted index entries since the last RUNSTATS on an index space or partition, divided by the total number of index entries in the index space or partition (expressed as a percentage) is greater than SRIInsDelUpdPct. v The sum of the number of inserted and deleted index entries since the last RUNSTATS on an index space or partition is greater than SRIInsDelUpdAbs. (See items 1 and 2 in Figure 47 on page 837.) SRIInsDelUpdPct is an input parameter of type INTEGER. The default is 20. SRIInsDelUpdAbs Specifies, when combined with SRIInsDelUpdPct, a criterion for recommending that the RUNSTATS utility is to be run on an index space. If the following condition is true, DSNACCOR recommends running RUNSTATS: v The number of inserted and deleted index entries since the last RUNSTATS on an index space or partition, divided by the total number of index entries in the index space or partition (expressed as a percentage) is greater than SRIInsDelUpdPct. v The sum of the number of inserted and deleted index entries since the last RUNSTATS on an index space or partition is greater than SRIInsDelUpdAbs, (See items 1 and 2 in Figure 47 on page 837.) SRIInsDelUpdAbs is an input parameter of type INTEGER. The default is 0. SRIMassDelLimit Specifies a criterion for recommending that the RUNSTATS utility is to be run on an index space. If the number of mass deletes from an index space or partition since the last REORG, REBUILD INDEX, or LOAD REPLACE is greater than this value, DSNACCOR recommends running RUNSTATS. (See item 3 in Figure 47 on page 837.) SRIMassDelLimit is an input parameter of type INTEGER. The default is 0.
ExtentLimit Specifies a criterion for recommending that the REORG utility is to be run on a table space or index space. Also specifies that DSNACCOR is to warn the user that the table space or index space has used too many extents. DSNACCOR recommends running REORG, and altering data set allocations if the following condition is true: v The number of physical extents in the index space, table space, or partition is greater than ExtentLimit. (See Figure 48 on page 838.) ExtentLimit is an input parameter of type INTEGER. The default is 50. LastStatement When DSNACCOR returns a severe error (return code 12), this field contains the SQL statement that was executing when the error occurred. LastStatement is an output parameter of type VARCHAR(8012). ReturnCode The return code from DSNACCOR execution. Possible values are: 0 DSNACCOR executed successfully. The ErrorMsg parameter contains the approximate percentage of the total number of objects in the subsystem that have information in the real-time statistics tables.
4  DSNACCOR completed, but one or more input parameters might be incompatible. The ErrorMsg parameter contains the input parameters that might be incompatible.
8  DSNACCOR terminated with errors. The ErrorMsg parameter contains a message that describes the error.
12 DSNACCOR terminated with severe errors. The ErrorMsg parameter contains a message that describes the error. The LastStatement parameter contains the SQL statement that was executing when the error occurred.
14 DSNACCOR terminated because it could not access one or more of the real-time statistics tables. The ErrorMsg parameter contains the names of the tables that DSNACCOR could not access.
15 DSNACCOR terminated because it encountered a problem with one of the declared temporary tables that it defines and uses.
16 DSNACCOR terminated because it could not define a declared temporary table. No table spaces were defined in the TEMP database.
NULL DSNACCOR terminated but could not set a return code. ReturnCode is an output parameter of type INTEGER. ErrorMsg Contains information about DSNACCOR execution. If DSNACCOR runs successfully (ReturnCode=0), this field contains the approximate percentage of objects in the subsystem that are in the real-time statistics tables. Otherwise, this field contains error messages. ErrorMsg is an output parameter of type VARCHAR(1331). IFCARetCode Contains the return code from an IFI COMMAND call. DSNACCOR issues commands through the IFI interface to determine the status of objects. IFCARetCode is an output parameter of type INTEGER. IFCAResCode Contains the reason code from an IFI COMMAND call. IFCAResCode is an output parameter of type INTEGER. ExcessBytes Contains the number of bytes of information that did not fit in the IFI return area after an IFI COMMAND call. ExcessBytes is an output parameter of type INTEGER.
((QueryType=COPY OR QueryType=ALL) AND (ObjectType=TS OR ObjectType=ALL) AND ICType=F) AND (COPYLASTTIME IS NULL OR REORGLASTTIME>COPYLASTTIME OR LOADRLASTTIME>COPYLASTTIME OR (CURRENT DATE-COPYLASTTIME)>CRDaySncLastCopy OR 1 (COPYUPDATEDPAGES*100)/NACTIVE>CRUpdatedPagesPct OR 2 (COPYCHANGES*100)/TOTALROWS>CRChangesPct) 3
Figure 41. DSNACCOR formula for recommending a full image copy on a table space
The figure below shows the formula that DSNACCOR uses to recommend a full image copy on an index space.
((QueryType=COPY OR QueryType=ALL) AND (ObjectType=IX OR ObjectType=ALL) AND (ICType=F OR ICType=B)) AND (COPYLASTTIME IS NULL OR REORGLASTTIME>COPYLASTTIME OR LOADRLASTTIME>COPYLASTTIME OR REBUILDLASTTIME>COPYLASTTIME OR (CURRENT DATE-COPYLASTTIME)>CRDaySncLastCopy OR 1 (NACTIVE>CRIndexSize AND 2 ((COPYUPDATEDPAGES*100)/NACTIVE>CRUpdatedPagesPct OR 3 (COPYCHANGES*100)/TOTALENTRIES>CRChangesPct))) 4
Figure 42. DSNACCOR formula for recommending a full image copy on an index space
The figure below shows the formula that DSNACCOR uses to recommend an incremental image copy on a table space.
((QueryType=COPY OR QueryType=ALL) AND (ObjectType=TS OR ObjectType=ALL) AND ICType=I AND COPYLASTTIME IS NOT NULL) AND (LOADRLASTTIME>COPYLASTTIME OR REORGLASTTIME>COPYLASTTIME OR (COPYUPDATEDPAGES*100)/NACTIVE>ICRUpdatedPagesPct OR 1 (COPYCHANGES*100)/TOTALROWS>ICRChangesPct)) 2
Figure 43. DSNACCOR formula for recommending an incremental image copy on a table space
The figure below shows the formula that DSNACCOR uses to recommend a REORG on a table space. If the table space is a LOB table space, and ChkLvl=1, the formula does not include EXTENTS>ExtentLimit.
((QueryType=REORG OR QueryType=ALL) AND
(ObjectType=TS OR ObjectType=ALL)) AND
(REORGLASTTIME IS NULL OR
((REORGINSERTS+REORGDELETES+REORGUPDATES)*100)/TOTALROWS>RRTInsDelUpdPct OR 1
(REORGUNCLUSTINS*100)/TOTALROWS>RRTUnclustInsPct OR 2
(REORGDISORGLOB*100)/TOTALROWS>RRTDisorgLOBPct OR 3
((REORGNEARINDREF+REORGFARINDREF)*100)/TOTALROWS>RRTIndRefLimit OR 4
REORGMASSDELETE>RRTMassDelLimit OR 5
EXTENTS>ExtentLimit) 6

Figure 44. DSNACCOR formula for recommending a REORG on a table space
The figure below shows the formula that DSNACCOR uses to recommend a REORG on an index space.
((QueryType=REORG OR QueryType=ALL) AND
(ObjectType=IX OR ObjectType=ALL)) AND
(REORGLASTTIME IS NULL OR
((REORGINSERTS+REORGDELETES)*100)/TOTALENTRIES>RRIInsertDeletePct OR 1
(REORGAPPENDINSERT*100)/TOTALENTRIES>RRIAppendInsertPct OR 2
(REORGPSEUDODELETES*100)/TOTALENTRIES>RRIPseudoDeletePct OR 3
REORGMASSDELETE>RRIMassDeleteLimit OR 4
(REORGLEAFFAR*100)/NACTIVE>RRILeafLimit OR 5
REORGNUMLEVELS>RRINumLevelsLimit OR 6
EXTENTS>ExtentLimit) 7

Figure 45. DSNACCOR formula for recommending a REORG on an index space
The figure below shows the formula that DSNACCOR uses to recommend RUNSTATS on a table space.
((QueryType=RUNSTATS OR QueryType=ALL) AND
(ObjectType=TS OR ObjectType=ALL)) AND
(STATSLASTTIME IS NULL OR
(((STATSINSERTS+STATSDELETES+STATSUPDATES)*100)/TOTALROWS>SRTInsDelUpdPct AND 1
(STATSINSERTS+STATSDELETES+STATSUPDATES)>SRTInsDelUpdAbs) OR 2
STATSMASSDELETE>SRTMassDeleteLimit) 3

Figure 46. DSNACCOR formula for recommending RUNSTATS on a table space
The figure below shows the formula that DSNACCOR uses to recommend RUNSTATS on an index space.
((QueryType=RUNSTATS OR QueryType=ALL) AND
(ObjectType=IX OR ObjectType=ALL)) AND
(STATSLASTTIME IS NULL OR
(((STATSINSERTS+STATSDELETES)*100)/TOTALENTRIES>SRIInsDelUpdPct AND 1
(STATSINSERTS+STATSDELETES)>SRIInsDelUpdAbs) OR 2
STATSMASSDELETE>SRIMassDelLimit) 3

Figure 47. DSNACCOR formula for recommending RUNSTATS on an index space
The figure below shows the formula that DSNACCOR uses to warn that too many index space or table space extents have been used.
EXTENTS>ExtentLimit
Figure 48. DSNACCOR formula for warning that too many data set extents for a table space or index space are used
The meanings of the columns are:
DBNAME
   The database name for an object in the exception table.
NAME
   The table space name or index space name for an object in the exception table.
QUERYTYPE
   The information that you want to place in the INEXCEPTTABLE column of the recommendations result set. If you put a null value in this column, DSNACCOR puts the value YES in the INEXCEPTTABLE column of the recommendations result set row for the object that matches the DBNAME and NAME values.
Recommendation: If you plan to put many rows in the exception table, create a nonunique index on DBNAME, NAME, and QUERYTYPE.
After you create the exception table, insert a row for each object for which you want to include information in the INEXCEPTTABLE column.
Example: Suppose that you want the INEXCEPTTABLE column to contain the string 'IRRELEVANT' for table space STAFF in database DSNDB04. You also want the INEXCEPTTABLE column to contain 'CURRENT' for table space DSN8S91D in database DSN8D91A. Execute these INSERT statements:
INSERT INTO DSNACC.EXCEPT_TBL VALUES('DSNDB04', 'STAFF', 'IRRELEVANT');
INSERT INTO DSNACC.EXCEPT_TBL VALUES('DSN8D91A', 'DSN8S91D', 'CURRENT');
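Following the recommendation above to index a large exception table, a minimal sketch of such an index (the index name EXCEPT_TBL_IX is chosen only for illustration) is:

CREATE INDEX DSNACC.EXCEPT_TBL_IX
   ON DSNACC.EXCEPT_TBL (DBNAME, NAME, QUERYTYPE);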
To use the contents of INEXCEPTTABLE for filtering, include a condition that involves the INEXCEPTTABLE column in the search condition that you specify in your Criteria input parameter. Example: Suppose that you want to include all rows for database DSNDB04 in the recommendations result set, except for those rows that contain the string
'IRRELEVANT' in the INEXCEPTTABLE column. You might include the following search condition in your Criteria input parameter:
DBNAME='DSNDB04' AND INEXCEPTTABLE<>'IRRELEVANT'
Example
The following COBOL example shows variable declarations and an SQL CALL for obtaining recommendations for objects in databases DSN8D91A and DSN8D91L. This example also outlines the steps that you need to perform to retrieve the two result sets that DSNACCOR returns.
WORKING-STORAGE SECTION. . . . *********************** * DSNACCOR PARAMETERS * *********************** 01 QUERYTYPE. 49 QUERYTYPE-LN PICTURE S9(4) COMP VALUE 40. 49 QUERYTYPE-DTA PICTURE X(40) VALUE ALL. 01 OBJECTTYPE. 49 OBJECTTYPE-LN PICTURE S9(4) COMP VALUE 3. 49 OBJECTTYPE-DTA PICTURE X(3) VALUE ALL. 01 ICTYPE. 49 ICTYPE-LN PICTURE S9(4) COMP VALUE 1. 49 ICTYPE-DTA PICTURE X(1) VALUE B. 01 STATSSCHEMA. 49 STATSSCHEMA-LN PICTURE S9(4) COMP VALUE 128. 49 STATSSCHEMA-DTA PICTURE X(128) VALUE SYSIBM. 01 CATLGSCHEMA. 49 CATLGSCHEMA-LN PICTURE S9(4) COMP VALUE 128. 49 CATLGSCHEMA-DTA PICTURE X(128) VALUE SYSIBM. 01 LOCALSCHEMA. 49 LOCALSCHEMA-LN PICTURE S9(4) COMP VALUE 128. 49 LOCALSCHEMA-DTA PICTURE X(128) VALUE DSNACC. 01 CHKLVL PICTURE S9(9) COMP VALUE +3. 01 CRITERIA. 49 CRITERIA-LN PICTURE S9(4) COMP VALUE 4096. 49 CRITERIA-DTA PICTURE X(4096) VALUE SPACES. 01 RESTRICTED. 49 RESTRICTED-LN PICTURE S9(4) COMP VALUE 80. 49 RESTRICTED-DTA PICTURE X(80) VALUE SPACES. 01 CRUPDATEDPAGESPCT PICTURE S9(9) COMP VALUE +0. 01 CRCHANGESPCT PICTURE S9(9) COMP VALUE +0. 01 CRDAYSNCLASTCOPY PICTURE S9(9) COMP VALUE +0. 01 ICRUPDATEDPAGESPCT PICTURE S9(9) COMP VALUE +0. 01 ICRCHANGESPCT PICTURE S9(9) COMP VALUE +0. 01 CRINDEXSIZE PICTURE S9(9) COMP VALUE +0. 01 RRTINSDELUPDPCT PICTURE S9(9) COMP VALUE +0. 01 RRTUNCLUSTINSPCT PICTURE S9(9) COMP VALUE +0. 01 RRTDISORGLOBPCT PICTURE S9(9) COMP VALUE +0. 01 RRTMASSDELLIMIT PICTURE S9(9) COMP VALUE +0. 01 RRTINDREFLIMIT PICTURE S9(9) COMP VALUE +0. 01 RRIINSERTDELETEPCT PICTURE S9(9) COMP VALUE +0. 01 RRIAPPENDINSERTPCT PICTURE S9(9) COMP VALUE +0. 01 RRIPSEUDODELETEPCT PICTURE S9(9) COMP VALUE +0. 01 RRIMASSDELLIMIT PICTURE S9(9) COMP VALUE +0. 01 RRILEAFLIMIT PICTURE S9(9) COMP VALUE +0. 01 RRINUMLEVELSLIMIT PICTURE S9(9) COMP VALUE +0. 01 SRTINSDELUPDPCT PICTURE S9(9) COMP VALUE +0. 01 SRTINSDELUPDABS PICTURE S9(9) COMP VALUE +0. 01 SRTMASSDELLIMIT PICTURE S9(9) COMP VALUE +0. 01 SRIINSDELUPDPCT PICTURE S9(9) COMP VALUE +0. 01 SRIINSDELUPDABS PICTURE S9(9) COMP VALUE +0. 01 SRIMASSDELLIMIT PICTURE S9(9) COMP VALUE +0. 01 EXTENTLIMIT PICTURE S9(9) COMP VALUE +0.
01  LASTSTATEMENT.
    49 LASTSTATEMENT-LN      PICTURE S9(4) COMP VALUE 8012.
    49 LASTSTATEMENT-DTA     PICTURE X(8012) VALUE SPACES.
01  RETURNCODE               PICTURE S9(9) COMP VALUE +0.
01  ERRORMSG.
    49 ERRORMSG-LN           PICTURE S9(4) COMP VALUE 1331.
    49 ERRORMSG-DTA          PICTURE X(1331) VALUE SPACES.
01  IFCARETCODE              PICTURE S9(9) COMP VALUE +0.
01  IFCARESCODE              PICTURE S9(9) COMP VALUE +0.
01  EXCESSBYTES              PICTURE S9(9) COMP VALUE +0.
*****************************************
* INDICATOR VARIABLES.                  *
* INITIALIZE ALL NON-ESSENTIAL INPUT    *
* VARIABLES TO -1, TO INDICATE THAT THE *
* INPUT VALUE IS NULL.                  *
*****************************************
01  QUERYTYPE-IND            PICTURE S9(4) COMP-4 VALUE +0.
01  OBJECTTYPE-IND           PICTURE S9(4) COMP-4 VALUE +0.
01  ICTYPE-IND               PICTURE S9(4) COMP-4 VALUE +0.
01  STATSSCHEMA-IND          PICTURE S9(4) COMP-4 VALUE -1.
01  CATLGSCHEMA-IND          PICTURE S9(4) COMP-4 VALUE -1.
01  LOCALSCHEMA-IND          PICTURE S9(4) COMP-4 VALUE -1.
01  CHKLVL-IND               PICTURE S9(4) COMP-4 VALUE -1.
01  CRITERIA-IND             PICTURE S9(4) COMP-4 VALUE -1.
01  RESTRICTED-IND           PICTURE S9(4) COMP-4 VALUE -1.
01  CRUPDATEDPAGESPCT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01  CRCHANGESPCT-IND         PICTURE S9(4) COMP-4 VALUE -1.
01  CRDAYSNCLASTCOPY-IND     PICTURE S9(4) COMP-4 VALUE -1.
01  ICRUPDATEDPAGESPCT-IND   PICTURE S9(4) COMP-4 VALUE -1.
01  ICRCHANGESPCT-IND        PICTURE S9(4) COMP-4 VALUE -1.
01  CRINDEXSIZE-IND          PICTURE S9(4) COMP-4 VALUE -1.
01  RRTINSDELUPDPCT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  RRTUNCLUSTINSPCT-IND     PICTURE S9(4) COMP-4 VALUE -1.
01  RRTDISORGLOBPCT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  RRTMASSDELLIMIT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  RRTINDREFLIMIT-IND       PICTURE S9(4) COMP-4 VALUE -1.
01  RRIINSERTDELETEPCT-IND   PICTURE S9(4) COMP-4 VALUE -1.
01  RRIAPPENDINSERTPCT-IND   PICTURE S9(4) COMP-4 VALUE -1.
01  RRIPSEUDODELETEPCT-IND   PICTURE S9(4) COMP-4 VALUE -1.
01  RRIMASSDELLIMIT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  RRILEAFLIMIT-IND         PICTURE S9(4) COMP-4 VALUE -1.
01  RRINUMLEVELSLIMIT-IND    PICTURE S9(4) COMP-4 VALUE -1.
01  SRTINSDELUPDPCT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  SRTINSDELUPDABS-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  SRTMASSDELLIMIT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  SRIINSDELUPDPCT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  SRIINSDELUPDABS-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  SRIMASSDELLIMIT-IND      PICTURE S9(4) COMP-4 VALUE -1.
01  EXTENTLIMIT-IND          PICTURE S9(4) COMP-4 VALUE -1.
01  LASTSTATEMENT-IND        PICTURE S9(4) COMP-4 VALUE +0.
01  RETURNCODE-IND           PICTURE S9(4) COMP-4 VALUE +0.
01  ERRORMSG-IND             PICTURE S9(4) COMP-4 VALUE +0.
01  IFCARETCODE-IND          PICTURE S9(4) COMP-4 VALUE +0.
01  IFCARESCODE-IND          PICTURE S9(4) COMP-4 VALUE +0.
01  EXCESSBYTES-IND          PICTURE S9(4) COMP-4 VALUE +0.
PROCEDURE DIVISION. . . . ********************************************************* * SET VALUES FOR DSNACCOR INPUT PARAMETERS: * * - USE THE CHKLVL PARAMETER TO CAUSE DSNACCOR TO CHECK * * FOR ORPHANED OBJECTS AND INDEX SPACES WITHOUT * * TABLE SPACES, BUT INCLUDE THOSE OBJECTS IN THE * * RECOMMENDATIONS RESULT SET (CHKLVL=1+2+16=19) * * - USE THE CRITERIA PARAMETER TO CAUSE DSNACCOR TO * * MAKE RECOMMENDATIONS ONLY FOR OBJECTS IN DATABASES * * DSN8D91A AND DSN8D91L. *
* - FOR THE FOLLOWING PARAMETERS, SET THESE VALUES, * * WHICH ARE LOWER THAN THE DEFAULTS: * * CRUPDATEDPAGESPCT 4 * * CRCHANGESPCT 2 * * RRTINSDELUPDPCT 2 * * RRTUNCLUSTINSPCT 5 * * RRTDISORGLOBPCT 5 * * RRIAPPENDINSERTPCT 5 * * SRTINSDELUPDPCT 5 * * SRIINSDELUPDPCT 5 * * EXTENTLIMIT 3 * ********************************************************* MOVE 19 TO CHKLVL. MOVE SPACES TO CRITERIA-DTA. MOVE DBNAME = DSN8D91A OR DBNAME = DSN8D91L TO CRITERIA-DTA. MOVE 46 TO CRITERIA-LN. MOVE 4 TO CRUPDATEDPAGESPCT. MOVE 2 TO CRCHANGESPCT. MOVE 2 TO RRTINSDELUPDPCT. MOVE 5 TO RRTUNCLUSTINSPCT. MOVE 5 TO RRTDISORGLOBPCT. MOVE 5 TO RRIAPPENDINSERTPCT. MOVE 5 TO SRTINSDELUPDPCT. MOVE 5 TO SRIINSDELUPDPCT. MOVE 3 TO EXTENTLIMIT. ******************************** * INITIALIZE OUTPUT PARAMETERS * ******************************** MOVE SPACES TO LASTSTATEMENT-DTA. MOVE 1 TO LASTSTATEMENT-LN. MOVE 0 TO RETURNCODE-O2. MOVE SPACES TO ERRORMSG-DTA. MOVE 1 TO ERRORMSG-LN. MOVE 0 TO IFCARETCODE. MOVE 0 TO IFCARESCODE. MOVE 0 TO EXCESSBYTES. ******************************************************* * SET THE INDICATOR VARIABLES TO 0 FOR NON-NULL INPUT * * PARAMETERS (PARAMETERS FOR WHICH YOU DO NOT WANT * * DSNACCOR TO USE DEFAULT VALUES) AND FOR OUTPUT * * PARAMETERS. * ******************************************************* MOVE 0 TO CHKLVL-IND. MOVE 0 TO CRITERIA-IND. MOVE 0 TO CRUPDATEDPAGESPCT-IND. MOVE 0 TO CRCHANGESPCT-IND. MOVE 0 TO RRTINSDELUPDPCT-IND. MOVE 0 TO RRTUNCLUSTINSPCT-IND. MOVE 0 TO RRTDISORGLOBPCT-IND. MOVE 0 TO RRIAPPENDINSERTPCT-IND. MOVE 0 TO SRTINSDELUPDPCT-IND. MOVE 0 TO SRIINSDELUPDPCT-IND. MOVE 0 TO EXTENTLIMIT-IND. MOVE 0 TO LASTSTATEMENT-IND. MOVE 0 TO RETURNCODE-IND. MOVE 0 TO ERRORMSG-IND. MOVE 0 TO IFCARETCODE-IND. MOVE 0 TO IFCARESCODE-IND. MOVE 0 TO EXCESSBYTES-IND. . . . ***************** * CALL DSNACCOR * ***************** EXEC SQL CALL SYSPROC.DSNACCOR (:QUERYTYPE :QUERYTYPE-IND,
:OBJECTTYPE :OBJECTTYPE-IND, :ICTYPE :ICTYPE-IND, :STATSSCHEMA :STATSSCHEMA-IND, :CATLGSCHEMA :CATLGSCHEMA-IND, :LOCALSCHEMA :LOCALSCHEMA-IND, :CHKLVL :CHKLVL-IND, :CRITERIA :CRITERIA-IND, :RESTRICTED :RESTRICTED-IND, :CRUPDATEDPAGESPCT :CRUPDATEDPAGESPCT-IND, :CRCHANGESPCT :CRCHANGESPCT-IND, :CRDAYSNCLASTCOPY :CRDAYSNCLASTCOPY-IND, :ICRUPDATEDPAGESPCT :ICRUPDATEDPAGESPCT-IND, :ICRCHANGESPCT :ICRCHANGESPCT-IND, :CRINDEXSIZE :CRINDEXSIZE-IND, :RRTINSDELUPDPCT :RRTINSDELUPDPCT-IND, :RRTUNCLUSTINSPCT :RRTUNCLUSTINSPCT-IND, :RRTDISORGLOBPCT :RRTDISORGLOBPCT-IND, :RRTMASSDELLIMIT :RRTMASSDELLIMIT-IND, :RRTINDREFLIMIT :RRTINDREFLIMIT-IND, :RRIINSERTDELETEPCT :RRIINSERTDELETEPCT-IND, :RRIAPPENDINSERTPCT :RRIAPPENDINSERTPCT-IND, :RRIPSEUDODELETEPCT :RRIPSEUDODELETEPCT-IND, :RRIMASSDELLIMIT :RRIMASSDELLIMIT-IND, :RRILEAFLIMIT :RRILEAFLIMIT-IND, :RRINUMLEVELSLIMIT :RRINUMLEVELSLIMIT-IND, :SRTINSDELUPDPCT :SRTINSDELUPDPCT-IND, :SRTINSDELUPDABS :SRTINSDELUPDABS-IND, :SRTMASSDELLIMIT :SRTMASSDELLIMIT-IND, :SRIINSDELUPDPCT :SRIINSDELUPDPCT-IND, :SRIINSDELUPDABS :SRIINSDELUPDABS-IND, :SRIMASSDELLIMIT :SRIMASSDELLIMIT-IND, :EXTENTLIMIT :EXTENTLIMIT-IND, :LASTSTATEMENT :LASTSTATEMENT-IND, :RETURNCODE :RETURNCODE-IND, :ERRORMSG :ERRORMSG-IND, :IFCARETCODE :IFCARETCODE-IND, :IFCARESCODE :IFCARESCODE-IND, :EXCESSBYTES :EXCESSBYTES-IND) END-EXEC. ************************************************************* * ASSUME THAT THE SQL CALL RETURNED +466, WHICH MEANS THAT * * RESULT SETS WERE RETURNED. RETRIEVE RESULT SETS. * ************************************************************* * LINK EACH RESULT SET TO A LOCATOR VARIABLE EXEC SQL ASSOCIATE LOCATORS (:LOC1, :LOC2) WITH PROCEDURE SYSPROC.DSNACCOR END-EXEC. * LINK A CURSOR TO EACH RESULT SET EXEC SQL ALLOCATE C1 CURSOR FOR RESULT SET :LOC1 END-EXEC. EXEC SQL ALLOCATE C2 CURSOR FOR RESULT SET :LOC2 END-EXEC. * PERFORM FETCHES USING C1 TO RETRIEVE ALL ROWS FROM FIRST RESULT SET * PERFORM FETCHES USING C2 TO RETRIEVE ALL ROWS FROM SECOND RESULT SET
Output
If DSNACCOR executes successfully, in addition to the output parameters described in Option descriptions on page 827, DSNACCOR returns two result sets.
The first result set contains the results from IFI COMMAND calls that DSNACCOR makes. The following table shows the format of the first result set.
Table 135. Result set row for first DSNACCOR result set

Column name    Data type    Contents
RS_SEQUENCE    INTEGER      Sequence number of the output line
RS_DATA        CHAR(80)     A line of command output
The result set contains rows for table spaces, index spaces, or partitions, if both of the following conditions are true for the object:
v If the Criteria input parameter contains a search condition, and the search condition is true for the table space, index space, or partition.
v DSNACCOR recommends at least one action for the table space, index space, or partition.
The result set contains one row for each nonpartitioned table space or nonpartitioning index space. For partitioned table spaces or partitioning indexes, the result set contains one row for each partition. If ChkLvl 8 is specified, the result set might contain additional rows, including duplicate rows for the same object. The following table shows the columns of a result set row.
Table 136. Result set row for second DSNACCOR result set
DBNAME (CHAR(8))
   Name of the database that contains the object.
NAME (CHAR(8))
   Table space or index space name.
PARTITION (INTEGER)
   Data set number or partition number.
OBJECTTYPE (CHAR(2))
   DB2 object type:
   v TS for a table space
   v IX for an index space
OBJECTSTATUS (CHAR(36))
   Status of the object:
   v ORPHANED, if the object is an index space with no corresponding table space, or if the object does not exist
   v If the object is in a restricted state, one of the following values:
     TS=restricted-state, if OBJECTTYPE is TS
     IX=restricted-state, if OBJECTTYPE is IX
     restricted-state is one of the status codes that appear in DISPLAY DATABASE output. Related information: DSNT362I (DB2 Messages), -DISPLAY DATABASE (DB2) (DB2 Commands)
   v A, if the object is in an advisory state.
   v L, if the object is a logical partition, but not in an advisory state.
   v AL, if the object is a logical partition and in an advisory state.
IMAGECOPY (CHAR(3))
   COPY recommendation:
   v If OBJECTTYPE is TS: FUL (full image copy), INC (incremental image copy), or NO
   v If OBJECTTYPE is IX: YES or NO
RUNSTATS
   RUNSTATS recommendation: YES or NO.
EXTENTS
   Indicates whether the data sets for the object have exceeded ExtentLimit: YES or NO.
REORG
   REORG recommendation: YES or NO.
INEXCEPTTABLE
   A string that contains one of the following values:
   v Text that you specify in the QUERYTYPE column of the exception table.
   v YES, if you put a row in the exception table for the object that this result set row represents, but you specify NULL in the QUERYTYPE column.
   v NO, if the exception table exists but does not have a row for the object that this result set row represents.
   v Null, if the exception table does not exist, or if the ChkLvl input parameter does not include the value 4.
ASSOCIATEDTS (CHAR(8))
   If OBJECTTYPE is IX and the ChkLvl input parameter includes the value 2, this value is the name of the table space that is associated with the index space. Otherwise null.
COPYLASTTIME (TIMESTAMP)
   Timestamp of the last full or incremental image copy on the object. Null if COPY was never run, or if the last COPY execution was terminated.
LOADRLASTTIME (TIMESTAMP)
   Timestamp of the last LOAD REPLACE on the object. Null if LOAD REPLACE was never run, or if the last LOAD REPLACE execution was terminated.
REBUILDLASTTIME (TIMESTAMP)
   Timestamp of the last REBUILD INDEX on the object. Null if REBUILD INDEX was never run, or if the last REBUILD INDEX execution was terminated.
CRUPDPGSPCT (INTEGER)
   If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the ratio of distinct updated pages to preformatted pages, expressed as a percentage. Otherwise null.
CRCPYCHGPCT (INTEGER)
   If OBJECTTYPE is TS and IMAGECOPY is YES, the ratio of the total number of insert, update, and delete operations since the last image copy to the total number of rows or LOBs in the table space or partition, expressed as a percentage. If OBJECTTYPE is IX and IMAGECOPY is YES, the ratio of the total number of insert and delete operations since the last image copy to the total number of entries in the index space or partition, expressed as a percentage. Otherwise null.
CRDAYSNCLASTCOPY (INTEGER)
   If OBJECTTYPE is TS or IX and IMAGECOPY is YES, the number of days since the last image copy. Otherwise null.
CRINDEXSIZE (INTEGER)
   If OBJECTTYPE is IX and IMAGECOPY is YES, the number of active pages in the index space or partition. Otherwise null.
REORGLASTTIME (TIMESTAMP)
   Timestamp of the last REORG on the object. Null if REORG was never run, or if the last REORG execution was terminated.
Table 136. Result set row for second DSNACCOR result set (continued)
RRTINSDELUPDPCT (INTEGER)
   If OBJECTTYPE is TS and REORG is YES, the ratio of the sum of insert, update, and delete operations since the last REORG to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null.
RRTUNCINSPCT (INTEGER)
   If OBJECTTYPE is TS and REORG is YES, the ratio of the number of unclustered insert operations to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null.
RRTDISORGLOBPCT (INTEGER)
   If OBJECTTYPE is TS and REORG is YES, the ratio of the number of imperfectly chunked LOBs to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null.
RRTMASSDELETE (INTEGER)
   If OBJECTTYPE is TS, REORG is YES, and the table space is a segmented table space or LOB table space, the number of mass deletes since the last REORG or LOAD REPLACE. If OBJECTTYPE is TS, REORG is YES, and the table space is nonsegmented, the number of dropped tables since the last REORG or LOAD REPLACE. Otherwise null.
RRTINDREF (INTEGER)
   If OBJECTTYPE is TS and REORG is YES, the ratio of the total number of overflow records that were created since the last REORG or LOAD REPLACE to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null.
RRIINSDELPCT (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the ratio of the total number of insert and delete operations since the last REORG to the total number of index entries in the index space or partition, expressed as a percentage. Otherwise null.
RRIAPPINSPCT (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the ratio of the number of index entries that were inserted since the last REORG, REBUILD INDEX, or LOAD REPLACE that had a key value greater than the maximum key value in the index space or partition, to the number of index entries in the index space or partition, expressed as a percentage. Otherwise null.
RRIPSDDELPCT (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the ratio of the number of index entries that were pseudo-deleted (the RID entry was marked as deleted) since the last REORG, REBUILD INDEX, or LOAD REPLACE to the number of index entries in the index space or partition, expressed as a percentage. Otherwise null.
RRIMASSDELETE (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the number of mass deletes from the index space or partition since the last REORG, REBUILD, or LOAD REPLACE. Otherwise null.
RRILEAF (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the ratio of the number of index page splits that occurred since the last REORG, REBUILD INDEX, or LOAD REPLACE in which the higher part of the split page was far from the location of the original page, to the total number of active pages in the index space or partition, expressed as a percentage. Otherwise null.
RRINUMLEVELS (INTEGER)
   If OBJECTTYPE is IX and REORG is YES, the number of levels in the index tree that were added or removed since the last REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise null.
845
Table 136. Result set row for second DSNACCOR result set (continued) Column name STATSLASTTIME Data type TIMESTAMP Description Timestamp of the last RUNSTATS on the object. Null if RUNSTATS was never run, or if the last RUNSTATS execution was terminated. If OBJECTTYPE is TS and RUNSTATS is YES, the ratio of the total number of insert, update, and delete operations since the last RUNSTATS on a table space or partition, to the total number of rows or LOBs in the table space or partition, expressed as a percentage. Otherwise null. If OBJECTTYPE is TS and RUNSTATS is YES, the total number of insert, update, and delete operations since the last RUNSTATS on a table space or partition. Otherwise null. If OBJECTTYPE is TS and RUNSTATS is YES, the number of mass deletes from the table space or partition since the last REORG or LOAD REPLACE. Otherwise null. If OBJECTTYPE is IX and RUNSTATS is YES, the ratio of the total number of insert and delete operations since the last RUNSTATS on the index space or partition, to the total number of index entries in the index space or partition, expressed as a percentage. Otherwise null. If OBJECTTYPE is IX and RUNSTATS is YES, the number insert and delete operations since the last RUNSTATS on the index space or partition. Otherwise null. If OBJECTTYPE is IX and RUNSTATS is YES, the number of mass deletes from the index space or partition since the last REORG, REBUILD INDEX, or LOAD REPLACE. Otherwise, this value is null. If EXTENTS is YES, the number of physical extents in the table space, index space, or partition. Otherwise, this value is null.
SRTINSDELUPDPCT
INTEGER
SRTINSDELUPDABS
INTEGER
SRTMASSDELETE
INTEGER
| SRIINSDELPCT
INTEGER
| SRIINSDELABS
INTEGER
SRIMASSDELETE
INTEGER
TOTALEXTENTS
SMALLINT
PSPI
Related reference: CREATE DATABASE (DB2 SQL) CREATE TABLESPACE (DB2 SQL)
CALL {DMQXML1C | DMQXML2C}.DXXMQINSERT (service-name | NULL, policy-name | NULL, XML-collection-name, status)
status
Contains information that indicates whether DXXMQINSERT ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQINSERT ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQINSERT ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQINSERT output
If DXXMQINSERT executes successfully, the mq-num-msgs field of the status parameter is set to 1, to indicate that a message was retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQINSERT does not execute successfully, the contents of the status parameter indicate the problem.
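For reference, a minimal sketch of one way to call DXXMQINSERT from C with embedded SQL follows, in the same style as the other examples in this information. The XML collection name SALES_ORDERS is hypothetical; substitute the name of an enabled XML collection that exists on your system.

EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
char serviceName[48];        /* WebSphere MQ service name        */
char policyName[48];         /* WebSphere MQ policy name         */
char collectionName[80];     /* Enabled XML collection name      */
char status[20];             /* Status of DXXMQINSERT call       */
short serviceName_ind;       /* Indicator var for serviceName    */
short policyName_ind;        /* Indicator var for policyName     */
short collectionName_ind;    /* Indicator var for collectionName */
short status_ind;            /* Indicator var for status         */
EXEC SQL END DECLARE SECTION;
strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
strcpy(collectionName,"SALES_ORDERS");   /* Hypothetical collection name */
status[0] = '\0';
serviceName_ind = 0;
policyName_ind = 0;
collectionName_ind = 0;
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQINSERT(:serviceName:serviceName_ind,
                                   :policyName:policyName_ind,
                                   :collectionName:collectionName_ind,
                                   :status:status_ind);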
CALL {DMQXML1C | DMQXML2C}.DXXMQSHRED (service-name | NULL, policy-name | NULL, DAD-file-name, status)
#include "dxx.h"
#include "dxxrc.h"
EXEC SQL INCLUDE SQLCA;
EXEC SQL BEGIN DECLARE SECTION;
char serviceName[48];     /* WebSphere MQ service name     */
char policyName[48];      /* WebSphere MQ policy name      */
char dadFileName[30];     /* DAD file name                 */
char status[20];          /* Status of DXXMQSHRED call     */
/* DXXMQSHRED is GENERAL WITH NULLS, so parameters need indicators */
short serviceName_ind;    /* Indicator var for serviceName */
short policyName_ind;     /* Indicator var for policyName  */
short dadFileName_ind;    /* Indicator var for dadFileName */
short status_ind;         /* Indicator var for status      */
EXEC SQL END DECLARE SECTION;
/* Initialize status fields */
int dxx_rc=0;
int dxx_sql=0;
int dxx_mq=0;
/* Get the service name and policy name for the MQ message queue */
strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
/* Get the DAD file name */
strcpy(dadFileName,"/tmp/neworder2.dad");
/* Initialize the output variable */
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
serviceName_ind = 0;
policyName_ind = 0;
dadFileName_ind = 0;
/* Initialize the indicator for the output parameter */
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQSHRED(:serviceName:serviceName_ind,
                                  :policyName:policyName_ind,
                                  :dadFileName:dadFileName_ind,
                                  :status:status_ind);
printf("SQLCODE from CALL: %d\n", SQLCODE);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQSHRED output
If DXXMQSHRED executes successfully, the mq-num-msgs field of the status parameter is set to 1, to indicate that a message was retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQSHRED does not execute successfully, the contents of the status parameter indicate the problem.
CALL {DMQXML1C | DMQXML2C}.DXXMQINSERTCLOB (service-name | NULL, policy-name | NULL, XML-collection-name, status)
status
Contains information that indicates whether DXXMQINSERTCLOB ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQINSERTCLOB ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQINSERTCLOB ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
DXXMQINSERTCLOB output
If DXXMQINSERTCLOB executes successfully, the mq-num-msgs field of the status parameter is set to 1, to indicate that a message was retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQINSERTCLOB does not execute successfully, the contents of the status parameter indicate the problem.
CALL {DMQXML1C | DMQXML2C}.DXXMQSHREDCLOB (service-name | NULL, policy-name | NULL, DAD-file-name, status)
int dxx_mq=0;
/* Get the service name and policy name for the MQ message queue */
strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
/* Get the DAD file name */
strcpy(dadFileName,"/tmp/neworder2.dad");
/* Initialize the output variable */
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
serviceName_ind = 0;
policyName_ind = 0;
dadFileName_ind = 0;
/* Initialize the indicator for the output parameter */
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQSHREDCLOB(:serviceName:serviceName_ind,
                                      :policyName:policyName_ind,
                                      :dadFileName:dadFileName_ind,
                                      :status:status_ind);
printf("SQLCODE from CALL: %d\n", SQLCODE);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQSHREDCLOB output
If DXXMQSHREDCLOB executes successfully, the mq-num-msgs field of the status parameter is set to 1, to indicate that a message was retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQSHREDCLOB does not execute successfully, the contents of the status parameter indicate the problem.
CALL {DMQXML1C | DMQXML2C}.DXXMQINSERTALL (service-name | NULL, policy-name | NULL, XML-collection-name, status)
DXXMQINSERTALL output
If DXXMQINSERTALL executes successfully, the mq-num-msgs field of the status parameter is set to the number of messages that were retrieved from the MQ message queue and decomposed. If DXXMQINSERTALL does not execute successfully, the contents of the status parameter indicate the problem.
Use DXXMQSHREDALL for XML documents with a length of up to 3 KB.
Restriction: DXXMQSHREDALL has been deprecated.
There are two versions of DXXMQSHREDALL:
v A single-phase commit version, with schema name DMQXML1C.
v A two-phase commit version, with schema name DMQXML2C.
CALL {DMQXML1C | DMQXML2C}.DXXMQSHREDALL (service-name | NULL, policy-name | NULL, DAD-file-name, status)
policy-name is an input parameter of type VARCHAR(48). policy-name cannot be blank, a null string, or have trailing blanks.
DAD-file-name
Specifies the name of the document access definition (DAD) file that maps the XML document to DB2 tables. DAD-file-name must be specified, and must be the name of a valid DAD file that exists on the system on which DXXMQSHREDALL runs. DAD-file-name is an input parameter of type VARCHAR(80).
status
Contains information that indicates whether DXXMQSHREDALL ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQSHREDALL ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQSHREDALL ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
EXEC SQL CALL DMQXML1C.DXXMQSHREDALL(:serviceName:serviceName_ind,
                                     :policyName:policyName_ind,
                                     :dadFileName:dadFileName_ind,
                                     :status:status_ind);
printf("SQLCODE from CALL: %d\n", SQLCODE);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQSHREDALL output
If DXXMQSHREDALL executes successfully, the mq-num-msgs field of the status parameter is set to the number of messages that were retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQSHREDALL does not execute successfully, the contents of the status parameter indicate the problem.
Indicators for input host variables must be initialized before you execute the CALL statement.
CALL {DMQXML1C | DMQXML2C}.DXXMQSHREDALLCLOB (service-name | NULL, policy-name | NULL, DAD-file-name, status)
#include "dxx.h" #include "dxxrc.h" EXEC SQL INCLUDE SQLCA; EXEC SQL BEGIN DECLARE SECTION; char serviceName[48]; /* WebSphere MQ service name */ char policyName[48]; /* WebSphere MQ policy name */ char dadFileName[30]; /* DAD file name */ char status[20]; /* Status of DXXMQSHREDALLCLOB call */ /* DXXMQSHREDALLCLOB is GENERAL WITH NULLS, so parameters need indicators */ short serviceName_ind; /* Indicator var for serviceName */ short policyName_ind; /* Indicator var for policyName */ short dadFileName_ind; /* Indicator var for dadFileName */ short status_ind; /* Indicator var for status */ EXEC SQL END DECLARE SECTION; /* Initialize status fields */ int dxx_rc=0; int dxx_sql=0; int dxx_mq=0; /* Get the service name and policy name for the MQ message queue */ strcpy(serviceName,"DB2.DEFAULT.SERVICE"); strcpy(policyName,"DB2.DEFAULT.POLICY"); /* Get the DAD file name */ strcpy(dadFileName,"/tmp/neworder2.dad"); /* Initialize the output variable */ status[0] = \0; /* Set the indicators to 0 for the parameters that have non-null */ /* values */ collectionName_ind = 0; serviceName_ind = 0; policyName_ind = 0; /* Initialize the indicator for the output parameter */ status_ind = -1; /* Call the store procedure */ EXEC SQL CALL DMQXML1C.DXXMQSHREDALLCLOB(:serviceName:serviceName_ind, :policyName:policyName_ind, :dadFileName:dadFileName_ind, :status:status_ind); printf("SQLCODE from CALL: /* Get the status fields from the status parameter and print them */ sscanf(status,"dxx_rc,&dxx_sql,&dxx_mq); printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=
DXXMQSHREDALLCLOB output
If DXXMQSHREDALLCLOB executes successfully, the mq-num-msgs field of the status parameter is set to the number of messages that were retrieved from the MQ message queue and inserted into DB2 tables. If DXXMQSHREDALLCLOB does not execute successfully, the contents of the status parameter indicate the problem.
CALL {DMQXML1C | DMQXML2C}.DXXMQINSERTALLCLOB (service-name | NULL, policy-name | NULL, XML-collection-name, status)
must be specified, and must be the name of a valid XML collection that exists on the system on which DXXMQINSERTALLCLOB runs. XML-collection-name is an input parameter of type VARCHAR(80).
status
Contains information that indicates whether DXXMQINSERTALLCLOB ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQINSERTALLCLOB ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQINSERTALLCLOB ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQINSERTALLCLOB output
If DXXMQINSERTALLCLOB executes successfully, the mq-num-msgs field of the status parameter is set to the number of messages that were retrieved from the MQ message queue and decomposed. If DXXMQINSERTALLCLOB does not execute successfully, the contents of the status parameter indicate the problem.
CALL {DMQXML1C | DMQXML2C}.DXXMQGEN (service-name | NULL, policy-name | NULL, DAD-file-name, override-type | NULL, override | NULL, max-rows | NULL, num-msgs, status)
v If override-type is XML_OVERRIDE, override contains one or more expressions that are separated by AND. Each expression must be enclosed in double quotation marks. This override value overrides the RDB_node mapping in the DAD file.
override is an input parameter of type VARCHAR(1024).
max-rows
Specifies the maximum number of XML documents that DXXMQGEN can send to the MQ message queue. The default is 1. max-rows is an input parameter of type INTEGER.
num-rows
The actual number of XML documents that DXXMQGEN sends to the MQ message queue. num-rows is an output parameter of type INTEGER.
status
Contains information that indicates whether DXXMQGEN ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQGEN ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQGEN ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
/* Get the service name and policy name for the MQ message queue */
strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
/* Get the name of the DAD file for the DB2 tables */
strcpy(dadFileName,"/tmp/getstart_xcollection.dad");
/* Put null in the override parameter because we are not going */
/* to override the values in the DAD file                      */
override[0] = '\0';
overrideType = NO_OVERRIDE;
/* Indicate that we do not want to transfer more than 500 documents */
max_row = 500;
/* Initialize the output variables */
num_row = 0;
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
dadFileName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
ovtype_ind = 0;
ov_ind = 0;
maxrow_ind = 0;
/* Initialize the indicators for the output parameters */
numrow_ind = -1;
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQGEN(:serviceName:serviceName_ind,
                                :policyName:policyName_ind,
                                :dadFileName:dadFileName_ind,
                                :overrideType:ovtype_ind,
                                :override:ov_ind,
                                :max_row:maxrow_ind,
                                :num_row:numrow_ind,
                                :status:status_ind);
printf("SQLCODE from CALL: %d\n", SQLCODE);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQGEN output
If DXXMQGEN executes successfully, the number of documents indicated by the mq-num-msgs field of the status parameter are extracted from DB2 tables and inserted into the MQ message queue. If DXXMQGEN does not execute successfully, the contents of the status parameter indicate the problem.
CALL {DMQXML1C | DMQXML2C}.DXXMQRETRIEVE (service-name | NULL, policy-name | NULL, XML-collection-name, override-type | NULL, override | NULL, max-rows | NULL, num-msgs, status)
specified, and must be the name of a valid XML collection that exists on the system on which DXXMQRETRIEVE runs. XML-collection-name is an input parameter of type VARCHAR(80).
override-type
Specifies what the override parameter does. Possible values are:
NO_OVERRIDE
The override parameter does not override the condition in the DAD file. This is the default.
SQL_OVERRIDE
The DAD file uses SQL mapping, and the override parameter contains an SQL statement that overrides the SQL statement in the DAD file.
XML_OVERRIDE
The DAD file uses RDB_node mapping, and the override parameter contains conditions that override the RDB_node mapping in the DAD file.
override-type is an input parameter of type INTEGER. The integer equivalents of the override-type values are defined in the dxx.h file.
override
Specifies a string that overrides the condition in the DAD file. The contents of the string depend on the value of the override-type parameter:
v If override-type is NO_OVERRIDE, override contains a null string. This is the default.
v If override-type is SQL_OVERRIDE, override contains a valid SQL statement that overrides the SQL statement in the DAD file.
v If override-type is XML_OVERRIDE, override contains one or more expressions that are separated by AND. Each expression must be enclosed in double quotation marks. This override value overrides the RDB_node mapping in the DAD file.
override is an input parameter of type VARCHAR(1024).
max-rows
Specifies the maximum number of XML documents that DXXMQRETRIEVE can send to the MQ message queue. The default is 1. max-rows is an input parameter of type INTEGER.
num-rows
The actual number of XML documents that DXXMQRETRIEVE sends to the MQ message queue. num-rows is an output parameter of type INTEGER.
status
Contains information that indicates whether DXXMQRETRIEVE ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQRETRIEVE ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQRETRIEVE ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
/* Initialize the output variables */
num_row = 0;
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
collectionName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
ovtype_ind = 0;
ov_ind = 0;
maxrow_ind = 0;
/* Initialize the indicators for the output parameters */
numrow_ind = -1;
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQRETRIEVE(:serviceName:serviceName_ind,
                                     :policyName:policyName_ind,
                                     :collectionName:collectionName_ind,
                                     :overrideType:ovtype_ind,
                                     :override:ov_ind,
                                     :max_row:maxrow_ind,
                                     :num_row:numrow_ind,
                                     :status:status_ind);
printf("SQLCODE from CALL: %d\n", SQLCODE);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQRETRIEVE output
If DXXMQRETRIEVE executes successfully, the number of documents indicated by the mq-num-msgs field of the status parameter are extracted from DB2 tables and inserted into the MQ message queue. If DXXMQRETRIEVE does not execute successfully, the contents of the status parameter indicate the problem.
CALL {DMQXML1C | DMQXML2C}.DXXMQGENCLOB (service-name | NULL, policy-name | NULL, DAD-file-name, override-type | NULL, override | NULL, max-rows | NULL, num-msgs, status)
XML_OVERRIDE
The DAD file uses RDB_node mapping, and the override parameter contains conditions that override the RDB_node mapping in the DAD file.
override-type is an input parameter of type INTEGER. The integer equivalents of the override-type values are defined in the dxx.h file.
override
Specifies a string that overrides the condition in the DAD file. The contents of the string depend on the value of the override-type parameter:
v If override-type is NO_OVERRIDE, override contains a null string. This is the default.
v If override-type is SQL_OVERRIDE, override contains a valid SQL statement that overrides the SQL statement in the DAD file.
v If override-type is XML_OVERRIDE, override contains one or more expressions that are separated by AND. Each expression must be enclosed in double quotation marks. This override value overrides the RDB_node mapping in the DAD file.
override is an input parameter of type VARCHAR(1024).
max-rows
Specifies the maximum number of XML documents that DXXMQGENCLOB can send to the MQ message queue. The default is 1. max-rows is an input parameter of type INTEGER.
num-rows
The actual number of XML documents that DXXMQGENCLOB sends to the MQ message queue. num-rows is an output parameter of type INTEGER.
status
Contains information that indicates whether DXXMQGENCLOB ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQGENCLOB ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQGENCLOB ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
char dadFileName[80];     /* DAD file name                  */
short overrideType;       /* Defined in dxx.h               */
char override[2];         /* Override string for DAD        */
short max_row;            /* Maximum number of documents    */
short num_row;            /* Actual number of documents     */
char status[20];          /* Status of DXXMQGENCLOB call    */
/* DXXMQGENCLOB is GENERAL WITH NULLS, so parameters need indicators */
short serviceName_ind;    /* Indicator var for serviceName  */
short policyName_ind;     /* Indicator var for policyName   */
short dadFileName_ind;    /* Indicator var for dadFileName  */
short ovtype_ind;         /* Indicator var for overrideType */
short ov_ind;             /* Indicator var for override     */
short maxrow_ind;         /* Indicator var for maxrow       */
short numrow_ind;         /* Indicator var for numrow       */
short status_ind;         /* Indicator var for status       */
EXEC SQL END DECLARE SECTION;
/* Status fields */
int dxx_rc=0;
int dxx_sql=0;
int dxx_mq=0;
/* Get the service name and policy name for the MQ message queue */
strcpy(serviceName,"DB2.DEFAULT.SERVICE");
strcpy(policyName,"DB2.DEFAULT.POLICY");
/* Get the name of the DAD file for the DB2 tables */
strcpy(dadFileName,"/tmp/getstart_xcollection.dad");
/* Put null in the override parameter because we are not going */
/* to override the values in the DAD file                      */
override[0] = '\0';
overrideType = NO_OVERRIDE;
/* Indicate that we do not want to transfer more than 500 documents */
max_row = 500;
/* Initialize the output variables */
num_row = 0;
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
dadFileName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
ovtype_ind = 0;
ov_ind = 0;
maxrow_ind = 0;
/* Initialize the indicators for the output parameters */
numrow_ind = -1;
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML2C.DXXMQGENCLOB(:serviceName:serviceName_ind,
                                    :policyName:policyName_ind,
                                    :dadFileName:dadFileName_ind,
                                    :overrideType:ovtype_ind,
                                    :override:ov_ind,
                                    :max_row:maxrow_ind,
                                    :num_row:numrow_ind,
                                    :status:status_ind);
printf("SQLCODE from CALL: %d\n", SQLCODE);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQGENCLOB output
If DXXMQGENCLOB executes successfully, the number of documents indicated by the mq-num-msgs field of the status parameter are extracted from DB2 tables and inserted into the MQ message queue. If DXXMQGENCLOB does not execute successfully, the contents of the status parameter indicate the problem.
CALL {DMQXML1C | DMQXML2C}.DXXMQRETRIEVECLOB (service-name | NULL, policy-name | NULL, XML-collection-name, override-type | NULL, override | NULL, max-rows | NULL, num-msgs, status)
repository file. If service-name is not listed in the DSNAMT repository file, or service-name is not specified, DB2.DEFAULT.SERVICE is used. service-name is an input parameter of type VARCHAR(48). service-name cannot be blank, a null string, or have trailing blanks.
policy-name
Specifies the WebSphere MQ AMI service policy that is used to handle the message. The service policy is defined in the DSNAMT repository file. If policy-name is not listed in the DSNAMT repository file, or policy-name is not specified, DB2.DEFAULT.POLICY is used. policy-name is an input parameter of type VARCHAR(48). policy-name cannot be blank, a null string, or have trailing blanks.
XML-collection-name
Specifies the name of the XML collection that specifies the DB2 tables from which the XML documents are to be retrieved. XML-collection-name must be specified, and must be the name of a valid XML collection that exists on the system on which DXXMQRETRIEVECLOB runs. XML-collection-name is an input parameter of type VARCHAR(80).
override-type
Specifies what the override parameter does. Possible values are:
NO_OVERRIDE
The override parameter does not override the condition in the DAD file. This is the default.
SQL_OVERRIDE
The DAD file uses SQL mapping, and the override parameter contains an SQL statement that overrides the SQL statement in the DAD file.
XML_OVERRIDE
The DAD file uses RDB_node mapping, and the override parameter contains conditions that override the RDB_node mapping in the DAD file.
override-type is an input parameter of type INTEGER. The integer equivalents of the override-type values are defined in the dxx.h file.
override
Specifies a string that overrides the condition in the DAD file. The contents of the string depend on the value of the override-type parameter:
v If override-type is NO_OVERRIDE, override contains a null string. This is the default.
v If override-type is SQL_OVERRIDE, override contains a valid SQL statement that overrides the SQL statement in the DAD file.
v If override-type is XML_OVERRIDE, override contains one or more expressions that are separated by AND. Each expression must be enclosed in double quotation marks. This override value overrides the RDB_node mapping in the DAD file.
override is an input parameter of type VARCHAR(1024).
max-rows
Specifies the maximum number of XML documents that DXXMQRETRIEVECLOB can send to the MQ message queue. The default is 1. max-rows is an input parameter of type INTEGER.
num-rows
The actual number of XML documents that DXXMQRETRIEVECLOB sends to the MQ message queue. num-rows is an output parameter of type INTEGER.
status
Contains information that indicates whether DXXMQRETRIEVECLOB ran successfully. The format of status is dxx-rc:sqlcode:mq-num-msgs, where:
v dxx-rc is the return code from accessing XML Extender. dxx-rc values are defined in dxxrc.h.
v sqlcode is 0 if DXXMQRETRIEVECLOB ran successfully, or the SQLCODE from the most recent unsuccessful SQL statement if DXXMQRETRIEVECLOB ran unsuccessfully.
v mq-num-msgs is the number of messages that were successfully sent to the MQ message queue.
status is an output parameter of type CHAR(20).
max_row = 500;
/* Initialize the output variables */
num_row = 0;
status[0] = '\0';
/* Set the indicators to 0 for the parameters that have non-null values */
collectionName_ind = 0;
serviceName_ind = 0;
policyName_ind = 0;
ovtype_ind = 0;
ov_ind = 0;
maxrow_ind = 0;
/* Initialize the indicators for the output parameters */
numrow_ind = -1;
status_ind = -1;
/* Call the stored procedure */
EXEC SQL CALL DMQXML1C.DXXMQRETRIEVECLOB(:serviceName:serviceName_ind,
                                         :policyName:policyName_ind,
                                         :collectionName:collectionName_ind,
                                         :overrideType:ovtype_ind,
                                         :override:ov_ind,
                                         :max_row:maxrow_ind,
                                         :num_row:numrow_ind,
                                         :status:status_ind);
printf("SQLCODE from CALL: %d\n", SQLCODE);
/* Get the status fields from the status parameter and print them */
sscanf(status,"%d:%d:%d",&dxx_rc,&dxx_sql,&dxx_mq);
printf("Status fields: dxx_rc=%d dxx_sql=%d dxx_mq=%d\n",dxx_rc,dxx_sql,dxx_mq);
DXXMQRETRIEVECLOB output
If DXXMQRETRIEVECLOB executes successfully, the number of documents indicated by the mq-num-msgs field of the status parameter are extracted from DB2 tables and inserted into the MQ message queue. If DXXMQRETRIEVECLOB does not execute successfully, the contents of the status parameter indicate the problem.
CALL SYSPROC.XSR_REGISTER ( rschema, name, schemalocation, content, docproperty | NULL )
Example of XSR_REGISTER
The following example calls the XSR_REGISTER stored procedure:
CALL SYSPROC.XSR_REGISTER( SYSXSR, POschema, http://myPOschema/PO.xsd, :content_host_var, :docproperty_host_var)
In this example, XSR_REGISTER folds the name POschema to uppercase, so the registered schema name is POSCHEMA. If you do not want XSR_REGISTER to fold POschema to uppercase, you need to delimit the name with double quotation marks ("), as in the following example.
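CALL SYSPROC.XSR_REGISTER( SYSXSR, "POschema", http://myPOschema/PO.xsd, :content_host_var, :docproperty_host_var)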
Related concepts: Command line processor (DB2 Commands) Example of XML schema registration and removal using stored procedures (DB2 Programming for XML)
CALL SYSPROC.XSR_ADDSCHEMADOC ( rschema, name, schemalocation, content, docproperty | NULL )
The XML schema name must already exist as a result of calling the XSR_REGISTER stored procedure, and the XML schema registration must not yet be completed. This argument cannot have a NULL value. Rules for valid characters and delimiters that apply to any SQL identifier also apply to this argument.
schemalocation
An input argument of type VARCHAR(1000), which can have a NULL value, that indicates the schema location of the primary XML schema document to which the XML schema document is being added. This argument is the "external name" of the XML schema; that is, the primary document can be identified in the XML instance documents with the xsi:schemaLocation attribute. The document that references the schemalocation must use a valid URI format.
content
An input parameter of type BLOB(30M) that contains the content of the XML schema document being added. This argument cannot have a NULL value. An XML schema document must be supplied. The content of the XML schema document must be encoded in Unicode.
docproperty
An input parameter of type BLOB(5M) that indicates the properties for the XML schema document being added. This parameter can have a NULL value; otherwise, the value is an XML document.
Example of XSR_ADDSCHEMADOC
The following example calls the XSR_ADDSCHEMADOC stored procedure:
CALL SYSPROC.XSR_ADDSCHEMADOC( SYSXSR, POschema, http://myPOschema/PO.xsd, :schema_content, :schema_properties)
In this example, XSR_ADDSCHEMADOC folds the name POschema to uppercase, so the name of the XML schema that is added is POSCHEMA. If you do not want XSR_ADDSCHEMADOC to fold POschema to uppercase, you need to delimit the name with double quotation marks ("), as in the following example.
CALL SYSPROC.XSR_ADDSCHEMADOC( SYSXSR, "POschema", http://myPOschema/PO.xsd, :schema_content, :schema_properties)
Command line processor (DB2 Commands) Example of XML schema registration and removal using stored procedures (DB2 Programming for XML)
An XML schema is not available for validation until the schema registration completes through a call to this stored procedure.
CALL SYSPROC.XSR_COMPLETE ( rschema, name, schemaproperties, isusedfordecomposition )
Example of XSR_COMPLETE
The following example calls the XSR_COMPLETE stored procedure:
CALL SYSPROC.XSR_COMPLETE( SYSXSR, POschema, :schemaproperty_host_var, 0)
In this example, XSR_COMPLETE folds the name POschema to uppercase, so the name of the XML schema for which registration is completed is POSCHEMA. If you do not want XSR_COMPLETE to fold POschema to uppercase, you need to delimit the name with double quotation marks ("), as in the following example.
CALL SYSPROC.XSR_COMPLETE( SYSXSR, "POschema", :schemaproperty_host_var, 0)
Related concepts: Example of XML schema registration and removal using stored procedures (DB2 Programming for XML) Command line processor (DB2 Commands) Related tasks: Additional steps for enabling the stored procedures and objects for XML schema support (DB2 Installation and Migration)
Example of XSR_REMOVE
The following example calls the XSR_REMOVE stored procedure:
CALL SYSPROC.XSR_REMOVE( SYSXSR, POschema)
In this example, XSR_REMOVE folds the name POschema to uppercase, so the name of the XML schema that is removed is POSCHEMA. If you do not want XSR_REMOVE to fold POschema to uppercase, you need to delimit the name with double quotation marks ("), as in the following example.
CALL SYSPROC.XSR_REMOVE( SYSXSR, "POschema")
Related concepts: Command line processor (DB2 Commands) Example of XML schema registration and removal using stored procedures (DB2 Programming for XML)
Example
The following example assumes that all systems involved implement two-phase commit. This example suggests updating several systems in a loop and ending the
unit of work by committing only when the loop is complete. Updates are coordinated across the entire set of systems. Spiffy's application uses a location name to construct a three-part table name in an INSERT statement. It then prepares the statement and executes it dynamically. The values to be inserted are transmitted to the remote location and substituted for the parameter markers in the INSERT statement. The following overview shows how the application uses three-part names:
Read input values
Do for all locations
  Read location name
  Set up statement to prepare
  Prepare statement
  Execute statement
End loop
Commit
After the application obtains a location name, for example 'SAN_JOSE', it next creates the following character string:
INSERT INTO SAN_JOSE.DSN8910.PROJ VALUES (?,?,?,?,?,?,?,?)
The application assigns the character string to the variable INSERTX and then executes these statements:
EXEC SQL PREPARE STMT1 FROM :INSERTX;
EXEC SQL EXECUTE STMT1 USING :PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
                             :PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ;
The host variables for Spiffy's project table match the declaration for the sample project table. To keep the data consistent at all locations, the application commits the work only when the loop has executed for all locations. Either every location has committed the INSERT or, if a failure has prevented any location from inserting, all other locations have rolled back the INSERT. (If a failure occurs during the commit process, the entire unit of work can be indoubt.) Recommendation: You might find it convenient to use aliases when creating character strings that become prepared statements, instead of using full three-part names like SAN_JOSE.DSN8910.PROJ.
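For example, the following sketch shows that approach with a hypothetical alias name SJPROJ: the alias is created once at the requester, and the application then builds the INSERT string without a three-part name.

CREATE ALIAS SJPROJ FOR SAN_JOSE.DSN8910.PROJ;

INSERT INTO SJPROJ VALUES (?,?,?,?,?,?,?,?)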
Related concepts: Dynamic SQL on page 158 Aliases and synonyms (DB2 SQL) Related tasks: Binding packages at a remote location on page 970 Related reference: Project table (DSN8910.PROJ) (Introduction to DB2 for z/OS)
Example
You can perform the following series of actions, which includes a forward reference to a declared temporary table:
EXEC SQL CONNECT TO CHICAGO;                  /* Connect to the remote site  */
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE T1    /* Define the temporary table  */
  (CHARCOL CHAR(6) NOT NULL)                  /* at the remote site          */
  ON COMMIT DROP TABLE;
EXEC SQL CONNECT RESET;                       /* Connect back to local site  */
EXEC SQL INSERT INTO CHICAGO.SESSION.T1       /* Access the temporary table  */
  VALUES ('ABCDEF');                          /* at the remote site (forward reference) */
However, you cannot perform the following series of actions, which includes a backward reference to the declared temporary table:
EXEC SQL DECLARE GLOBAL TEMPORARY TABLE T1    /* Define the temporary table  */
  (CHARCOL CHAR(6) NOT NULL)                  /* at the local site (ATLANTA) */
  ON COMMIT DROP TABLE;
EXEC SQL CONNECT TO CHICAGO;                  /* Connect to the remote site  */
EXEC SQL INSERT INTO ATLANTA.SESSION.T1       /* Cannot access temp table    */
  VALUES ('ABCDEF');                          /* from the remote site (backward reference) */
For example, the application inserts a new location name into the variable LOCATION_NAME and executes the following statements:
EXEC SQL CONNECT TO :LOCATION_NAME;
EXEC SQL INSERT INTO DSN8910.PROJ
  VALUES (:PROJNO, :PROJNAME, :DEPTNO, :RESPEMP,
          :PRSTAFF, :PRSTDATE, :PRENDATE, :MAJPROJ);
To keep the data consistent at all locations, the application commits the work only when the loop has executed for all locations. Either every location has committed the INSERT or, if a failure has prevented any location from inserting, all other locations have rolled back the INSERT. (If a failure occurs during the commit process, the entire unit of work can be indoubt.)
The host variables for Spiffy's project table match the declaration for the sample project table. LOCATION_NAME is a character-string variable of length 16.
Related reference:
Project table (DSN8910.PROJ) (Introduction to DB2 for z/OS)
The DBALIAS column applies to DRDA connections only. For example, suppose that an employee database is deployed across two sites and that both sites make themselves known as location name EMPLOYEE. To access each site, insert a row for each site into SYSIBM.LOCATIONS with the location names SVL_EMPLOYEE and SJ_EMPLOYEE. Both rows contain EMPLOYEE as the DBALIAS value.
When an application issues a CONNECT TO SVL_EMPLOYEE statement, DB2 searches the SYSIBM.LOCATIONS table to retrieve the location and network attributes of the database server. Because the DBALIAS value is not blank, DB2 uses the alias EMPLOYEE, and not the location name, to access the database.
If the application uses fully qualified object names in its SQL statements, DB2 sends the statements to the remote server without modification. For example, suppose that the application issues the statement SELECT * FROM SVL_EMPLOYEE.authid.table with the fully qualified object name. However, DB2 accesses the remote server by using the EMPLOYEE alias. The remote server must identify itself as both SVL_EMPLOYEE and EMPLOYEE; otherwise, it rejects the SQL statement with a message indicating that the database is not found. If the remote server is DB2, the location SVL_EMPLOYEE might be defined as a location alias for EMPLOYEE. DB2 for z/OS servers are defined with this alias by using the DDF ALIAS statement of the DSNJU003 change log inventory utility.
DB2 locally executes any SQL statements that contain fully qualified object names if the high-level qualifier is the location name or any of its alias names.
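For example, rows like the following ones might be inserted into SYSIBM.LOCATIONS for the two sites. This is only a sketch: the LINKNAME and PORT values are placeholders, and the column list is abridged; use the network attributes and full column layout that apply to your own servers.

INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, PORT, DBALIAS)
  VALUES ('SVL_EMPLOYEE', 'SVLLU', '446', 'EMPLOYEE');
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, PORT, DBALIAS)
  VALUES ('SJ_EMPLOYEE', 'SJLU', '446', 'EMPLOYEE');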
Releasing connections
When you connect to remote locations explicitly, you must also terminate those connections explicitly.
Example
By using the RELEASE statement, you can place any of the following connections in the release-pending state:
v A specific connection that the next unit of work does not use:
EXEC SQL RELEASE SPIFFY1;
v All DB2 private protocol connections. If the first phase of your application program uses DB2 private protocol access and the second phase uses DRDA
access, open DB2 private protocol connections from the first phase could cause a CONNECT operation to fail in the second phase. To prevent that error, execute the following statement before the commit operation that separates the two phases:
EXEC SQL RELEASE ALL PRIVATE;
PRIVATE refers to DB2 private protocol connections, which exist only between instances of DB2 for z/OS.
Example: DB2 for z/OS supports ROWID columns; DB2 for Linux, UNIX, and Windows does not support ROWID columns. Any data definition statements that use ROWID columns cannot run across all platforms.
v Statements can have different limits.
Example: A query in DB2 for z/OS can have 750 columns; for other systems, the maximum is higher. But a query using 750 or fewer columns could execute in all systems.
v Some statements are not sent to the server but are processed completely by the requester. You cannot use those statements in a remote package even though the server supports them.
v In general, if a statement to be executed at a remote server contains host variables, a DB2 requester assumes them to be input host variables unless it supports the syntax of the statement and can determine otherwise. If the assumption is not valid, the server rejects the statement.
Related reference:
Characteristics of SQL statements in DB2 for z/OS (DB2 SQL)
You can bind a plan or package with the ENCODING bind option to control the CCSIDs for all static data in that plan or package. For example, if you specify ENCODING(UNICODE) when you bind a package at a remote DB2 for z/OS system, the data that is returned in host variables from the remote system is encoded in the default Unicode CCSID for that system.
For static or dynamic SQL
An application program can specify overriding CCSIDs for individual host variables in DECLARE VARIABLE statements. An application program that uses an SQLDA can specify an overriding CCSID for the returned data in the SQLDA. When the application program executes a FETCH statement, you receive the data in the CCSID that is specified in the SQLDA.
Related tasks:
Setting the CCSID for host variables on page 141
Related reference:
BIND and REBIND options (DB2 Commands)
v Application Messaging Interface (AMI)
v WebSphere MQ classes for Java
v WebSphere MQ classes for Java Message Service (JMS)
Restriction: The AMI has been deprecated.
DB2 provides its own application programming interface to the WebSphere MQ message handling system through a set of external user-defined functions, which are called DB2 MQ functions. You can use these functions in SQL statements to combine DB2 database access with WebSphere MQ message handling. The DB2 MQ functions use either the AMI or the MQI.
Restriction: All DB2 MQ functions that use AMI are deprecated. You can convert those applications that use the AMI-based functions to use the MQI-based functions.
Related tasks:
Converting applications to use the MQI functions on page 916
Related reference:
WebSphere MQ information center
WebSphere MQ messages
WebSphere MQ uses messages to pass information between applications. Messages consist of the following parts:
v The message attributes, which identify the message and its properties.
v The message data, which is the application data that is carried in the message.
Related concepts:
DB2 MQ functions and DB2 MQ XML stored procedures on page 901
a system administrator. The complexity of the parameters in the service is hidden from the application program.
policy
Defines how the message is handled. Policies control such items as:
v The attributes of the message, for example, the priority.
v Options for send and receive operations, for example, whether an operation is part of a unit of work.
The default service and policy are set as part of defining the WebSphere MQ configuration for a particular installation of DB2. (This action is typically performed by a system administrator.) DB2 provides the default service DB2.DEFAULT.SERVICE and the default policy DB2.DEFAULT.POLICY. How services and policies are stored and managed depends on whether you are using the AMI or the MQI.
Related tasks:
Additional steps for enabling WebSphere MQ user-defined functions (DB2 Installation and Migration)
Related reference:
WebSphere MQ information center
WebSphere MQ message handling with the MQI:
One way to send and receive WebSphere MQ messages from DB2 applications is to use the DB2 MQ functions that use MQI. These MQI-based functions use the services and policies that are defined in two DB2 tables, SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE. These tables are user-managed and are typically created and maintained by a system administrator. Each table contains a row for the default service and policy that are provided by DB2.
The application program does not need to know the details of the services and policies that are defined in these tables. The application need only specify which service and policy to use for each message that it sends and receives. The application specifies this information when it calls a DB2 MQ function.
Related concepts:
DB2 MQ functions and DB2 MQ XML stored procedures on page 901
Related reference:
DB2 MQ tables on page 907
DB2 MQI services:
A service describes a destination to which an application sends messages or from which an application receives messages. DB2 Message Queue Interface (MQI) services are defined in the DB2 table SYSIBM.MQSERVICE_TABLE.
The MQI-based DB2 MQ functions use the services that are defined in the DB2 table SYSIBM.MQSERVICE_TABLE. This table is user-managed and is typically created and maintained by a system administrator. This table contains a row for each defined service, including your customized services and the default service that is provided by DB2.
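For example, to see which services and policies are currently defined on a DB2 system, you might query the two tables directly. This is only a sketch; the exact columns that are returned depend on how the tables were created at your installation.

SELECT * FROM SYSIBM.MQSERVICE_TABLE;
SELECT * FROM SYSIBM.MQPOLICY_TABLE;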
The application program does not need to know the details of the defined services. When an application program calls an MQI-based DB2 MQ function, the program selects a service from SYSIBM.MQSERVICE_TABLE by specifying it as a parameter.
Related concepts:
DB2 MQ functions and DB2 MQ XML stored procedures on page 901
WebSphere MQ message handling on page 897
Related reference:
DB2 MQ tables on page 907
DB2 MQI policies:
A policy controls how the MQ messages are handled. DB2 Message Queue Interface (MQI) policies are defined in the DB2 table SYSIBM.MQPOLICY_TABLE.
The MQI-based DB2 MQ functions use the policies that are defined in the DB2 table SYSIBM.MQPOLICY_TABLE. This table is user-managed and is typically created and maintained by a system administrator. This table contains a row for each defined policy, including your customized policies and the default policy that is provided by DB2.
The application program does not need to know the details of the defined policies. When an application program calls an MQI-based DB2 MQ function, the program selects a policy from SYSIBM.MQPOLICY_TABLE by specifying it as a parameter.
Related concepts:
DB2 MQ functions and DB2 MQ XML stored procedures on page 901
WebSphere MQ message handling on page 897
Related reference:
DB2 MQ tables on page 907
WebSphere MQ message handling with the AMI:
One way to send and receive WebSphere MQ messages from DB2 applications is to use the DB2 MQ functions that use AMI. However, be aware that this interface and the associated DB2 MQ functions have been deprecated.
Restriction: The AMI and the DB2 MQ functions that use the AMI have been deprecated. You can convert those applications that use the AMI-based functions to use the MQI-based functions.
The AMI-based functions use the services and policies that are defined in AMI configuration files, which are in XML format. Typically, these files are created and maintained by a system administrator. These files also define any default services and policies, including the defaults that are provided by DB2.
The application program does not need to know the details of the services and policies that are defined in these files. The application need only specify which service and policy to use for each message that it sends and receives. The application specifies this information when it calls a DB2 MQ function. The AMI uses the service and policy to interpret and construct the MQ headers and message descriptors. The AMI does not act on the message data.
Related concepts:
DB2 MQ functions and DB2 MQ XML stored procedures on page 901
Related tasks:
Converting applications to use the MQI functions on page 916
Related reference:
WebSphere MQ information center
AMI services:
A service describes a destination to which an application sends messages or from which an application receives messages. AMI services are defined in AMI configuration files.
Restriction: The AMI and the DB2 MQ functions that use the AMI have been deprecated.
The AMI-based DB2 MQ functions use the services that are defined in AMI configuration files, which are in XML format. These files are typically created and maintained by a system administrator. These files contain all of the defined services, including your customized services and any default services, such as the one that DB2 provides.
The application program does not need to know the details of the defined services. When an application program calls an AMI-based DB2 MQ function, the program selects a service from the AMI configuration file by specifying it as a parameter.
Related concepts:
DB2 MQ functions and DB2 MQ XML stored procedures on page 901
WebSphere MQ message handling on page 897
AMI policies:
A policy controls how the MQ messages are handled. AMI policies are defined in AMI configuration files.
Restriction: The AMI and the DB2 MQ functions that use the AMI have been deprecated.
The AMI-based DB2 MQ functions use the policies that are defined in AMI configuration files, which are in XML format. These files are typically created and maintained by a system administrator. These files contain all of the defined policies, including your customized policies and any default policies, such as the one that DB2 provides.
The application program does not need to know the details of the defined policies. When an application program calls an AMI-based DB2 MQ function, the program selects a policy from the AMI configuration file by specifying it as a parameter.
Related concepts: DB2 MQ functions and DB2 MQ XML stored procedures WebSphere MQ message handling on page 897
Table 137. DB2 MQ scalar functions

MQPUBLISH (publisher-service, service-policy, msg-data, topic-list, correlation-id)
MQPUBLISH publishes a message, as specified in the msg-data variable, to the WebSphere MQ publisher that is specified in the publisher-service variable. It uses the quality of service policy as specified in the service-policy variable. The topic-list variable specifies a list of topics for the message. The optional correlation-id variable specifies the correlation id that is to be associated with this message. The return value is 1 if successful or 0 if not successful.
Restriction: MQPUBLISH uses the AMI only. A version of MQPUBLISH that uses the MQI is not available.

MQREAD (receive-service, service-policy)
MQREAD returns a message in a VARCHAR variable from the MQ location specified by receive-service, using the policy defined in service-policy. This operation does not remove the message from the head of the queue but instead returns it. If no messages are available to be returned, a null value is returned.

MQREADCLOB (receive-service, service-policy)
MQREADCLOB returns a message in a CLOB variable from the MQ location specified by receive-service, using the policy defined in service-policy. This operation does not remove the message from the head of the queue but instead returns it. If no messages are available to be returned, a null value is returned.

MQRECEIVE (receive-service, service-policy, correlation-id)
MQRECEIVE returns a message in a VARCHAR variable from the MQ location specified by receive-service, using the policy defined in service-policy. This operation removes the message from the queue. If correlation-id is specified, the first message with a matching correlation identifier is returned; if correlation-id is not specified, the message at the beginning of the queue is returned. If no messages are available to be returned, a null value is returned.

MQRECEIVECLOB (receive-service, service-policy, correlation-id)
MQRECEIVECLOB returns a message in a CLOB variable from the MQ location specified by receive-service, using the policy defined in service-policy. This operation removes the message from the queue. If correlation-id is specified, the first message with a matching correlation identifier is returned; if correlation-id is not specified, the message at the head of the queue is returned. If no messages are available to be returned, a null value is returned.

MQSEND (send-service, service-policy, msg-data, correlation-id)
MQSEND sends the data in a VARCHAR or CLOB variable msg-data to the MQ location specified by send-service, using the policy defined in service-policy. An optional user-defined message correlation identifier can be specified by correlation-id. The return value is 1 if successful or 0 if not successful.

MQSUBSCRIBE (subscriber-service, service-policy, topic-list)
MQSUBSCRIBE registers interest in WebSphere MQ messages that are published to the list of topics that are specified in the topic-list variable. The subscriber-service variable specifies a logical destination for messages that match the specified list of topics. Messages that match each topic are placed on the queue at the specified destination, using the policy specified in the service-policy variable. These messages can be read or received by issuing a subsequent call to MQREAD, MQREADALL, MQREADCLOB, MQREADALLCLOB, MQRECEIVE, MQRECEIVEALL, MQRECEIVECLOB, or MQRECEIVEALLCLOB. The return value is 1 if successful or 0 if not successful.
Restriction: MQSUBSCRIBE uses the AMI only. A version of MQSUBSCRIBE that uses the MQI is not available.

MQUNSUBSCRIBE (subscriber-service, service-policy, topic-list)
MQUNSUBSCRIBE unregisters previously specified interest in WebSphere MQ messages that are published to the list of topics that are specified in the topic-list variable. The subscriber-service, service-policy, and topic-list variables specify which subscription is to be cancelled. The return value is 1 if successful or 0 if not successful.
Restriction: MQUNSUBSCRIBE uses the AMI only. A version of MQUNSUBSCRIBE that uses the MQI is not available.
Notes for Table 137:
1. You can send or receive messages in VARCHAR variables or CLOB variables. The maximum length for a message in a VARCHAR variable is 32 KB. The maximum length for a message in a CLOB variable is 2 MB.
2. Restriction: The versions of these MQ functions that are in the DB2MQ1C, DB2MQ1N, DB2MQ2C, and DB2MQ2N schemas are deprecated. (Those functions use the AMI.) Instead use the version of these functions in the DB2MQ schema. (Those functions use the MQI.) The exceptions are MQPUBLISH, MQSUBSCRIBE, and MQUNSUBSCRIBE. Although the AMI-based versions of these functions are deprecated, a version of these functions does not exist in the DB2MQ schema.
The following table describes the MQ table functions that DB2 can use.
Table 138. DB2 MQ table functions

MQREADALL (receive-service, service-policy, num-rows)
MQREADALL returns a table that contains the messages and message metadata in VARCHAR variables from the MQ location specified by receive-service, using the policy defined in service-policy. This operation does not remove the messages from the queue. If num-rows is specified, a maximum of num-rows messages is returned; if num-rows is not specified, all available messages are returned.

MQREADALLCLOB (receive-service, service-policy, num-rows)
MQREADALLCLOB returns a table that contains the messages and message metadata in CLOB variables from the MQ location specified by receive-service, using the policy defined in service-policy. This operation does not remove the messages from the queue. If num-rows is specified, a maximum of num-rows messages is returned; if num-rows is not specified, all available messages are returned.

MQRECEIVEALL (receive-service, service-policy, correlation-id, num-rows)
MQRECEIVEALL returns a table that contains the messages and message metadata in VARCHAR variables from the MQ location specified by receive-service, using the policy defined in service-policy. This operation removes the messages from the queue. If correlation-id is specified, only those messages with a matching correlation identifier are returned; if correlation-id is not specified, all available messages are returned. If num-rows is specified, a maximum of num-rows messages is returned; if num-rows is not specified, all available messages are returned.

MQRECEIVEALLCLOB (receive-service, service-policy, correlation-id, num-rows)
MQRECEIVEALLCLOB returns a table that contains the messages and message metadata in CLOB variables from the MQ location specified by receive-service, using the policy defined in service-policy. This operation removes the messages from the queue. If correlation-id is specified, only those messages with a matching correlation identifier are returned; if correlation-id is not specified, all available messages are returned. If num-rows is specified, a maximum of num-rows messages is returned; if num-rows is not specified, all available messages are returned.

Notes for Table 138:
1. You can send or receive messages in VARCHAR variables or CLOB variables. The maximum length for a message in a VARCHAR variable is 32 KB. The maximum length for a message in a CLOB variable is 2 MB.
2. The first column of the result table of a DB2 MQ table function contains the message.
3. Restriction: The versions of these MQ functions that are in the DB2MQ1C, DB2MQ1N, DB2MQ2C, and DB2MQ2N schemas are deprecated. (Those functions use the AMI.) Instead use the version of these functions in the DB2MQ schema. (Those functions use the MQI.)
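As an illustration of how the MQI-based functions are used in SQL, the following hedged sketch sends one message and then browses all queued messages through the table function MQREADALL. The DB2MQ schema and the default service and policy names come from this information; the column name MSG for the message column is an assumption, so check the result table definition of the function on your system.

SELECT DB2MQ.MQSEND('DB2.DEFAULT.SERVICE', 'DB2.DEFAULT.POLICY', 'Test message')
  FROM SYSIBM.SYSDUMMY1;

SELECT T.MSG
  FROM TABLE(DB2MQ.MQREADALL('DB2.DEFAULT.SERVICE', 'DB2.DEFAULT.POLICY')) AS T;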
The following table describes the MQ functions that DB2 can use to work with XML data.
Restriction: All of these DB2 MQ XML-specific functions have been deprecated. Instead of using these XML-specific functions, you can use the other DB2 MQ functions with data of type XML by first casting that data to type VARCHAR or CLOB.
Table 139. DB2 MQ XML-specific functions

MQREADXML (receive-service, service-policy)
    MQREADXML returns the first message in a queue without removing the message from the queue.
MQREADALLXML (receive-service, service-policy)
    MQREADALLXML returns a table that contains messages from a queue without removing the messages from the queue.
MQRECEIVEXML (receive-service, service-policy, correlation-id)
    MQRECEIVEXML returns a message from the queue and removes that message from the queue.
MQRECEIVEALLXML (receive-service, service-policy, correlation-id)
    MQRECEIVEALLXML returns a table that contains messages from a queue and removes the messages from the queue.
MQSENDXML (send-service, service-policy, correlation-id)
    MQSENDXML sends a message and does not expect a reply.
MQSENDXMLFILE (send-service, service-policy, correlation-id)
    MQSENDXMLFILE sends a message that contains a file and does not expect a reply.
MQSENDXMLFILECLOB (send-service, service-policy, correlation-id)
    MQSENDXMLFILECLOB sends a message that contains a file and does not expect a reply.
MQPUBLISHXML (publisher-service, service-policy, correlation-id)
    MQPUBLISHXML sends a message to a queue to be picked up by applications that monitor the queue.
You can use the WebSphere MQ XML stored procedures to retrieve an XML document from a message queue, decompose it into untagged data, and store the data in DB2 tables. You can also compose an XML document from DB2 data and send the document to an MQSeries(R) message queue.

The following table shows WebSphere MQ XML stored procedures for decomposition.

Restriction: All of these DB2 MQ XML decomposition stored procedures have been deprecated. Instead of using these decomposition stored procedures, you can shred XML documents from an MQ message queue.
Table 140. DB2 MQ XML decomposition stored procedures

DXXMQINSERT and DXXMQINSERTALL
    Decompose incoming XML documents from a message queue and store the data in new or existing database tables. The DXXMQINSERT and DXXMQINSERTALL stored procedures require an enabled XML collection name as input.
DXXMQINSERTCLOB and DXXMQINSERTALLCLOB
    Decompose incoming XML documents from a message queue and store the data in new or existing database tables. The DXXMQINSERTCLOB and DXXMQINSERTALLCLOB stored procedures require an enabled XML collection name as input.
DXXMQSHRED and DXXMQSHREDALL
    Shred incoming XML documents from a message queue and store the data in new or existing database tables. The DXXMQSHRED and DXXMQSHREDALL stored procedures take a DAD file as input; they do not require an enabled XML collection name as input.
Table 140. DB2 MQ XML decomposition stored procedures (continued)

DXXMQSHREDCLOB and DXXMQSHREDALLCLOB
    Shred incoming XML documents from a message queue and store the data in new or existing database tables. The DXXMQSHREDCLOB and DXXMQSHREDALLCLOB stored procedures take a DAD file as input; they do not require an enabled XML collection name as input.
The following table shows WebSphere MQ XML stored procedures for composition.

Restriction: All of these DB2 MQ XML composition stored procedures have been deprecated. Instead of using these composition stored procedures, you can generate XML documents from existing tables and send them to an MQ message queue.
Table 141. DB2 MQ XML composition stored procedures

DXXMQGEN and DXXMQGENALL
    Generate XML documents from existing database tables and send the generated XML documents to a message queue. The DXXMQGEN and DXXMQGENALL stored procedures take a DAD file as input; they do not require an enabled XML collection name as input.
DXXMQRETRIEVE and DXXMQRETRIEVECLOB
    Generate XML documents from existing database tables and send the generated XML documents to a message queue. The DXXMQRETRIEVE and DXXMQRETRIEVECLOB stored procedures require an enabled XML collection name as input.
Related concepts: DB2-supplied stored procedures on page 797 Related tasks: Converting applications to use the MQI functions on page 916 Additional steps for enabling WebSphere MQ user-defined functions (DB2 Installation and Migration) Related reference: MQREADALL (DB2 SQL) MQREADALLCLOB (DB2 SQL) MQRECEIVEALL (DB2 SQL) MQRECEIVEALLCLOB (DB2 SQL) WebSphere MQ information center
Restriction: The AMI and the DB2 MQ functions that use the AMI have been deprecated.

The schema name when you use AMI-based DB2 MQ functions and stored procedures for single-phase commit is DB2MQ1N. The schema name when you use AMI-based DB2 MQ functions and stored procedures for two-phase commit is DB2MQ2N. The schema names DB2MQ1C and DB2MQ2C are still valid, but they do not support the parameter style that allows the value to contain binary '0'. You need to assign these two versions of the AMI-based DB2 MQ functions and stored procedures to different WLM environments, which guarantees that the versions are never invoked from the same address space.

For MQI-based DB2 MQ functions, you can specify whether the function is for one-phase commit or two-phase commit by using the value in the SYNCPOINT column of the table SYSIBM.MQPOLICY_TABLE.

Single-phase commit in WebSphere MQ:

If your application uses single-phase commit, any DB2 COMMIT or ROLLBACK operations are independent of WebSphere MQ operations. If a transaction is rolled back, the messages that have been sent to a queue within the current unit of work are not discarded. This type of commit is typically used in the case of application error. You might want to use WebSphere MQ messaging functions to notify a system programmer that an application error has occurred. The application issues a ROLLBACK after the error occurs, but the message is still delivered to the queue that contains the error messages.

In a single-phase commit environment, WebSphere MQ controls its own queue operations. A DB2 COMMIT or ROLLBACK does not affect when or whether messages are added to or deleted from an MQ queue.

Two-phase commit in WebSphere MQ:

If your application uses two-phase commit, RRS coordinates the commit process. If a transaction is rolled back, the messages that have been sent to a queue within the current unit of work are discarded. This type of commit is typically used when a transaction causes a message to be sent, which causes another transaction to be initiated. For example, assume that a sales transaction causes a WebSphere MQ message to be sent to a queue. The message causes your inventory system to order replacement merchandise. That message should be discarded if the transaction representing the sale is rolled back.
In a two-phase commit environment, if you want to force messages to be added to or deleted from an MQ queue, you need to issue a COMMIT in your application program after you call an AMI-based DB2 MQ function.
Generating XML documents from existing tables and sending them to an MQ message queue
You can send data from a DB2 table to the MQ message queue. First put the data in an XML document and then send that document to the message queue.
Procedure
To generate XML documents from existing tables and send them to an MQ message queue:
1. Compose an XML document by using the DB2 XML publishing functions.
2. Cast the XML document to type VARCHAR or CLOB.
3. Send the document to an MQ message queue by using the appropriate DB2 MQ function.
Related concepts:
    DB2 MQ functions and DB2 MQ XML stored procedures on page 901
    Functions for constructing XML values (DB2 Programming for XML)
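The following statement is a minimal sketch of these steps. The table DEPT and its columns DEPTNO and DEPTNAME are hypothetical, the XML publishing functions compose and serialize the document, and the default service and policy are used because MQSEND is called with only the message data.

SELECT DB2MQ.MQSEND(
         XMLSERIALIZE(
           XMLELEMENT(NAME "dept",
             XMLFOREST(DEPTNO, DEPTNAME))
           AS CLOB(1M)))          -- step 2: serialize the XML value as a CLOB
  FROM DEPT;                      -- step 3: one message is sent per row
COMMIT;                           -- needed if the policy in effect uses two-phase commit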
Procedure
To shred XML documents from an MQ message queue:
1. Retrieve the XML document from an MQ message queue by using the appropriate MQ function.
2. Shred the retrieved message to DB2 tables by using the XML decomposition stored procedure (XDBDECOMPXML).
Related concepts:
    DB2 MQ functions and DB2 MQ XML stored procedures on page 901
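A sketch of step 1, assuming a service named MYSERVICE and a policy named MYPOLICY are defined in the DB2 MQ tables; the returned CLOB is the document that you then pass to the XDBDECOMPXML stored procedure (its parameters are not shown here):

-- Step 1: remove the next message from the queue and return it as a CLOB
SELECT DB2MQ.MQRECEIVECLOB('MYSERVICE', 'MYPOLICY')
  FROM SYSIBM.SYSDUMMY1;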
DB2 MQ tables
The DB2 MQ tables contain service and policy definitions that are used by the Message Queue Interface (MQI) based DB2 MQ functions. You must populate the DB2 MQ tables before you can use these MQI-based functions.
The DB2 MQ tables are SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE. These tables are user-managed. You need to create them during the installation or migration process. Sample job DSNTIJSG creates these tables with one default row in each table. If you previously used the AMI-based DB2 MQ functions, you used AMI configuration files instead of these tables. To use the MQI-based DB2 MQ functions, you need to move the data from those configuration files to the DB2 tables SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE. The following table describes the columns for SYSIBM.MQSERVICE_TABLE.
Table 142. SYSIBM.MQSERVICE_TABLE column descriptions

SERVICENAME
    This column contains the service name, which is an optional input parameter of the MQ functions. This column is the primary key for the SYSIBM.MQSERVICE_TABLE table.
QUEUEMANAGER
    This column contains the name of the queue manager where the MQ functions are to establish a connection.
INPUTQUEUE
    This column contains the name of the queue from which the MQ functions are to send and retrieve messages.
CODEDCHARSETID
    This column contains the character set identifier for character data in the messages that are sent and received by the MQ functions. This column corresponds to the CodedCharSetId field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the CodedCharSetId field. The default value for this column is 0, which sets the CodedCharSetId field of the MQMD to the value MQCCSI_Q_MGR.
ENCODING
    This column contains the encoding value for the numeric data in the messages that are sent and received by the MQ functions. This column corresponds to the Encoding field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Encoding field. The default value for this column is 0, which sets the Encoding field in the MQMD to the value MQENC_NATIVE.
DESCRIPTION
    This column contains the description of the service.
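For example, a row for a service might be inserted as follows. This is a sketch only: the service, queue manager, and queue names are placeholders for the names in your installation.

INSERT INTO SYSIBM.MQSERVICE_TABLE
       (SERVICENAME, QUEUEMANAGER, INPUTQUEUE, CODEDCHARSETID, ENCODING, DESCRIPTION)
VALUES ('MYSERVICE', 'MQND', 'IN_Q', 0, 0, 'Sample service for the MQI-based MQ functions');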
Table 143. SYSIBM.MQPOLICY_TABLE column descriptions

POLICYNAME
    This column contains the policy name, which is an optional input parameter of the MQ functions. This column is the primary key for the SYSIBM.MQPOLICY_TABLE table.

SEND_PRIORITY
    This column contains the priority of the message. This column corresponds to the Priority field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Priority field. The default value for this column is -1, which sets the Priority field in the MQMD to the value MQPRI_PRIORITY_AS_Q_DEF.

SEND_PERSISTENCE
    This column indicates whether the message persists despite any system failures or instances of restarting the queue manager. This column corresponds to the Persistence field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Persistence field. This column can have the following values:
    Q   Sets the Persistence field in the MQMD to the value MQPER_PERSISTENCE_AS_Q_DEF. This value is the default.
    Y   Sets the Persistence field in the MQMD to the value MQPER_PERSISTENT.
    N   Sets the Persistence field in the MQMD to the value MQPER_NOT_PERSISTENT.

SEND_EXPIRY
    This column contains the message expiration time, in tenths of a second. This column corresponds to the Expiry field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Expiry field. The default value is -1, which sets the Expiry field to the value MQEI_UNLIMITED.

SEND_RETRY_COUNT
    This column contains the number of times that the MQ function is to try to send a message if the procedure fails. The default value is 5.

SEND_RETRY_INTERVAL
    This column contains the interval, in milliseconds, between each attempt to send a message. The default value is 1000.

SEND_NEW_CORRELID
    This column specifies how the correlation identifier is to be set if a correlation identifier is not passed as an input parameter in the MQ function. The correlation identifier is set in the CorrelId field in the message descriptor structure (MQMD). This column can have one of the following values:
    N   Sets the CorrelId field in the MQMD to binary zeros. This value is the default.
    Y   Specifies that the queue manager is to generate a new correlation identifier and set the CorrelId field in the MQMD to that value. This 'Y' value is equivalent to setting the MQPMO_NEW_CORREL_ID option in the Options field in the put message options structure (MQPMO).

SEND_RESPONSE_MSGID
    This column specifies how the MsgId field in the message descriptor structure (MQMD) is to be set for report and reply messages. This column corresponds to the Report field in the MQMD. MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Sets the MQRO_NEW_MSG_ID option in the Report field in the MQMD. This value is the default.
    P   Sets the MQRO_PASS_MSG_ID option in the Report field in the MQMD.

SEND_RESPONSE_CORRELID
    This column specifies how the CorrelID field in the message descriptor structure (MQMD) is to be set for report and reply messages. This column corresponds to the Report field in the MQMD. MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    C   Sets the MQRO_COPY_MSG_ID_TO_CORREL_ID option in the Report field in the MQMD. This value is the default.
    P   Sets the MQRO_PASS_CORREL_ID option in the Report field in the MQMD.

SEND_EXCEPTION_ACTION
    This column specifies what to do with the original message when it cannot be delivered to the destination queue. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    Q   Sets the MQRO_DEAD_LETTER_Q option in the Report field in the MQMD. This value is the default.
    D   Sets the MQRO_DISCARD_MSG option in the Report field in the MQMD.
    P   Sets the MQRO_PASS_DISCARD_AND_EXPIRY option in the Report field in the MQMD.

SEND_REPORT_EXCEPTION
    This column specifies whether an exception report message is to be generated when a message cannot be delivered to the specified destination queue and, if so, what that report message should contain. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that an exception report message is not to be generated. No options in the Report field are set. This value is the default.
    E   Sets the MQRO_EXCEPTION option in the Report field in the MQMD.
    D   Sets the MQRO_EXCEPTION_WITH_DATA option in the Report field in the MQMD.
    F   Sets the MQRO_EXCEPTION_WITH_FULL_DATA option in the Report field in the MQMD.

SEND_REPORT_COA
    This column specifies whether the queue manager is to send a confirm-on-arrival (COA) report message when the message is placed in the destination queue, and if so, what that COA message is to contain. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that a COA message is not to be sent. No options in the Report field are set. This value is the default.
    C   Sets the MQRO_COA option in the Report field in the MQMD.
    D   Sets the MQRO_COA_WITH_DATA option in the Report field in the MQMD.
    F   Sets the MQRO_COA_WITH_FULL_DATA option in the Report field in the MQMD.

SEND_REPORT_COD
    This column specifies whether the queue manager is to send a confirm-on-delivery (COD) report message when an application retrieves and deletes a message from the destination queue, and if so, what that COD message is to contain. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that a COD message is not to be sent. No options in the Report field are set. This value is the default.
    C   Sets the MQRO_COD option in the Report field in the MQMD.
    D   Sets the MQRO_COD_WITH_DATA option in the Report field in the MQMD.
    F   Sets the MQRO_COD_WITH_FULL_DATA option in the Report field in the MQMD.

SEND_REPORT_EXPIRY
    This column specifies whether the queue manager is to send an expiration report message if a message is discarded before it is delivered to an application, and if so, what that message is to contain. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that an expiration report message is not to be sent. No options in the Report field are set. This value is the default.
    C   Sets the MQRO_EXPIRATION option in the Report field in the MQMD.
    D   Sets the MQRO_EXPIRATION_WITH_DATA option in the Report field in the MQMD.
    F   Sets the MQRO_EXPIRATION_WITH_FULL_DATA option in the Report field in the MQMD.

SEND_REPORT_ACTION
    This column specifies whether the receiving application sends a positive action notification (PAN), a negative action notification (NAN), or both. This column corresponds to the Report field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the Report field. This column can have one of the following values:
    N   Specifies that neither notification is to be sent. No options in the Report field are set. This value is the default.
    P   Sets the MQRO_PAN option in the Report field in the MQMD.
    T   Sets the MQRO_NAN option in the Report field in the MQMD.
    B   Sets both the MQRO_PAN and MQRO_NAN options in the Report field in the MQMD.

SEND_MSG_TYPE
    This column contains the type of message. This column corresponds to the MsgType field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the MsgType field. This column can have one of the following values:
    DTG   Sets the MsgType field in the MQMD to MQMT_DATAGRAM. This value is the default.
    REQ   Sets the MsgType field in the MQMD to MQMT_REQUEST.
    RLY   Sets the MsgType field in the MQMD to MQMT_REPLY.
    RPT   Sets the MsgType field in the MQMD to MQMT_REPORT.

REPLY_TO_Q
    This column contains the name of the message queue to which the application that issued the MQGET call is to send reply and report messages. This column corresponds to the ReplyToQ field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the ReplyToQ field. The default value for this column is SAME AS INPUT_Q, which sets the name to the queue name that is defined in the service that was used for sending the message. If no service was specified, the name is set to DB2MQ_DEFAULT_Q, which is the name of the input queue for the default service.

REPLY_TO_QMGR
    This column contains the name of the queue manager to which the reply and report messages are to be sent. This column corresponds to the ReplyToQMgr field in the message descriptor structure (MQMD). MQ functions use the value in this column to set the ReplyToQMgr field. The default value for this column is SAME AS INPUT_QMGR, which sets the name to the queue manager name that is defined in the service that was used for sending the message. If no service was specified, the name is set to the name of the queue manager for the default service.

RCV_WAIT_INTERVAL
    This column contains the time, in milliseconds, that DB2 is to wait for messages to arrive in the queue. This column corresponds to the WaitInterval field in the get message options structure (MQGMO). MQ functions use the value in this column to set the WaitInterval field. The default is 10.

RCV_CONVERT
    This column indicates whether to convert the application data in the message to conform to the CodedCharSetId and Encoding values of the specified MQ service. This column corresponds to the Options field in the get message options structure (MQGMO). MQ functions use the value in this column to set the Options field. This column can have one of the following values:
    Y   Sets the MQGMO_CONVERT option in the Options field in the MQGMO. This value is the default.
    N   Specifies that no data is to be converted.

RCV_ACCEPT_TRUNC_MSG
    This column specifies the behavior of the MQ function when oversized messages are retrieved. This column corresponds to the Options field in the get message options structure (MQGMO). MQ functions use the value in this column to set the Options field. This column can have one of the following values:
    Y   Sets the MQGMO_ACCEPT_TRUNCATED_MSG option in the Options field in the MQGMO. In this case, if the message buffer is too small to hold the complete message, the MQ function can fill the buffer with as much of the message as the buffer can hold. This value is the default.
    N   Specifies that no messages are to be truncated. If the message is too large to fit in the buffer, the MQ function terminates with an error.
    Recommendation: Set this column to Y.

RCV_OPEN_SHARED
    This column specifies the input queue mode when messages are retrieved. This column corresponds to the Options parameter for an MQOPEN call. MQ functions use the value in this column to set the Options parameter. This column can have one of the following values:
    S   Sets the MQOO_INPUT_SHARED option. This value is the default.
    E   Sets the MQOO_INPUT_EXCLUSIVE option.
    D   Sets the MQOO_INPUT_AS_Q_DEF option.

SYNCPOINT
    This column indicates whether the MQ function is to operate within the protocol for a normal unit of work. This column can have one of the following values:
    Y   Specifies that the MQ function is to operate within the protocol for a normal unit of work. Use this value for two-phase commit environments. This value is the default.
    N   Specifies that the MQ function is to operate outside the protocol for a normal unit of work. Use this value for one-phase commit environments.

DESC
    This column contains a description of the policy.
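Similarly, a policy row can be added. As a sketch only (the policy name is a placeholder, and all columns that are not listed are assumed to take their defaults), the following statement defines a policy for one-phase commit environments:

INSERT INTO SYSIBM.MQPOLICY_TABLE (POLICYNAME, SYNCPOINT)
VALUES ('MYPOLICY', 'N');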
Related tasks: Converting applications to use the MQI functions Related reference: WebSphere MQ information center
Procedure
To convert an application to use the MQI functions, perform the following actions:
1. Set up the DB2 MQ functions that are based on the MQI by performing the following actions:
   a. Run installation job DSNTIJSG. This job binds the new MQI-based DB2 MQ functions and creates the tables SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE.
   b. Convert the contents of the AMI configuration files to rows in the tables SYSIBM.MQSERVICE_TABLE and SYSIBM.MQPOLICY_TABLE.
2. If the application contains unqualified references to DB2 MQ functions, set the CURRENT PATH special register to the schema name DB2MQ.
3. If the application contains qualified references to DB2 MQ functions, change the schema names in those references from the old names (DB2MQ1N, DB2MQ2N, DB2MQ1C, and DB2MQ2C) to DB2MQ.
4. Change the size of any host variables to accommodate the following larger message sizes:
   v DB2 MQ functions for VARCHAR data can have a maximum message size of 32 KB.
   v DB2 MQ functions for CLOB data can have a maximum message size of 2 MB.
Related tasks:
    Converting from the AMI-based MQ functions to the MQI-based MQ functions (DB2 Installation and Migration)
Related reference:
    DB2 MQ tables on page 907
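For step 2, for example, an application that issues unqualified references such as MQSEND might run a statement like the following one before those references are used (a sketch; whether you append the existing path is your choice):

-- Resolve unqualified MQ function references to the MQI-based versions in DB2MQ
SET CURRENT PATH = DB2MQ, CURRENT PATH;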
3. The WebSphere MQ server on machine B accepts the message from the server on machine A and places it in the destination queue on machine B. 4. A WebSphere MQ client on machine B requests the message at the head of the queue.
The MQSEND function is invoked once because SYSIBM.SYSDUMMY1 has only one row. Because this MQSEND function uses two-phase commit, the COMMIT statement ensures that the message is added to the queue. When you use single-phase commit, you do not need to use a COMMIT statement. For example:
SELECT DB2MQ1N.MQSEND('Testing msg')
  FROM SYSIBM.SYSDUMMY1;
The MQ operation causes the message to be added to the queue. Example: Assume that you have an EMPLOYEE table, with VARCHAR columns LASTNAME, FIRSTNAME, and DEPARTMENT. To send a message that contains this information for each employee in DEPARTMENT 5LGA, issue the following SQL SELECT statement:
SELECT DB2MQ2N.MQSEND(LASTNAME || ' ' || FIRSTNAME || ' ' || DEPARTMENT)
  FROM EMPLOYEE
  WHERE DEPARTMENT = '5LGA';
COMMIT;
Message content can be any combination of SQL statements, expressions, functions, and user-specified data. Because this MQSEND function uses two-phase commit, the COMMIT statement ensures that the message is added to the MQ queue.
The MQREAD function is invoked once because SYSIBM.SYSDUMMY1 has only one row. The SELECT statement returns a VARCHAR(4000) string. If no messages are available to be read, a null value is returned. Because MQREAD does not change the queue, you do not need to use a COMMIT statement. Example: The following SQL SELECT statement causes the contents of a queue to be materialized as a DB2 table:
SELECT T.* FROM TABLE(DB2MQ2N.MQREADALL()) T;
The result table T of the table function consists of all the messages in the queue, which is defined by the default service, and the metadata about those messages. The first column of the materialized result table is the message itself, and the remaining columns contain the metadata. The SELECT statement returns both the messages and the metadata. To return only the messages, issue the following statement:
SELECT T.MSG FROM TABLE(DB2MQ2N.MQREADALL()) T;
The result table T of the table function consists of all the messages in the queue, which is defined by the default service, and the metadata about those messages. This SELECT statement returns only the messages. Example: The following SQL SELECT statement receives (removes) the message at the head of the queue:
SELECT DB2MQ2N.MQRECEIVE() FROM SYSIBM.SYSDUMMY1; COMMIT;
The MQRECEIVE function is invoked once because SYSIBM.SYSDUMMY1 has only one row. The SELECT statement returns a VARCHAR(4000) string. Because this MQRECEIVE function uses two-phase commit, the COMMIT statement ensures that the message is removed from the queue. If no messages are available to be retrieved, a null value is returned, and the queue does not change. Example: Assume that you have a MESSAGES table with a single VARCHAR(2000) column. The following SQL INSERT statement inserts all of the messages from the default service queue into the MESSAGES table in your DB2 database:
INSERT INTO MESSAGES SELECT T.MSG FROM TABLE(DB2MQ2N.MQRECEIVEALL()) T; COMMIT;
The result table T of the table function consists of all the messages in the default service queue and the metadata about those messages. The SELECT statement returns only the messages. The INSERT statement stores the messages into a table in your database.
The MQSEND function is invoked once because SYSIBM.SYSDUMMY1 has only one row. Because this MQSEND uses single-phase commit, WebSphere MQ adds the message to the queue, and you do not need to use a COMMIT statement. Example: The following SQL SELECT statement receives the first message that matches the identifier CORRID1 from the queue that is specified by the service MYSERVICE, using the policy MYPOLICY:
SELECT DB2MQ1N.MQRECEIVE('MYSERVICE', 'MYPOLICY', 'CORRID1')
  FROM SYSIBM.SYSDUMMY1;
The SELECT statement returns a VARCHAR(4000) string. If no messages are available with this correlation identifier, a null value is returned, and the queue does not change.

Publish-and-subscribe method:

Another common method of application integration is for one application to notify other applications about events of interest. An application can do this by sending a message to a queue that is monitored by other applications. The message can contain a user-defined string or can be composed from database columns.

Simple data publication:

In many cases, only a simple message needs to be sent using the MQSEND function. When a message needs to be sent to multiple recipients concurrently, the distribution list facility of the MQSeries AMI can be used. You define distribution lists by using the AMI administration tool. A distribution list comprises a list of individual services. A message that is sent to a distribution list is forwarded to every service defined within the list. Publishing messages to a distribution list is especially useful when there are multiple services that are interested in every message.

Example: The following example shows how to send a message to the distribution list "InterestedParties":
SELECT DB2MQ2N.MQSEND('InterestedParties', 'Information of general interest')
  FROM SYSIBM.SYSDUMMY1;
When you require more control over the messages that a particular service should receive, you can use the MQPUBLISH function, in conjunction with the WebSphere MQSeries Integrator facility. This facility provides a publish-and-subscribe system, which provides a scalable, secure environment in which many subscribers can register to receive messages from multiple publishers. Subscribers are defined by queues, which are represented by service names. MQPUBLISH enables you to specify a list of topics that are associated with a message. Topics enable subscribers to more clearly specify the messages they receive.

The following sequence illustrates how the publish-and-subscribe capabilities are used:
1. An MQSeries administrator configures the publish-and-subscribe capability of the WebSphere MQSeries Integrator facility.
2. Interested applications subscribe to subscriber services that are defined in the WebSphere MQSeries Integrator configuration. Each subscriber selects relevant topics and can also use the content-based subscription techniques that are provided by Version 2 of the WebSphere MQSeries Integrator facility.
921
3. A DB2 application publishes a message to a specified publisher service. The message indicates the topic it concerns.
4. The MQSeries functions provided by DB2 for z/OS handle the mechanics of publishing the message. The message is sent to the WebSphere MQSeries Integrator facility by using the specified service policy.
5. The WebSphere MQSeries Integrator facility accepts the message from the specified service, performs any processing defined by the WebSphere MQSeries Integrator configuration, and determines which subscriptions the message satisfies. It then forwards the message to the subscriber queues that match the subscriber service and topic of the message.
6. Applications that subscribe to the specific service, and register an interest in the specific topic, will receive the message in their receiving service.

Example: To publish the last name, first name, department, and age of employees who are in department 5LGA, using all the defaults and a topic of EMP, you can use the following statement:
SELECT DB2MQ2N.MQPUBLISH(LASTNAME || ' ' || FIRSTNAME || ' ' ||
       DEPARTMENT || ' ' || CHAR(AGE), 'EMP')
  FROM DSN8910.EMP
  WHERE DEPARTMENT = '5LGA';
Example: The following statement publishes messages that contain only the last name of employees who are in department 5LGA to the HR_INFO_PUB publisher service using the SPECIAL_POLICY service policy:
SELECT DB2MQ2N.MQPUBLISH('HR_INFO_PUB', 'SPECIAL_POLICY', LASTNAME,
       'ALL_EMP:5LGA', 'MANAGER')
  FROM DSN8910.EMP
  WHERE DEPARTMENT = '5LGA';
The messages indicate that the sender has the MANAGER correlation id. The topic string demonstrates that multiple topics, concatenated using a ':' (a colon) can be specified. In this example, the use of two topics enables subscribers of both the ALL_EMP and the 5LGA topics to receive these messages. To receive published messages, you must first register your application's interest in messages of a given topic and indicate the name of the subscriber service to which messages are sent. An AMI subscriber service defines a broker service and a receiver service. The broker service is how the subscriber communicates with the publish-and-subscribe broker. The receiver service is the location where messages that match the subscription request are sent. Example: The following statement subscribes to the topic ALL_EMP and indicates that messages be sent to the subscriber service, "aSubscriber":
SELECT DB2MQ2N.MQSUBSCRIBE('aSubscriber', 'ALL_EMP')
  FROM SYSIBM.SYSDUMMY1;
When an application is subscribed, messages published with the topic, ALL_EMP, are forwarded to the receiver service that is defined by the subscriber service. An application can have multiple concurrent subscriptions. Messages that match the subscription topic can be retrieved by using any of the standard message retrieval functions. Example: The following statement non-destructively reads the first message, where the subscriber service, "aSubscriber", defines the receiver service as "aSubscriberReceiver":
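A statement of the following form reads that message (a sketch, assuming the receiver service name is passed directly to MQREAD):

SELECT DB2MQ2N.MQREAD('aSubscriberReceiver')
  FROM SYSIBM.SYSDUMMY1;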
To display both the messages and the topics with which they are published, you can use one of the table functions. Example: The following statement receives the first five messages from "aSubscriberReceiver" and displays both the message and the topic for each of the five messages:
SELECT t.msg, t.topic
  FROM table(DB2MQ2N.MQRECEIVEALL('aSubscriberReceiver', 5)) t;
Example: To read all of the messages with the topic ALL_EMP, issue the following statement:
SELECT t.msg
  FROM table(DB2MQ2N.MQREADALL('aSubscriberReceiver')) t
  WHERE t.topic = 'ALL_EMP';
Note: If you use MQRECEIVEALL with a constraint, your application receives the entire queue, not just those messages that are published with the topic ALL_EMP. This is because the table function is performed before the constraint is applied. When you are no longer interested in having your application subscribe to a particular topic, you must explicitly unsubscribe. Example: The following statement unsubscribes from the ALL_EMP topic of the "aSubscriber" subscriber service:
SELECT DB2MQ2N.MQUNSUBSCRIBE('aSubscriber', 'ALL_EMP')
  FROM SYSIBM.SYSDUMMY1;
After you issue the preceding statement, the publish-and-subscribe broker no longer delivers messages that match the ALL_EMP topic to the "aSubscriber" subscriber service.

Automated publication:

Another important method in application message publishing is automated publication. Using the trigger facility within DB2 for z/OS, you can automatically publish messages as part of a trigger invocation. Although other techniques exist for automated message publication, the trigger-based approach gives you more freedom in constructing the message content and more flexibility in defining the actions of a trigger. As with the use of any trigger, you must be aware of the frequency and cost of execution.

Example: The following example shows how you can use the MQSeries functions of DB2 for z/OS with a trigger to publish a message each time a new employee is hired:
CREATE TRIGGER new_employee
  AFTER INSERT ON DSN8910.EMP
  REFERENCING NEW AS n
  FOR EACH ROW MODE DB2SQL
  SELECT DB2MQ2N.MQPUBLISH('HR_INFO_PUB', current date || ' ' ||
         LASTNAME || ' ' || DEPARTMENT, 'NEW_EMP');
Any users or applications that subscribe to the HR_INFO_PUB service with a registered interest in the NEW_EMP topic will receive a message that contains the date, the name, and the department of each new employee when rows are inserted into the DSN8910.EMP table.
v An asynchronous listener can respond to a message from a supplied client, or from a user-defined application. The number of environments that can act as a database client is greatly expanded. Clients such as factory automation equipment, pervasive devices, or embedded controllers can communicate with DB2 either directly through WebSphere MQ or through some gateway that supports WebSphere MQ.
The data type for inMsgType and the data type for outMsgType can be VARCHAR, VARBINARY, CLOB, or BLOB of any length and are determined at startup. The input data type and output data type can be different data types. If an incoming message is a request and has a specified reply-to queue, the message in outMsg will be sent to the specified queue. The incoming message can be one of the following message types:
v Datagram
v Datagram with report requested
v Request message with reply
v Request message with reply and report requested
Configuring MQListener in DB2 for z/OS:

Before you can use MQListener, you must configure your database environment so that your applications can use messaging with database operations. You must also configure WebSphere MQ for MQListener.

About this task

Use the following procedure to configure the environment for MQListener and to develop a simple application that receives a message, inserts the message in a table, and creates a simple response message:
1. Configure MQListener to run in the DB2 environment.
2. Configure WebSphere MQ for MQListener.
3. Configure MQListener tasks.
4. Create the sample stored procedure to work with MQListener.
5. Run a simple MQListener application.
Configuring MQListener to run in the DB2 environment:

Configure your database environment so that your applications can use messaging with database operations. Customize and run installation job DSNTIJML, which is located in the prefix.SDSNSAMP data set. The job does the following tasks:
1. Untars and creates the necessary files and libraries in z/OS UNIX System Services under the path where MQListener is installed.
2. Creates the MQListener configuration table (SYSMQL.LISTENERS) in the default database DSNDB04.
3. Binds the DBRMs to the plan DB2MQLSN.

Note: If MQListener is not installed in the default path, '/usr/lpp/db2mql_910', you must replace all occurrences of the string '/usr/lpp/db2910_mql' in the samples DSNTEJML, DSNTEJSP, and DSNTIJML with the path name where MQListener is installed before you run DSNTIJML. The samples DSNTEJML, DSNTEJSP, and DSNTIJML are located in the prefix.SDSNSAMP data set.

Ensure that the person who runs the installation job has the required authority to create the configuration table and to bind the DBRMs. Follow the instructions in the README file that is created in the MQListener installation path in z/OS UNIX System Services to complete the configuration process.

Configuring WebSphere MQ for MQListener:
You can run a simple MQListener application with a simple WebSphere MQ configuration. More complex applications might need a more complex configuration. Configure at least two kinds of WebSphere MQ entities: the queue manager and some local queues. Configure these entities for use in such instances as transaction management, deadletter queue, backout queue, and backout retry threshold.

To configure WebSphere MQ for a simple MQListener application, do the following:
1. Create the MQSeries queue manager. Define the MQSeries subsystem to z/OS and then issue the following command from a z/OS console to start the queue manager:
<command-prefix-string> START QMGR
command-prefix-string is the command prefix for the MQSeries subsystem.

2. Create queues under the MQSeries queue manager. In a simple MQListener application, you typically use the following WebSphere MQ queues:

Deadletter queue
    The deadletter queue in WebSphere MQ holds messages that cannot be processed. MQListener uses this queue to hold replies that cannot be delivered, for example, because the queue to which the replies should be sent is full. A deadletter queue is useful in any MQ installation, especially for recovering messages that are not sent.
Backout queue
    For MQListener tasks that use two-phase commit, the backout queue serves a similar purpose as the deadletter queue. MQListener places the original request in the backout queue after the request is rolled back a specified number of times (called the backout threshold).
Administration queue
    The administration queue is used for routing control messages, such as shutdown and restart, to MQListener. If you do not supply an administration queue, the only way to shut down MQListener is to issue a kill command.
Application input and output queues
    The application uses input queues and output queues. The application receives messages from the input queue and sends replies and exceptions to the output queue.

Create your local queues by using the CSQUTIL utility or by using the MQSeries operations and control panels from ISPF (CSQOREXX). The following is an example of the JCL that is used to create your local queues. In this example, MQND is the name of the queue manager:
//*
//* ADMIN_Q      : Admin queue
//* BACKOUT_Q    : Backout queue
//* IN_Q         : Input queue having a backout queue with threshold=3
//* REPLY_Q      : Output queue or reply queue
//* DEADLETTER_Q : Dead letter queue
//*
//DSNTECU  EXEC PGM=CSQUTIL,PARM='MQND'
//STEPLIB  DD DSN=MQS.SCSQANLE,DISP=SHR
//         DD DSN=MQS.SCSQAUTH,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
COMMAND DDNAME(CREATEQ)
/*
//CREATEQ DD *
DEFINE QLOCAL(ADMIN_Q) REPLACE +
       DESCR('INPUT-OUTPUT') +
       PUT(ENABLED) +
       DEFPRTY(0) +
       DEFPSIST(NO) +
       SHARE +
       DEFSOPT(SHARED) +
       GET(ENABLED)
DEFINE QLOCAL(BACKOUT_Q) REPLACE +
       DESCR('INPUT-OUTPUT') +
       PUT(ENABLED) +
       DEFPRTY(0) +
       DEFPSIST(NO) +
       SHARE +
       DEFSOPT(SHARED) +
       GET(ENABLED)
DEFINE QLOCAL(REPLY_Q) REPLACE +
       DESCR('INPUT-OUTPUT') +
       PUT(ENABLED) +
       DEFPRTY(0) +
       DEFPSIST(NO) +
       SHARE +
       DEFSOPT(SHARED) +
       GET(ENABLED)
DEFINE QLOCAL(IN_Q) REPLACE +
       DESCR('INPUT-OUTPUT') +
       PUT(ENABLED) +
       DEFPRTY(0) +
       DEFPSIST(NO) +
       SHARE +
       DEFSOPT(SHARED) +
       GET(ENABLED) +
       BOQNAME(BACKOUT_Q) +
       BOTHRESH(3)
DEFINE QLOCAL(DEADLETTER_Q) REPLACE +
       DESCR('INPUT-OUTPUT') +
       PUT(ENABLED) +
       DEFPRTY(0) +
       DEFPSIST(NO) +
       SHARE +
       DEFSOPT(SHARED) +
       GET(ENABLED)
ALTER QMGR DEADQ(DEADLETTER_Q) REPLACE
/*
Environment variables for logging and tracing MQListener:

Two environment variables control logging and tracing for MQListener. These variables are defined in the file .profile.

MQLSNTRC
    When this environment variable is set to 1, MQListener writes function entry, data, and exit points to a unique HFS or zFS file. A unique trace file is generated whenever any of the MQListener commands are run. This trace file is used by IBM software support for debugging if you report a problem. Unless requested, do not define this variable.
MQLSNLOG
    The log file contains diagnostic information about the major events. This environment variable is set to the name of the file where all log information is written. All instances of the MQListener daemon that run one or more tasks share the same file. For monitoring the MQListener daemon, this variable should always be set.

When the MQListener daemon is running, open the log and trace files only in read mode (use the cat, more, or tail commands in z/OS UNIX System Services to open the files) because they are used by the daemon process for writing. Refer to the README file for more details about these variables.

Configuration table: SYSMQL.LISTENERS:

If you use MQListener, you must create the MQListener configuration table SYSMQL.LISTENERS by running installation job DSNTIJML. The following table describes each of the columns of the configuration table SYSMQL.LISTENERS.
Table 144. Description of columns in the SYSMQL.LISTENERS table

CONFIGURATIONNAME
    The configuration name. The configuration name enables you to group several tasks into the same configuration. A single instance of MQListener can run all of the tasks that are defined within a configuration name.
QUEUEMANAGER
    The name of the WebSphere MQ subsystem that contains the queues that are to be used.
INPUTQUEUE
    The name of the queue in the WebSphere MQ subsystem that is to be monitored for incoming messages. The combination of the input queue and the queue manager is unique within a configuration.
MINQUEDEPTH
    Currently unused.

The table also contains columns for the schema name of the stored procedure that will be called by MQListener, the name of that stored procedure, the number of duplicate instances of a single task that are to run in this configuration, the time that MQListener waits (in milliseconds) after processing the current message before it looks for the next message, and columns that are currently unused.
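For example, to see which input queues are configured, you can query the table directly; this sketch uses only the column names that are given in the preceding description:

SELECT CONFIGURATIONNAME, QUEUEMANAGER, INPUTQUEUE
  FROM SYSMQL.LISTENERS;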
Configuring MQListener tasks:

As part of configuring MQListener in DB2 for z/OS, you must configure at least one MQListener task.

About this task

Use the MQListener command, db2mqln1 or db2mqln2, to configure MQListener tasks. Issue the command from the z/OS UNIX System Services command line in any directory. Alternatively, you can put the command in a file, grant execute permission, and use the BPXBATCH utility to invoke the script from JCL. The sample script files are provided in the /MQListener-install-path/mqlsn/listener/script directory in z/OS UNIX System Services. Sample JCL (DSNTEJML) is also provided that invokes the script files and is located in prefix.SDSNSAMP. The add parameter with the db2mqln1 or db2mqln2 command updates a row in the DB2 table SYSMQL.LISTENERS.

v To add an MQListener configuration, issue the following command:
db2mqln1/db2mqln2 add -ssID <subsystem name> -config <configuration name> -queueManager <queuemanager name> -inputQueue <inputqueue name> -procName <stored-procedure name> -procSchema <stored-procedure schema name> -numInstances <number of instances>
v To display information about all the configurations, issue the following command:
db2mqln1/db2mqln2 show -ssID <subsystem name> -config all
v To get help with the command and the valid parameters, issue the following command:
db2mqln1/db2mqln2 help
v To get help for a particular parameter, issue the following command, where 'command' is a specific parameter:
db2mqln1/db2mqln2 help <command>
Restriction: v Use the same queue manager for the request queue and the reply queue. v MQListener does not support logical messages that are composed of multiple physical messages. MQListener processes physical messages independently.
Creating a sample stored procedure to use with MQListener:

You can create a sample stored procedure, APROC, that can be used by MQListener to store a message in a table. The stored procedure returns the string OK if the message is successfully inserted into the table.

Procedure

The following steps create DB2 objects that you can use with MQListener applications:
1. Create a table using SPUFI, DSNTEP2, or the command line processor in the subsystem where you want to run MQListener:
CREATE TABLE PROCTABLE
  (MSG VARCHAR(25) CHECK (MSG NOT LIKE 'FAIL%'));
The table contains a check constraint so that messages that start with the characters FAIL cannot be inserted into the table. The check constraint is used to demonstrate the behavior of MQListener when the stored procedure fails. 2. Create the following SQL procedure and define it to the same DB2 subsystem:
CREATE PROCEDURE TEST.APROC (
    IN PIN VARCHAR(25),
    OUT POUT VARCHAR(2))
  LANGUAGE SQL
  FENCED
  NOT DETERMINISTIC
  NO DBINFO
  COLLID TESTLSRN
  WLM ENVIRONMENT TESTWLMX
  ASUTIME NO LIMIT
  STAY RESIDENT NO
  PROGRAM TYPE MAIN
  SECURITY USER
  PROCEDURE1: BEGIN
    INSERT INTO PROCTABLE VALUES(PIN);
    SET POUT = 'OK';
  END PROCEDURE1
TESTLSRN is the name of the collection that is used for this stored procedure and TESTWLMX is the name of the WLM environment where this stored procedure will run. 3. Optional: Bind the collection TESTLSRN to the plan DB2MQLSN, which is used by MQListener:
BIND PLAN(DB2MQLSN) + PKLIST(LSNR.*,TESTLSRN.*) + ACTION(REP) DISCONNECT(EXPLICIT);
If your application calls a stored procedure or user-defined function that is defined with the COLLID option, the application does not need to include the collection ID in its plan. Thus, this step is optional.

MQListener error processing:

MQListener reads from WebSphere MQ message queues and calls DB2 stored procedures with those messages. If any errors occur during this process and the message is to be sent to the deadletter queue, MQListener returns a reason code to the deadletter queue. Specifically, MQListener performs the following actions:
v prefixes the message with an MQ dead letter header (MQDLH) structure
v sets the reason field in the MQDLH structure to the appropriate reason code
v sends the message to the deadletter queue

The following table describes the reason codes that the MQListener daemon returns.

Table 145. Reason codes that MQListener returns

900
    The call to a stored procedure was successful, but an error occurred during the DB2 commit process and either of the following conditions was true:
    v No exception report was requested. (1)
    v An exception report was requested, but could not be delivered.
    This reason code applies only to one-phase commit environments.
901
    The call to the specified stored procedure failed, and the disposition of the MQ message is that an exception report be generated (1) and the original message be sent to the deadletter queue.
902
    All of the following conditions occurred:
    v The disposition of the MQ message is that an exception report is not to be generated. (1)
    v The stored procedure was called unsuccessfully the number of times that is specified as the backout threshold.
    v The name of the backout queue is the same as the deadletter queue.
    This reason code applies only to two-phase commit environments.
MQRC_TRUNCATED_MSG_FAILED
    The size of the MQ message is greater than the input parameter of the stored procedure that is to be invoked. In one-phase commit environments, this oversized message is sent to the deadletter queue. In two-phase commit environments, this oversized message is sent to the deadletter queue only when the message cannot be delivered to the backout queue.
Note:
1. To specify that the receiver application generate exception reports if errors occur, set the report field in the MQMD structure that was used when sending the message to one of the following values:
   v MQRO_EXCEPTION
   v MQRO_EXCEPTION_WITH_DATA
   v MQRO_EXCEPTION_WITH_FULL_DATA
Related reference:
    WebSphere MQ information center

MQListener examples:

The application receives a message, inserts the message into a table, and generates a simple response message. To simulate a processing failure, the application includes a check constraint on the table that contains the message. The constraint prevents any string that begins with the characters 'fail' from being inserted into the table. If you attempt to insert a message that violates the check constraint, the example application returns an error message and re-queues the failing message to the backout queue. In this example, the following assumptions are made:
v MQListener is installed and configured for subsystem DB7A.
v MQND is the name of the MQSeries subsystem that is defined. The queue manager is running, and the following local queues are defined in the DB7A subsystem:
      ADMIN_Q      : Admin queue
      BACKOUT_Q    : Backout queue
      IN_Q         : Input queue that has a backout queue with threshold = 3
      REPLY_Q      : Output queue or reply queue
      DEADLETTER_Q : Dead letter queue
v The person who is running the MQListener daemon has execute permission on the DB2MQLSN plan.

Before you run the MQListener daemon, add the following configuration, named ACFG, to the configuration table by issuing the following command:
db2mqln2 add -ssID DB7A -config ACFG -queueManager MQND -inputQueue IN_Q -procName APROC -procSchema TEST
Run the MQListener daemon for two-phase commit for the added configuration 'ACFG'. To run MQListener with all of the tasks specified in a configuration, issue the following command:
db2mqln2 run -ssID DB7A -config ACFG -adminQueue ADMIN_Q -adminQMgr MQND
The following examples show how to use MQListener to send a simple message and then inspect the results of the message in the WebSphere MQ queue manager and the database. The examples include queries to determine if the input queue contains a message or to determine if a record is placed in the table by the stored procedure. MQListener example 1: Running a simple application: 1. Start with a clean database table by issuing the following SQL statement:
delete from PROCTABLE
2. Send a datagram to the input queue, 'IN_Q', with the message as 'sample message'. Refer to WebSphere MQ sample CSQ4BCK1 to send a message to the queue. Specify the MsgType option for 'Message Descriptor' as 'MQMT_DATAGRAM'. 3. Query the table by using the following statement to verify that the sample message is inserted:
select * from PROCTABLE
4. Display the number of messages that remain on the input queue to verify that the message has been removed. Issue the following command from a z/OS console:
/-MQND display queue(In_Q) curdepth
MQListener example 2: Sending requests to the input queue and inspecting the reply: 1. Start with a clean database table by issuing the following SQL statement:
delete from PROCTABLE
2. Send a request to the input queue, 'IN_Q', with the message as 'another sample message'. Refer to WebSphere MQ sample CSQ4BCK1 to send a message to the queue. Specify the MsgType option for 'Message Descriptor' as 'MQMT_REQUEST' and the queue name for ReplytoQ option. 3. Query the table by using the following statement to verify that the sample message is inserted:
select * from PROCTABLE
4. Display the number of messages that remain on the input queue to verify that the message has been removed. Issue the following command from a z/OS console:
/-MQND display queue(In_Q) curdepth
5. Look at the ReplytoQ name that you specified when you sent the request message for the reply by using the WebSphere MQ sample program CSQ4BCJ1. Verify that the string 'OK' is generated by the stored procedure. MQListener example 3: Testing an unsuccessful insert operation: If you send a message that starts with the string 'fail', the constraint in the table definition is violated, and the stored procedure fails. 1. Start with a clean database table by issuing the following SQL statement:
delete from PROCTABLE
2. Send a request to the input queue, 'IN_Q', with the message as 'failing sample message'. Refer to WebSphere MQ sample CSQ4BCK1 to send a message to the queue. Specify the MsgType option for 'Message Descriptor' as 'MQMT_REQUEST' and the queue name for ReplytoQ option. 3. Query the table by using the following statement to verify that the sample message is not inserted:
select * from PROCTABLE
4. Display the number of messages that remain on the input queue to verify that the message has been removed. Issue the following command from a z/OS console:
/-MQND display queue(In_Q) curdepth
5. Look at the Backout queue and find the original message by using the WebSphere MQ sample program CSQ4BCJ1. Note: In this example, if a request message with added options for 'exception report' is sent (the Report option is specified for 'Message Descriptor'), an exception report is sent to the reply queue and the original message is sent to the deadletter queue.
When a consumer receives the result of a web services request, the SOAP envelope is stripped and the XML document is returned. An application program can process the result data and perform a variety of operations, including inserting or updating a table with the result data. SOAPHTTPV and SOAPHTTPC are user-defined functions that enable DB2 to work with SOAP and to consume web services in SQL statements. These functions are overloaded functions that are used for VARCHAR or CLOB data of different sizes, depending on the SOAP body. Web services can be invoked in one of four ways, depending on the size of the input data and the result data. SOAPHTTPV returns VARCHAR(32672) data and SOAPHTTPC returns CLOB(1M) data. Both functions accept either VARCHAR(32672) or CLOB(1M) as the input body. Example: The following example shows an HTTP post header that posts a SOAP request envelope to a host. The SOAP envelope body shows a temperature request for Barcelona.
POST /soap/servlet/rpcrouter HTTP/1.0
Host: services.xmethods.net
Connection: Keep-Alive
User-Agent: DB2SOAP/1.0
Content-Type: text/xml; charset="UTF-8"
SOAPAction: ""
Content-Length: 410

<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
 xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
 xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SOAP-ENV:Body>
 <ns:getTemp xmlns:ns="urn:xmethods-Temperature">
  <city>Barcelona</city>
 </ns:getTemp>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Example: The following example is the result of the preceding example. This example shows the HTTP response header with the SOAP response envelope. The result shows that the temperature is 85 degrees Fahrenheit in Barcelona.
HTTP/1.1 200 OK
Date: Wed, 31 Jul 2002 22:06:41 GMT
Server: Enhydra-MultiServer/3.5.2
Status: 200
Content-Type: text/xml; charset=utf-8
Servlet-Engine: Lutris Enhydra Application Server/3.5.2 (JSP 1.1; Servlet 2.2; Java 1.3.1_04; Linux 2.4.7-10smp i386; java.vendor=Sun Microsystems Inc.)
Content-Length: 467
Set-Cookie: JSESSIONID=JLEcR34rBc2GTIkn-0F51ZDk;Path=/soap
X-Cache: MISS from www.xmethods.net
Keep-Alive: timeout=15, max=10
Connection: Keep-Alive

<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
 xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<SOAP-ENV:Body>
 <ns1:getTempResponse xmlns:ns1="urn:xmethods-Temperature"
  SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <return xsi:type="xsd:float">85</return>
 </ns1:getTempResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Example: The following example shows how to insert the result from a web service into a table.
INSERT INTO MYTABLE(XMLCOL)
VALUES (DB2XML.SOAPHTTPC(
 'http://www.myserver.com/services/db2sample/list.dadx/SOAP',
 'http://tempuri.org/db2sample/list.dadx',
 '<listDepartments xmlns="http://tempuri.org/db2sample/list.dadx">
   <deptno>A00</deptno>
  </listDepartments>'))
Example
The following example shows how to insert the complete result from a web service into a table using SOAPHTTPNC.
INSERT INTO EMPLOYEE(XMLCOL)
VALUES (DB2XML.SOAPHTTPNC(
 'http://www.myserver.com/services/db2sample/list.dadx/SOAP',
 'http://tempuri.org/db2sample/list.dadx',
 '<?xml version="1.0" encoding="UTF-8" ?>' ||
 '<SOAP-ENV:Envelope' ||
 ' xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"' ||
 ' xmlns:xsd="http://www.w3.org/2001/XMLSchema"' ||
 ' xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">' ||
 '<SOAP-ENV:Body>' ||
 '<listDepartments xmlns="http://tempuri.org/db2sample/list.dadx">' ||
 '<deptNo>A00</deptNo>' ||
 '</listDepartments>' ||
 '</SOAP-ENV:Body>' ||
 '</SOAP-ENV:Envelope>'))
Related tasks: Additional steps for enabling web service user-defined functions (DB2 Installation and Migration)
Table 147. SQLSTATE values for SOAPHTTPNV and SOAPHTTPNC user-defined functions

SQLSTATE   Description
38350      An unexpected NULL value was specified for the endpoint, action, or SOAP input.
38351      A dynamic memory allocation error.
38352      An unknown or unsupported transport protocol.
38353      An invalid URL was specified.
38354      An error occurred while resolving the hostname.
38355      A memory exception for socket.
38356      An error occurred during socket connect.
38357      An error occurred while setting socket options.
38358      An error occurred during input/output control (ioctl) to verify HTTPS enablement.
38359      An error occurred while reading from the socket.
38360      An error occurred due to socket timeout.
38361      No response from the specified host.
38362      An error occurred due to an unexpected HTTP return or content type.
38363      The TCP/IP stack was not enabled for HTTPS.
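As a rough illustration only (not taken from this information), the following statement invokes SOAPHTTPNV with placeholder endpoint, SOAP action, and envelope values. If the request fails, the function returns one of the SQLSTATE values in Table 147, which the application can check in the SQLCA or with GET DIAGNOSTICS.

SELECT DB2XML.SOAPHTTPNV(
  'http://www.example.com/services/temperature/SOAP',
  '',
  '<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">' ||
  '<SOAP-ENV:Body>' ||
  '<ns:getTemp xmlns:ns="urn:xmethods-Temperature"><city>Barcelona</city></ns:getTemp>' ||
  '</SOAP-ENV:Body>' ||
  '</SOAP-ENV:Envelope>')
FROM SYSIBM.SYSDUMMY1;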
Related tasks: Additional steps for enabling web service user-defined functions (DB2 Installation and Migration)
v For C and C++ only, you can invoke the coprocessor from UNIX System Services. If the DBRM is generated in an HFS file, you can use the command line processor to bind the resulting DBRM. Optionally, you can also copy the DBRM into a partitioned data set member by using the oput and oget commands and then bind it by using conventional JCL.

This topic describes how to use JCL procedures to prepare a program. For information about using the DB2I panels, see Chapter 17, Preparing an application to run on DB2 for z/OS, on page 941.

Preparing applications by using the DB2 Program Preparation panels: If you develop programs using TSO and ISPF, you can prepare them to run by using the DB2 Program Preparation panels. These panels guide you step by step through the process of preparing your application to run. Other ways of preparing a program to run are available, but using DB2 Interactive (DB2I) is the easiest because it leads you automatically from task to task.

Important: If your C++ program satisfies both of the following conditions, you must use a JCL procedure to prepare it:
v The program consists of more than one data set or member.
v More than one data set or member contains SQL statements.

To prepare an application by using the DB2 Program Preparation panels:
1. If you want to display or suppress message IDs during program preparation, specify one of the following commands on the ISPF command line:
   TSO PROFILE MSGID
      Message IDs are displayed.
   TSO PROFILE NOMSGID
      Message IDs are suppressed.
2. Open the DB2I Primary Option Menu.
3. Select the option that corresponds to the Program Preparation panel.
4. Complete the Program Preparation panel and any subsequent panels. After you complete each panel, DB2I automatically displays the next appropriate panel.

Preparation guidelines for DL/I batch programs: Use the following guidelines when you prepare a program to access DB2 and DL/I in a batch program:
v Processing SQL statements by using the DB2 precompiler on page 946
v Binding a batch program on page 984
v Compiling and link-editing an application on page 968
v Loading and running a batch program on page 1054
Related concepts: Command line processor (DB2 Commands) TSO attachment facility (Introduction to DB2 for z/OS) Related reference: DSNH (TSO CLIST) (DB2 Commands)
Procedure
To set the DB2I defaults: As DB2I leads you through a series of panels, enter the default values that you want on the following panels when they are displayed.
Table 148. DB2I panels to use to set default values

Use the DB2I Defaults Panel 1 panel to set the following default values:
v subsystem ID
v number of additional times to attempt to connect to DB2
v programming language
v number of lines on each page of listing or SPUFI output
v lowest level of message to return to you during the BIND phase
v SQL string delimiter for COBOL programs
v how to represent decimal separators
v smallest value of the return code (from precompile, compile, link-edit, or bind) that prevents later steps from running
v default number of input entry rows to generate on the initial display of ISPF panels
v user ID to associate with the trusted connection for the current DB2I session

Use the DB2I Defaults Panel 2 panel to set the following default values:
v default JOB statement
v symbol used to delimit a string in a COBOL statement in a COBOL application
v whether DCLGEN generates a picture clause that has the form PIC G(n) DISPLAY-1 or PIC N(n)

Use the Defaults for Bind Package panel and the Defaults for Bind Plan panel to set the following package and plan characteristics:
v isolation level
v whether to check authorization at run time or at bind time
v when to release locks on resources
v whether to obtain EXPLAIN information about how SQL statements in the plan or package execute
v whether you need data currency for ambiguous cursors opened at remote locations
v whether to use parallel processing
v whether DB2 determines access paths at bind time and again at execution time
v whether to defer preparation of dynamic SQL statements
v whether DB2 keeps dynamic SQL statements after commit points
v whether DB2 uses DRDA protocol or DB2 private protocol to execute statements that contain three-part names
v the application encoding scheme
v whether you want to use optimization hints to determine access paths
v when DB2 writes the changes for updated group buffer pool-dependent pages
v whether run time (RUN) or bind time (BIND) rules apply to dynamic SQL statements at run time
v whether to continue to create a package after finding SQL errors (packages only)
v when to acquire locks on resources (plans only)
v whether a CONNECT (Type 2) statement executes according to DB2 rules (DB2) or the SQL standard (STD) (plans only)
v which remote connections end during a commit or a rollback (plans only)
Related reference: DB2I Defaults Panel 1 on page 1016 DB2I Defaults Panel 2 on page 1019 Defaults for Bind Package and Defaults for Rebind Package panels on page 1029 Defaults for Bind Plan and Defaults for Rebind Plan panels on page 1031
Related concepts: Using the DB2 C/C++ precompiler (XL C/C++ Programming Guide) DB2 coprocessor (Enterprise COBOL for z/OS Programming Guide) Differences between the DB2 precompiler and the DB2 coprocessor on page 956 DB2 program preparation overview on page 1002 Related tasks: Translating command-level statements in a CICS program on page 955 Related reference: Enterprise COBOL for z/OS Related information: DB2 Program Directory
Procedure
To process SQL statements by using the DB2 precompiler:
1. Ensure that your program is ready to be processed by the DB2 precompiler. For information about the criteria for programs that are passed to the precompiler, see Input to the DB2 precompiler on page 949.
2. If you plan to run multiple precompilation jobs and are not using the DFSMSdfp partitioned data set extended (PDSE), change the DB2 language preparation procedures (DSNHCOB, DSNHCOB2, DSNHICOB, DSNHFOR, DSNHC, DSNHPLI, DSNHASM, DSNHSQL) to specify the DISP=OLD parameter instead of the DISP=SHR parameter. The DB2 language preparation
procedures in job DSNTIJMV use the DISP=OLD parameter to enforce data integrity. However, the installation process converts the DISP=OLD parameter for the DBRM library data set to DISP=SHR, which can cause data integrity problems when you run multiple precompilation jobs. 3. Start the precompile process by using one of the following methods: v DB2I panels. Use the Precompile panel or the DB2 Program Preparation panels. v The DSNH command procedure (a TSO CLIST). v JCL procedures that are supplied with DB2. For more information about this method, see DB2-supplied JCL procedures for preparing an application on page 1007. Recommendation: Specify the SOURCE and XREF precompiler options to get complete diagnostic output from the DB2 precompiler. This output is useful if you need to precompile and compile program source statements several times before they are error-free and ready to link-edit. The output that is returned from the DB2 precompiler is described in Output from the DB2 precompiler on page 951.
Results
Preparing a program with object-oriented extensions by using JCL: If your C++ or Enterprise COBOL for z/OS program satisfies both of these conditions, you need special JCL to prepare it:
v The program consists of more than one data set or member.
v More than one data set or member contains SQL statements.
You must precompile the contents of each data set or member separately, but the prelinker must receive all of the compiler output together. JCL procedure DSNHCPP2, which is in member DSNTIJMV of data set DSN910.SDSNSAMP, shows you one way to do this for C++.

Precompiling a batch program: When you add SQL statements to an application program, you must precompile the application program and bind the resulting DBRM into a plan or package, as described in Chapter 17, Preparing an application to run on DB2 for z/OS, on page 941.
Related concepts:
DCLGEN (declarations generator) on page 125
Related reference:
DSNH (TSO CLIST) (DB2 Commands)
Table 149. DD statements and data sets that the DB2 precompiler uses

DBRMLIB (required)
Output data set, which contains the SQL statements and host variable information that the DB2 precompiler extracted from the source program. It is called a database request module (DBRM). This data set becomes the input to the DB2 bind process. The DCB attributes of the data set are RECFM FB, LRECL 80. DBRMLIB must be a PDS, and a member name must be specified. You can use IEBCOPY, IEHPROGM, the TSO COPY and DELETE commands, or PDS management tools for maintaining the data set.

STEPLIB (not required, but recommended)
Step library for the job step. In this DD statement, you can specify the name of the library for the precompiler load module, DSNHPC, and the name of the library for your DB2 application programming defaults member, DSNHDECP. Recommendation: Always use the STEPLIB DD statement to specify the library where your DB2 DSNHDECP module resides to ensure that the proper application defaults are used by the DB2 precompiler. The library that contains your DB2 DSNHDECP module needs to be allocated ahead of the prefix.SDSNLOAD library.

SYSCIN (required)
Output data set, which contains the modified source that the DB2 precompiler writes out. This data set becomes the input data set to the compiler or assembler. This data set must have the attributes RECFM F or FB and LRECL 80. SYSCIN can be a PDS or a sequential data set. If a PDS is used, the member name must be specified.
SYSIN (required)
Input data set, which contains statements in the host programming language and embedded SQL statements. This data set must have the attributes RECFM F or FB, LRECL 80. SYSIN can be a PDS or a sequential data set. If a PDS is used, the member name must be specified.

SYSLIB (not required)
INCLUDE library, which contains additional SQL and host language statements. The DB2 precompiler includes the member or members that are referenced by SQL INCLUDE statements in the SYSIN input from this DD statement. Multiple data sets can be specified, but they must be partitioned data sets with attributes RECFM F or FB, LRECL 80. SQL INCLUDE statements cannot be nested.

SYSPRINT (required)
Output data set, which contains the output listing from the DB2 precompiler. This data set must have an LRECL of 133 and a RECFM of FBA. SYSPRINT must be a sequential data set.

SYSTERM (not required)
Terminal output file, which contains diagnostic messages from the DB2 precompiler. SYSTERM must be a sequential data set.
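The following job step sketch shows one way to allocate these DD statements for the DB2 precompiler. It is illustrative only and is not one of the DB2-supplied procedures; the data set names (the prefix.SDSNEXIT and prefix.SDSNLOAD libraries, the DBRM library, the source library, and the INCLUDE library) are assumptions for your installation.

//PC       EXEC PGM=DSNHPC,PARM='HOST(IBMCOB),SOURCE,XREF'
//STEPLIB  DD  DISP=SHR,DSN=prefix.SDSNEXIT        DSNHDECP LIBRARY (ASSUMPTION)
//         DD  DISP=SHR,DSN=prefix.SDSNLOAD
//DBRMLIB  DD  DISP=SHR,DSN=USER.DBRMLIB.DATA(PROGA)
//SYSIN    DD  DISP=SHR,DSN=USER.COBOL.SOURCE(PROGA)
//SYSCIN   DD  DSN=&&DSNHOUT,DISP=(NEW,PASS),UNIT=SYSDA,
//             SPACE=(800,(500,500))
//SYSLIB   DD  DISP=SHR,DSN=USER.SQL.INCLUDE
//SYSPRINT DD  SYSOUT=*
//SYSTERM  DD  SYSOUT=*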
v The size of a source program that DB2 can precompile is limited by the region size and the virtual memory available to the precompiler. These amounts vary with each system installation. v The forms of source statements that can pass through the precompiler are limited. For example, constants, comments, and other source syntax that are not accepted by the host compilers (such as a missing right brace in C) can interfere with precompiler source scanning and cause errors. To check for such unacceptable source statements, run the host compiler before the precompiler. You can ignore the compiler error messages for SQL statements or comment out the SQL statements. After the source statements are free of unacceptable compiler errors, you can then uncomment any SQL statements that you previously commented out and continue with the normal DB2 program preparation process for that host language. v You must write host language statements and SQL statements using the same margins, as specified in the precompiler option MARGINS. v The input data set, SYSIN, must have the attributes RECFM F or FB, LRECL 80. v SYSLIB must be a partitioned data set, with attributes RECFM F or FB, LRECL 80. v Input from the INCLUDE library cannot contain other precompiler INCLUDE statements.
The DD name list must begin on a 2-byte boundary. The first 2 bytes contain a binary count of the number of bytes in the list (excluding the count field). Each entry in the list is an 8-byte field, left-justified, and padded with blanks if needed. The following table shows the sequence of entries:
Table 150. DDNAME list entries

Entry   Standard ddname   Usage
1       Not applicable
2       Not applicable
3       Not applicable
4       SYSLIB            Library input
5       SYSIN             Source input
6       SYSPRINT          Diagnostic listing
7       Not applicable
8       SYSUT1            Work data
9       SYSUT2            Work data
10      SYSUT3            Work data
11      Not applicable
12      SYSTERM           Diagnostic listing
13      Not applicable
14      SYSCIN            Changed source output
15      Not applicable
16      DBRMLIB           DBRM output
Page number format: When you call the precompiler, you can specify a page number to use for the first page of the compiler listing on SYSPRINT. You must specify this page number in a particular format. A 6-byte field beginning on a 2-byte boundary contains the page number. The first 2 bytes must contain the binary value 4 (the length of the remainder of the field). The last 4 bytes contain the page number in character or zoned-decimal format. The precompiler adds 1 to the last page number that is used in the precompiler listing and puts this value into the page-number field before returning control to the calling routine. Thus, if you call the precompiler again, page numbering is continuous.
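For illustration, a caller that drives the precompiler could define the page-number field as the text describes. This minimal assembler sketch is not from this information, and the label name is invented.

PAGENO   DS    0H             ENSURE HALFWORD ALIGNMENT FOR THE FIELD
         DC    H'4'           BINARY LENGTH OF THE REMAINDER OF THE FIELD
         DC    CL4'0001'      FIRST PAGE NUMBER, CHARACTER FORMAT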
Listing output
The DB2 precompiler writes the following information in the SYSPRINT data set:
v Precompiler source listing. If the DB2 precompiler option SOURCE is specified, a source listing is produced. The source listing includes precompiler source statements, with line numbers that are assigned by the precompiler.
v Precompiler diagnostics. The precompiler produces diagnostic messages that include precompiler line numbers of statements that have errors.
v Precompiler cross-reference listing. If the DB2 precompiler option XREF is specified, a cross-reference listing is produced. The cross-reference listing shows the precompiler line numbers of SQL statements that refer to host names and columns.
The SYSPRINT data set has an LRECL of 133 and a RECFM of FBA. This data set uses the CCSID of the source program. Statement numbers in the output of the precompiler listing are displayed as they appear in the listing.

Terminal diagnostics
If a terminal output file, SYSTERM, exists, the DB2 precompiler writes diagnostic messages to it. A portion of the source statement accompanies the messages in this file. You can often use the SYSTERM file instead of the SYSPRINT file to find errors. This data set uses EBCDIC.

Modified source statements
The DB2 precompiler writes the source statements that it processes to SYSCIN, the input data set to the compiler or assembler. This data set must have attributes RECFM F or FB, and LRECL 80. The modified source code contains calls to the DB2 language interface. The SQL statements that the calls replace appear as comments. This data set uses the CCSID of the source program.

Database request modules
The database request module (DBRM) is a data set that contains the SQL statements and host variable information that is extracted from the source program, along with information that identifies the program and ties the DBRM to the translated source statements. It becomes the input to the bind process. The data set requires space to hold all the SQL statements plus space for each host variable name and some header information. The header information alone requires approximately two records for each DBRM, 20 bytes for each SQL record, and 6 bytes for each host variable. For an exact format of the DBRM, see the DBRM mapping macros, DSNXDBRM and DSNXNBRM, in library prefix.SDSNMACS. The DCB attributes of the data set are RECFM FB, LRECL 80. The precompiler sets the characteristics. You can use IEBCOPY, IEHPROGM, the TSO COPY and DELETE commands, or other PDS management tools for maintaining these data sets.
Restriction: Do not modify the contents of the DBRM. If you do, unpredictable results can occur. DB2 does not support modified DBRMs.
In a DBRM, the SQL statements and the list of host variable names use the UTF-8 character encoding scheme. All other character fields in a DBRM use EBCDIC. The current release marker (DBRMMRIC) in the header of a DBRM is marked according to the release of the precompiler, regardless of the value of NEWFUN.
COBOL
SQL("APOSTSQL STDSQL(NO)")
PL/I
PP(SQL("APOSTSQL,STDSQL(NO)"))
For PL/I programs that use BIGINT or LOB data types, specify the following compiler options when you compile your program: LIMITS(FIXEDBIN(63), FIXEDDEC(31)) If needed, increase the user's region size so that it can accommodate more memory for the DB2 coprocessor. Include DD statements for the following data sets in the JCL for your compile step:
- DB2 load library (prefix.SDSNLOAD) The DB2 coprocessor calls DB2 modules to process the SQL statements. You therefore need to include the name of the DB2 load library data set in the STEPLIB concatenation for the compile step. - DBRM library The DB2 coprocessor produces a DBRM. DBRMs and the DBRM library are described in Output from the DB2 precompiler on page 951. You need to include a DBRMLIB DD statement that specifies the DBRM library data set. - Library for SQL INCLUDE statements If your program contains SQL INCLUDE member-name statements that specify secondary input to the source program, you need to also specify the data set for member-name. Include the name of the data set that contains member-name in the SYSLIB concatenation for the compile step. v For C/C++ only: Invoke the DB2 coprocessor from UNIX System Services on z/OS. If you invoke the C/C++ DB2 coprocessor from UNIX System Services, you can choose to have the DBRM generated in a partitioned data set or an HFS file. When you invoke the DB2 coprocessor, include the following information: Specify the SQL compiler option. The SQL compiler option indicates that you want the compiler to invoke the DB2 coprocessor. Specify a list of SQL processing options in parentheses after the SQL keyword. Table 152 on page 959 lists the options that you can specify. Specify a location for the DBRM as the parameter for the dbrmlib option. You can specify one of the following items: - The name of a partitioned data set
Example: The following example invokes the C/C++ DB2 coprocessor to compile (with the c89 compiler) a sample C program and requests that the resulting DBRM is stored in the test member of the userid.dbrmlib.data data set:

c89 -Wc,"sql,dbrmlib(//userid.dbrmlib.data(test)),langlvl(extended)" -c t.c

- The name of an HFS file
The file name can be qualified, partially qualified, or unqualified. The file path can contain a maximum of 1024 characters, and the file name can contain a maximum of 255 characters. The first 8 characters of the file name, not including the file extension, should be unique within the file system.
Assume that your directory structure is /u/USR001/c/example and that your current working directory is /u/USR001/c. The following table shows examples of how to specify the HFS file names with the dbrmlib option and how the file names are resolved.
Table 151. How to specify HFS files to store DBRMs

If you specify...                    The DBRM is generated in...
dbrmlib(/u/USR001/sample.dbrm)       /u/USR001/sample.dbrm
dbrmlib(example/sample.dbrm)         /u/USR001/c/example/sample.dbrm
dbrmlib(../sample.dbrm)              /u/USR001/sample.dbrm
dbrmlib(sample.dbrm)                 /u/USR001/c/sample.dbrm
Example: The following example invokes the DB2 coprocessor to compile (with the c89 compiler) a sample C program and requests that the resulting DBRM is stored in the file test.dbrm in the tmp directory:
c89 -Wc,"sql,dbrmlib(/tmp/test.dbrm),langlvl(extended)" -c t.c
If you request that the DBRM be generated in an HFS file, you can bind the resulting DBRM by using the command line processor BIND command. For more information about using the command line processor BIND command, see Binding a DBRM that is in an HFS file to a package or collection on page 972. Optionally, you can also copy the DBRM into a partitioned data set member by using the oput and oget commands and then bind the DBRM by using conventional JCL.
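As a sketch only (the HFS path, data set, collection, and subsystem names are placeholders), you might copy an HFS DBRM into a PDS member with the TSO OPUT command and then bind it with a conventional batch DSN BIND job:

OPUT '/tmp/test.dbrm' 'USERID.DBRMLIB.DATA(TEST)' BINARY

//BINDPKG  EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
  DSN SYSTEM(DSN)
  BIND PACKAGE(MYCOLL) MEMBER(TEST) LIBRARY('USERID.DBRMLIB.DATA')
  END
/*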
Support for compiling a COBOL program that includes SQL from an assembler program
The COBOL compiler provides a facility that enables you to invoke the COBOL compiler by using an assembler program. If you intend to use the DB2 coprocessor and start the COBOL compiler from an assembler program as part of your DB2 application preparation, you can use the SQL compiler option and provide the alternate DBRMLIB DD name the same way that you can specify other alternate DD names. The DB2 coprocessor creates the DBRM member according to your DBRM PDS library and the DBRM member that you specified using the alternate DBRMLIB DD name. To use the alternate DBRMLIB DD name, Enterprise COBOL V4.1 and above is required. Related reference: IBM System z Enterprise Development Tools & Compilers
Program and process requirements: Use the DB2 precompiler before the CICS translator to prevent the precompiler from mistaking CICS translator output for graphic data. If your source program is in COBOL, you must specify a string delimiter that is the same for the DB2 precompiler, COBOL compiler, and CICS translator. The defaults for the DB2 precompiler and COBOL compiler are not compatible with the default for the CICS translator. If the SQL statements in your source program refer to host variables that a pointer stored in the CICS TWA addresses, you must make the host variables addressable to the TWA before you execute those statements. For example, a COBOL application can issue the following statement to establish addressability to the TWA:
EXEC CICS ADDRESS TWA (address-of-twa-area) END-EXEC
You can run CICS applications only from CICS address spaces. This restriction applies to the RUN option on the second program DSN command processor. All of those possibilities occur in TSO. To prepare an application program, you can append JCL from a job that is created by the DB2 Program Preparation panels to the JCL for the CICS command language translator. To run the prepared program under CICS, you might need to define programs and transactions to CICS. Your system programmer must make the appropriate CICS resource or table entries. prefix.SDSNSAMP contains examples of the JCL that is used to prepare and run a CICS program that includes SQL statements. The set of JCL includes: v PL/I macro phase v DB2 precompiling v CICS Command Language Translation v Compiling of the host language source statements v Link-editing of the compiler output v Binding of the DBRM v Running of the prepared application. Related reference: Sample applications in CICS on page 1134 Related information: Resource Definition Guide (CICS Transaction Server for z/OS)
v Differences in handling source CCSIDs: The DB2 precompiler and DB2 coprocessor convert the SQL statements of your source program to UTF-8 for parsing. The precompiler or DB2 coprocessor uses the source CCSID(n) value to convert from that CCSID to CCSID 1208 (UTF-8). The CCSID value must be an EBCDIC CCSID. If you want to prepare a source program that is written in a CCSID that cannot be directly converted to or from CCSID 1208, you must create an indirect conversion.
v Differences in handling host variable CCSIDs:
COBOL:
DB2 precompiler: The DB2 precompiler sets CCSIDs for alphanumeric host variables only when the program includes an explicit DECLARE :hv VARIABLE statement.
DB2 coprocessor: The COBOL compiler with National Character Support always sets CCSIDs for alphanumeric variables, including host variables that are used within SQL, to the source CCSID. Alternatively, you can specify that you want the COBOL DB2 coprocessor to handle CCSIDs the same way as the precompiler.
Recommendation: If you have problems with host variable CCSIDs, use the DB2 precompiler or change your application to include the DECLARE :hv VARIABLE statement to overwrite the CCSID that is specified by the COBOL compiler.
Example: Assume that DB2 has mapped a FOR BIT DATA column to a host variable in the following way:
EXEC SQL
  CREATE TABLE T1 (colwbit char(5) for bit data,
                   rowid   char(5))
END-EXEC.
01 hv1 pic x(5).
01 hv2 pic x(5).
DB2 precompiler: In the modified source from the DB2 precompiler, hv1 and hv2 are represented to DB2 through SQLDA in the following way, without CCSIDs:
for hv1: NO CCSID
  20 SQL-PVAR-NAMEL1 PIC S9(4) COMP-4 VALUE +0.
  20 SQL-PVAR-NAMEC1 PIC X(30) VALUE ' '.
for hv2: NO CCSID
  20 SQL-PVAR-NAMEL2 PIC S9(4) COMP-4 VALUE +0.
  20 SQL-PVAR-NAMEC2 PIC X(30) VALUE ' '.
DB2 coprocessor: In the modified source from the DB2 coprocessor with the National Character Support for COBOL, hv1 and hv2 are represented to DB2 in the following way, with CCSIDs: (Assume that the source CCSID is 1140.)
for hv1 and hv2, the value for CCSID is set to 1140 (474x) in input SQLDA of the INSERT statement. 7F00000474000000007Fx
To ensure that no discrepancy exists between the column with FOR BIT DATA and the host variable with CCSID 1140, add the following statement for :hv1 or use the DB2 precompiler:
EXEC SQL DECLARE :hv1 VARIABLE FOR BIT DATA END-EXEC.

For hv1 declared with FOR BIT DATA, the value for the CCSID in SQL-AVAR-NAME-DATA is set to FFFFx instead of 474x:
7F0000FFFF000000007Fx   <<= with DECLARE :hv1 VARIABLE FOR BIT DATA
7F00000474000000007Fx   <<= without
PL/I DB2 coprocessor: You can specify whether CCSIDs are to be associated with host variables by using the following PL/I SQL preprocessor options:
CCSID0
Specifies that the PL/I SQL preprocessor is not to set the CCSIDs for all host variables unless they are defined with the SQL DECLARE :hv VARIABLE statement.
NOCCSID0
Specifies that the PL/I SQL preprocessor is to set the CCSIDs for all host variables.
Related concepts:
z/OS: Unicode Services Users Guide and Reference
Related reference:
Descriptions of SQL processing options on page 959
Enterprise COBOL for z/OS
SQL preprocessor options (PL/I) (Enterprise PL/I for z/OS Programming Guide)
For examples of how to specify the DB2 coprocessor options, see Processing SQL statements by using the DB2 coprocessor on page 953. DB2 assigns default values for any SQL processing options for which you do not explicitly specify a value. Those defaults are the values that are specified on the APPLICATION PROGRAMMING DEFAULTS installation panels.
Table 152. SQL processing options

APOST
Indicates that the DB2 precompiler is to use the apostrophe (') as the string delimiter in host language statements that it generates. This option is not available in all languages. APOST and QUOTE are mutually exclusive options. The default is in the field STRING DELIMITER on Application Programming Defaults Panel 1 during installation. If STRING DELIMITER is the apostrophe ('), APOST is the default.

APOSTSQL
Recognizes the apostrophe (') as the string delimiter and the double quotation mark (") as the SQL escape character within SQL statements. APOSTSQL and QUOTESQL are mutually exclusive options. The default is in the field SQL STRING DELIMITER on Application Programming Defaults Panel 1 during installation. If SQL STRING DELIMITER is the apostrophe ('), APOSTSQL is the default.

ATTACH(TSO|CAF|RRSAF)
Specifies the attachment facility that the application uses to access DB2. TSO, CAF, and RRSAF applications that load the attachment facility can use this option to specify the correct attachment facility, instead of coding a dummy DSNHLI entry point. This option is not available for Fortran applications. The default is ATTACH(TSO).
CCSID(n)
Specifies the numeric value n of the CCSID in which the source program is written. The number n must be an EBCDIC CCSID. The default setting is the EBCDIC system CCSID as specified on the panel DSNTIPF during installation. The DB2 coprocessor uses the following process to determine the CCSID of the source statements:
1. If the CCSID of the source program is specified by a compiler option, such as the COBOL CODEPAGE compiler option, the DB2 coprocessor uses that CCSID.
   a. If the CCSID suboption of the SQL compiler option is specified and contains a valid EBCDIC CCSID, that CCSID is used.
   b. If the CCSID suboption of the SQL compiler option is not specified, and the compiler supports an option for specifying the CCSID, such as the COBOL CODEPAGE compiler option, the default for the CCSID compiler option is used.
   c. If the CCSID suboption of the SQL compiler option is not specified, and the compiler does not support an option for specifying the CCSID, the default CCSID from DSNHDECP is used.
   d. If the CCSID suboption of the SQL option is specified and contains an invalid CCSID, compilation terminates.
CCSID supersedes the GRAPHIC and NOGRAPHIC SQL processing options. If you specify CCSID(1026) or CCSID(1155), the DB2 coprocessor does not support the code point 'FC'X for the double quotation mark (").

COMMA
Recognizes the comma (,) as the decimal point indicator in decimal or floating point literals in the following cases:
v For static SQL statements in COBOL programs
v For dynamic SQL statements, when the value of installation parameter DYNRULS is NO and the package or plan that contains the SQL statements has DYNAMICRULES bind, define, or invoke behavior.
COMMA and PERIOD are mutually exclusive options. The default (COMMA or PERIOD) is chosen under DECIMAL POINT IS on Application Programming Defaults Panel 1 during installation.

CONNECT(2|1)
CT(2|1)
Determines whether to apply type 1 or type 2 CONNECT statement rules.
CONNECT(2) Default: Apply rules for the CONNECT (Type 2) statement.
CONNECT(1) Apply rules for the CONNECT (Type 1) statement.
If you do not specify the CONNECT option when you precompile a program, the rules of the CONNECT (Type 2) statement apply.

DATE(ISO|USA|EUR|JIS|LOCAL)
Specifies that date output should always be returned in a particular format, regardless of the format that is specified as the location default. The default is specified in the field DATE FORMAT on Application Programming Defaults Panel 2 during installation. The default format is determined by the installation defaults of the system where the program is bound, not by the installation defaults of the system where the program is precompiled. You cannot use the LOCAL option unless you have a date exit routine.
DEC(15|31)
DEC15|DEC31
D15.s|D31.s
Specifies the maximum precision for decimal arithmetic operations. The default is in the field DECIMAL ARITHMETIC on Application Programming Defaults Panel 1 during installation. If the form Dpp.s is specified, pp must be either 15 or 31, and s, which represents the minimum scale to be used for division, must be a number between 1 and 9.

FLAG(I|W|E|S)1
Suppresses diagnostic messages below the specified severity level (Informational, Warning, Error, and Severe error for severity codes 0, 4, 8, and 12 respectively). The default setting is FLAG(I).

FLOAT(S390|IEEE)
Determines whether the contents of floating-point host variables in assembler, C, C++, or PL/I programs are in IEEE floating-point format or z/Architecture hexadecimal floating-point format. DB2 ignores this option if the value of HOST is anything other than ASM, C, CPP, or PLI. The default setting is FLOAT(S390).

GRAPHIC
This option is no longer used for SQL statement processing. Use the CCSID option instead. Indicates that the source code might use mixed data, and that X'0E' and X'0F' are special control characters (shift-out and shift-in) for EBCDIC data. GRAPHIC and NOGRAPHIC are mutually exclusive options. The default (GRAPHIC or NOGRAPHIC) is specified in the field MIXED DATA on Application Programming Defaults Panel 1 during installation.

HOST
Defines the host language that contains the SQL statements. Use IBMCOB for Enterprise COBOL for z/OS.
For C, specify:
v C if you do not want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
v C(FOLD) if you want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
For C++, specify:
v CPP if you do not want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
v CPP(FOLD) if you want DB2 to fold lowercase letters in SBCS SQL ordinary identifiers to uppercase
For SQL procedural language, specify:
v SQL, to perform syntax checking and conversion to a generated C program for an external SQL procedure.
v SQLPL, to perform syntax checking for a native SQL procedure.
If you omit the HOST option, the DB2 precompiler issues a level-4 diagnostic message and uses the default value for this option. The default is in the field LANGUAGE DEFAULT on Application Programming Defaults Panel 1 during installation. This option also sets the language-dependent defaults.
LEVEL[(aaaa)]
L
Defines the level of a module, where aaaa is any alphanumeric value of up to seven characters. This option is not recommended for general use, and the DSNH CLIST and the DB2I panels do not support it. For assembler, C, C++, Fortran, and PL/I, you can omit the suboption (aaaa). The resulting consistency token is blank. For COBOL, you need to specify the suboption.

LINECOUNT(n)1
LC
Defines the number of lines per page to be n for the DB2 precompiler listing. This includes header lines that are inserted by the DB2 precompiler. The default setting is LINECOUNT(60).

MARGINS(m,n[,c])1
MAR
Specifies what part of each source record contains host language or SQL statements. For assembler, this option also specifies where column continuations begin. The first option (m) is the beginning column for statements. The second option (n) is the ending column for statements. The third option (c) specifies where assembler continuations begin. Otherwise, the DB2 precompiler places a continuation indicator in the column immediately following the ending column. Margin values can range from 1 to 80. Default values depend on the HOST option that you specify. The DSNH CLIST and the DB2I panels do not support this option. In assembler, the margin option must agree with the ICTL instruction, if presented in the source.

NEWFUN(V8|V9|YES|NO)
Indicates whether to accept the function syntax that is new for the current version of DB2.
NEWFUN(V8) Specifies that any syntax up to V8 will be allowed. NEWFUN(V8) is equivalent to NEWFUN(NO).
NEWFUN(V9) Specifies that any syntax up to V9 will be allowed. NEWFUN(V9) is equivalent to NEWFUN(YES).
NEWFUN(YES) Causes the precompiler to accept syntax that is new for the current version of DB2.
NEWFUN(NO) Causes the precompiler to reject any syntax that is introduced in the current version.
Regardless of what value you specify, a successful precompilation produces a Unicode DBRM. If the DBRM does not contain any syntax that is new for the current version, you can bind it on DB2 Version 8 or later. If the DBRM does contain new syntax for the current version, you can bind it only under the current version or later releases. The NEWFUN option applies only to the precompilation process by either the precompiler or the DB2 coprocessor, regardless of the current migration mode. You are responsible for ensuring that you bind the resulting DBRM on a subsystem in the correct migration mode. For example, suppose that you have an application that contains new syntax for the current version of DB2. You can use NEWFUN(YES) to precompile that application on a subsystem in any migration mode. However, you cannot bind the resulting DBRM on a subsystem that is not in new-function mode. The NEWFUN(YES) and NEWFUN(NO) options are deprecated.
NOFOR
In static SQL, eliminates the need for the FOR UPDATE or FOR UPDATE OF clause in DECLARE CURSOR statements. When you use NOFOR, your program can make positioned updates to any columns that the program has DB2 authority to update. When you do not use NOFOR, if you want to make positioned updates to any columns that the program has DB2 authority to update, you need to specify FOR UPDATE with no column list in your DECLARE CURSOR statements. The FOR UPDATE clause with no column list applies to static or dynamic SQL statements. Regardless of whether you use NOFOR, you can specify FOR UPDATE OF with a column list to restrict updates to only the columns that are named in the clause, and you can specify the acquisition of update locks. You imply NOFOR when you use the option STDSQL(YES). If the resulting DBRM is very large, you might need extra storage when you specify NOFOR or use the FOR UPDATE clause with no column list.

NOGRAPHIC
This option is no longer used for SQL statement processing. Use the CCSID option instead. Indicates the use of X'0E' and X'0F' in a string, but not as control characters. GRAPHIC and NOGRAPHIC are mutually exclusive options. The default (GRAPHIC or NOGRAPHIC) is specified in the field MIXED DATA on Application Programming Defaults Panel 1 during installation. The NOGRAPHIC option applies to only EBCDIC data.

NOOPTIONS
NOOPTN
Suppresses the DB2 precompiler options listing.

NOPADNTSTR
Indicates that output host variables that are NUL-terminated strings are not padded with blanks. That is, additional blanks are not inserted before the NUL-terminator is placed at the end of the string. PADNTSTR and NOPADNTSTR are mutually exclusive options. The default (PADNTSTR or NOPADNTSTR) is specified in the field PAD NUL-TERMINATED on Application Programming Defaults Panel 2 during installation. This option applies to only C and C++ applications.

NOSOURCE2
NOS
Suppresses the DB2 precompiler source listing. This is the default.

NOXREF2
Suppresses the DB2 precompiler cross-reference listing. This is the default.

ONEPASS
ON
Processes in one pass, to avoid the additional processing time for making two passes. Declarations must appear before SQL references. Default values depend on the HOST option specified. ONEPASS and TWOPASS are mutually exclusive options.

OPTIONS1
OPTN
Lists the DB2 precompiler options that are in effect.
PADNTSTR
Indicates that output host variables that are NUL-terminated strings are padded with blanks with the NUL-terminator placed at the end of the string. PADNTSTR and NOPADNTSTR are mutually exclusive options. The default (PADNTSTR or NOPADNTSTR) is specified in the field PAD NUL-TERMINATED on Application Programming Defaults Panel 2 during installation. This option applies to only C and C++ applications.

PERIOD
Recognizes the period (.) as the decimal point indicator in decimal or floating point literals in the following cases:
v For static SQL statements in COBOL programs
v For dynamic SQL statements, when the value of installation parameter DYNRULS is NO and the package or plan that contains the SQL statements has DYNAMICRULES bind, define, or invoke behavior.
COMMA and PERIOD are mutually exclusive options. The default (COMMA or PERIOD) is specified in the field DECIMAL POINT IS on Application Programming Defaults Panel 1 during installation.

QUOTE1
Q
Indicates that the DB2 precompiler is to use the quotation mark (") as the string delimiter in host language statements that it generates. QUOTE is valid only for COBOL applications. QUOTE is not valid for either of the following combinations of precompiler options:
v CCSID(1026) and HOST(IBMCOB)
v CCSID(1155) and HOST(IBMCOB)
The default is specified in the field STRING DELIMITER on Application Programming Defaults Panel 1 during installation. If STRING DELIMITER is the double quotation mark (") or DEFAULT, QUOTE is the default. APOST and QUOTE are mutually exclusive options.

QUOTESQL
Recognizes the double quotation mark (") as the string delimiter and the apostrophe (') as the SQL escape character within SQL statements. This option applies only to COBOL. The default is specified in the field SQL STRING DELIMITER on Application Programming Defaults Panel 1 during installation. If SQL STRING DELIMITER is the double quotation mark (") or DEFAULT, QUOTESQL is the default. APOSTSQL and QUOTESQL are mutually exclusive options.

SOURCE1
S
Produces a DB2 precompiler source listing.
SQL(ALL|DB2)
Indicates whether the source contains SQL statements other than those recognized by DB2 for z/OS. SQL(ALL) is recommended for application programs whose SQL statements must execute on a server other than DB2 for z/OS using DRDA access. SQL(ALL) indicates that the SQL statements in the program are not necessarily for DB2 for z/OS. Accordingly, the SQL statement processor then accepts statements that do not conform to the DB2 syntax rules. The SQL statement processor interprets and processes SQL statements according to distributed relational database architecture (DRDA) rules. The SQL statement processor also issues an informational message if the program attempts to use IBM SQL reserved words as ordinary identifiers. SQL(ALL) does not affect the limits of the SQL statement processor. SQL(DB2), the default, means to interpret SQL statements and check syntax for use by DB2 for z/OS. SQL(DB2) is recommended when the database server is DB2 for z/OS.

STDSQL(NO|YES)3
Indicates to which rules the output statements should conform. STDSQL(YES)3 indicates that the precompiled SQL statements in the source program conform to certain rules of the SQL standard. STDSQL(NO) indicates conformance to DB2 rules. The default is specified in the field STD SQL LANGUAGE on Application Programming Defaults Panel 2 during installation. STDSQL(YES) automatically implies the NOFOR option.

TIME(ISO|USA|EUR|JIS|LOCAL)
Specifies that time output always return in a particular format, regardless of the format that is specified as the location default. The default is specified in the field TIME FORMAT on Application Programming Defaults Panel 2 during installation. The default format is determined by the installation defaults of the system where the program is bound, not by the installation defaults of the system where the program is precompiled. You cannot use the LOCAL option unless you have a time exit routine.

TWOPASS
TW
Processes in two passes, so that declarations need not precede references. Default values depend on the HOST option that is specified. ONEPASS and TWOPASS are mutually exclusive options. For the DB2 coprocessor, you can specify the TWOPASS option for only PL/I applications. For C/C++ and COBOL applications, the DB2 coprocessor uses the ONEPASS option.

VERSION(aaaa|AUTO)
Defines the version identifier of a package, program, and the resulting DBRM. A version identifier is an SQL identifier of up to 64 EBCDIC bytes. When you specify VERSION, the SQL statement processor creates a version identifier in the program and DBRM. This affects the size of the load module and DBRM. DB2 uses the version identifier when you bind the DBRM to a plan or package. If you do not specify a version at precompile time, an empty string is the default version identifier. If you specify AUTO, the SQL statement processor uses the consistency token to generate the version identifier. If the consistency token is a timestamp, the timestamp is converted into ISO character format and is used as the version identifier. The timestamp that is used is based on the store clock value.
XREF
Includes a sorted cross-reference listing of symbols that are used in SQL statements in the listing output.
Notes: 1. The DB2 coprocessor ignores this option when the DB2 coprocessor is invoked by the compiler to prepare the application. 2. This option is always in effect when the DB2 coprocessor is invoked by the compiler to prepare the application. 3. You can use STDSQL(86) as in prior releases of DB2. The SQL statement processor treats it the same as STDSQL(YES). 4. Precompiler options do not affect ODBC behavior.
Related concepts: Precision for operations with decimal numbers on page 691 Datetime values (DB2 SQL) Related tasks: Creating a package version on page 971 Setting the program level on page 986 Related reference: Defaults for SQL processing options
Table 153. IBM-supplied installation default SQL statement processing options. The installer can change these defaults. For each installation option, the table shows the IBM-supplied install default, the equivalent SQL statement processing option, and the available SQL statement processing options.
Notes: For dynamic SQL statements, another application programming default, USE FOR DYNAMICRULES, determines whether DB2 uses the application programming default or the SQL statement processor option for the following installation options: v STRING DELIMITER v SQL STRING DELIMITER v DECIMAL POINT IS v DECIMAL ARITHMETIC If the value of USE FOR DYNAMICRULES is YES, dynamic SQL statements use the application programming defaults. If the value of USE FOR DYNAMICRULES is NO, dynamic SQL statements in packages or plans with bind, define, and invoke behavior use the SQL statement processor options.
Some SQL statement processor options have default values based on the host language. Some options do not apply to some languages. The following table shows the language-dependent options and defaults.
Table 154. Language-dependent DB2 precompiler options and defaults

HOST value      Defaults
ASM             APOST1, APOSTSQL1, PERIOD1, TWOPASS, MARGINS(1,71,16)
C or CPP        APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(1,72)
IBMCOB          QUOTE2, QUOTESQL2, PERIOD, ONEPASS1, MARGINS(8,72)1
FORTRAN         APOST1, APOSTSQL1, PERIOD1, ONEPASS1, MARGINS(1,72)1
PLI             APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(2,72)
SQL or SQLPL    APOST1, APOSTSQL1, PERIOD1, ONEPASS, MARGINS(1,72)

Notes:
1. Forced for this language; no alternative is allowed.
2. The default is chosen on Application Programming Defaults Panel 1 during installation. The IBM-supplied installation defaults for string delimiters are QUOTE (host language delimiter) and QUOTESQL (SQL escape character). The installer can replace the IBM-supplied defaults with other defaults.

The precompiler options that you specify override any defaults that are in effect.
The following SQL statement processing options are relevant for DRDA access: CONNECT Use CONNECT(2), explicitly or by default. CONNECT(1) causes your CONNECT statements to allow only the restricted function known as remote unit of work. Be particularly careful to avoid CONNECT(1) if your application updates more than one DBMS in a single unit of work. SQL Use SQL(ALL) explicitly for a package that runs on a server that is not DB2 for z/OS. The precompiler then accepts any statement that obeys DRDA rules. Use SQL(DB2), explicitly or by default, if the server is DB2 for z/OS only. The precompiler then rejects any statement that does not obey the rules of DB2 for z/OS.
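For example (a sketch only; the option string is illustrative and assumes that the COBOL precompiler is driven through JCL), a program that is bound into a package at a server other than DB2 for z/OS might be precompiled with:

//PC   EXEC PGM=DSNHPC,PARM='HOST(IBMCOB),CONNECT(2),SQL(ALL)'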
v If you run your application program only under DB2, be sure to concatenate the DB2 library first. CICS: Include the DB2 CICS language interface module (DSNCLI). You can link DSNCLI with your program in either 24-bit or 31-bit addressing mode (AMODE=31). If your application program runs in 31-bit addressing mode, you should link-edit the DSNCLI stub to your application with the attributes AMODE=31 and RMODE=ANY so that your application can run above the 16-MB line. You also need the CICS EXEC interface module that is appropriate for the programming language. CICS requires that this module be the first control section (CSECT) in the final load module. The size of the executable load module that is produced by the link-edit step varies depending on the values that the SQL statement processor inserts into the source code of the program. Link-editing a batch program: DB2 has language interface routines for each unique supported environment. DB2 requires the IMS language interface routine for DL/I batch. You need to have DFSLI000 link-edited with the application program. Related tasks: Chapter 17, Preparing an application to run on DB2 for z/OS, on page 941 Related reference: DSNH (TSO CLIST) (DB2 Commands) Related information: CICS DB2 program preparation steps (CICS Transaction Server for z/OS)
Binding an application
You must bind the DBRM that is produced by the SQL statement processor to a plan or package before your DB2 application can run.
You must bind plans locally, regardless of whether they reference packages that run remotely. However, you must bind the packages that run at remote locations at those remote locations. For C and C++ programs whose corresponding DBRMs are in HFS files, you can use the command line processor to bind the DBRMs to packages. Optionally, you can also copy the DBRM into a partitioned data set member by using the oput and oget commands and then bind it by using conventional JCL. From a DB2 requester, you can run a plan by specifying it in the RUN subcommand, but you cannot run a package directly. You must include the package in a plan and then run the plan. Develop a naming convention and strategy for the most effective and efficient use of your plans and packages. v To bind a new plan or package, other than a trigger package, use the subcommand BIND PLAN or BIND PACKAGE with the option ACTION(REPLACE). To bind a new trigger package, recreate the trigger associated with the trigger package.
bind a package at another type of a system, such as DB2 Server for VSE & VM, you need any privileges that the other system requires to execute its SQL statements and use its data objects. The bind process for a remote package is the same as for a local package, except that the local communications database must be able to recognize the location name that you use as resolving to a remote location. Example: To bind the DBRM PROGA at the location PARIS, in the collection GROUP1, use:
BIND PACKAGE(PARIS.GROUP1) MEMBER(PROGA)
Then, include the remote package in the package list of a local plan, such as PLANB, by using:
BIND PLAN (PLANB) PKLIST(PARIS.GROUP1.PROGA)
The ENCODING bind option has the following effect on a remote application: v If you bind a package locally, which is recommended, and you specify the ENCODING bind option for the local package, the ENCODING bind option for the local package applies to the remote application. v If you do not bind a package locally, and you specify the ENCODING bind option for the plan, the ENCODING bind option for the plan applies to the remote application. v If you do not specify an ENCODING bind option for the package or plan at the local site, the value of APPLICATION ENCODING that was specified on installation panel DSNTIPF at the local site applies to the remote application. When you bind or rebind, DB2 checks authorizations, reads and updates the catalog, and creates the package in the directory at the remote site. DB2 does not read or update catalogs or check authorizations at the local site. If you specify the option EXPLAIN(YES) and you do not specify the option SQLERROR(CONTINUE), PLAN_TABLE must exist at the location that is specified on the BIND or REBIND subcommand. This location could also be the default location. If you bind with the option COPY, the COPY privilege must exist locally. DB2 performs authorization checking, reads and updates the catalog, and creates the package in the directory at the remote site. DB2 reads the catalog records that are related to the copied package at the local site. DB2 converts values that are returned from the remote site in ISO format if all of the following conditions are true: v If the local site is installed with time or date format LOCAL v A package is created at a remote site with the COPY option v The SQL statement does not specify a different format. After you bind a package, you can rebind, free, or bind it with the REPLACE option using either a local or a remote bind.
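For example (a sketch only; the collection and member names are placeholders), you can bind the local copy of a package with an explicit ENCODING option so that this encoding applies to the remote application:

BIND PACKAGE(GROUP1) MEMBER(PROGA) ENCODING(UNICODE)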
Creating package versions is useful if you need to make changes to your program without causing an interruption to the availability of the program.
Procedure
To create a package version:
1. Precompile your program with the option VERSION(version-identifier).
2. Bind the resulting DBRM with the same collection name and package name as any existing versions of that package.

When you run the program, DB2 uses the package version that you specified when you precompiled it.
Example
Suppose that you bound a plan with the following statement:
BIND PLAN (PLAN1) PKLIST (COLLECT.*)
The following steps show how to create two versions of a package, one for each of two programs.
For package version 1:
1. Precompile program 1. Specify VERSION(1).
2. Bind the DBRM with the collection name COLLECT and the package name PACKA.
3. Link-edit program 1 into your application.
4. Run the application; it uses program 1 and PACKA, VERSION 1.

For package version 2:
1. Precompile program version 2. Specify VERSION(2).
2. Bind the DBRM with the collection name COLLECT and the package name PACKA.
3. Link-edit program 2 into your application.
4. Run the application; it uses program 2 and PACKA, VERSION 2.
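For example, step 2 for either version might be issued with the following subcommand, a sketch that assumes the DBRM member is also named PACKA and that the DBRM library is allocated to the DSN session. Because the two DBRMs carry different VERSION identifiers from precompilation, both binds can coexist in collection COLLECT:

BIND PACKAGE(COLLECT) MEMBER(PACKA)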
You cannot specify the FREE PACKAGE command with the command line processor. Alternatively, specify the DROP PACKAGE statement to drop the existing packages.
Procedure
To bind a DBRM that is in an HFS file to a package or collection:
1. Invoke the command line processor and connect to the target DB2 server.
2. Specify the BIND command with the appropriate options.

Related concepts:
Command line processor (DB2 Commands)
Related tasks:
Processing SQL statements by using the DB2 coprocessor on page 953
Related reference:
Command line processor BIND command

Command line processor BIND command:

Use the command line processor BIND command to bind DBRMs that are in z/OS UNIX HFS files to packages. The following diagram shows the syntax for the command line processor BIND command.
Notes:
1. If you do not specify a collection, DB2 uses NULLID.
2. You can specify the options after collection-name in any order.
options-clause: (Syntax diagram not reproduced.) The options-clause of the command line processor BIND command includes the following option groups:
v NODEFER(PREPARE) or DEFER(PREPARE)
v DBPROTOCOL(DRDA) or DBPROTOCOL(PRIVATE)
v DEGREE(1) or DEGREE(ANY)
v DYNAMICRULES(RUN), DYNAMICRULES(BIND), DYNAMICRULES(DEFINEBIND), DYNAMICRULES(DEFINERUN), DYNAMICRULES(INVOKEBIND), or DYNAMICRULES(INVOKERUN)
v ENCODING(ASCII), ENCODING(EBCDIC), ENCODING(UNICODE), or ENCODING(ccsid)
v KEEPDYNAMIC(NO) or KEEPDYNAMIC(YES)
v ISOLATION(CS), ISOLATION(RR), ISOLATION(RS), ISOLATION(UR), or ISOLATION(NC)
v REOPT(NONE) (1) or REOPT(ALWAYS) (2)
v RELEASE(COMMIT), RELEASE(DEALLOCATE), or RELEASE(INHERITFROMPLAN)
v VALIDATE(RUN) or VALIDATE(BIND)
v path-clause
v OPTHINT('hint-ID')

Notes:
1. You can specify NOREOPT(VARS) as a synonym of REOPT(NONE).
2. You can specify REOPT(VARS) as a synonym of REOPT(ALWAYS).

path-clause: PATH(...), a comma-separated list in which each entry is either a schema-name or USER.
The following options are unique to this diagram:

CURRENTDATA(ALL)
Specifies that for all cursors data currency is required and block fetching is inhibited.

SQLERROR(CHECK)
Specifies that the command line processor is to only check for SQL errors in the DBRM. No package is generated.

IMMEDWRITE(PH1)
Specifies that normal write activity is done. This option is equivalent to IMMEDWRITE(NO).

EXPLAIN(ALL)
Specifies that DB2 is to insert information into the appropriate EXPLAIN tables. This option is equivalent to EXPLAIN(YES).

Related reference:
BIND and REBIND options (DB2 Commands)
v Any programs that are associated with DBRMs in the MEMBER list
v Any programs that are associated with packages and collections that are identified in PKLIST

Example of binding DBRMs directly to an application plan: The following statement binds three DBRMs, PROGA, PROGB, and PROGC, directly to plan PLANW.
BIND PLAN(PLANW) MEMBER(PROGA,PROGB,PROGC)
Example of including both DBRMs and packages in an application plan: This example statement binds a plan that includes:
v The DBRMs PROG1 and PROG2
v All the packages in the collection TEST2
v The packages PROGA and PROGC in the collection GROUP1
BIND PLAN(PLANY) MEMBER(PROG1,PROG2) PKLIST(TEST2.*,GROUP1.PROGA,GROUP1.PROGC)
Specifying the package list for the PKLIST option of BIND PLAN:

The order in which you specify packages in a package list can affect run time performance. Searching for the specific package involves searching the DB2 directory, which can be costly. When you use collection-id.* with the PKLIST keyword, you should specify first the collections in which DB2 is most likely to find a package.

For example, assume that you perform the following bind:
BIND PLAN (PLAN1) PKLIST (COLL1.*, COLL2.*, COLL3.*, COLL4.*)
Then you execute program PROG1. DB2 does the following package search:
1. Checks to see if program PROG1 is bound as part of the plan
2. Searches for COLL1.PROG1.timestamp
3. If it does not find COLL1.PROG1.timestamp, searches for COLL2.PROG1.timestamp
4. If it does not find COLL2.PROG1.timestamp, searches for COLL3.PROG1.timestamp
5. If it does not find COLL3.PROG1.timestamp, searches for COLL4.PROG1.timestamp

When both special registers CURRENT PACKAGE PATH and CURRENT PACKAGESET contain an empty string: If you do not set these special registers, DB2 searches for a DBRM or a package in one of these sequences:
v At the local location (if CURRENT SERVER is blank or specifies that location explicitly), the order is:
  1. All DBRMs that are bound directly to the plan.
  2. All packages that are already allocated to the plan while the plan is running.
  3. All unallocated packages that are explicitly specified in, and all collections that are completely included in, the package list of the plan. DB2 searches for packages in the order that they appear in the package list.
v At a remote location, the order is:
  1. All packages that are already allocated to the plan at that location while the plan is running.
  2. All unallocated packages that are explicitly specified in, and all collections that are completely included in, the package list of the plan, whose locations match the value of CURRENT SERVER. DB2 searches for packages in the order that they appear in the package list.

If you use the BIND PLAN option DEFER(PREPARE), DB2 does not search all collections in the package list.

If the order of search is not important: In many cases, the order in which DB2 searches the packages is not important to you and does not affect performance. For an application that runs only at your local DB2 system, you can name every package differently and include them all in the same collection. The package list on your BIND PLAN subcommand can read:
PKLIST (collection.*)
You can add packages to the collection even after binding the plan. DB2 lets you bind packages having the same package name into the same collection only if their version IDs are different. If your application uses DRDA access, you must bind some packages at remote locations. Use the same collection name at each location, and identify your package list as:
PKLIST (*.collection.*)
If you use an asterisk for part of a name in a package list, DB2 checks the authorization for the package to which the name resolves at run time. To avoid the checking at run time in the preceding example, you can grant EXECUTE authority for the entire collection to the owner of the plan before you bind the plan.

Related concepts:
CURRENT PACKAGE PATH (DB2 SQL)
CURRENT PACKAGESET (DB2 SQL)
Related tasks:
Improving performance for applications that access distributed data (DB2 Performance)
Related reference:
BIND PLAN (DSN) (DB2 Commands)
packages in the package list of the same plan. All those packages will have the same consistency token. You can specify a particular location or a particular collection at run time.

Related tasks:
Setting the program level on page 986
Related tasks:
Binding an application plan on page 975

Overriding the values that DB2 uses to resolve package lists
Table 155. Scope of CURRENT PACKAGE PATH (continued)

Example:
  SET CURRENT PACKAGE PATH = A,B
  CONNECT TO S2 ...
  SET CURRENT PACKAGE PATH = X,Y
  SELECT ... FROM T1 ...
What happens: The collections in PACKAGE PATH that are set at server S2 determine which package is invoked.

Example:
  SET CURRENT PACKAGE PATH
  SELECT ... FROM S2.QUAL.T1 ...
What happens: Three-part table name. On implicit connection to server S2, PACKAGE PATH at server S2 is inherited from the local server. The collections in PACKAGE PATH at server S2 determine which package is invoked.
Notes: 1. When CURRENT PACKAGE PATH is set at the requester (and not at the remote server), DB2 passes one collection at a time from the list of collections to the remote server until a package is found or until the end of the list. Each time a package is not found at the server, DB2 returns an error to the requester. The requester then sends the next collection in the list to the remote server.
Example
Suppose that you need to access data at a remote server CHICAGO, by using the following query:
SELECT * FROM CHICAGO.DSN8910.EMP
  WHERE EMPNO = '0001000';
This statement can be executed with DRDA access or DB2 private protocol access. The method of access depends on the DBPROTOCOL bind option that you specify when you bind your DBRMs into packages. DRDA is used by default if you do not specify a DBPROTOCOL bind option.

Recommendation: Specify DRDA as the DBPROTOCOL bind option.

If you bind the DBRM that contains the statement into a plan at the local DB2 and specify the bind option DBPROTOCOL(PRIVATE), you access the server by using DB2 private protocol access. If you bind the DBRM that contains the statement by using one of the following processes, you access the server using DRDA access:

Local-bind DRDA access process:
1. Bind the DBRM into a package at the local DB2 using the bind option DBPROTOCOL(DRDA).
2. Bind the DBRM into a package at the remote location (CHICAGO).
3. Bind the packages into a plan using bind option DBPROTOCOL(DRDA).

Remote-bind DRDA access process:
1. Bind the DBRM into a package at the remote location.
2. Bind the remote package and the DBRM into a plan at the local site, using the bind option DBPROTOCOL(DRDA).

In some cases you cannot use private protocol to access distributed data. The following examples require DRDA access.
Example
Suppose that you need to access data at a remote server CHICAGO, by using the following CONNECT and SELECT statements:
EXEC SQL CONNECT TO CHICAGO;
EXEC SQL SELECT * FROM DSN8910.EMP WHERE EMPNO = '0001000';
This example requires DRDA access and the correct binding procedure to work from a remote server. Before you can execute the query at location CHICAGO, you must bind the application as a remote package at the CHICAGO server. Before you can run the application, you must also bind a local package and a local plan with a package list that includes the local and remote package.
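A hypothetical bind sequence for this example might look like the following sketch; COLLA is an assumed collection name and PROGY an assumed DBRM member name:

BIND PACKAGE(COLLA) MEMBER(PROGY)
BIND PACKAGE(CHICAGO.COLLA) MEMBER(PROGY)
BIND PLAN(PLANX) PKLIST(COLLA.PROGY,CHICAGO.COLLA.PROGY)

The first subcommand creates the local package, the second creates the package at the CHICAGO server, and the third binds a plan whose package list names both packages.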
Example
Suppose that you need to call a stored procedure at the remote server ATLANTA, by using the following CONNECT and CALL statements:
EXEC SQL CONNECT TO ATLANTA; EXEC SQL CALL procedure_name (parameter_list);
This example requires DRDA access because private protocol does not support stored procedures. The parameter list is a list of host variables that is passed to the stored procedure and into which it returns the results of its execution. To execute, the stored procedure must already exist at the ATLANTA server.
DISCONNECT(CONDITIONAL) ends remote connections during a commit operation except when an open cursor defined as WITH HOLD is associated with the connection.

SQLRULES
  Use SQLRULES(DB2), explicitly or by default. SQLRULES(STD) applies the rules of the SQL standard to your CONNECT statements, so that CONNECT TO x is an error if you are already connected to x. Use STD only if you want that statement to return an error code.
  If your program selects LOB data from a remote location, and you bind the plan for the program with SQLRULES(DB2), the format in which you retrieve the LOB data with a cursor is restricted. After you open the cursor to retrieve the LOB data, you must retrieve all of the data using a LOB variable, or retrieve all of the data using a LOB locator variable. If the value of SQLRULES is STD, this restriction does not exist. If you intend to switch between LOB variables and LOB locators to retrieve data from a cursor, execute the SET SQLRULES=STD statement before you connect to the remote location.

CURRENTDATA
  Use CURRENTDATA(NO) to force block fetch for ambiguous cursors.

DBPROTOCOL
  Use DBPROTOCOL(PRIVATE) if you want DB2 to use DB2 private protocol access for accessing remote data that is specified with three-part names. Use DBPROTOCOL(DRDA) if you want DB2 to use DRDA access to access remote data that is specified with three-part names. You must bind a package at all locations whose names are specified in three-part names.
  The package value for the DBPROTOCOL option overrides the plan option. For example, if you specify DBPROTOCOL(DRDA) for a remote package and DBPROTOCOL(PRIVATE) for the plan, DB2 uses DRDA access when it accesses data at that location using a three-part name. If you do not specify any value for DBPROTOCOL, DB2 uses the value of DATABASE PROTOCOL on installation panel DSNTIP5.

ENCODING
  Use this option to control the encoding scheme that is used for static SQL statements in the plan and to set the initial value of the CURRENT APPLICATION ENCODING SCHEME special register.
  For applications that execute remotely and use explicit CONNECT statements, DB2 uses the ENCODING value for the plan. For applications that execute remotely and use implicit CONNECT statements, DB2 uses the ENCODING value for the package that is at the site where a statement executes.
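Pulling these plan options together, a distributed application might be bound with a subcommand like the following sketch; the plan name, collection name, and option values are illustrative only:

BIND PLAN(DISTPLAN) PKLIST(*.DISTCOLL.*) DISCONNECT(CONDITIONAL) SQLRULES(DB2) CURRENTDATA(NO) DBPROTOCOL(DRDA) ENCODING(EBCDIC)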
SQLERROR
  Use SQLERROR(CONTINUE) if you used SQL(ALL) when precompiling. That creates a package even if the bind process finds SQL errors, such as statements that are valid on the remote server but that the precompiler did not recognize. Otherwise, use SQLERROR(NOPACKAGE), explicitly or by default.

CURRENTDATA
  Use CURRENTDATA(NO) to force block fetch for ambiguous cursors.

OPTIONS
  When you make a remote copy of a package using BIND PACKAGE with the COPY option, use this option to control the default bind options that DB2 uses. Specify:
  COMPOSITE to cause DB2 to use any options you specify in the BIND PACKAGE command. For all other options, DB2 uses the options of the copied package. COMPOSITE is the default.
  COMMAND to cause DB2 to use the options you specify in the BIND PACKAGE command. For all other options, DB2 uses the defaults for the server on which the package is bound. This helps ensure that the server supports the options with which the package is bound.

DBPROTOCOL
  Use DBPROTOCOL(PRIVATE) if you want DB2 to use DB2 private protocol access for accessing remote data that is specified with three-part names. Use DBPROTOCOL(DRDA) if you want DB2 to use DRDA access to access remote data that is specified with three-part names. You must bind a package at all locations whose names are specified in three-part names.

ENCODING
  Use this option to control the encoding scheme that is used for static SQL statements in the package and to set the initial value of the CURRENT APPLICATION ENCODING SCHEME special register. The default ENCODING value for a package that is bound at a remote DB2 for z/OS server is the system default for that server. The system default is specified at installation time in the APPLICATION ENCODING field of panel DSNTIPF.
  For applications that execute remotely and use explicit CONNECT statements, DB2 uses the ENCODING value for the plan. For applications that execute remotely and use implicit CONNECT statements, DB2 uses the ENCODING value for the package that is at the site where a statement executes.

Related concepts:
Bind options for locks (DB2 Performance)
Related tasks:
BIND options for distributed applications (DB2 Performance)
Related reference:
BIND and REBIND options (DB2 Commands)
Conversion of DBRMs that are bound to a plan to DBRMs that are bound to a package
In future releases, you will need to bind all DBRMs into packages and bind the packages into a plan. You can execute the REBIND PLAN command with the COLLID option to convert plans with DBRMs into plans with a package list. You can use this technique for local applications only. If the plan that you specify already contains both DBRMs and a package list, the newly converted package entries are inserted at the front of the existing package list.
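For example, a hypothetical conversion might use a subcommand like the following, which binds the DBRMs of plan PLANA into packages in collection COLLA and gives the plan a package list that references them; the plan and collection names are illustrative only:

REBIND PLAN(PLANA) COLLID(COLLA)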
Procedure
To turn an existing plan into packages to run remotely, perform the following actions for each remote location:
1. Choose a name for a collection to contain all the packages in the plan, such as REMOTE1. (You can use more than one collection if you like, but one is enough.)
2. Assuming that the server is a DB2 system, at the remote location execute:
   a. GRANT CREATE IN COLLECTION REMOTE1 TO authorization-name;
   b. GRANT BINDADD TO authorization-name;
   where authorization-name is the owner of the package.
3. Bind each DBRM as a package at the remote location, using the instructions under Binding packages at a remote location on page 970. Before run time, the package owner must have all the data access privileges that are needed at the remote location. If the owner does not yet have those privileges when you are binding, use the VALIDATE(RUN) option. The option lets you create the package, even if the authorization checks fail. DB2 checks the privileges again at run time.
4. Bind a new application plan at your local DB2 system, using these options:
PKLIST (location-name.REMOTE1.*) CURRENTSERVER (location-name)
where location-name is the value of LOCATION, in SYSIBM.LOCATIONS at your local DB2 system, that denotes the remote location at which you intend to run. You do not need to bind any DBRMs directly to that plan; the package list is sufficient.
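For example, if the remote location is named PARIS in SYSIBM.LOCATIONS, the two bind steps might look like the following sketch; the DBRM member name PROGX and the plan name REMPLAN are hypothetical:

BIND PACKAGE(PARIS.REMOTE1) MEMBER(PROGX) VALIDATE(RUN)
BIND PLAN(REMPLAN) PKLIST(PARIS.REMOTE1.*) CURRENTSERVER(PARIS)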
Results
When you now run the existing application at your local DB2 system using the new application plan, these things happen:
v You connect immediately to the remote location that is named in the CURRENTSERVER option.
v DB2 searches for the package in the collection REMOTE1 at the remote location.
v Any UPDATE, DELETE, or INSERT statements in your application affect tables at the remote location.
v Any results from SELECT statements are returned to your existing application program, which processes them as though they came from your local DB2 system.
Procedure
To override the construction of the consistency token by DB2: Use the LEVEL (aaaa) option. DB2 uses the value that you choose for aaaa to generate the consistency token. Although this method is not recommended for
general use and the DSNH CLIST or the DB2 Program Preparation panels do not support it, this method enables you to perform the following actions:
1. Change the source code (but not the SQL statements) in the DB2 precompiler output of a bound program.
2. Compile and link-edit the changed program.
3. Run the application without rebinding a plan or package.
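For example, the precompile step of your JCL might specify the option as shown in the following sketch; DSNHPC is the DB2 precompiler, the host language is assumed to be COBOL, and the level identifier V1R1 is an arbitrary value that you choose. Keeping the same LEVEL value across a recompile keeps the consistency token unchanged, so the modified load module still matches the existing package:

//PC    EXEC PGM=DSNHPC,
//      PARM='HOST(COBOL),LEVEL(V1R1)'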
Table 156. How DYNAMICRULES and the run time environment determine dynamic SQL statement behavior (continued)

DYNAMICRULES value | Behavior of dynamic SQL statements in a stand-alone program environment | Behavior of dynamic SQL statements in a user-defined function or stored procedure environment
DEFINEBIND | Bind behavior | Define behavior
DEFINERUN  | Run behavior  | Define behavior
INVOKEBIND | Bind behavior | Invoke behavior
INVOKERUN  | Run behavior  | Invoke behavior
Note: The BIND and RUN values can be specified for packages and plans. The other values can be specified only for packages.
The following table shows the dynamic SQL attribute values for each type of dynamic SQL behavior.
Table 157. Definitions of dynamic SQL statement behaviors

Dynamic SQL attribute | Bind behavior | Run behavior | Define behavior | Invoke behavior
Authorization ID | Plan or package owner | Current SQLID | User-defined function or stored procedure owner | Authorization ID of invoker1
Default qualifier for unqualified objects | Bind OWNER or QUALIFIER value | Current SQLID | User-defined function or stored procedure owner | Authorization ID of invoker
CURRENT SQLID2 | Not applicable | Applies | Not applicable | Not applicable
Source for application programming options | Determined by DSNHDECP parameter DYNRULS3 | Install panel DSNTIPF | Determined by DSNHDECP parameter DYNRULS3 | Determined by DSNHDECP parameter DYNRULS3
Can execute GRANT, REVOKE, CREATE, ALTER, DROP, RENAME? | No | Yes | No | No
1. If the invoker is the primary authorization ID of the process or the CURRENT SQLID value, secondary authorization IDs are also checked if they are needed for the required authorization. Otherwise, only one ID, the ID of the invoker, is checked for the required authorization.
2. DB2 uses the value of CURRENT SQLID as the authorization ID for dynamic SQL statements only for plans and packages that have run behavior. For the other dynamic SQL behaviors, DB2 uses the authorization ID that is associated with each dynamic SQL behavior, as shown in this table. The value to which CURRENT SQLID is initialized is independent of the dynamic SQL behavior. For stand-alone programs, CURRENT SQLID is initialized to the primary authorization ID. You can execute the SET CURRENT SQLID statement to change the value of CURRENT SQLID for packages with any dynamic SQL behavior, but DB2 uses the CURRENT SQLID value only for plans and packages with run behavior.
3. The value of DSNHDECP parameter DYNRULS, which you specify in field USE FOR DYNAMICRULES in installation panel DSNTIP4, determines whether DB2 uses the SQL statement processing options or the application programming defaults for dynamic SQL statements. See Options for SQL statement processing on page 958 for more information.
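For example, to make dynamic SQL in a package run with bind behavior, so that the package owner's authorization ID and qualifier are used, the package might be bound as shown in the following sketch; the collection and member names are hypothetical:

BIND PACKAGE(COLLA) MEMBER(PROGY) DYNAMICRULES(BIND)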
Authorization cache
DB2 uses the authorization cache for caching the authorization IDs of those users that are running a plan. When DB2 determines that you have the EXECUTE privilege on a plan, package collection, stored procedure, or user-defined function, DB2 can cache your authorization ID. When you run the plan, package, stored procedure, or user-defined function, DB2 can check your authorization more quickly.
Related reference:
Protection panel: DSNTIPP (DB2 Installation and Migration)
PACKAGE AUTH CACHE field (CACHEPAC subsystem parameter) (DB2 Installation and Migration)
You could create packages and plans using the following bind statements:
BIND PACKAGE(PKGB) MEMBER(PKGB)
BIND PLAN(MAIN) MEMBER(MAIN,PLANA) PKLIST(*.PKGB.*)
BIND PLAN(PLANC) MEMBER(PLANC)
The following scenario illustrates thread association for a task that runs program MAIN. Suppose that you execute the following SQL statements in the indicated order. For each SQL statement, the resulting event is described.
1. EXEC CICS START TRANSID(MAIN)
   TRANSID(MAIN) executes program MAIN.
2. EXEC SQL SELECT...
   Program MAIN issues an SQL SELECT statement. The default dynamic plan exit routine selects plan MAIN.
3. EXEC CICS LINK PROGRAM(PROGA)
   Program PROGA is invoked.
4. EXEC SQL SELECT...
   DB2 does not call the default dynamic plan exit routine, because the program does not issue a sync point. The plan is MAIN.
5. EXEC CICS LINK PROGRAM(PROGB)
   Program PROGB is invoked.
6. EXEC SQL SELECT...
   DB2 does not call the default dynamic plan exit routine, because the program does not issue a sync point. The plan is MAIN and the program uses package PKGB.
7. EXEC CICS SYNCPOINT
   DB2 calls the dynamic plan exit routine when the next SQL statement executes.
8. EXEC CICS LINK PROGRAM(PROGC)
   Program PROGC is invoked.
9. EXEC SQL SELECT...
   DB2 calls the default dynamic plan exit routine and selects PLANC.
10. EXEC SQL SET CURRENT SQLID = 'ABC'
    The CURRENT SQLID special register is assigned the value 'ABC'.
11. EXEC CICS SYNCPOINT
    DB2 does not call the dynamic plan exit routine when the next SQL statement executes because the previous statement modifies the special register CURRENT SQLID.
12. EXEC CICS RETURN
    Control returns to program PROGB.
13. EXEC SQL SELECT...

With packages, you probably do not need dynamic plan selection and its accompanying exit routine. A package that is listed within a plan is not accessed until it is executed. However, you can use dynamic plan selection and packages together, which can reduce the number of plans in an application and the effort to maintain the dynamic plan exit routine.
Rebinding an application
You need to rebind an application if you want to change any bind options. You also need to rebind an application when you make changes that affect the plan or package, such as creating an index, but you have not changed the SQL statements.
Rebinding a package
You need to rebind a package when you make changes that affect the package but that do not involve changes to the SQL statements. For example, if you create a new index, you need to rebind the package. If you change the SQL, you need to use the BIND PACKAGE command with the ACTION(REPLACE) option.
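For example, after creating a new index on a table that a package references, you might rebind the package with a subcommand like the following sketch; the collection and package names are hypothetical:

REBIND PACKAGE(COLLA.PROGY)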
The following table clarifies which packages are bound, depending on how you specify collection-id (coll-id), package-id (pkg-id), and version-id (ver-id) on the REBIND PACKAGE subcommand. REBIND PACKAGE does not apply to packages for which you do not have the BIND privilege. An asterisk (*) used as an identifier for collections, packages, or versions does not apply to packages at remote sites.
Table 159. Behavior of REBIND PACKAGE specification. "All" means all collections, packages, or versions at the local DB2 server for which the authorization ID that issues the command has the BIND privilege.

Input                   | Collections affected | Packages affected | Versions affected
*                       | all                  | all               | all
*.*.(*)                 | all                  | all               | all
*.*                     | all                  | all               | all
*.*.(ver-id)            | all                  | all               | ver-id
*.*.()                  | all                  | all               | empty string
coll-id.*               | coll-id              | all               | all
coll-id.*.(*)           | coll-id              | all               | all
coll-id.*.(ver-id)      | coll-id              | all               | ver-id
coll-id.*.()            | coll-id              | all               | empty string
coll-id.pkg-id.(*)      | coll-id              | pkg-id            | all
coll-id.pkg-id          | coll-id              | pkg-id            | empty string
coll-id.pkg-id.()       | coll-id              | pkg-id            | empty string
coll-id.pkg-id.(ver-id) | coll-id              | pkg-id            | ver-id
*.pkg-id.(*)            | all                  | pkg-id            | all
*.pkg-id                | all                  | pkg-id            | empty string
*.pkg-id.()             | all                  | pkg-id            | empty string
*.pkg-id.(ver-id)       | all                  | pkg-id            | ver-id
Example: The following example shows the options for rebinding a package at the remote location. The location name is SNTERSA. The collection is GROUP1, the package ID is PROGA, and the version ID is V1. The connection types shown in the REBIND subcommand replace connection types that are specified on the original BIND subcommand.
REBIND PACKAGE(SNTERSA.GROUP1.PROGA.(V1)) ENABLE(CICS,REMOTE)
You can use the asterisk on the REBIND subcommand for local packages, but not for packages at remote sites. Any of the following commands rebinds all versions of all packages in all collections, at the local DB2 system, for which you have the BIND privilege.
REBIND PACKAGE (*) REBIND PACKAGE (*.*) REBIND PACKAGE (*.*.(*))
Either of the following commands rebinds all versions of all packages in the local collection LEDGER for which you have the BIND privilege.
REBIND PACKAGE (LEDGER.*) REBIND PACKAGE (LEDGER.*.(*))
Either of the following commands rebinds the empty string version of the package DEBIT in all collections, at the local DB2 system, for which you have the BIND privilege.
REBIND PACKAGE (*.DEBIT) REBIND PACKAGE (*.DEBIT.())
Related reference:
BIND and REBIND options (DB2 Commands)
REBIND PACKAGE (DSN) (DB2 Commands)
Rebinding a plan
You need to rebind a plan when you make changes that affect the plan but that do not involve changes to the SQL statements. For example, if you create a new index, you need to rebind the plan. If you change the SQL, you need to use the BIND PLAN command with the ACTION(REPLACE) option.
Example: Rebinds the plan and drops the entire package list:
REBIND PLAN(PLANA) NOPKLIST
not rebound, and issue those subcommands, DB2 does not repeat any work that was already done and is not likely to run out of resources. For a description of the technique and several examples of its use, see Sample program to create REBIND subcommands for lists of plans and packages.
Sample program to create REBIND subcommands for lists of plans and packages
If you cannot use asterisks to identify a list of packages or plans that you want to rebind, you might be able to create the needed REBIND subcommands automatically, by using the sample program DSNTIAUL.

One situation in which this technique might be useful is when a resource becomes unavailable during a rebind of many plans or packages. DB2 normally terminates the rebind and does not rebind the remaining plans or packages. Later, however, you might want to rebind only the objects that remain to be rebound.

You can build REBIND subcommands for the remaining plans or packages by using DSNTIAUL to select the plans or packages from the DB2 catalog and to create the REBIND subcommands. You can then submit the subcommands through the DSN command processor, as usual. You might first need to edit the output from DSNTIAUL so that DSN can accept it as input. The CLIST DSNTEDIT can perform much of that task for you.

This section contains the following topics:
v Generating lists of REBIND commands
v Sample SELECT statements for generating REBIND commands on page 995
v Sample JCL for running lists of REBIND commands on page 997
Example: REBIND all versions of all packages without terminating because of unavailable resources.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
       CONCAT NAME CONCAT'.(*)) ',1,55)
  FROM SYSIBM.SYSPACKAGE;
Example: REBIND all plans bound before a given date and time.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
       CONCAT') ',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE BINDDATE <= 'yymmdd' OR
       (BINDDATE <= 'yymmdd' AND
        BINDTIME <= 'hhmmssth');
where yymmdd represents the date portion and hhmmssth represents the time portion of the timestamp string. If the date specified is after 2000, you need to include another condition that includes plans that were bound before year 2000:
WHERE BINDDATE >= '830101' OR
      BINDDATE <= 'yymmdd' OR
     (BINDDATE <= 'yymmdd' AND
      BINDTIME <= 'hhmmssth');
Example: REBIND all versions of all packages bound before a given date and time.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
       CONCAT NAME CONCAT'.(*)) ',1,55)
  FROM SYSIBM.SYSPACKAGE
  WHERE BINDTIME <= 'timestamp';
where timestamp is an ISO timestamp string. Example: REBIND all plans bound since a given date and time.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
       CONCAT') ',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE BINDDATE >= 'yymmdd' AND
        BINDTIME >= 'hhmmssth';
where yymmdd represents the date portion and hhmmssth represents the time portion of the timestamp string. Example: REBIND all versions of all packages bound since a given date and time.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
       CONCAT NAME CONCAT'.(*)) ',1,55)
  FROM SYSIBM.SYSPACKAGE
  WHERE BINDTIME >= 'timestamp';
where timestamp is an ISO timestamp string. Example: REBIND all plans bound within a given date and time range.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
       CONCAT') ',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE (BINDDATE >= 'yymmdd' AND
         BINDTIME >= 'hhmmssth') AND
        (BINDDATE <= 'yymmdd' AND
         BINDTIME <= 'hhmmssth');
where yymmdd represents the date portion and hhmmssth represents the time portion of the timestamp string. Example: REBIND all versions of all packages bound within a given date and time range.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
       CONCAT NAME CONCAT'.(*)) ',1,55)
  FROM SYSIBM.SYSPACKAGE
  WHERE BINDTIME >= 'timestamp1' AND
        BINDTIME <= 'timestamp2';
Example: REBIND all plans bound with ISOLATION level of cursor stability.
SELECT SUBSTR('REBIND PLAN('CONCAT NAME
       CONCAT') ',1,45)
  FROM SYSIBM.SYSPLAN
  WHERE ISOLATION = 'S';
Example: REBIND all versions of all packages that allow CPU and/or I/O parallelism.
SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
       CONCAT NAME CONCAT'.(*)) ',1,55)
  FROM SYSIBM.SYSPACKAGE
  WHERE DEGREE = 'ANY';
The date and time period has the following format:
YYYY  The four-digit year. For example: 2008.
MM    The two-digit month, which can be a value between 01 and 12.
DD    The two-digit day, which can be a value between 01 and 31.
hh    The two-digit hour, which can be a value between 01 and 24.
mm    The two-digit minute, which can be a value between 00 and 59.
ss    The two-digit second, which can be a value between 00 and 59.
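For example, to select packages that were bound at or after 9 a.m. on June 15, 2008, the predicate might be coded as shown in the following sketch, using the format described above:

WHERE BINDTIME >= '2008-06-15-09.00.00'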
//REBINDS  JOB MSGLEVEL=(1,1),CLASS=A,MSGCLASS=A,USER=SYSADM,
//         REGION=1024K
//*********************************************************************/
//SETUP    EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) PARMS('SQL') LIB('DSN910.RUNLIB.LOAD')
 END
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSPUNCH DD  SYSOUT=*
//SYSREC00 DD  DSN=SYSADM.SYSTSIN.DATA,
//         UNIT=SYSDA,DISP=SHR
//*********************************************************************/
//*
//*  GENERATE THE SUBCOMMANDS TO REBIND ALL PACKAGES BOUND IN YYYY
//*
//*********************************************************************/
//SYSIN    DD  *
 SELECT SUBSTR('REBIND PACKAGE('CONCAT COLLID CONCAT'.'
        CONCAT NAME CONCAT'.(*)) ',1,55)
   FROM SYSIBM.SYSPACKAGE
   WHERE BINDTIME >= 'YYYY-MM-DD-hh.mm.ss' AND
         BINDTIME <= 'YYYY-MM-DD-hh.mm.ss';
/*
//*********************************************************************/
//*
//*  STRIP THE BLANKS OUT OF THE REBIND SUBCOMMANDS
//*
//*********************************************************************/
//STRIP    EXEC PGM=IKJEFT01
//SYSPROC  DD  DSN=SYSADM.DSNCLIST,DISP=SHR
//SYSTSPRT DD  SYSOUT=*
//SYSPRINT DD  SYSOUT=*
//SYSOUT   DD  SYSOUT=*
//SYSTSIN  DD  *
 DSNTEDIT SYSADM.SYSTSIN.DATA
//SYSIN    DD  DUMMY
/*
//*********************************************************************/
//*
//*  PUT IN THE DSN COMMAND STATEMENTS
//*
//*********************************************************************/
//EDIT     EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 EDIT SYSADM.SYSTSIN.DATA DATA NONUM
 TOP
 INSERT DSN SYSTEM(DSN)
 BOTTOM
 INSERT END
 TOP
 LIST * 99999
 END SAVE
/*
//*********************************************************************/
//*
//*  EXECUTE THE REBIND PACKAGE SUBCOMMANDS THROUGH DSN
//*
//*********************************************************************/
//LOCAL    EXEC PGM=IKJEFT01
//DBRMLIB  DD  DSN=DSN910.DBRMLIB.DATA,
//         DISP=SHR
//SYSTSPRT DD  SYSOUT=*
//SYSPRINT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSTSIN  DD  DSN=SYSADM.SYSTSIN.DATA,
//         UNIT=SYSDA,DISP=SHR
/*
The following example shows sample JCL that rebinds, with DEGREE(ANY), all plans that were bound without specifying the DEGREE keyword on BIND.
//REBINDS  JOB MSGLEVEL=(1,1),CLASS=A,MSGCLASS=A,USER=SYSADM,
//         REGION=1024K
//*********************************************************************/
//SETUP    EXEC TSOBATCH
//SYSPRINT DD  SYSOUT=*
//SYSPUNCH DD  SYSOUT=*
//SYSREC00 DD  DSN=SYSADM.SYSTSIN.DATA,
//         UNIT=SYSDA,DISP=SHR
//*********************************************************************/
//*
//*  REBIND ALL PLANS THAT WERE BOUND WITHOUT SPECIFYING THE DEGREE
//*  KEYWORD ON BIND WITH DEGREE(ANY)
//*
//*********************************************************************/
//SYSTSIN  DD  *
 DSN S(DSN)
 RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) PARM(SQL)
 END
//SYSIN    DD  *
 SELECT SUBSTR('REBIND PLAN('CONCAT NAME
        CONCAT') DEGREE(ANY) ',1,45)
   FROM SYSIBM.SYSPLAN
   WHERE DEGREE = ' ';
/*
//*********************************************************************/
//*
//*  PUT IN THE DSN COMMAND STATEMENTS
//*
//*********************************************************************/
//EDIT     EXEC PGM=IKJEFT01
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 EDIT SYSADM.SYSTSIN.DATA DATA NONUM
 TOP
 INSERT DSN S(DSN)
 BOTTOM
 INSERT END
 TOP
 LIST * 99999
 END SAVE
/*
//*********************************************************************/
//*
//*  EXECUTE THE REBIND SUBCOMMANDS THROUGH DSN
//*
//*********************************************************************/
//REBIND   EXEC PGM=IKJEFT01
//STEPLIB  DD  DSN=SYSADM.TESTLIB,DISP=SHR
//         DD  DSN=DSN910.SDSNLOAD,DISP=SHR
//DBRMLIB  DD  DSN=SYSADM.DBRMLIB.DATA,DISP=SHR
//SYSTSPRT DD  SYSOUT=*
//SYSUDUMP DD  SYSOUT=*
//SYSPRINT DD  SYSOUT=*
//SYSOUT   DD  SYSOUT=*
//SYSTSIN  DD  DSN=SYSADM.SYSTSIN.DATA,DISP=SHR
//SYSIN    DD  DUMMY
/*
Automatic rebinding
Automatic rebind might occur when an authorized user invokes a plan or package in certain situations. These situations include when the attributes of the data on which the plan or package depends change, or when the environment in which the package executes changes. Whether the automatic rebind occurs depends on the value of the ABIND subsystem parameter.

In general, the option values that are used for an automatic rebind are the values that were used during the most recent bind process. However, if an option is no longer supported, the automatic rebind process substitutes a supported option.
If a package has previous or original copies as a result of rebinding with the PLANMGMT(BASIC) or PLANMGMT(EXTENDED) options or having the PLANMGMT subsystem parameter set to BASIC or EXTENDED, those copies are not affected by automatic rebind. Automatic rebind replaces only the current copy.

In most cases, DB2 marks a plan or package that needs to be automatically rebound as invalid. A few common situations in which DB2 marks a plan or package as invalid are:
v When a package is dropped
v When a plan depends on the execute privilege of a package that is dropped
v When a table, index, or view on which the plan or package depends is dropped
v When the authorization of the owner to access any of those objects is revoked
v When the authorization to execute a stored procedure is revoked from a plan or package owner, and the plan or package uses the CALL procedure-name form of the CALL statement to call the stored procedure
v When a table on which the plan or package depends is altered to add a TIME, TIMESTAMP, or DATE column
v When a table is altered to add a self-referencing constraint or a constraint with a delete rule of SET NULL or CASCADE
v When the limit key value of a partitioned index on which the plan or package depends is altered
v When the definition of an index on which the plan or package depends is altered from NOT PADDED to PADDED
v When the definition of an index on which the plan or package depends is altered from PADDED to NOT PADDED
v When the AUDIT attribute of a table on which the plan or package depends is altered
v When the length attribute of a CHAR, VARCHAR, GRAPHIC, VARGRAPHIC, BINARY, or VARBINARY column in a table on which the plan or package depends is altered
v When the data type, precision, or scale of a column in a table on which the plan or package depends is altered
v When a plan or package depends on a view that DB2 cannot regenerate after a column in the underlying table is altered
v When a created temporary table on which the plan or package depends is altered to add a column
v When a user-defined function on which the plan or package depends is altered
v When a column is renamed in a table on which a plan or package is dependent

Whether a plan or package is valid is recorded in column VALID of catalog tables SYSPLAN and SYSPACKAGE.
In the following cases, DB2 automatically rebinds a plan or package that has not been marked as invalid if the ABIND subsystem parameter is set to YES (the default):
v A plan or package that is bound on a release of DB2 that is more recent than the release in which it is being run. This situation can happen in a data sharing environment or after a DB2 subsystem has fallen back to a previous release of DB2.
v A plan or package that was bound prior to DB2 Version 4 Release 1. Plans and packages that are bound prior to Version 4 Release 1 are automatically rebound when they are run on the current release of DB2.
v A plan or package that has a location dependency and runs at a location other than the one at which it was bound. This situation can happen when members of a data sharing group are defined with location names, and a package runs on a different member from the one on which it was bound.
In the following cases, DB2 automatically rebinds a plan or package that has not been marked as invalid if the ABIND subsystem parameter is set to COEXIST:
v The subsystem on which the plan or package runs is in a data sharing group.
v The plan or package was previously bound on the current DB2 release and is now running on the previous DB2 release.

If the ABIND subsystem parameter is set to NO and you attempt to execute a plan or package that requires a rebind, but cannot be automatically rebound, DB2 returns an error.

DB2 marks a plan or package as inoperative if an automatic rebind fails. Whether a plan or package is operative is recorded in column OPERATIVE of SYSPLAN and SYSPACKAGE.

Whether EXPLAIN runs during automatic rebind depends on the value of the field EXPLAIN PROCESSING on installation panel DSNTIPO, and on whether you specified EXPLAIN(YES). Automatic rebind fails for all EXPLAIN errors except PLAN_TABLE not found.

The SQLCA is not available during automatic rebind. Therefore, if you encounter lock contention during an automatic rebind, DSNT501I messages cannot accompany any DSNT376I messages that you receive. To see the matching DSNT501I messages, you must issue the subcommand REBIND PLAN or REBIND PACKAGE.
If an autobind occurs while DB2 is running in ACCESS(MAINT) mode, the autobind runs under the authorization ID SYSOPR. If SYSOPR is not defined as an installation SYSOPR authorization ID, the autobind fails.

Related reference:
AUTO BIND field (ABIND subsystem parameter) (DB2 Installation and Migration)
CURRENT RULES determines the SQL rules, DB2 or SQL standard, that apply to SQL behavior at run time. For example, the value in CURRENT RULES affects the behavior of defining check constraints by issuing the ALTER TABLE statement on a populated table:
v If CURRENT RULES has a value of STD and no existing rows in the table violate the check constraint, DB2 adds the constraint to the table definition. Otherwise, an error occurs and DB2 does not add the check constraint to the table definition. If the table contains data and is already in a check pending status, the ALTER TABLE statement fails.
v If CURRENT RULES has a value of DB2, DB2 adds the constraint to the table definition, defers the enforcing of the check constraints, and places the table space or partition in CHECK-pending status.

You can use the statement SET CURRENT RULES to control the action that the statement ALTER TABLE takes. Assuming that the value of CURRENT RULES is initially STD, the following SQL statements change the SQL rules to DB2, add a check constraint, defer validation of that constraint, place the table in CHECK-pending status, and restore the rules to STD.
EXEC SQL SET CURRENT RULES = DB2; EXEC SQL ALTER TABLE DSN8910.EMP ADD CONSTRAINT C1 CHECK (BONUS <= 1000.0); EXEC SQL SET CURRENT RULES = STD;
See Check constraints on page 444 for information about check constraints.

You can also use CURRENT RULES in host variable assignments. For example, if you want to store the value of the CURRENT RULES special register at a particular point in time, you can assign the value to a host variable, as in the following statement:
SET :XRULE = CURRENT RULES;
You can also use CURRENT RULES as the argument of a search-condition. For example, the following statement retrieves rows where the COL1 column contains the same value as the CURRENT RULES special register.
SELECT * FROM SAMPTBL WHERE COL1 = CURRENT RULES;
The following figure illustrates the program preparation process when you use the DB2 precompiler. After you process SQL statements in your source program by using the DB2 precompiler, you create a load module, possibly one or more packages, and an application plan. Creating a load module involves compiling the modified source code that is produced by the precompiler into an object program, and link-editing the object program to create a load module. Creating a package or an application plan, a process unique to DB2, involves binding one or more DBRMs, which are created by the DB2 precompiler, using the BIND PACKAGE or BIND PLAN commands.
The following figure illustrates the program preparation process when you use the DB2 coprocessor. The process is similar to the process for the DB2 precompiler, except that the DB2 coprocessor does not create modified source for your application program.
Figure 51. Program preparation with the DB2 coprocessor (figure not reproduced: it shows the source program producing a DBRM and an object program; the DBRM goes through bind package to produce a package, and the object program goes through link-edit to produce a load module)
You can specify values for the following parameters only in a DDITV02 data set:
CONNECTION_NAME,PLAN,PROG
If you use the DDITV02 data set and specify a subsystem member, the values in the DDITV02 DD statement override the values in the specified subsystem member. If you provide neither, DB2 abnormally terminates the application program with system abend code X'04E' and a unique reason code in register 15. DDITV02 is the DD name for a data set that has DCB options of LRECL=80 and RECFM=F or FB. A subsystem member is a member in the IMS procedure library. Its name is derived by concatenating the value of the SSM parameter to the value of the
IMSID parameter. You specify the SSM parameter and the IMSID parameter when you invoke the DLIBATCH procedure, which starts the DL/I batch processing environment.

The meanings of the input parameters are:

SSN
  Specifies the name of the DB2 subsystem. This value is required. You must specify a name in order to make a connection to DB2. The SSN value can be from one to four characters long.
  If the value in the SSN parameter is the name of an active subsystem in the data sharing group, the application attaches to that subsystem. If the SSN parameter value is not the name of an active subsystem, but the value is a group attachment name, the application attaches to an active DB2 subsystem in the data sharing group.

LIT
  Specifies a language interface token. DB2 requires a language interface token to route SQL statements when operating in the online IMS environment. Because a batch application program can connect to only one DB2 system, DB2 does not use the LIT value. The LIT value can be from zero to four characters long.
  Recommendation: Specify the LIT value as SYS1. You can omit the LIT value by entering SSN,,ESMT.

ESMT
  Specifies the name of the DB2 initialization module, DSNMIN10. This value is required. The ESMT value must be eight characters long.

RTT
  Specifies the resource translation table. This value is optional. The RTT can be from zero to eight characters long.

REO
  Specifies the region error option. This option determines what to do if DB2 is not operational or the plan is not available. The three options are:
  v R, the default, results in returning an SQL return code to the application program. The most common SQLCODE issued in this case is -923 (SQLSTATE '57015').
  v Q results in an abend in the batch environment; however, in the online environment, this value places the input message in the queue again.
  v A results in an abend in both the batch environment and the online environment.
  If the application program uses the XRST call, and if coordinated recovery is required on the XRST call, REO is ignored. In that case, the application program terminates abnormally if DB2 is not operational. The REO value can be from zero to one character long.

CRC
  Specifies the command recognition character. Because DB2 commands are not supported in the DL/I batch environment, the command recognition character is not used at this time. The CRC value can be from zero to one character long.
CONNECTION_NAME
  Represents the name of the job step that coordinates DB2 activities. This value is optional. If you do not specify this option, the connection name defaults are:
    Batch job: Job name
    Started task: Started task name
    TSO user: TSO authorization ID
  If a batch update job fails, you must use a separate job to restart the batch job. The connection name used in the restart job must be the same as the name that is used in the batch job that failed. Alternatively, if the default connection name is used, the restart job must have the same job name as the batch update job that failed.
  DB2 requires unique connection names. If two applications try to connect with the same connection name, the second application program fails to connect to DB2. The CONNECTION_NAME value can be from one to eight characters long.

PLAN
  Specifies the DB2 plan name. This value is optional. If you do not specify the plan name, the application program module name is checked against the optional resource translation table. If the resource translation table has a match, the translated name is used as the DB2 plan name. If no match exists in the resource translation table, the application program module name is used as the plan name. The PLAN value can be from zero to eight characters long.

PROG
  Specifies the application program name. This value is required. It identifies the application program that is to be loaded and to receive control. The PROG value can be from one to eight characters long.

Example: An example of the fields in the record is shown below:
DSN,SYS1,DSNMIN10,,R,-,BATCH001,DB2PLAN,PROGA
DB2 DL/I batch output: In an online IMS environment, DB2 sends unsolicited status messages to the master terminal operator (MTO) and records on indoubt processing and diagnostic information to the IMS log. In a batch environment, DB2 sends this information to the output data set that is specified in the DDOTV02 DD statement. Ensure that the output data set has DCB options of RECFM=V or VB, LRECL=4092, and BLKSIZE of at least LRECL + 4. If the DD statement is missing, DB2 issues the message IEC130I and continues processing without any output. You might want to save and print the data set, as the information is useful for diagnostic purposes. You can use the IMS module, DFSERA10, to print the variable-length data set records in both hexadecimal and character format.
Related concepts: Submitting work to be processed (DB2 Data Sharing Planning and Administration)
(Table not reproduced: it lists the DB2-supplied JCL procedures and the sample jobs that invoke them, such as DSNTEJ2A, DSNTEJ2C, DSNTEJ2D, DSNTEJ2E, DSNTEJ2F, DSNTEJ2P, and DSNTEJ63.)
1. You must customize these programs to invoke the procedures that are listed in this table. 2. This procedure demonstrates how you can prepare an object-oriented program that consists of two data sets or members, both of which contain SQL.
If you use the PL/I macro processor, you must not use the PL/I *PROCESS statement in the source to pass options to the PL/I compiler. You can specify the needed options on the PARM.PLI= parameter of the EXEC statement in the DSNHPLI procedure.
JCL to include the appropriate interface code when using the DB2-supplied JCL procedures
To include the proper interface code when you submit the JCL procedures, use an INCLUDE SYSLIB statement in your link-edit JCL. The statement should specify the correct language interface module for the environment.
The member must be DSNELI, except for FORTRAN, in which case the member must be DSNHFT.
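For example, for a TSO or batch program, the link-edit input might look like the following sketch, which names the DSNELI module described above:

//LKED.SYSIN DD *
  INCLUDE SYSLIB(DSNELI)
/*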
IMS
//LKED.SYSIN DD *
  INCLUDE SYSLIB(DFSLI000)
  ENTRY (specification)
/*
DFSLI000 is the module for DL/I batch attach. ENTRY specification varies depending on the host language. Include one of the following:
v DLITCBL, for COBOL applications
v PLICALLA, for PL/I applications
v The program name, for assembler language applications

Recommendation: For COBOL applications, specify the PSB linkage directly on the PROCEDURE DIVISION statement instead of on a DLITCBL entry point. When you specify the PSB linkage directly on the PROCEDURE DIVISION statement, you can either omit the ENTRY specification or specify the application program name instead of the DLITCBL entry point.

CICS
//LKED.SYSIN DD *
  INCLUDE SYSLIB(DSNCLI)
/*
Related tasks:
Making the CAF language interface (DSNALI) available on page 47
Compiling and link-editing an application on page 968
//STEPLIB  DD DISP=SHR,DSN=prefix.SDSNEXIT
//         DD DISP=SHR,DSN=prefix.SDSNLOAD
//DBRMLIB  DD DISP=OLD,DSN=USER.DBRMLIB.DATA(TESTC01)
//SYSCIN   DD DSN=&&DSNHOUT,DISP=(MOD,PASS),UNIT=SYSDA,
//            SPACE=(800,(500,500))
//SYSLIB   DD DISP=SHR,DSN=USER.SRCLIB.DATA
//SYSPRINT DD SYSOUT=*
//SYSTERM  DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSUT1   DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
//SYSUT2   DD SPACE=(800,(500,500),,,ROUND),UNIT=SYSDA
//SYSIN    DD DISP=SHR,DSN=USER.SRCLIB.DATA(TESTC01)
//*
//********************************************************************
//*** BIND THIS PROGRAM.
//********************************************************************
//BIND     EXEC PGM=IKJEFT01,
//         COND=((4,LT,PC))
//STEPLIB  DD DISP=SHR,DSN=prefix.SDSNEXIT
//         DD DISP=SHR,DSN=prefix.SDSNLOAD
//DBRMLIB  DD DISP=OLD,DSN=USER.DBRMLIB.DATA(TESTC01)
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSTSIN  DD *
 DSN S(DSN)
 BIND PLAN(TESTC01) MEMBER(TESTC01) ACTION(REP) RETAIN ISOLATION(CS)
 END
//********************************************************************
//* COMPILE THE COBOL PROGRAM
//********************************************************************
//CICS     EXEC DFHEITVL
//TRN.SYSIN DD DSN=&&DSNHOUT,DISP=(OLD,DELETE)
//LKED.SYSLMOD DD DSN=USER.RUNLIB.LOAD
//LKED.CICSLOAD DD DISP=SHR,DSN=prefix.SDFHLOAD
//LKED.SYSIN DD *
 INCLUDE CICSLOAD(DSNCLI)
 NAME TESTC01(R)
//********************************************************************
The procedure accounts for these steps:
Step 1. Precompile the program. The output of the DB2 precompiler becomes the input to the CICS command language translator.
Step 2. Bind the application plan.
Step 3. Call the CICS procedure to translate, compile, and link-edit a COBOL program. This procedure has several options that you need to consider.
Step 4. Reflect an application load library in the data set name of the SYSLMOD DD statement. You must include the name of this load library in the DFHRPL DD statement of the CICS run time JCL.
Step 5. Name the CICS load library that contains the module DSNCLI.
Step 6. Direct the linkage editor to include the CICS-DB2 language interface module (DSNCLI). In this example, the order of the various control sections (CSECTs) is of no concern because the structure of the procedure automatically satisfies any order requirements.

For more information about the procedure DFHEITVL, other CICS procedures, or CICS requirements for application programs, please see the appropriate CICS manual.

If you are preparing a particularly large or complex application, you can use another preparation method. For example, if your program requires four of your own link-edit include libraries, you cannot prepare the program with DB2I,
because DB2I limits the number of include libraries to three, plus language, IMS or CICS, and DB2 libraries. Therefore, you would need another preparation method. Be careful to use the correct language interface.
SSID: DSN
Select one of the following DB2 functions and press ENTER.

1  SPUFI                 (Process SQL statements)
2  DCLGEN                (Generate SQL and source language declarations)
3  PROGRAM PREPARATION   (Prepare a DB2 application program to run)
4  PRECOMPILE            (Invoke DB2 precompiler)
5  BIND/REBIND/FREE      (BIND, REBIND, or FREE plans or packages)
6  RUN                   (RUN an SQL program)
7  DB2 COMMANDS          (Issue DB2 commands)
8  UTILITIES             (Invoke DB2 utilities)
D  DB2I DEFAULTS         (Set global parameters)
X  EXIT                  (Leave DB2I)
Figure 52. Initiating program preparation through DB2I. Specify Program Preparation on the DB2I Primary Option Menu.
The following descriptions explain the functions on the DB2I Primary Option Menu.

1 SPUFI
  Lets you develop and execute one or more SQL statements interactively. For further information, see Executing SQL by using SPUFI on page 1067.
2 DCLGEN
  Lets you generate C, COBOL, or PL/I data declarations of tables. For further information, see DCLGEN (declarations generator) on page 125.
3 PROGRAM PREPARATION
  Lets you prepare and run an application program. For more information, see DB2 Program Preparation panel on page 1012.
4 PRECOMPILE
  Lets you convert embedded SQL statements into statements that your host language can process. For further information, see Precompile panel on page 1020.
5 BIND/REBIND/FREE
  Lets you bind, rebind, or free a package or application plan. For more information, see Bind/Rebind/Free Selection panel on page 1038.
6 RUN
  Lets you run an application program in a TSO or batch environment. For more information, see DB2I Run panel on page 1048.
7 DB2 COMMANDS
  Lets you issue DB2 commands.
8 UTILITIES
  Lets you call DB2 utility programs.
D DB2I DEFAULTS
  Lets you set DB2I defaults. See DB2I Defaults Panel 1 on page 1016.
X EXIT
  Lets you exit DB2I.
Bind Package panel (on page 1022)
  Lets you change many options when you bind a package. You can reach this panel directly from the DB2I Primary Option Menu or from the DB2 Program Preparation panel. If you reach this panel from the DB2 Program Preparation panel, many of the fields contain values from the Primary and Precompile panels.
Bind Plan panel (on page 1025)
  Lets you change options when you bind an application plan. You can reach this panel directly from the DB2I Primary Option Menu or as a part of the program preparation process. This panel also follows the Bind Package panels.
Table 161. DB2I panels used for program preparation (continued)

Defaults for Bind or Rebind Package or Plan panels (Defaults for Bind Package and Defaults for Rebind Package panels on page 1029)
Let you change the defaults for BIND or REBIND PACKAGE or PLAN.

System Connection Types panel on page 1033
Lets you specify a system connection type. This panel displays if you choose to enable or disable connections on the Bind or Rebind Package or Plan panels.

Panels for entering lists of values on page 1034
Let you enter or modify an unlimited number of values. A list panel looks similar to an ISPF edit session and lets you scroll and use a limited set of commands.

Program Prep: Compile, Prelink, Link, and Run panel (Program Preparation: Compile, Link, and Run panel on page 1035)
Lets you perform the last two steps in the program preparation process (compile and link-edit). This panel also lets you do the PL/I MACRO PHASE for programs that require this option. For TSO programs, the panel also lets you run programs.
On the DB2 Program Preparation panel, shown in the following figure, enter the name of the source program data set (this example uses SAMPLEPG.COBOL) and specify the other options you want to include. When finished, press ENTER to view the next panel.
SSID: DSN
Enter the following:
 1  INPUT DATA SET NAME .... ===> SAMPLEPG.COBOL
 2  DATA SET NAME QUALIFIER  ===> TEMP        (For building data set names)
 3  PREPARATION ENVIRONMENT  ===> FOREGROUND  (FOREGROUND, BACKGROUND, EDITJCL)
 4  RUN TIME ENVIRONMENT ... ===> TSO         (TSO, CAF, CICS, IMS, RRSAF)
 5  OTHER DSNH OPTIONS ..... ===>             (Optional DSNH keywords)

Select functions:                  Display panel?     Perform function?
 6  CHANGE DEFAULTS ........       ===> Y (Y/N)
 7  PL/I MACRO PHASE .......       ===> N (Y/N)       ===> N (Y/N)
 8  PRECOMPILE .............       ===> Y (Y/N)       ===> Y (Y/N)
 9  CICS COMMAND TRANSLATION                          ===> N (Y/N)
10  BIND PACKAGE ...........       ===> Y (Y/N)       ===> Y (Y/N)
11  BIND PLAN...............       ===> Y (Y/N)       ===> Y (Y/N)
12  COMPILE OR ASSEMBLE ....       ===> Y (Y/N)       ===> Y (Y/N)
13  PRELINK.................       ===> N (Y/N)       ===> N (Y/N)
14  LINK....................       ===> N (Y/N)       ===> Y (Y/N)
15  RUN.....................       ===> N (Y/N)       ===> Y (Y/N)
Figure 53. The DB2 Program Preparation panel. Enter the source program data set name and other options.
The following explains the functions on the DB2 Program Preparation panel and how to fill in the necessary fields in order to start program preparation. 1 INPUT DATA SET NAME Lets you specify the input data set name. The input data set name can be a PDS or a sequential data set, and can also include a member name. If you do not enclose the data set name in apostrophes, a standard TSO prefix (user ID) qualifies the data set name. The input data set name you specify is used to precompile, bind, link-edit, and run the program. 2 DATA SET NAME QUALIFIER Lets you qualify temporary data set names involved in the program preparation process. Use any character string from 1 to 8 characters that conforms to normal TSO naming conventions. (The default is TEMP.) For programs that you prepare in the background or that use EDITJCL for the PREPARATION ENVIRONMENT option, DB2 creates a data set named tsoprefix.qualifier.CNTL to contain the program preparation JCL. The name tsoprefix represents the prefix TSO assigns, and qualifier represents the value you enter in the DATA SET NAME QUALIFIER field. If a data set with this name already exists, DB2 deletes it. 3 PREPARATION ENVIRONMENT Lets you specify whether program preparation occurs in the foreground or background. You can also specify EDITJCL, in which case you are able to edit and then submit the job. Use: FOREGROUND to use the values you specify on the Program Preparation panel and to run immediately. BACKGROUND to create and submit a file containing a DSNH CLIST that runs immediately using the JOB control statement from either the DB2I Defaults panel or your site's SUBMIT exit. The file is saved.
EDITJCL to create and open a file containing a DSNH CLIST in edit mode. You can then submit the CLIST or save it.
4 RUN TIME ENVIRONMENT Lets you specify the environment (TSO, CAF, CICS, IMS, RRSAF) in which your program runs. All programs are prepared under TSO, but can run in any of the environments. If you specify CICS, IMS, or RRSAF, then you must set the RUN field to NO because you cannot run such programs from the Program Preparation panel. If you set the RUN field to YES, you can specify only TSO or CAF. (Batch programs also run under the TSO Terminal Monitor Program. You therefore need to specify TSO in this field for batch programs.)
5 OTHER DSNH OPTIONS Lets you specify a list of DSNH options that affect the program preparation process and that override options specified on other panels. If you are using CICS, these can include options you want to specify to the CICS command translator. If you specify options in this field, separate them by commas. You can continue listing options on the next line, but the total length of the option list can be no more than 70 bytes.
Fields 6 through 15 let you select the function to perform and choose whether to show the DB2I panels for the functions you select. Use Y for YES, or N for NO. If you are willing to accept default values for all the steps, enter N under Display panel? for all the other preparation panels listed. To make changes to the default values, enter Y under Display panel? for any panel you want to see. DB2I then displays each of the panels that you request. After all the panels display, DB2 proceeds with the steps involved in preparing your program to run.
Variables for all functions used during program preparation are maintained separately from variables entered from the DB2I Primary Option Menu. For example, the bind plan variables you enter on the Program Preparation panel are saved separately from those on any Bind Plan panel that you reach from the Primary Option Menu.
6 CHANGE DEFAULTS Lets you specify whether to change the DB2I defaults. Enter Y in the Display panel? field next to this option; otherwise enter N. Minimally, you should specify your subsystem identifier and programming language on the Defaults panel.
7 PL/I MACRO PHASE Lets you specify whether to display the Program Preparation: Compile, Link, and Run panel to control the PL/I macro phase by entering PL/I options in the OPTIONS field of that panel. That panel also displays for the COMPILE OR ASSEMBLE, LINK, and RUN options. This field applies to PL/I programs only. If your program is not a PL/I program or does not use the PL/I macro processor, specify N in the Perform function field for this option, which sets the Display panel? field to the default N.
8 PRECOMPILE Lets you specify whether to display the Precompile panel. To see this panel enter Y in the Display panel? field next to this option; otherwise enter N. 9 CICS COMMAND TRANSLATION Lets you specify whether to use the CICS command translator. This field applies to CICS programs only. IMS and TSO: If you run under TSO or IMS, ignore this step; this allows the Perform function field to default to N. CICS: If you are using CICS and have precompiled your program, you must translate your program using the CICS command translator. The command translator does not have a separate DB2I panel. You can specify translation options on the Other Options field of the DB2 Program Preparation panel, or in your source program if it is not an assembler program. Because you specified a CICS run time environment, the Perform function column defaults to Y. Command translation takes place automatically after you precompile the program. 10 BIND PACKAGE Lets you specify whether to display the Bind Package panel. To see it, enter Y in the Display panel? field next to this option; otherwise, enter N. 11 BIND PLAN Lets you specify whether to display the Bind Plan panel. To see it, enter Y in the Display panel? field next to this option; otherwise, enter N. 12 COMPILE OR ASSEMBLE Lets you specify whether to display the Program Preparation: Compile, Link, and Run panel. To see this panel enter Y in the Display panel? field next to this option; otherwise, enter N. 13 PRELINK Lets you use the prelink utility to make your C, C++, or Enterprise COBOL for z/OS program reentrant. This utility concatenates compile-time initialization information from one or more text decks into a single initialization unit. To use the utility, enter Y in the Display panel? field next to this option; otherwise, enter N. If you request this step, then you must also request the compile step and the link-edit step. 14 LINK Lets you specify whether to display the Program Preparation: Compile, Link, and Run panel. To see it, enter Y in the Display panel? field next to this option; otherwise, enter N. If you specify Y in the Display panel? field for the COMPILE OR ASSEMBLE option, you do not need to make any changes to this field; the panel displayed for COMPILE OR ASSEMBLE is the same as the panel displayed for LINK. You can make the changes you want to affect the link-edit step at the same time you make the changes to the compile step. 15 RUN Lets you specify whether to run your program. The RUN option is available only if you specify TSO or CAF for RUN TIME ENVIRONMENT.
If you specify Y in the Display panel? field for the COMPILE OR ASSEMBLE or LINK option, you can specify N in this field, because the panel displayed for COMPILE OR ASSEMBLE and for LINK is the same as the panel displayed for RUN. IMS and CICS: IMS and CICS programs cannot run using DB2I. If you are using IMS or CICS, use N in these fields. TSO and batch: If you are using TSO and want to run your program, you must enter Y in the Perform function column next to this option. You can also indicate that you want to specify options and values to affect the running of your program, by entering Y in the Display panel column. Pressing ENTER takes you to the first panel in the series you specified, in this example to the DB2I Defaults panel. If, at any point in your progress from panel to panel, you press the END key, you return to this first panel, from which you can change your processing specifications. Asterisks (*) in the Display panel? column of rows 7 through 14 indicate which panels you have already examined. You can see a panel again by writing a Y over an asterisk. Related reference: Bind Package panel on page 1022 Bind Plan panel on page 1025 DB2I Defaults Panel 1 Defaults for Bind Package and Defaults for Rebind Package panels on page 1029 Defaults for Bind Plan and Defaults for Rebind Plan panels on page 1031 Precompile panel on page 1020 Program Preparation: Compile, Link, and Run panel on page 1035 DSNH (TSO CLIST) (DB2 Commands) Language Environment Programming Guide (z/OS Language Environment Programming Guide)
Change defaults as desired:
 1  DB2 NAME .............   ===> DSN       (Subsystem identifier)
 2  DB2 CONNECTION RETRIES   ===> 0         (How many retries for DB2 connection)
 3  APPLICATION LANGUAGE     ===> IBMCOB    (ASM, C, CPP, IBMCOB, FORTRAN, PLI)
 4  LINES/PAGE OF LISTING    ===> 60        (A number from 5 to 999)
 5  MESSAGE LEVEL ........   ===> I         (Information, Warning, Error, Severe)
 6  SQL STRING DELIMITER     ===> DEFAULT   (DEFAULT, or ")
 7  DECIMAL POINT ........   ===> .         (. or ,)
 8  STOP IF RETURN CODE >=   ===> 8         (Lowest terminating return code)
 9  NUMBER OF ROWS           ===> 20        (For ISPF Tables)
10  AS USER                  ===>           (User ID to associate with trusted connection)
The following explains the fields on DB2I Defaults Panel 1. 1 DB2 NAME Lets you specify the DB2 subsystem that processes your DB2I requests. If you specify a different DB2 subsystem, its identifier displays in the SSID (subsystem identifier) field located at the top, right side of your screen. The default is DSN. 2 DB2 CONNECTION RETRIES Lets you specify the number of additional times to attempt to connect to DB2, if DB2 is not up when the program issues the DSN command. The program preparation process does not use this option. Use a number from 0 to 120. The default is 0. Connections are attempted at 30-second intervals. 3 APPLICATION LANGUAGE Lets you specify the default programming language for your application program. You can specify any of the following: ASM For High Level Assembler/z/OS C For C language CPP For C++ IBMCOB For Enterprise COBOL for z/OS. This option is the default. FORTRAN For VS Fortran PLI For PL/I If you specify IBMCOB, DB2 prompts you for more COBOL defaults on panel DSNEOP02. See DB2I Defaults Panel 2 on page 1019. You cannot specify FORTRAN for IMS or CICS programs. 4 LINES/PAGE OF LISTING Lets you specify the number of lines to print on each page of listing or SPUFI output. The default is 60. 5 MESSAGE LEVEL Lets you specify the lowest level of message to return to you during the BIND phase of the preparation process. Use:
I  all information, warning, error, and severe error messages
W  warning, error, and severe error messages
E  error and severe error messages
S  severe error messages only
6 SQL STRING DELIMITER Lets you specify the symbol used to delimit a string in SQL statements in COBOL programs. This option is valid only when the application language is IBMCOB. Use:
DEFAULT To use the default defined at installation time
' For an apostrophe
" For a quotation mark
7 DECIMAL POINT Lets you specify how your host language source program represents decimal separators and how SPUFI displays decimal separators in its output. Use a comma (,) or a period (.). The default is a period (.).
8 STOP IF RETURN CODE >= Lets you specify the smallest value of the return code (from precompile, compile, link-edit, or bind) that will prevent later steps from running. Use:
4 To stop on warnings and more severe errors.
8 To stop on errors and more severe errors.
The default is 8.
9 NUMBER OF ROWS Lets you specify the default number of input entry rows to generate on the initial display of ISPF panels. The number of rows with non-blank entries determines the number of rows that appear on later displays.
10 AS USER Lets you specify a user ID to associate with the trusted connection for the current DB2I session. DB2 establishes the trusted connection for the user that you specify if the following conditions are true:
v The primary authorization ID that DB2 obtains after running the connection exit is allowed to use the trusted connection without authentication.
v The security label, if defined either implicitly or explicitly in the trusted context for the user, is defined in RACF for the user.
After DB2 establishes the trusted connection, the primary authorization ID, any secondary authorization IDs, any role, and any security label that is associated with the user ID that is specified in the AS USER field are used for the trusted connection. DB2 uses this security label to verify multilevel security for the user. If the primary authorization ID that is associated with the user ID that is specified in the AS USER field is not allowed to use the trusted connection or requires authentication information, the connection request fails. If DB2 cannot verify the security label, the connection request also fails. The value that you enter in this field is retained only for the length of the DB2I session. The field is reset to blank when you exit DB2I.
Suppose that the default programming language is PL/I and the default number of lines per page of program listing is 60. Your program is in COBOL, so you want to change field 3, APPLICATION LANGUAGE. You also want to print 80 lines to the page, so you need to change field 4, LINES/PAGE OF LISTING, as well. Figure 54 on page 1017 shows the entries that you make in DB2I Defaults Panel 1 to make these changes. In this case, pressing ENTER takes you to DB2I Defaults Panel 2.
Change defaults as desired:
 1  DB2I JOB STATEMENT:                     (Optional if your site has a SUBMIT exit)
    ===> //USRT001A JOB (ACCOUNT),NAME
    ===> //*
    ===> //*
    ===> //*

    COBOL DEFAULTS:                         (For IBMCOB)
 2  COBOL STRING DELIMITER ===> DEFAULT     (DEFAULT, or ")
 3  DBCS SYMBOL FOR DCLGEN ===> G           (G/N - Character in PIC clause)
1 DB2I JOB STATEMENT Lets you change your default job statement. Specify a job control statement, and optionally, a JOBLIB statement to use either in the background or the EDITJCL program preparation environment. Use a JOBLIB statement to specify run time libraries that your application requires. If your program has a SUBMIT exit routine, DB2 uses that routine. If that routine builds a job control statement, you can leave this field blank. 2 COBOL STRING DELIMITER Lets you specify the symbol used to delimit a string in a COBOL statement in a COBOL application. Use: DEFAULT To use the default defined at install time ' For an apostrophe " For a quotation mark Leave this field blank to accept the default value. 3 DBCS SYMBOL FOR DCLGEN Lets you enter either G (the default) or N, to specify whether DCLGEN generates a picture clause that has the form PIC G(n) DISPLAY-1 or PIC N(n). Leave this field blank to accept the default value. Pressing ENTER takes you to the next panel you specified on the DB2 Program Preparation panel, in this case, to the Precompile panel.
Precompile panel
After you set the DB2I defaults, you can precompile your application. You can reach the Precompile panel by specifying it as a part of the program preparation process from the DB2 Program Preparation panel. Or you can reach it directly from the DB2I Primary Option Menu. The way you choose to reach the panel determines the default values of the fields it contains. The following figure shows the Precompile panel.
PRECOMPILE
SSID: DSN
Enter precompiler data sets:
 1  INPUT DATA SET .... ===> SAMPLEPG.COBOL
 2  INCLUDE LIBRARY ... ===> SRCLIB.DATA
 3  DSNAME QUALIFIER .. ===> TEMP           (For building data set names)
 4  DBRM DATA SET ..... ===>

Enter processing options as desired:
 5  WHERE TO PRECOMPILE ===> FOREGROUND
 6  VERSION ........... ===>
 7  OTHER OPTIONS ..... ===>
Figure 56. The Precompile panel. Specify the include library, if any, that your program should use, and any other options you need.
The following explains the functions on the Precompile panel, and how to enter the fields for preparing to precompile.
1 INPUT DATA SET Lets you specify the data set name of the source program and SQL statements to precompile. If you reached this panel through the DB2 Program Preparation panel, this field contains the data set name specified there. You can override it on this panel. If you reached this panel directly from the DB2I Primary Option Menu, you must enter the data set name of the program you want to precompile. The data set name can include a member name. If you do not enclose the data set name with apostrophes, a standard TSO prefix (user ID) qualifies the data set name.
2 INCLUDE LIBRARY Lets you enter the name of a library containing members that the precompiler should include. These members can contain output from DCLGEN. If you do not enclose the name in apostrophes, a standard TSO prefix (user ID) qualifies the name. You can request additional INCLUDE libraries by entering DSNH CLIST parameters of the form PnLIB(dsname), where n is 2, 3, or 4, in the OTHER OPTIONS field of this panel or in the OTHER DSNH OPTIONS field of the Program Preparation panel.
3 DSNAME QUALIFIER Lets you specify a character string that qualifies temporary data set names during precompile. Use any character string from 1 to 8 characters in length that conforms to normal TSO naming conventions.
If you reached this panel through the DB2 Program Preparation panel, this field contains the data set name qualifier specified there. You can override it on this panel. If you reached this panel from the DB2I Primary Option Menu, you can either specify a DSNAME QUALIFIER or let the field take its default value, TEMP. IMS and TSO: For IMS and TSO programs, DB2 stores the precompiled source statements (to pass to the compile or assemble step) in a data set named tsoprefix.qualifier.suffix. A data set named tsoprefix.qualifier.PCLIST contains the precompiler print listing. For programs prepared in the background or that use the PREPARATION ENVIRONMENT option EDITJCL (on the DB2 Program Preparation panel), a data set named tsoprefix.qualifier.CNTL contains the program preparation JCL. In these examples, tsoprefix represents the prefix TSO assigns, often the same as the authorization ID. qualifier represents the value entered in the DSNAME QUALIFIER field. suffix represents the output name, which is one of the following: COBOL, FORTRAN, C, PLI, ASM, DECK, CICSIN, OBJ, or DATA. In the Precompile Panel that is shown above, the data set tsoprefix.TEMP.COBOL contains the precompiled source statements, and tsoprefix.TEMP.PCLIST contains the precompiler print listing. If data sets with these names already exist, then DB2 deletes them. CICS: For CICS programs, the data set tsoprefix.qualifier.suffix receives the precompiled source statements in preparation for CICS command translation. If you do not plan to do CICS command translation, the source statements in tsoprefix.qualifier.suffix, are ready to compile. The data set tsoprefix.qualifier.PCLIST contains the precompiler print listing. When the precompiler completes its work, control passes to the CICS command translator. Because there is no panel for the translator, translation takes place automatically. The data set tsoprefix.qualifier.CXLIST contains the output from the command translator. 4 DBRM DATA SET Lets you name the DBRM library data set for the precompiler output. The data set can also include a member name. When you reach this panel, the field is blank. When you press ENTER, however, the value contained in the DSNAME QUALIFIER field of the panel, concatenated with DBRM, specifies the DBRM data set: qualifier.DBRM. You can enter another data set name in this field only if you allocate and catalog the data set before doing so. This is true even if the data set name that you enter corresponds to what is otherwise the default value of this field. The precompiler sends modified source code to the data set qualifier.host, where host is the language specified in the APPLICATION LANGUAGE field of DB2I Defaults panel 1. 5 WHERE TO PRECOMPILE Lets you indicate whether to precompile in the foreground or background. You can also specify EDITJCL, in which case you are able to edit and then submit the job.
If you reached this panel from the DB2 Program Preparation panel, the field contains the preparation environment specified there. You can override that value if you want. If you reached this panel directly from the DB2I Primary Option Menu, you can either specify a processing environment or allow this field to take its default value. Use: FOREGROUND to immediately precompile the program with the values you specify in these panels. BACKGROUND to create and immediately submit to run a file containing a DSNH CLIST using the JOB control statement from either DB2I Defaults Panel 2 or your site's SUBMIT exit. The file is saved. EDITJCL to create and open a file containing a DSNH CLIST in edit mode. You can then submit the CLIST or save it. 6 VERSION Lets you specify the version of the program and its DBRM. If the version contains the maximum number of characters permitted (64), you must enter each character with no intervening blanks from one line to the next. This field is optional. 7 OTHER OPTIONS Lets you enter any option that the DSNH CLIST accepts, which gives you greater control over your program. The DSNH options you specify in this field override options specified on other panels. The option list can continue to the next line, but the total length of the list can be no more than 70 bytes. Related reference: DSNH (TSO CLIST) (DB2 Commands)
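If one INCLUDE LIBRARY field is not enough, the OTHER OPTIONS field described above is where the additional PnLIB parameters go. For example, to make two more libraries available to the precompiler you might enter the following in OTHER OPTIONS; the data set names here are only placeholders for your own libraries:

  P2LIB(MYUSER.DCLGEN.DATA), P3LIB(MYUSER.MACRO.DATA)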
BIND PACKAGE
SSID: DSN
Specify output location and collection names:
 1  LOCATION NAME ............. ===>
 2  COLLECTION-ID ............. ===>
Specify package source (DBRM or COPY):
 3  DBRM: COPY: ............... ===> DBRM
 4  MEMBER or COLLECTION-ID ... ===>
 5  PASSWORD or PACKAGE-ID .... ===>
 6  LIBRARY or VERSION ........ ===>          (Blank, or COPY version-id)
 7  ........ -- OPTIONS ....... ===>          (COMPOSITE or COMMAND)
Enter options as desired:
 8  CHANGE CURRENT DEFAULTS? .. ===> NO       (NO or YES)
 9  ENABLE/DISABLE CONNECTIONS? ===> NO       (NO or YES)
10  OWNER OF PACKAGE (AUTHID).. ===>          (Leave blank for primary ID)
11  QUALIFIER ................. ===>          (Leave blank for OWNER)
12  ACTION ON PACKAGE ......... ===> REPLACE  (ADD or REPLACE)
13  INCLUDE PATH? ............. ===> NO       (NO or YES)
14  REPLACE VERSION ........... ===>          (Replacement version-id)
The following information explains the functions on the Bind Package panel and how to fill the necessary fields in order to bind your program. 1 LOCATION NAME Lets you specify the system at which to bind the package. You can use from 1 to 16 characters to specify the location name. The location name must be defined in the catalog table SYSIBM.LOCATIONS. The default is the local DBMS. | | | 2 COLLECTION-ID Lets you specify the collection the package is in. You can use from 1 to 18 characters to specify the collection, and the first character must be alphabetic. 3 DBRM: COPY: Lets you specify whether you are creating a new package (DBRM) or making a copy of a package that already exists (COPY). Use: DBRM To create a new package. You must specify values in the LIBRARY, PASSWORD, and MEMBER fields. COPY To copy an existing package. You must specify values in the COLLECTION-ID and PACKAGE-ID fields. (The VERSION field is optional.) 4 MEMBER or COLLECTION-ID MEMBER (for new packages): If you are creating a new package, this option lets you specify the DBRM to bind. You can specify a member name from 1 to 8 characters. The default name depends on the input data set name. v If the input data set is partitioned, the default name is the member name of the input data set specified in the INPUT DATA SET NAME field of the DB2 Program Preparation panel. v If the input data set is sequential, the default name is the second qualifier of this input data set.
COLLECTION-ID (for copying a package): If you are copying a package, this option specifies the collection ID that contains the original package. You can specify a collection ID from 1 to 18 characters, which must be different from the collection ID specified on the PACKAGE ID field. 5 PASSWORD or PACKAGE-ID PASSWORD (for new packages): If you are creating a new package, this lets you enter password for the library you list in the LIBRARY field. You can use this field only if you reached the Bind Package panel directly from the DB2 Primary Option Menu. PACKAGE-ID (for copying packages): If you are copying a package, this option lets you specify the name of the original package. You can enter a package ID from 1 to 8 characters. 6 LIBRARY or VERSION LIBRARY (for new packages): If you are creating a new package, this lets you specify the names of the libraries that contain the DBRMs specified on the MEMBER field for the bind process. Libraries are searched in the order specified and must in the catalog tables. VERSION (for copying packages): If you are copying a package, this option lets you specify the version of the original package. You can specify a version ID from 1 to 64 characters. 7 OPTIONS Lets you specify which bind options DB2 uses when you issue BIND PACKAGE with the COPY option. Specify: v COMPOSITE (default) to cause DB2 to use any options you specify in the BIND PACKAGE command. For all other options, DB2 uses the options of the copied package. v COMMAND to cause DB2 to use the options you specify in the BIND PACKAGE command. For all other options, DB2 uses the following values: For a local copy of a package, DB2 uses the defaults for the local DB2 subsystem. For a remote copy of a package, DB2 uses the defaults for the server on which the package is bound. 8 CHANGE CURRENT DEFAULTS? Lets you specify whether to change the current defaults for binding packages. If you enter YES in this field, you see the Defaults for Bind Package panel as your next step. You can enter your new preferences there; for instructions, see Defaults for Bind Package and Defaults for Rebind Package panels on page 1029. 9 ENABLE/DISABLE CONNECTIONS? Lets you specify whether you want to enable and disable system connections types to use with this package. This is valid only if the LOCATION NAME field names your local DB2 system. Placing YES in this field displays a panel (shown in Figure 63 on page 1033) that lets you specify whether various system connections are valid for this application. You can specify connection names to further identify enabled connections within a connection type. A connection name is valid only when you also specify its corresponding connection type. The default enables all connection types.
10 OWNER OF PACKAGE (AUTHID) Lets you specify the primary authorization ID of the owner of the new package. That ID is the name owning the package, and the name associated with all accounting and trace records produced by the package. The owner must have the privileges required to run SQL statements contained in the package. The default is the primary authorization ID of the bind process. | | | 11 QUALIFIER Lets you specify the default schema for unqualified tables, views, indexes, and aliases. You can specify a schema name from 1 to 8 characters. The default is the authorization ID of the package owner. 12 ACTION ON PACKAGE Lets you specify whether to replace an existing package or create a new one. Use: REPLACE (default) to replace the package named in the PACKAGE-ID field if it already exists, and add it if it does not. (Use this option if you are changing the package because the SQL statements in the program changed. If only the SQL environment changes but not the SQL statements, you can use REBIND PACKAGE.) ADD to add the package named in the PACKAGE-ID field, only if it does not already exist. 13 INCLUDE PATH? Indicates whether you will supply a list of schema names that DB2 searches when it resolves unqualified distinct type, user-defined function, and stored procedure names in SQL statements. The default is NO. If you specify YES, DB2 displays a panel in which you specify the names of schemas for DB2 to search. 14 REPLACE VERSION Lets you specify whether to replace a specific version of an existing package or create a new one. If the package and the version named in the PACKAGE-ID and VERSION fields already exist, you must specify REPLACE. You can specify a version ID from 1 to 64 characters. The default version ID is that specified in the VERSION field.
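The fields on the Bind Package panel correspond to keywords of the BIND PACKAGE subcommand, which you can also issue yourself under the DSN command processor. The following is only a sketch; the collection, member, library, and qualifier values are illustrative:

  BIND PACKAGE(COLLA) MEMBER(SAMPLEPG) LIBRARY('prefix.TEMP.DBRM') ACTION(REPLACE) QUALIFIER(AUTHID1)

A copy operation, as selected with the DBRM: COPY: field, might instead look like:

  BIND PACKAGE(COLLB) COPY(COLLA.SAMPLEPG) OPTIONS(COMPOSITE)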
BIND PLAN
SSID: DSN
Enter DBRM data set name(s):
 1  MEMBER .......... ===> SAMPLEPG
 2  PASSWORD ........ ===>
 3  LIBRARY ......... ===> TEMP.DBRM
 4  ADDITIONAL DBRMS? ........ ===> NO
Enter options as desired:
 5  PLAN NAME ................ ===> SAMPLEPG  (Required to create a plan)
 6  CHANGE CURRENT DEFAULTS?   ===> NO        (NO or YES)
 7  ENABLE/DISABLE CONNECTIONS?===> NO        (NO or YES)
 8  INCLUDE PACKAGE LIST?..... ===> NO        (NO or YES)
 9  OWNER OF PLAN (AUTHID) ... ===>           (Leave blank for your primary ID)
10  QUALIFIER ................ ===>           (For tables, views, and aliases)
11  CACHESIZE ................ ===> 0         (Blank, or value 0-4096)
12  ACTION ON PLAN ........... ===> REPLACE   (REPLACE or ADD)
13  RETAIN EXECUTION AUTHORITY ===> YES       (YES to retain user list)
14  CURRENT SERVER ........... ===>           (Location name)
15  INCLUDE PATH? ............ ===>           (NO or YES)
The following explains the functions on the Bind Plan panel and how to fill the necessary fields in order to bind your program. 1 MEMBER Lets you specify the DBRMs to include in the plan. You can specify a name from 1 to 8 characters. You must specify MEMBER or INCLUDE PACKAGE LIST, or both. If you do not specify MEMBER, fields 2, 3, and 4 are ignored. The default member name depends on the input data set. v If the input data set is partitioned, the default name is the member name of the input data set specified in field 1 of the DB2 Program Preparation panel. v If the input data set is sequential, the default name is the second qualifier of this input data set. If you reached this panel directly from the DB2I Primary Option Menu, you must provide values for the MEMBER and LIBRARY fields. If you plan to use more than one DBRM, you can include the library name and member name of each DBRM in the MEMBER and LIBRARY fields, separating entries with commas. You can also specify more DBRMs by using the ADDITIONAL DBRMS? field on this panel. 2 PASSWORD Lets you enter passwords for the libraries you list in the LIBRARY field. You can use this field only if you reached the Bind Plan panel directly from the DB2 Primary Option Menu. 3 LIBRARY Lets you specify the name of the library or libraries that contain the DBRMs to use for the bind process. You can specify a name up to 44 characters long. 4 ADDITIONAL DBRMS? Lets you specify more DBRM entries if you need more room. Or, if you reached this panel as part of the program preparation process, you can include more DBRMs by entering YES in this field. A separate panel then displays, where you can enter more DBRM library and member names; see Panels for entering lists of values on page 1034.
5 PLAN NAME Lets you name the application plan to create. You can specify a name from 1 to 8 characters, and the first character must be alphabetic. If there are no errors, the bind process prepares the plan and enters its description into the EXPLAIN table. If you reached this panel through the DB2 Program Preparation panel, the default for this field depends on the value you entered in the INPUT DATA SET NAME field of that panel. If you reached this panel directly from the DB2 Primary Option Menu, you must include a plan name if you want to create an application plan. The default name for this field depends on the input data set: v If the input data set is partitioned, the default name is the member name. v If the input data set is sequential, the default name is the second qualifier of the data set name. 6 CHANGE CURRENT DEFAULTS? Lets you specify whether to change the current defaults for binding plans. If you enter YES in this field, you see the Defaults for Bind Plan panel as your next step. You can enter your new preferences there. 7 ENABLE/DISABLE CONNECTIONS? Lets you specify whether you want to enable and disable system connections types to use with this package. This is valid only if the LOCATION NAME field names your local DB2 system. Placing YES in this field displays a panel (shown in Figure 63 on page 1033) that lets you specify whether various system connections are valid for this application. You can specify connection names to further identify enabled connections within a connection type. A connection name is valid only when you also specify its corresponding connection type. The default enables all connection types. 8 INCLUDE PACKAGE LIST? Lets you include a list of packages in the plan. If you specify YES, a separate panel displays on which you must enter the package location, collection name, and package name for each package to include in the plan (see Panels for entering lists of values on page 1034). This list is optional if you use the MEMBER field. | | | | You can specify a location name from 1 to 16 characters, a collection ID from 1 to 18 characters, and a package ID from 1 to 8 characters. If you specify a location name, which is optional, it must be in the catalog table SYSIBM.LOCATIONS; the default location is the local DBMS. You must specify INCLUDE PACKAGE LIST? or MEMBER, or both, as input to the bind plan. 9 OWNER OF PLAN (AUTHID) Lets you specify the primary authorization ID of the owner of the new plan. That ID is the name owning the plan, and the name associated with all accounting and trace records produced by the plan. The owner must have the privileges required to run SQL statements contained in the plan. | 10 QUALIFIER Lets you specify the default schema for unqualified tables, views and
aliases. You can specify a schema name from 1 to 8 characters, which must conform to the rules for SQL identifiers. If you leave this field blank, the default qualifier is the authorization ID of the plan owner. 11 CACHESIZE Lets you specify the size (in bytes) of the authorization cache. Valid values are in the range 0 to 4096. Values that are not multiples of 256 round up to the next highest multiple of 256. A value of 0 indicates that DB2 does not use an authorization cache. The default is 1024. Each concurrent user of a plan requires 8 bytes of storage, with an additional 32 bytes for overhead. 12 ACTION ON PLAN Lets you specify whether this is a new or changed application plan. Use: REPLACE (default) to replace the plan named in the PLAN NAME field if it already exists, and add the plan if it does not exist. ADD to add the plan named in the PLAN NAME field, only if it does not already exist. 13 RETAIN EXECUTION AUTHORITY Lets you choose whether or not those users with the authority to bind or run the existing plan are to keep that authority over the changed plan. This applies only when you are replacing an existing plan. If the plan ownership changes and you specify YES, the new owner grants BIND and EXECUTE authority to the previous plan owner. If the plan ownership changes and you do not specify YES, then everyone but the new plan owner loses EXECUTE authority (but not BIND authority), and the new plan owner grants BIND authority to the previous plan owner. 14 CURRENT SERVER Lets you specify the initial server to receive and process SQL statements in this plan. You can specify a name from 1 to 16 characters, which you must previously define in the catalog table SYSIBM.LOCATIONS. If you specify a remote server, DB2 connects to that server when the first SQL statement executes. The default is the name of the local DB2 subsystem. 15 INCLUDE PATH? Indicates whether you will supply a list of schema names that DB2 searches when it resolves unqualified distinct type, user-defined function, and stored procedure names in SQL statements. The default is NO. If you specify YES, DB2 displays a panel in which you specify the names of schemas for DB2 to search. When you finish making changes to this panel, press ENTER to go to the second of the program preparation panels, Program Prep: Compile, Link, and Run.
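The Bind Plan panel fields likewise map to keywords of the BIND PLAN subcommand. A minimal sketch, with illustrative names only:

  BIND PLAN(SAMPLEPG) MEMBER(SAMPLEPG) LIBRARY('prefix.TEMP.DBRM') PKLIST(*.COLLA.*) ACTION(REPLACE) RETAIN CACHESIZE(1024)

Here MEMBER and LIBRARY correspond to fields 1 and 3, PKLIST to the INCLUDE PACKAGE LIST? panel, and CACHESIZE, ACTION, and RETAIN to fields 11, 12, and 13.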
Related concepts: Authorization cache on page 989 Related reference: Defaults for Bind Plan and Defaults for Rebind Plan panels on page 1031 BIND and REBIND options (DB2 Commands)
Defaults for Bind Package and Defaults for Rebind Package panels
These DB2I panels let you change your defaults for BIND PACKAGE and REBIND PACKAGE options. On the following panel, enter new defaults for binding a package.
DSNEBP10                   DEFAULTS FOR BIND PACKAGE                 SSID: DSN
COMMAND ===> _

Change default options as necessary:
 1  ISOLATION LEVEL ......... ===>   (RR, RS, CS, UR, or NC)
 2  VALIDATION TIME ......... ===>   (RUN or BIND)
 3  RESOURCE RELEASE TIME ... ===>   (COMMIT or DEALLOCATE)
 4  EXPLAIN PATH SELECTION .. ===>   (NO or YES)
 5  DATA CURRENCY ........... ===>   (NO or YES)
 6  PARALLEL DEGREE ......... ===>   (1 or ANY)
 7  SQLERROR PROCESSING ..... ===>   (NOPACKAGE or CONTINUE)
 8  REOPTIMIZE FOR INPUT VARS ===>   (ALWAYS, NONE, or ONCE)
 9  DEFER PREPARE ........... ===>   (NO OR YES)
10  KEEP DYN SQL PAST COMMIT  ===>   (NO or YES)
11  DBPROTOCOL .............. ===>   (DRDA OR PRIVATE)
12  APPLICATION ENCODING .... ===>   (Blank, ASCII, EBCDIC, UNICODE, or ccsid)
13  OPTIMIZATION HINT ....... ===>   (Blank or hint-id)
14  IMMEDIATE WRITE ......... ===>   (YES, NO)
15  DYNAMIC RULES ........... ===>   (RUN, BIND, DEFINE, or INVOKE)
On the following panel, enter new defaults for rebinding a package. With a few minor exceptions, the options on this panel are the same as the options for the defaults for binding a package. However, the defaults for REBIND PACKAGE are different from those shown in the preceding figure, and you can specify SAME in any field to specify the values used the last time the package was bound. For rebinding, the default value for all fields is SAME.
DSNEBP11                 DEFAULTS FOR REBIND PACKAGE                 SSID: DSN
COMMAND ===> _
----------------- Use the UP/DOWN keys to access all options -----------------
                                                                       More: +
Change default options as necessary:
 1  ISOLATION LEVEL ......... ===>   (SAME, CS, RR, RS, UR, or NC)
 2  PLAN VALIDATION TIME .... ===>   (SAME, RUN, or BIND)
 3  RESOURCE RELEASE TIME ... ===>   (SAME, DEALLOCATE, COMMIT, OR INHERITFROMPLAN)
 4  EXPLAIN PATH SELECTION .. ===>   (SAME, NO, or YES)
 5  DATA CURRENCY ........... ===>   (SAME, NO, or YES)
 6  PARALLEL DEGREE ......... ===>   (SAME, 1 or ANY)
 7  REOPTIMIZE FOR INPUT VARS ===>   (SAME, ALWAYS, NONE, ONCE, AUTO)
 8  DEFER PREPARE ........... ===>   (SAME, NO, YES, OR INHERITFROMPLAN)
 9  KEEP DYN SQL PAST COMMIT  ===>   (SAME, NO, or YES)
10  DBPROTOCOL .............. ===>   (SAME, DRDA, or PRIVATE)
11  APPLICATION ENCODING .... ===>   (SAME, Blank, ASCII, EBCDIC, UNICODE, or ccsid)
12  OPTIMIZATION HINT ....... ===>   (Blank or hint-id)
13  IMMEDIATE WRITE ......... ===>   (SAME, NO, YES, OR INHERITFROMPLAN)
14  DYNAMIC RULES ........... ===>   (SAME, RUN, BIND, DEFINERUN, DEFINEBIND, INVOKERUN or INVOKEBIND)
-------------------------------------------------------------------------------
PRESS: ENTER to continue   UP/DOWN to scroll   RETURN to EXIT
The following table lists the fields on the Defaults for Bind Package and Defaults for Rebind Package panels, and the corresponding bind and rebind options.
Table 162. Defaults for Bind Package and Defaults for Rebind Package panel fields and corresponding bind or rebind options

Field name                                      Bind or rebind option
APPLICATION ENCODING                            ENCODING
DATA CURRENCY                                   CURRENTDATA
DBPROTOCOL                                      DBPROTOCOL
DEFER PREPARE                                   DEFER and NODEFER
DYNAMIC RULES                                   DYNAMICRULES
EXPLAIN PATH SELECTION                          EXPLAIN
IMMEDIATE WRITE                                 IMMEDWRITE
ISOLATION LEVEL                                 ISOLATION
KEEP DYN SQL PAST COMMIT                        KEEPDYNAMIC
OPTIMIZATION HINT                               OPTHINT
PARALLEL DEGREE                                 DEGREE
REOPTIMIZE FOR INPUT VARS                       REOPT
RESOURCE RELEASE TIME                           RELEASE
SQLERROR PROCESSING                             SQLERROR
VALIDATION TIME and PLAN VALIDATION TIME        VALIDATE
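As an illustration of this mapping, a BIND PACKAGE subcommand that states several of these options explicitly might read as follows; the values shown are examples only, not recommended settings:

  BIND PACKAGE(COLLA) MEMBER(SAMPLEPG) ISOLATION(CS) VALIDATE(BIND) RELEASE(COMMIT) CURRENTDATA(NO) DEGREE(1) SQLERROR(NOPACKAGE) REOPT(NONE) DYNAMICRULES(RUN)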
Related concepts: DYNAMICRULES bind option on page 987 Parallel processing (DB2 Performance) Investigating SQL performance by using EXPLAIN (DB2 Performance) Related tasks: Setting the isolation level of SQL statements in a REXX program on page 421 Related reference: BIND and REBIND options (DB2 Commands)
Defaults for Bind Plan and Defaults for Rebind Plan panels
These DB2I panels let you change your defaults for BIND PLAN and REBIND PLAN options. On the following panel, enter new defaults for binding a plan.
DSNEBP10                     DEFAULTS FOR BIND PLAN                  SSID: DSN
COMMAND ===>

Change default options as necessary:
 1  ISOLATION LEVEL ......... ===> RR      (RR, RS, CS, or UR)
 2  VALIDATION TIME ......... ===> RUN     (RUN or BIND)
 3  RESOURCE RELEASE TIME ... ===> COMMIT  (COMMIT or DEALLOCATE)
 4  EXPLAIN PATH SELECTION .. ===> NO      (NO or YES)
 5  DATA CURRENCY ........... ===> NO      (NO or YES)
 6  PARALLEL DEGREE ......... ===> 1       (1 or ANY)
 7  RESOURCE ACQUISITION TIME ===> USE     (USE or ALLOCATE)
 8  REOPTIMIZE FOR INPUT VARS ===> NONE    (ALWAYS, NONE, ONCE)
 9  DEFER PREPARE ........... ===> NO      (NO or YES)
10  KEEP DYN SQL PAST COMMIT. ===> NO      (NO or YES)
11  DBPROTOCOL .............. ===>         (Blank, DRDA, OR PRIVATE)
12  APPLICATION ENCODING .... ===>         (Blank, ASCII, EBCDIC, UNICODE, or ccsid)
13  OPTIMIZATION HINT ....... ===>         (Blank or hint-id)
14  IMMEDIATE WRITE ......... ===>         (YES, NO)
15  DYNAMIC RULES ........... ===>         (RUN or BIND)
16  SQLRULES................. ===>         (DB2 or STD)
17  DISCONNECT .............. ===>         (EXPLICIT, AUTOMATIC, or CONDITIONAL)
                                                                      SSID: DSN
Change default options as necessary:
 1  ISOLATION LEVEL ......... ===>   (SAME, RR, RS, CS, or UR)
 2  PLAN VALIDATION TIME .... ===>   (SAME, RUN, or BIND)
 3  RESOURCE RELEASE TIME ... ===>   (SAME, DEALLOCATE, or COMMIT)
 4  EXPLAIN PATH SELECTION .. ===>   (SAME, NO, or YES)
 5  DATA CURRENCY ........... ===>   (SAME, NO, or YES)
 6  PARALLEL DEGREE ......... ===>   (SAME, 1 or ANY)
 7  REOPTIMIZE FOR INPUT VARS ===>   (SAME, ALWAYS, NONE, ONCE, AUTO)
 8  DEFER PREPARE ........... ===>   (SAME, NO, or YES)
 9  KEEP DYN SQL PAST COMMIT. ===>   (SAME, NO, or YES)
10  DBPROTOCOL .............. ===>   (SAME, DRDA or PRIVATE)
11  APPLICATION ENCODING .... ===>   (SAME, Blank, ASCII, EBCDIC, UNICODE, or ccsid)
12  OPTIMIZATION HINT ....... ===>   (SAME, hint-id)
13  IMMEDIATE WRITE ......... ===>   (SAME, YES, NO)
14  DYNAMIC RULES ........... ===>   (SAME, RUN, or BIND)
15  RESOURCE ACQUISITION TIME ===>   (SAME, ALLOCATE, or USE)
16  SQLRULES ................ ===>   (SAME, DB2 or STD)
17  DISCONNECT .............. ===>   (SAME, EXPLICIT, AUTOMATIC, or CONDITIONAL)
The following table lists the fields on the Defaults for Bind Plan and Defaults for Rebind Plan panels, and the corresponding bind and rebind options.
Table 163. Defaults for Bind Plan and Defaults for Rebind Plan panel fields and corresponding bind or rebind options

Field name                                      Bind or rebind option
APPLICATION ENCODING                            ENCODING
DATA CURRENCY                                   CURRENTDATA
DBPROTOCOL                                      DBPROTOCOL
DEFER PREPARE                                   DEFER and NODEFER
DISCONNECT                                      DISCONNECT
DYNAMIC RULES                                   DYNAMICRULES
EXPLAIN PATH SELECTION                          EXPLAIN
IMMEDIATE WRITE                                 IMMEDWRITE
ISOLATION LEVEL                                 ISOLATION
KEEP DYN SQL PAST COMMIT                        KEEPDYNAMIC
OPTIMIZATION HINT                               OPTHINT
PARALLEL DEGREE                                 DEGREE
REOPTIMIZE FOR INPUT VARS                       REOPT
RESOURCE ACQUISITION TIME                       ACQUIRE
RESOURCE RELEASE TIME                           RELEASE
VALIDATION TIME and PLAN VALIDATION TIME        VALIDATE
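For example, to change only the validation time and isolation level of an existing plan, you could issue a REBIND PLAN subcommand such as the following (the plan name and values are illustrative):

  REBIND PLAN(SAMPLEPG) VALIDATE(BIND) ISOLATION(CS)

Options that you do not specify on REBIND keep their values from the previous bind, which is what the SAME setting on these panels represents.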
Related concepts: Authorization cache on page 989 DYNAMICRULES bind option on page 987 Parallel processing (DB2 Performance) Investigating SQL performance by using EXPLAIN (DB2 Performance) Related tasks: Setting the isolation level of SQL statements in a REXX program on page 421 Specifying the rules that apply to SQL behavior at run time on page 1001
DSNEBP13             SYSTEM CONNECTION TYPES FOR BIND ...            SSID: DSN
COMMAND ===>

Select system connection types to be Enabled/Disabled:
 1  ENABLE ALL CONNECTION TYPES? ............... ===>
      or
 2  ENABLE/DISABLE SPECIFIC CONNECTION TYPES ... ===>
      BATCH ....... ===>   (Y/N)
      DB2CALL ..... ===>   (Y/N)
      RRSAF ....... ===>   (Y/N)
      CICS ........ ===>   (Y/N)
      IMS ......... ===>   (Y/N)
      DLIBATCH .... ===>   (Y/N)
      IMSBMP ...... ===>   (Y/N)
      IMSMPP ...... ===>   (Y/N)
      REMOTE ...... ===>   (Y/N)
To enable or disable connection types (that is, allow or prevent the connection from running the package or plan), enter the following information.
1 ENABLE ALL CONNECTION TYPES? Lets you enter an asterisk (*) to enable all connections. After that entry, you can ignore the rest of the panel.
2 ENABLE/DISABLE SPECIFIC CONNECTION TYPES Lets you specify a list of types to enable or disable; you cannot enable some types and disable others in the same operation. If you list types to enable, enter E; that disables all other connection types. If you list types to disable, enter D; that enables all other connection types. For each connection type that follows, enter Y (yes) if it is on your list, N (no) if it is not. The connection types are:
v BATCH for a TSO connection
v DB2CALL for a CAF connection
v RRSAF for an RRSAF connection
v CICS for a CICS connection
v IMS for all IMS connections: DLIBATCH, IMSBMP, and IMSMPP
v DLIBATCH for a DL/I Batch Support Facility connection
v IMSBMP for an IMS connection to a BMP region
v IMSMPP for an IMS connection to an MPP or IFP region
v REMOTE for remote location names and LU names
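These panel selections correspond to the ENABLE and DISABLE bind options. As a sketch only (the plan and member names are illustrative), a plan could be limited to TSO batch and CAF connections, or barred from CICS, like this:

  BIND PLAN(SAMPLEPG) MEMBER(SAMPLEPG) ENABLE(BATCH,DB2CALL)
  BIND PLAN(SAMPLEPG) MEMBER(SAMPLEPG) DISABLE(CICS)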
For each connection type that has a second arrow, under SPECIFY CONNECTION NAMES?, enter Y if you want to list specific connection names of that type. Leave N (the default) if you do not. If you use Y in any of those fields, you see another panel on which you can enter the connection names. If you use the DISPLAY command under TSO on this panel, you can determine what you have currently defined as enabled or disabled in your ISPF DSNSPFT library (member DSNCONNS). The information does not reflect the current state of the DB2 Catalog. If you type DISPLAY ENABLED on the command line, you get the connection names that are currently enabled for your TSO connection types. For example:
Display OF ALL connection name(s) to be ENABLED

CONNECTION SUBSYSTEM
 CICS1      ENABLED
 CICS2      ENABLED
 CICS3      ENABLED
 CICS4      ENABLED
 DLI1       ENABLED
 DLI2       ENABLED
 DLI3       ENABLED
 DLI4       ENABLED
 DLI5       ENABLED
Related reference: Panels for entering lists of values BIND and REBIND options (DB2 Commands)
panelid              Specific subcommand function                   SSID: DSN
COMMAND ===>_                                                       SCROLL ===>

Subcommand operand values:
CMD
""""
""""
""""
""""
""""
""""
All of the list panels let you enter limited commands in two places:
v On the system command line, prefixed by ====>
v In a special command area, identified by """"

On the system command line, you can use:
END     Saves all entered variables, exits the table, and continues to process.
CANCEL  Discards all entered variables, terminates processing, and returns to the previous panel.
SAVE    Saves all entered variables and remains in the table.

In the special command area, you can use:
Inn  Insert nn lines after this one.
Dnn  Delete this and the following lines for nn lines.
Rnn  Repeat this line nn number of times.
The default for nn is 1. When you finish with a list panel, specify END to save the current panel values and continue processing.
DSNEPP02          PROGRAM PREP: COMPILE, PRELINK, LINK, AND RUN      SSID: DSN
COMMAND ===>_

Enter compiler or assembler options:
 1  INCLUDE LIBRARY ===> SRCLIB.DATA
 2  INCLUDE LIBRARY ===>
 3  OPTIONS ....... ===> NUM, OPTIMIZE, ADV
Enter linkage editor options:
 4  INCLUDE LIBRARY ===> SAMPLIB.COBOL
 5  INCLUDE LIBRARY ===>
 6  INCLUDE LIBRARY ===>
 7  LOAD LIBRARY .. ===> RUNLIB.LOAD
 8  PRELINK OPTIONS ===>
 9  LINK OPTIONS... ===>
Enter run options:
10  PARAMETERS .... ===> D01, D02, D03/
11  SYSIN DATA SET  ===> TERM
12  SYSPRINT DS ... ===> TERM
Figure 65. The Program Preparation: Compile, Link, and Run panel
1,2 INCLUDE LIBRARY Lets you specify up to two libraries containing members for the compiler to include. The members can also be output from DCLGEN. You can leave these fields blank. There is no default. 3 OPTIONS Lets you specify compiler, assembler, or PL/I macro processor options. You can also enter a list of compiler or assembler options by separating entries with commas, blanks, or both. You can leave these fields blank. There is no default. 4,5,6 INCLUDE LIBRARY Lets you enter the names of up to three libraries containing members for the linkage editor to include. You can leave these fields blank. There is no default. 7 LOAD LIBRARY Lets you specify the name of the library to hold the load module. The default value is RUNLIB.LOAD. If the load library specified is a PDS, and the input data set is a PDS, the member name specified in INPUT DATA SET NAME field of the Program Preparation panel is the load module name. If the input data set is sequential, the second qualifier of the input data set is the load module name. You must fill in this field if you request LINK or RUN on the Program Preparation panel. 8 PRELINK OPTIONS Lets you enter a list of prelinker options. Separate items in the list with commas, blanks, or both. You can leave this field blank. There is no default. The prelink utility applies only to programs using C, C++, and Enterprise COBOL for z/OS. 9 LINK OPTIONS Lets you enter a list of link-edit options. Separate items in the list with commas, blanks, or both.
To prepare a program that uses 31-bit addressing and runs above the 16-megabyte line, specify the following link-edit options: AMODE=31, RMODE=ANY. 10 PARAMETERS Lets you specify a list of parameters you want to pass either to your host language run time processor, or to your application. Separate items in the list with commas, blanks, or both. You can leave this field blank. If you are preparing an IMS or CICS program, you must leave this field blank; you cannot use DB2I to run IMS and CICS programs. Use a slash (/) to separate the options for your run time processor from those for your program. v For PL/I and Fortran, run time processor parameters must appear on the left of the slash, and the application parameters must appear on the right.
run time processor parameters / application parameters
v For COBOL, reverse this order: run time processor parameters must appear on the right of the slash, and the application parameters must appear on the left. (The value D01, D02, D03/ shown in the panel above, for example, passes D01, D02, and D03 to a COBOL application and nothing to the run time processor.)
v For assembler and C, there is no supported run time environment, and you need not use a slash to pass parameters to the application program.
11 SYSIN DATA SET Lets you specify the name of a SYSIN (or in Fortran, FT05F001) data set for your application program, if it needs one. If you do not enclose the data set name in apostrophes, a standard TSO prefix (user ID) and suffix is added to it. The default for this field is TERM. If you are preparing an IMS or CICS program, you must leave this field blank; you cannot use DB2I to run IMS and CICS programs.
12 SYSPRINT DS Lets you specify the name of a SYSPRINT (or in Fortran, FT06F001) data set for your application program, if it needs one. If you do not enclose the data set name in apostrophes, a standard TSO prefix (user ID) and suffix is added to it. The default for this field is TERM. If you are preparing an IMS or CICS program, you must leave this field blank; you cannot use DB2I to run IMS and CICS programs.
Your application could need other data sets besides SYSIN and SYSPRINT. If so, remember to catalog and allocate them before you run your program.
When you press ENTER after entering values in this panel, DB2 compiles and link-edits the application. If you specified in the DB2 Program Preparation panel that you want to run the application, DB2 also runs the application.
Related reference:
Language Environment Programming Guide (z/OS Language Environment Programming Guide)
DB2I panels that are used to rebind and free plans and packages
A set of DB2I panels lets you bind, rebind, or free plans and packages.
Table 164 describes additional panels that you can use to rebind and free packages and plans. It also describes the Run panel, which you can use to run application programs that have already been prepared.
Table 164. DB2I panels used to rebind and free plans and packages and used to run application programs

Bind/Rebind/Free Selection panel
The BIND/REBIND/FREE panel lets you select the BIND, REBIND, or FREE, PLAN, PACKAGE, or TRIGGER PACKAGE process that you need.

Rebind Package panel on page 1039
The Rebind Package panel lets you change options when you rebind a package.

Rebind Trigger Package panel on page 1041
The Rebind Trigger Package panel lets you change options when you rebind a trigger package.

Rebind Plan panel on page 1043
The Rebind Plan panel lets you change options when you rebind an application plan.

Free Package panel on page 1045
The Free Package panel lets you change options when you free a package.

Free Plan panel on page 1046
The Free Plan panel lets you change options when you free an application plan.

DB2I Run panel on page 1048
The Run panel lets you start an application program. You should use this panel if you have already prepared the program and you only want to run it. You can also run a program by using the "Program Prep: Compile, Prelink, Link, and Run" panel.
BIND/REBIND/FREE
SSID: DSN
Select one of the following and press ENTER:
 1  BIND PLAN               (Add or replace an application plan)
 2  REBIND PLAN             (Rebind existing application plan or plans)
 3  FREE PLAN               (Erase application plan or plans)
 4  BIND PACKAGE            (Add or replace a package)
 5  REBIND PACKAGE          (Rebind existing package or packages)
 6  REBIND TRIGGER PACKAGE  (Rebind existing package or packages)
 7  FREE PACKAGE
This panel lets you select the process you need. 1 BIND PLAN Lets you build an application plan. You must have an application plan to
allocate DB2 resources and support SQL requests during run time. If you select this option, the Bind Plan panel displays. For more information, see Bind Plan panel on page 1025. 2 REBIND PLAN Lets you rebuild an application plan when changes to it affect the plan but the SQL statements in the program are the same. For example, you should rebind when you change authorizations, create a new index that the plan uses, or use RUNSTATS. If you select this option, the Rebind Plan panel displays. For more information, see Rebind Plan panel on page 1043. 3 FREE PLAN Lets you delete plans from DB2. If you select this option, the Free Plan panel displays. For more information, see Free Plan panel on page 1046. 4 BIND PACKAGE Lets you build a package. If you select this option, the Bind Package panel displays. For more information, see Bind Package panel on page 1022. 5 REBIND PACKAGE Lets you rebuild a package when changes to it affect the package but the SQL statements in the program are the same. For example, you should rebind when you change authorizations, create a new index that the package uses, or use RUNSTATS. If you select this option, the Rebind Package panel displays. For more information, see Rebind Package panel. 6 REBIND TRIGGER PACKAGE Lets you rebuild a trigger package when you need to change options for the package. When you execute CREATE TRIGGER, DB2 binds a trigger package using a set of default options. You can use REBIND TRIGGER PACKAGE to change those options. For example, you can use REBIND TRIGGER PACKAGE to change the isolation level for the trigger package. If you select this option, the Rebind Trigger Package panel displays. For more information, see Rebind Trigger Package panel on page 1041. 7 FREE PACKAGE Lets you delete a specific version of a package, all versions of a package, or whole collections of packages from DB2. If you select this option, the Free Package panel displays. For more information, see Free Package panel on page 1045.
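Each selection on this panel has a DSN subcommand equivalent that you can issue directly if you prefer to bypass the panels; for example (plan, collection, and package names are illustrative):

  REBIND PLAN(SAMPLEPG)
  REBIND PACKAGE(COLLA.SAMPLEPG)
  REBIND TRIGGER PACKAGE(COLLA.TRIGPKG)
  FREE PLAN(SAMPLEPG)
  FREE PACKAGE(COLLA.SAMPLEPG)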
REBIND PACKAGE                                                    SSID: DSN
===>

Enter package name(s) to be rebound:
    LOCATION NAME ............. ===>        (Defaults to local)
    COLLECTION-ID ............. ===>        (Required)
    PACKAGE-ID ................ ===>        (Required)
    VERSION-ID ................ ===>        (*, Blank, (), or version-id)
    ADDITIONAL PACKAGES? ...... ===>        (Yes to include more packages)

Enter options as desired:
 7  CHANGE CURRENT DEFAULTS?... ===>
 8  OWNER OF PACKAGE (AUTHID).. ===>
 9  QUALIFIER ................. ===>
10  ENABLE/DISABLE CONNECTIONS? ===>
11  INCLUDE PATH? ............. ===>
This panel lets you choose options for rebinding a package.

1 Rebind all local packages
   Lets you rebind all packages on the local DBMS. To do so, place an asterisk (*) in this field; otherwise, leave it blank.
2 LOCATION NAME
   Lets you specify where to bind the package. If you specify a location name, you should use from 1 to 16 characters, and you must have defined it in the catalog table SYSIBM.LOCATIONS.
3 COLLECTION-ID
   Lets you specify the collection of the package to rebind. You must specify a collection ID from 1 to 8 characters, or an asterisk (*) to rebind all collections in the local DB2 system. You cannot use the asterisk to rebind a remote collection.
4 PACKAGE-ID
   Lets you specify the name of the package to rebind. You must specify a package ID from 1 to 8 characters, or an asterisk (*) to rebind all packages in the specified collections in the local DB2 system. You cannot use the asterisk to rebind a remote package.
5 VERSION-ID
   Lets you specify the version of the package to rebind. You must specify a version ID from 1 to 64 characters, or an asterisk (*) to rebind all versions in the specified collections and packages in the local DB2 system. You cannot use the asterisk to rebind a remote version.
6 ADDITIONAL PACKAGES?
   Lets you indicate whether to name more packages to rebind. Use YES to specify more packages on an additional panel, described on Panels for entering lists of values on page 1034. The default is NO.
7 CHANGE CURRENT DEFAULTS?
   Lets you indicate whether to change the binding defaults. Use:
   NO (default) to retain the binding defaults of the previous package.
   YES to change the binding defaults from the previous package.
   For information about the defaults for binding packages, see Defaults for Bind Package and Defaults for Rebind Package panels on page 1029.
8 OWNER OF PACKAGE (AUTHID)
   Lets you change the authorization ID for the package owner. The owner must have the required privileges to execute the SQL statements in the package. The default is the existing package owner.
9 QUALIFIER
   Lets you specify the default schema for all unqualified table names, views, indexes, and aliases in the package. You can specify a schema name from 1 to 8 characters, which must conform to the rules for the SQL short identifier. The default is the existing qualifier name.
10 ENABLE/DISABLE CONNECTIONS?
   Lets you specify whether you want to enable and disable system connection types to use with this package. This is valid only if the LOCATION NAME field names your local DB2 system. Placing YES in this field displays a panel (shown in Figure 63 on page 1033) that lets you specify whether various system connections are valid for this application. The default is the values used for the previous package.
11 INCLUDE PATH?
   Indicates which one of the following actions you want to perform:
   v Request that DB2 uses the same schema names as when the package was bound for resolving unqualified distinct type, user-defined function, and stored procedure names in SQL statements. Choose SAME to perform this action. This is the default.
   v Supply a list of schema names that DB2 searches when it resolves unqualified distinct type, user-defined function, and stored procedure names in SQL statements. Choose YES to perform this action.
   v Request that DB2 resets the SQL path to SYSIBM, SYSFUN, SYSPROC, and the package owner. Choose DEFAULT to perform this action.
   If you specify YES, DB2 displays a panel in which you specify the names of schemas for DB2 to search.
Related reference:
BIND and REBIND options (DB2 Commands)
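As a rough illustration of how these fields map to the REBIND PACKAGE subcommand, DB2I builds a subcommand along the following lines; the collection, package, owner, qualifier, and schema names shown are hypothetical:

REBIND PACKAGE (MYCOLL.MYPKG) OWNER(PRODAUTH) QUALIFIER(PRODSCH) PATH(SYSIBM,SYSFUN,SYSPROC,MYSCHEMA)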
REBIND TRIGGER PACKAGE                                            SSID: DSN
===>

Enter trigger package name(s) to be rebound:
    LOCATION NAME ............. ===>
    COLLECTION-ID (SCHEMA NAME) ===>
    PACKAGE-ID (TRIGGER NAME).. ===>

Enter options as desired:
 5  ISOLATION LEVEL ........... ===>        (RR, RS, CS, UR, or NC)
 6  RESOURCE RELEASE TIME ..... ===>        (DEALLOCATE, or COMMIT)
 7  EXPLAIN PATH SELECTION .... ===>        (NO, or YES)
 8  DATA CURRENCY ............. ===>        (NO, or YES)
 9  IMMEDIATE WRITE OPTION .... ===>        (SAME, NO, YES)
This panel lets you choose options for rebinding a trigger package.

1 Rebind all trigger packages
   Lets you rebind all packages on the local DBMS. To do so, place an asterisk (*) in this field; otherwise, leave it blank.
2 LOCATION NAME
   Lets you specify where to bind the trigger package. If you specify a location name, you should use from 1 to 16 characters, and you must have defined it in the catalog table SYSIBM.LOCATIONS.
3 COLLECTION-ID (SCHEMA NAME)
   Lets you specify the collection of the trigger package to rebind. You must specify a collection ID from 1 to 8 characters, or an asterisk (*) to rebind all collections in the local DB2 system. You cannot use the asterisk to rebind a remote collection.
4 PACKAGE-ID
   Lets you specify the name of the trigger package to rebind. You must specify a package ID from 1 to 8 characters, or an asterisk (*) to rebind all trigger packages in the specified collections in the local DB2 system. You cannot use the asterisk to rebind a remote trigger package.
5 ISOLATION LEVEL
   Lets you specify how far to isolate your application from the effects of other running applications. The default is the value used for the old trigger package.
6 RESOURCE RELEASE TIME
   Lets you specify COMMIT or DEALLOCATE to tell when to release locks on resources. The default is that used for the old trigger package.
7 EXPLAIN PATH SELECTION
   Lets you specify YES or NO for whether to obtain EXPLAIN information about how SQL statements in the package execute. The default is the value used for the old trigger package.
   The bind process inserts information into the table owner.PLAN_TABLE, where owner is the authorization ID of the plan or package owner. If you defined owner.DSN_STATEMNT_TABLE, DB2 also inserts information about the cost of statement execution into that table. If you specify YES in this field and BIND in the VALIDATION TIME field, and if you do not correctly define PLAN_TABLE, the bind fails.
8 DATA CURRENCY
   Lets you specify YES or NO for whether you need data currency for ambiguous cursors opened at remote locations. The default is the value used for the old trigger package. Data is current if the data within the host structure is identical to the data within the base table. Data is always current for local processing.
9 IMMEDIATE WRITE OPTION
   Specifies when DB2 writes the changes for updated group buffer pool-dependent pages. This field applies only to a data sharing environment. The values that you can specify are:
   SAME  Choose the value of IMMEDIATE WRITE that you specified when you bound the trigger package. SAME is the default.
   NO    Write the changes at or before phase 1 of the commit process. If the transaction is rolled back later, write the additional changes that are caused by the rollback at the end of the abort process. PH1 is equivalent to NO.
   YES   Write the changes immediately after group buffer pool-dependent pages are updated.
Related reference:
BIND and REBIND options (DB2 Commands)
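The same options can also be specified directly on the REBIND TRIGGER PACKAGE subcommand. For example (the schema and trigger names are illustrative only):

REBIND TRIGGER PACKAGE (MYSCHEMA.NEW_HIRE) ISOLATION(CS) IMMEDWRITE(NO)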
REBIND PLAN                                                       SSID: DSN
===>

Enter plan name(s) to be rebound:
 1  PLAN NAME ................. ===>
 2  ADDITIONAL PLANS? ......... ===> NO

Enter options as desired:
 3  CHANGE CURRENT DEFAULTS?... ===>        (NO or YES)
 4  OWNER OF PLAN (AUTHID)..... ===>        (SAME or new OWNER)
 5  QUALIFIER ................. ===>        (SAME or new QUALIFIER)
 6  CACHESIZE ................. ===>        (Blank, or value 0-4096)
 7  ENABLE/DISABLE CONNECTIONS? ===>        (NO or YES)
 8  INCLUDE PACKAGE LIST?...... ===>        (NO, or YES)
 9  CURRENT SERVER ............ ===>        (Location name)
10  INCLUDE PATH? ............. ===>        (SAME, DEFAULT, or YES)
This panel lets you specify options for rebinding your plan.

1 PLAN NAME
   Lets you name the application plan to rebind. You can specify a name from 1 to 8 characters, and the first character must be alphabetic. Do not begin the name with DSN, because it could create name conflicts with DB2. If there are no errors, the bind process prepares the plan and enters its description into the EXPLAIN table. If you leave this field blank, the bind process occurs but produces no plan.
2 ADDITIONAL PLANS?
   Lets you indicate whether to name more plans to rebind. Use YES to specify more plans on an additional panel, described at Panels for entering lists of values on page 1034. The default is NO.
3 CHANGE CURRENT DEFAULTS?
   Lets you indicate whether to change the binding defaults. Use:
   NO (default) to retain the binding defaults of the previous plan.
   YES to change the binding defaults from the previous plan.
4 OWNER OF PLAN (AUTHID)
   Lets you change the authorization ID for the plan owner. The owner must have the required privileges to execute the SQL statements in the plan. The default is the existing plan owner.
5 QUALIFIER
   Lets you specify the default schema for all unqualified table names, views, indexes, and aliases in the plan. You can specify a schema name from 1 to 8 characters, which must conform to the rules for the SQL identifier. The default is the authorization ID.
6 CACHESIZE
   Lets you specify the size (in bytes) of the authorization cache. Valid values are in the range 0 to 4096. Values that are not multiples of 256 round up to the next highest multiple of 256. A value of 0 indicates that DB2 does not use an authorization cache. The default is the cache size specified for the previous plan. Each concurrent user of a plan requires 8 bytes of storage, with an additional 32 bytes for overhead.
7 ENABLE/DISABLE CONNECTIONS?
   Lets you specify whether you want to enable and disable system connection types to use with this plan. This is valid only for rebinding on your local DB2 system. Placing YES in this field displays a panel (shown in Figure 63 on page 1033) that lets you specify whether various system connections are valid for this application. The default is the values used for the previous plan.
8 INCLUDE PACKAGE LIST?
   Lets you include a list of collections and packages in the plan. If you specify YES, a separate panel displays on which you must enter the package location, collection name, and package name for each package to include in the plan (see Panels for entering lists of values on page 1034). This field can either add a package list to a plan that did not have one, or replace an existing package list.
   You can specify a location name from 1 to 16 characters, a collection ID from 1 to 18 characters, and a package ID from 1 to 8 characters. Separate two or more package list parameters with a comma. If you specify a location name, it must be in the catalog table SYSIBM.LOCATIONS. The default location is the package list used for the previous plan.
9 CURRENT SERVER
   Lets you specify the initial server to receive and process SQL statements in this plan. You can specify a name from 1 to 16 characters, which you must previously define in the catalog table SYSIBM.LOCATIONS.
If you specify a remote server, DB2 connects to that server when the first SQL statement executes. The default is the name of the local DB2 subsystem.
10 INCLUDE PATH?
   Indicates which one of the following actions you want to perform:
   v Request that DB2 uses the same schema names as when the plan was bound for resolving unqualified distinct type, user-defined function, and stored procedure names in SQL statements. Choose SAME to perform this action. This is the default.
   v Supply a list of schema names that DB2 searches when it resolves unqualified distinct type, user-defined function, and stored procedure names in SQL statements. Choose YES to perform this action.
   v Request that DB2 resets the SQL path to SYSIBM, SYSFUN, SYSPROC, and the plan owner. Choose DEFAULT to perform this action.
   If you specify YES, DB2 displays a panel in which you specify the names of schemas for DB2 to search.
Related reference:
Defaults for Bind Plan and Defaults for Rebind Plan panels on page 1031
BIND and REBIND options (DB2 Commands)
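For comparison, a REBIND PLAN subcommand that exercises several of these options might look like the following sketch; the plan, owner, qualifier, and collection names are illustrative only:

REBIND PLAN (MYPLAN) OWNER(PRODAUTH) QUALIFIER(PRODSCH) CACHESIZE(1024) PKLIST(MYCOLL.*)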
FREE PACKAGE                                                      SSID: DSN
===>

Enter package name(s) to be freed:
    LOCATION NAME ............. ===>        (Defaults to local)
    COLLECTION-ID ............. ===>        (Required)
    PACKAGE-ID ................ ===>        (* to free all packages)
    VERSION-ID ................ ===>        (*, Blank, (), or version-id)
    ADDITIONAL PACKAGES? ...... ===>        (Yes to include more packages)
This panel lets you specify options for erasing packages.

1 Free ALL packages
   Lets you free (erase) all packages for which you have authorization or to which you have BINDAGENT authority. To do so, place an asterisk (*) in this field; otherwise, leave it blank.
2 LOCATION NAME
   Lets you specify the location name of the DBMS to free the package. You can specify a name from 1 to 16 characters.
3 COLLECTION-ID
   Lets you specify the collection from which you want to delete packages for which you own or have BINDAGENT privileges. You can specify a name from 1 to 18 characters, or an asterisk (*) to free all collections in the local DB2 system. You cannot use the asterisk to free a remote collection.
4 PACKAGE-ID
   Lets you specify the name of the package to free. You can specify a name from 1 to 8 characters, or an asterisk (*) to free all packages in the specified collections in the local DB2 system. You cannot use the asterisk to free a remote package. The name you specify must be in the DB2 catalog tables.
5 VERSION-ID
   Lets you specify the version of the package to free. You can specify an identifier from 1 to 64 characters, or an asterisk (*) to free all versions of the specified collections and packages in the local DB2 system. You cannot use the asterisk to free a remote version.
6 ADDITIONAL PACKAGES?
   Lets you indicate whether to name more packages to free. Use YES to specify more packages on an additional panel, described in Panels for entering lists of values on page 1034. The default is NO.
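For example (the collection and package names are illustrative), the following subcommand frees all versions of a single package:

FREE PACKAGE (MYCOLL.OLDPKG.(*))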
FREE PLAN                                                         SSID: DSN
===>

Enter plan name(s) to be freed:
 1  PLAN NAME ............ ===>
 2  ADDITIONAL PLANS? .... ===>
This panel lets you specify options for freeing plans.

1 PLAN NAME
   Lets you name the application plan to delete from DB2. Use an asterisk to free all plans for which you have BIND authority. You can specify a name from 1 to 8 characters, and the first character must be alphabetic. If there are errors, the free process terminates for that plan and continues with the next plan.
2 ADDITIONAL PLANS?
   Lets you indicate whether to name more plans to free. Use YES to specify more plans on an additional panel, described in Panels for entering lists of values on page 1034. The default is NO.
v Translation of return codes into error messages

Limitations of the DSN command processor:

When using DSN services, your application runs under the control of DSN. Because TSO executes the ATTACH macro to start DSN, and DSN executes the ATTACH macro to start a part of itself, your application gains control that is two task levels below TSO.

Because your program depends on DSN to manage your connection to DB2:
v If DB2 is down, your application cannot begin to run.
v If DB2 terminates, your application also terminates.
v An application can use only one plan.

If these limitations are too severe, consider having your application use the call attachment facility or Resource Recovery Services attachment facility. For more information about these attachment facilities, see Call attachment facility on page 43 and Resource Recovery Services attachment facility on page 75.

DSN return code processing:

At the end of a DSN session, register 15 contains the highest value that is placed there by any DSN subcommand that is used in the session or by any program that is run by the RUN subcommand. Your run time environment might format that value as a return code. The value does not, however, originate in DSN.
RUN                                                               SSID: DSN
===>

Enter the name of the program you want to run:
 1  DATA SET NAME ===>
 2  PASSWORD..... ===>          (Required if data set is password protected)

Enter the following as desired:
 3  PARAMETERS .. ===>
 4  PLAN NAME ... ===>          (Required if different from program name)
 5  WHERE TO RUN  ===>          (FOREGROUND, BACKGROUND, or EDITJCL)
This panel lets you run existing application programs.

1 DATA SET NAME
   Lets you specify the name of the partitioned data set that contains the load module. If the module is in a data set that the operating system can find, you can specify the member name only. There is no default. If you do not enclose the name in apostrophes, a standard TSO prefix (user ID) and suffix (.LOAD) are added.
2 PASSWORD Lets you specify the data set password if needed. The RUN processor does not check whether you need a password. If you do not enter a required password, your program does not run. 3 PARAMETERS Lets you specify a list of parameters you want to pass either to your host language run time processor, or to your application. You should separate items in the list with commas, blanks, or both. You can leave this field blank. Use a slash (/) to separate the options for your run time processor from those for your program. v For PL/I and Fortran, run time processor parameters must appear on the left of the slash, and the application parameters must appear on the right.
run time processor parameters / application parameters
v For COBOL, reverse this order. run time processor parameters must appear on the right of the slash, and the application parameters must appear on the left. v For assembler and C, there is no supported run time environment, and you need not use the slash to pass parameters to the application program. 4 PLAN NAME Lets you specify the name of the plan to which the program is bound. The default is the member name of the program. 5 WHERE TO RUN Lets you indicate whether to run in the foreground or background. You can also specify EDITJCL, in which case you are able to edit the job control statement before you run the program. Use: FOREGROUND to immediately run the program in the foreground with the specified values. BACKGROUND to create and immediately submit to run a file containing a DSNH CLIST using the JOB control statement from either DB2I Defaults Panel 2 or your site's SUBMIT exit. The program runs in the background. EDITJCL to create and open a file containing a DSNH CLIST in edit mode. You can then submit the CLIST or save it. The program runs in the background. Running command processors To run a command processor (CP), use the following commands from the TSO ready prompt or as a TSO TMP:
DSN SYSTEM (DB2-subsystem-name)
RUN CP PLAN (plan-name)
The RUN subcommand prompts you for more input. To end the DSN processor, use the END command.
. . .
(Here the program runs and might prompt you for input)
DSN prompt:   DSN
Enter:        END
TSO prompt:   READY
This sequence also works in ISPF option 6. You can package this sequence in a CLIST. DB2 does not support access to multiple DB2 subsystems from a single address space. The PARMS keyword of the RUN subcommand enables you to pass parameters to the run time processor and to your application program:
PARMS (/D01, D02, D03)
The slash (/) indicates that you are passing parameters. For some languages, you pass parameters and run time options in the form PARMS('parameters/run-time-options'). An example of the PARMS keyword might be:
PARMS (D01, D02, D03/)
Check your host language publications for the correct form of the PARMS option.
The SYSEXEC data set contains your REXX application, and the SYSTSIN data set contains the command that you use to invoke the application.
ISPF
The Interactive System Productivity Facility (ISPF) helps you to construct and execute dialogs. DB2 includes a sample application that illustrates how to use ISPF through the call attachment facility (CAF). Each scenario has advantages and disadvantages in terms of efficiency, ease of coding, ease of maintenance, and overall flexibility.
Figure 73. DSN task structure. TSO or ISPF attaches the DSN initialization load module (alias DSN), which attaches the DSN main load module, which in turn attaches or links to the application command processor. (See Notes 1 and 2.)
Notes:
1. The RUN command with the CP option causes DSN to attach your program and create a new TCB.
2. The RUN command without the CP option causes DSN to link to your program.

If you are in ISPF and running under DSN, you can perform an ISPLINK to another program, which calls a CLIST. In turn, the CLIST uses DSN and another application. Each such use of DSN creates a separate unit of recovery (process or transaction) in DB2.

All such initiated DSN work units are unrelated, with regard to isolation (locking) and recovery (commit). It is possible to deadlock with yourself; that is, one unit (DSN) can request a serialized resource (a data page, for example) that another unit (DSN) holds incompatibly.

A COMMIT in one program applies only to that process. There is no facility for coordinating the processes.

Related concepts:
Dynamic SQL and the ISPF/CAF application (DB2 Installation and Migration)
Printing options for the sample application listings (DB2 Installation and Migration)
DB2 sample applications on page 1126
DSN command processor on page 1047
The application has one large load module and one plan.

Disadvantages: For large programs of this type, you want a more modular design, making the plan more flexible and easier to maintain. If you have one large plan, you must rebind the entire plan whenever you change a module that includes SQL statements. To achieve a more modular construction when all parts of the program use SQL, consider using packages. See DB2 program preparation overview on page 1002.

You cannot pass control to another load module that makes SQL calls by using ISPLINK; rather, you must use LINK, XCTL, or LOAD and BALR. If you want to use ISPLINK, then call ISPF to run under DSN:
DSN
RUN PROGRAM(ISPF) PLAN(MYPLAN)
END
You then need to leave ISPF before you can start your application. Furthermore, the entire program is dependent on DB2; if DB2 is not running, no part of the program can begin or continue to run.
For a part that accesses DB2, the command can name a CLIST that starts DSN:
DSN
RUN PROGRAM(PART1) PLAN(PLAN1) PARM(input from panel)
END
Breaking the application into separate modules makes it more flexible and easier to maintain. Furthermore, some of the application might be independent of DB2; portions of the application that do not call DB2 can run, even if DB2 is not running. A stopped DB2 database does not interfere with parts of the program that refer only to other databases. Disadvantages: The modular application, on the whole, has to do more work. It calls several CLISTs, and each one must be located, loaded, parsed, interpreted, and executed. It also makes and breaks connections to DB2 more often than the single load module. As a result, you might lose some efficiency.
//         SPACE=(TRK,(1,1),RLSE),UNIT=SYSDA,
//         DCB=(RECFM=VB,BLKSIZE=4096,LRECL=4092)
//G.DDITV02 DD *
SSDQ,SYS1,DSNMIN10,,A,-,BATCH001,,IVP8CP22
/*
//***************************************************************
//***   ALWAYS ATTEMPT TO PRINT OUT THE DDOTV02 DATA SET      ***
//***************************************************************
//STEP3    EXEC PGM=DFSERA10,COND=EVEN
//STEPLIB  DD DSN=IMS.RESLIB,DISP=SHR
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSNAME=&TEMP1,DISP=(OLD,DELETE)
//SYSIN    DD *
CONTROL CNTL K=000,H=8000
OPTION PRINT
/*
//
Submitting a DL/I batch application without using DSNMTV01: The skeleton JCL in the following example illustrates a COBOL application program, IVP8CP22, that runs using DB2 DL/I batch support.
//TEPCTEST JOB USER=ADMF001,MSGCLASS=A,MSGLEVEL=(1,1),
//         TIME=1440,CLASS=A,USER=SYSADM,PASSWORD=SYSADM
//*******************************
//BATCH    EXEC DLIBATCH,PSB=IVP8CA,MBR=IVP8CP22,
//         BKO=Y,DBRC=N,IRLM=N,SSM=SSDQ
//*******************************
//SYSPRINT DD SYSOUT=A
//REPORT   DD SYSOUT=*
//G.DDOTV02 DD DSN=&TEMP,DISP=(NEW,PASS,DELETE),
//         SPACE=(CYL,(10,1),RLSE),
//         UNIT=SYSDA,DCB=(RECFM=VB,BLKSIZE=4096,LRECL=4092)
//G.DDITV02 DD *
SSDQ,SYS1,DSNMIN10,,Q,",DSNMTES1,,IVP8CP22
//G.SYSIN  DD *
/*
//****************************************************
//*  ALWAYS ATTEMPT TO PRINT OUT THE DDOTV02 DATA SET
//****************************************************
//PRTLOG   EXEC PGM=DFSERA10,COND=EVEN
//STEPLIB  DD DSN=IMS.RESLIB,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSUT1   DD DSN=&TEMP,DISP=(OLD,DELETE)
//SYSIN    DD *
CONTROL CNTL K=000,H=8000
OPTION PRINT
/*
JCL example of restarting a DL/I batch job:

Operational procedures can restart a DL/I batch job step for an application program using IMS XRST and symbolic CHKP calls.

You cannot restart a BMP application program in a DB2 DL/I batch environment. The symbolic checkpoint records are not accessed, causing an IMS user abend U0102.

To restart a batch job that terminated abnormally or prematurely, find the checkpoint ID for the job on the z/OS system log or from the SYSOUT listing of the failing job. Before you restart the job step, place the checkpoint ID in the CKPTID=value option of the DLIBATCH procedure, and then submit the job. If the default connection name is used (that is, you did not specify the connection name option in the DDITV02 input data set), the job name of the restart job must be the same as the failing job. Refer to the following skeleton example, in which the last checkpoint ID value was IVP80002:
//ISOCS04  JOB 3000,OJALA,MSGLEVEL=(1,1),NOTIFY=OJALA,
//         MSGCLASS=T,CLASS=A
//* ******************************************************************
//*
//*   THE FOLLOWING STEP RESTARTS COBOL PROGRAM IVP8CP22, WHICH UPDATES
//*   BOTH DB2 AND DL/I DATABASES, FROM CKPTID=IVP80002.
//*
//* ******************************************************************
//RSTRT    EXEC DLIBATCH,DBRC=Y,COND=EVEN,LOGT=SYSDA,
//         MBR=DSNMTV01,PSB=IVP8CA,BKO=Y,IRLM=N,CKPTID=IVP80002
//G.STEPLIB DD
//         DD
//         DD DSN=prefix.SDSNLOAD,DISP=SHR
//         DD DSN=prefix.RUNLIB.LOAD,DISP=SHR
//         DD DSN=SYS1.COB2LIB,DISP=SHR
//         DD DSN=IMS.PGMLIB,DISP=SHR
//*        other program libraries
//*        G.IEFRDER data set required
//*        G.IMSLOGR data set required
//G.DDOTV02 DD DSN=&TEMP2,DISP=(NEW,PASS,DELETE),
//         SPACE=(TRK,(1,1),RLSE),UNIT=SYSDA,
//         DCB=(RECFM=VB,BLKSIZE=4096,LRECL=4092)
//G.DDITV02 DD *
DB2X,SYS1,DSNMIN10,,A,-,BATCH001,,IVP8CP22
/*
//***************************************************************
//***   ALWAYS ATTEMPT TO PRINT OUT THE DDOTV02 DATA SET      ***
//***************************************************************
//STEP8    EXEC PGM=DFSERA10,COND=EVEN
//STEPLIB  DD DSN=IMS.RESLIB,DISP=SHR
//SYSPRINT DD SYSOUT=A
//SYSUT1   DD DSNAME=&TEMP2,DISP=(OLD,DELETE)
//SYSIN    DD *
CONTROL CNTL K=000,H=8000
OPTION PRINT
/*
//
logging the information before the failure. In that case, restart the application program from the previous checkpoint ID. DB2 performs one of two actions automatically when restarted, if the failure occurs outside the indoubt period: it either backs out the work unit to the prior checkpoint, or it commits the data without any assistance. If the operator then issues the following command, no work unit information is displayed:
-DISPLAY THREAD(*) TYPE(INDOUBT)
Procedure
To run a stored procedure from the command line processor: 1. Invoke the command line processor and connect to the appropriate DB2 subsystem. For more information about how to perform these tasks, see Command line processor (DB2 Commands). 2. Specify the CALL statement in the form that is acceptable for the command line processor. Related tasks: Chapter 14, Calling a stored procedure from your application, on page 775 Implementing DB2 stored procedures (DB2 Administration Guide)
Notes:
1. If you specify an unqualified stored procedure name, DB2 searches the schema list in the CURRENT PATH special register. DB2 searches this list for a stored procedure with the specified number of input and output parameters.
2. Specify a question mark (?) as a placeholder for each output parameter.
3. For non-numeric, BLOB, or CLOB input parameters, enclose each value in single quotation marks ('). The exception is if the data is a BLOB or CLOB value that is to be read from a file. In that case, use the notation file://fully qualified file name. Specify the input and output parameters in the order that they are specified in the signature for the stored procedure.
Example: Assume that the TEST.DEPT_MEDIAN stored procedure was created with the following statement:
CREATE PROCEDURE TEST.DEPT_MEDIAN (IN DEPTNUMBER SMALLINT, OUT MEDIANSALARY INT)
To invoke the stored procedure from the command line processor, you can specify the following CALL statement:
CALL TEST.DEPT_MEDIAN(51, ?)
Assume that the stored procedure returns a value of 25,000. The following information is displayed by the command line processor:
Value of output parameters -------------------------Parameter Name : MEDIANSALARY Parameter Value : 25000
Example: Suppose that stored procedure TEST.BLOBSP is defined with one input parameter of type BLOB and one output parameter. You can invoke this stored procedure from the command line processor with the following statement:
CALL TEST.BLOBSP(file:///tmp/photo.bmp,?)
The command line processor reads the contents from /tmp/photo.bmp as the input parameter. Alternatively, you can invoke this stored procedure by specifying the input parameter in the CALL statement itself, as in the following example:
CALL TEST.BLOBSP('abcdef',?)
v The JOB option identifies this as a job card. The USER option specifies the DB2 authorization ID of the user. v The EXEC statement calls the TSO Terminal Monitor Program (TMP). v The STEPLIB statement specifies the library in which the DSN Command Processor load modules and the default application programming defaults module, DSNHDECP, reside. It can also reference the libraries in which user applications, exit routines, and the customized DSNHDECP module reside. The customized DSNHDECP module is created during installation. v Subsequent DD statements define additional files that are needed by your program.
v The DSN command connects the application to a particular DB2 subsystem.
v The RUN subcommand specifies the name of the application program to run.
v The PLAN keyword specifies the plan name.
v The LIB keyword specifies the library that the application should access.
v The PARMS keyword passes parameters to the run time processor and the application program.
v END ends the DSN command processor.
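Putting these pieces together, a minimal batch skeleton (following a suitable JOB statement) might look like the following sketch. It assumes IKJEFT01 as the TSO TMP, and the program, plan, and data set names (MYPROG, MYPLAN, prefix.RUNLIB.LOAD) are illustrative only:

//RUNPGM   EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB  DD  DISP=SHR,DSN=prefix.SDSNLOAD
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 DSN SYSTEM(DSN)
 RUN PROGRAM(MYPROG) PLAN(MYPLAN) LIB('prefix.RUNLIB.LOAD')
 END
/*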
Usage notes
v Keep DSN job steps short. v Recommendation: Do not use DSN to call the EXEC command processor to run CLISTs that contain ISPEXEC statements; results are unpredictable. v If your program abends or gives you a non-zero return code, DSN terminates. v You can use a group attachment name instead of a specific ssid to connect to a member of a data sharing group. Related reference: Using the TSO TMP in batch mode (TSO/E User's Guide)
IMS
To run a message-driven program
   First, ensure that you can respond to the program's interactive requests for data and that you can recognize the expected results. Then, enter the transaction code that is associated with the program. Users of the transaction code must be authorized to run the program.
To run a non-message-driven program
   Submit the job control statements that are needed to run the program.

CICS
To run a program
   First, ensure that the corresponding entries in the SNT and RACF control areas allow run authorization for your application. The system administrator is responsible for these functions.
Also, be sure to define to CICS the transaction code that is assigned to your program and the program itself.

Make a new copy of the program
   Issue the NEWCOPY command if CICS has not been reinitialized since the program was last bound and compiled.
Chapter 19. Testing and debugging an application program on DB2 for z/OS
Depending on the situation, testing your application program might involve setting up a test environment, testing SQL statements, debugging your programs, and reading output from the precompiler. Related tasks: Modeling a production environment on a test subsystem (DB2 Performance) Modeling your production system statistics in a test subsystem (DB2 Performance)
Procedure
To analyze the data needs of your application: 1. List the data that your application accesses and describe how it accesses each data item. For example, suppose that you are testing an application that accesses the DSN8910.EMP, DSN8910.DEPT, and DSN8910.PROJ tables. You might record the information about the data as shown in Table 165.
Table 165. Description of the application data

Table or view name  Insert rows?  Delete rows?  Column name  Data type      Update access?
DSN8910.EMP         No            No            EMPNO        CHAR(6)        No
                                                LASTNAME     VARCHAR(15)    No
                                                WORKDEPT     CHAR(3)        Yes
                                                PHONENO      CHAR(4)        Yes
                                                JOB          DECIMAL(3)     Yes
DSN8910.DEPT        No            No            DEPTNO       CHAR(3)        No
                                                MGRNO        CHAR(6)        No
DSN8910.PROJ        Yes           Yes           PROJNO       CHAR(6)        No
                                                DEPTNO       CHAR(3)        Yes
                                                RESPEMP      CHAR(6)        Yes
                                                PRSTAFF      DECIMAL(5,2)   Yes
                                                PRSTDATE     DECIMAL(6)     Yes
                                                PRENDATE     DECIMAL(6)     Yes
2. Determine the test tables and views that you need to test your application. Create a test table on your list when either of the following conditions exists: v The application modifies data in the table. v You need to create a view that is based on a test table because your application modifies data in the view. To continue the example, create these test tables: v TEST.EMP, with the following format:
EMPNO . . . LASTNAME . . . WORKDEPT . . . PHONENO . . . JOB . . .
v TEST.PROJ, with the same columns and format as DSN8910.PROJ, because the application inserts rows into the DSN8910.PROJ table. To support the example, create a test view of the DSN8910.DEPT table. v TEST.DEPT view, with the following format:
DEPTNO . . .
MGRNO . . .
Because the application does not change any data in the DSN8910.DEPT table, you can base the view on the table itself (rather than on a test table). However, a safer approach is to have a complete set of test tables and to test the program thoroughly using only test data.
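The following SQL sketches one way to create these test objects. The database and table space names (TESTDB.TESTTS) are assumptions, and TEST.EMP is created with only the columns listed in the table above:

CREATE TABLE TEST.EMP
  (EMPNO    CHAR(6),
   LASTNAME VARCHAR(15),
   WORKDEPT CHAR(3),
   PHONENO  CHAR(4),
   JOB      DECIMAL(3))
  IN TESTDB.TESTTS;

CREATE TABLE TEST.PROJ LIKE DSN8910.PROJ
  IN TESTDB.TESTTS;

CREATE VIEW TEST.DEPT (DEPTNO, MGRNO)
  AS SELECT DEPTNO, MGRNO
       FROM DSN8910.DEPT;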
DEC31 DECIMAL(3,1), DEC32 DECIMAL(3,2), DEC33 DECIMAL(3,3), DEC10 DECIMAL(1,0), DEC11 DECIMAL(1,1), DEC150 DECIMAL(15,0), DEC151 DECIMAL(15,1), DEC1515 DECIMAL(15,15) ) IN SPUFIDB.SPUFITS ;
Related reference: CREATE DATABASE (DB2 SQL) CREATE STOGROUP (DB2 SQL) CREATE TABLE (DB2 SQL) CREATE TABLESPACE (DB2 SQL)
Related concepts: DB2 sample applications on page 1126 Related tasks: Inserting rows by using the INSERT statement on page 637 Inserting rows into a table from another table on page 639 Inserting data and updating data in a single operation on page 643 Related reference: LOAD (DB2 Utilities) UNLOAD (DB2 Utilities)
Important: Ensure that the TSO terminal CCSID matches the DB2 CCSID. If these CCSIDs do not match, data corruption can occur. If SPUFI issues the warning message DSNE345I, terminate your SPUFI session and notify the system administrator. Before you begin this task, you can specify whether TSO message IDs are displayed by using the TSO PROFILE command. To view message IDs, type TSO PROFILE MSGID on the ISPF command line. To suppress message IDs, type TSO PROFILE NOMSGID. These instructions assume that ISPF is available to you. To execute SQL by using SPUFI:
Procedure
1. Open SPUFI and specify the initial options. 2. Optional: Changing SPUFI defaults on page 1074 3. Enter SQL statements in SPUFI. 4. Process SQL statements with SPUFI.
Results
Opening SPUFI and specifying initial options:

To begin using SPUFI, you need to open and fill out the SPUFI panel. To open SPUFI and specify initial options:
1. Select SPUFI from the DB2I Primary Option Menu as shown in Figure 52 on page 1010. The SPUFI panel is displayed.
2. Specify the input data set name and output data set name. An example of a SPUFI panel in which an input data set and output data set have been specified is shown in the following figure.
DSNESP01                          SPUFI                           SSID: DSN
===>
Enter the input data set name:      (Can be sequential or partitioned)
 1  DATA SET NAME..... ===> EXAMPLES(XMP1)
 2  VOLUME SERIAL..... ===>           (Enter if not cataloged)
 3  DATA SET PASSWORD. ===>           (Enter if password protected)
Enter the output data set name:     (Must be a sequential data set)
 4  DATA SET NAME..... ===> RESULT
Specify processing options:
 5  CHANGE DEFAULTS... ===> Y         (Display SPUFI defaults panel?)
 6  EDIT INPUT........ ===> Y         (Enter SQL statements?)
 7  EXECUTE........... ===> Y         (Execute SQL statements?)
 8  AUTOCOMMIT........ ===> Y         (Commit after successful run?)
 9  BROWSE OUTPUT..... ===> Y         (Browse output data set?)
For remote SQL processing:
10  CONNECT LOCATION   ===>

PRESS: ENTER to process   END to exit   HELP for more information
3. Optional: Specify new values in any of the other fields on the SPUFI panel. For more information about these fields, see The SPUFI panel on page 1072.
Entering SQL statements in SPUFI: After you open SPUFI, specify the initial options, and optionally change any SPUFI defaults, you can enter one or more SQL statements to execute. Before you begin this task, you must complete the task "Opening SPUFI and specifying initial options." If the input data set that you specified on the SPUFI panel already contains all of the SQL statements that you want to execute, you can bypass this editing step by specifying NO for the EDIT INPUT field on the SPUFI panel. To enter SQL statements by using SPUFI: 1. If the EDIT panel is not already open, on the SPUFI panel, specify Y in the EDIT INPUT field and press ENTER. If the input data set that you specified is empty, an empty EDIT panel opens. Otherwise, if the input data set contained SQL statements, those SQL statements are displayed in an EDIT panel. 2. On the EDIT panel, use the ISPF EDIT program to enter or edit any SQL statements that you want to execute. Move the cursor to the first blank input line, and enter the first part of an SQL statement. You can enter the rest of the SQL statement on subsequent lines, as shown in the following figure:
EDIT --------userid.EXAMPLES(XMP1) --------------------- COLUMNS 001 072
COMMAND INPUT ===> SAVE                                  SCROLL ===> PAGE
********************************** TOP OF DATA ***********************
000100  SELECT LASTNAME, FIRSTNME, PHONENO
000200    FROM DSN8910.EMP
000300    WHERE WORKDEPT = 'D11'
000400    ORDER BY LASTNAME;
********************************* BOTTOM OF DATA *********************
Consider the following rules and recommendations when editing this input data set: v Indent your lines and enter your statements on several lines to make your statements easier to read. Entering your statements on multiple lines does not change how your statements are processed. v Do not put more than one SQL statement on a single line. If you do, the first statement executes, but DB2 ignores the other SQL statements on the same line. You can put more than one SQL statement in the input data set. DB2 executes the statements in the order in which you placed them in the data set. v End each SQL statement with the statement terminator that you specified on the CURRENT SPUFI DEFAULTS panel. v Save the data set every 10 minutes or so by entering the SAVE command. 3. Press the END PF key. The data set is saved, and the SPUFI panel is displayed. Processing SQL statements with SPUFI: You can use SPUFI to submit the SQL statements in a data set to DB2. Before you begin this task, you must: v Complete the task "Opening SPUFI and specifying initial options."
v Ensure that the input data set contains the SQL statements that you want to execute. To process SQL statements by using SPUFI: 1. On the SPUFI panel, specify YES in the EXECUTE field. 2. If you did not just finish using the EDIT panel to edit the input data set as described in "Entering SQL statements in SPUFI," specify NO In the EDIT INPUT field. 3. Press Enter. SPUFI passes the input data set to DB2 for processing. DB2 executes the SQL statement in the input data set and sends the output to the output data set. The output data set opens. Your SQL statement might take a long time to execute, depending on how large a table DB2 must search, or on how many rows DB2 must process. In this case, you can interrupt the processing by pressing the PA1 key. Then respond to the message that asks you if you really want to stop processing. This action cancels the executing SQL statement. Depending on how much of the input data set DB2 was able to process before you interrupted its processing, DB2 might not have opened the output data set yet, or the output data set might contain all or part of the results data that are produced so far. For information about how to interpret the output in the output data set, see Output from SPUFI on page 1081. SQL statements that exceed resource limit thresholds: Your system administrator might use the DB2 resource limit facility (governor) to set time limits for processing SQL statements in SPUFI. Those limits can be error limits or warning limits. If you execute an SQL statement through SPUFI that runs longer than this error time limit, SPUFI terminates processing of that SQL statement and all statements that follow in the SPUFI input data set. SPUFI displays a panel that lets you commit or roll back the previously uncommitted changes that you have made. That panel is shown in the following figure.
DSNESP04                                                          SSID: DSN
===>

The following SQL statement has encountered an SQLCODE of -905 or -495:

   Statement text

Your SQL statement has exceeded the resource utilization threshold set by
your site administrator. You must ROLLBACK or COMMIT all the changes made
since the last COMMIT. SPUFI processing for the current input file will
terminate immediately after the COMMIT or ROLLBACK is executed.

 1  NEXT ACTION ===>                  (Enter COMMIT or ROLLBACK)

PRESS: ENTER to process   HELP for more information
If you execute an SQL statement through SPUFI that runs longer than the warning time limit for predictive governing, SPUFI displays the SQL STATEMENT RESOURCE LIMIT EXCEEDED panel. On this panel, you can tell DB2 to continue executing that statement, or stop processing that statement and continue to the next statement in the SPUFI input data set. That panel is shown in the following figure.
DSNESP05                SQL STATEMENT RESOURCE LIMIT EXCEEDED     SSID: DSN
===>

The following SQL statement has encountered an SQLCODE of 495:

   Statement text

You can now either CONTINUE executing this statement or BYPASS the
execution of this statement. SPUFI processing for the current input file
will continue after the CONTINUE or BYPASS processing is completed.

 1  NEXT ACTION ===>                  (Enter CONTINUE or BYPASS)

PRESS: ENTER to process   HELP for more information
Related tasks: Controlling resource usage (DB2 Performance) Related reference: ISPF User's Guide Vol II (z/OS V1R7.0 ISPF User's Guide Vol II)
SPUFI
You can use SPUFI to execute SQL statements dynamically.
SPUFI can execute SQL statements that retrieve Unicode UTF-16 graphic data. However, SPUFI might not be able to display some characters, if those characters have no mapping in the target SBCS EBCDIC CCSID.
The device must be a direct-access storage device, and you must be authorized to allocate space on that device. Attributes required for the output data set are: v Organization: sequential v Record format: F, FB, FBA, V, VB, or VBA v Record length: 80 to 32768 bytes, not less than the input data set Executing SQL by using SPUFI on page 1067 shows the simplest choice, entering RESULT. SPUFI allocates a data set named userid.RESULT and sends all output to that data set. If a data set named userid.RESULT already exists, SPUFI sends DB2 output to it, replacing all existing data. 5 CHANGE DEFAULTS Enables you to change control values and characteristics of the output data set and format of your SPUFI session. If you specify Y(YES) you can look at the SPUFI defaults panel. See Changing SPUFI defaults on page 1074 for more information about the values you can specify and how they affect SPUFI processing and output characteristics. You do not need to change the SPUFI defaults for this example. 6 EDIT INPUT To edit the input data set, leave Y(YES) on line 6. You can use the ISPF editor to create a new member of the input data set and enter SQL statements in it. (To process a data set that already contains a set of SQL statements you want to execute immediately, enter N (NO). Specifying N bypasses the step 3 described in Executing SQL by using SPUFI on page 1067.) 7 EXECUTE To execute SQL statements contained in the input data set, leave Y(YES) on line 7. SPUFI handles the SQL statements that can be dynamically prepared. 8 AUTOCOMMIT To make changes to the DB2 data permanent, leave Y(YES) on line 8. Specifying Y makes SPUFI issue COMMIT if all statements execute successfully. If all statements do not execute successfully, SPUFI issues a ROLLBACK statement, which deletes changes already made to the file (back to the last commit point). If you specify N, DB2 displays the SPUFI COMMIT OR ROLLBACK panel after it executes the SQL in your input data set. That panel prompts you to COMMIT, ROLLBACK, or DEFER any updates made by the SQL. If you enter DEFER, you neither commit nor roll back your changes. 9 BROWSE OUTPUT To look at the results of your query, leave Y(YES) on line 9. SPUFI saves the results in the output data set. You can look at them at any time, until you delete or write over the data set. 10 CONNECT LOCATION Specify the name of the database server, if applicable, to which you want to submit SQL statements. SPUFI then issues a type 2 CONNECT statement to this server. SPUFI is a locally bound package. SQL statements in the input data set can process only if the CONNECT statement is successful. If the connect request fails, the output data set contains the resulting SQL return codes and error messages.
Related reference: Characteristics of SQL statements in DB2 for z/OS(DB2 SQL) COMMIT (DB2 SQL) ROLLBACK (DB2 SQL)
Procedure
To change the SPUFI defaults:
1. On the SPUFI panel, specify YES in the CHANGE DEFAULTS field.
2. Press Enter. The CURRENT SPUFI DEFAULTS panel opens. The following figure shows the initial default values.
DSNESP02                    CURRENT SPUFI DEFAULTS                SSID: DSN
===>
Enter the following to control your SPUFI session:
 1  SQL TERMINATOR .. ===> ;       (SQL Statement Terminator)
 2  ISOLATION LEVEL   ===> RR      (RR=Repeatable Read, CS=Cursor Stability,
                                    UR=Uncommitted Read)
 3  MAX SELECT LINES  ===> 250     (Maximum lines to be returned from a SELECT)
 4  ALLOW SQL WARNINGS===> NO      (Continue fetching after SQL warning)
 5  CHANGE PLAN NAMES ===> NO      (Change the plan names used by SPUFI)
 6  SQL FORMAT ...... ===> SQL     (SQL, SQLCOMNT, or SQLPL)
Output data set characteristics:
 7  SPACE UNIT ...... ===> TRK     (TRK or CYL)
 8  PRIMARY SPACE ... ===> 5       (Primary space allocation 1-999)
 9  SECONDARY SPACE . ===> 6       (Secondary space allocation 0-999)
10  RECORD LENGTH ... ===> 4092    (LRECL= logical record length)
11  BLOCKSIZE ....... ===> 4096    (Size of one block)
12  RECORD FORMAT.... ===> VB      (RECFM= F, FB, FBA, V, VB, or VBA)
13  DEVICE TYPE...... ===> SYSDA   (Must be a DASD unit name)
Output format characteristics:
14  MAX NUMERIC FIELD ===> 33      (Maximum width for numeric field)
15  MAX CHAR FIELD .. ===> 80      (Maximum width for character field)
16  COLUMN HEADING .. ===> NAMES   (NAMES, LABELS, ANY, or BOTH)

PRESS: ENTER to process   END to exit   HELP for more information
3. Specify any new values in the fields of this panel. All fields must contain a value. 4. Press Enter. SPUFI saves your changes and one of the following panels or data sets open: v The CURRENT SPUFI DEFAULTS - PANEL 2 panel. This panel opens if you specified YES in the CHANGE PLAN NAMES field.
v EDIT panel. This panel opens if you specified YES in the EDIT INPUT field on the SPUFI panel. v Output data set. This data set opens if you specified NO in the EDIT INPUT field on the SPUFI panel. v SPUFI panel. This panel opens if you specified NO for all of the processing options on the SPUFI panel. If you press the END key on the CURRENT SPUFI DEFAULTS panel, the SPUFI panel is displayed, and you lose all the changes that you made on the CURRENT SPUFI DEFAULTS panel. 5. If the CURRENT SPUFI DEFAULTS - PANEL 2 panel opens, specify values for the fields on that panel and press Enter. All fields must contain a value. Important: If you specify an invalid or incorrect plan name, SPUFI might experience operational errors or your data might be contaminated. SPUFI saves your changes and one of the following panels or data sets open: v EDIT panel. This panel opens if you specified YES in the EDIT INPUT field on the SPUFI panel. v Output data set. This data set opens if you specified NO in the EDIT INPUT field on the SPUFI panel. v SPUFI panel. This panel opens if you specified NO for all of the processing options on the SPUFI panel.
Results
Next, continue with one of the following tasks: v If you want to add SQL statements to the input data set or edit the SQL statements in the input data set, enter SQL statements in SPUFI. v Otherwise if the input data set already contains the SQL statements that you want to execute, process SQL statements with SPUFI. Related reference: CURRENT SPUFI DEFAULTS panel CURRENT SPUFI DEFAULTS - PANEL 2 panel on page 1078
Table 166. Invalid special characters for the SQL terminator (continued)
Name               Character   Hexadecimal representation
right parenthesis  )           X'5D'
single quote       '           X'7D'
underscore         _           X'6D'
Use a character other than a semicolon if you plan to execute a statement that contains embedded semicolons. For example, suppose you choose the character # as the statement terminator. Then a CREATE TRIGGER statement with embedded semicolons looks like the following statement:
CREATE TRIGGER NEW_HIRE AFTER INSERT ON EMP FOR EACH ROW MODE DB2SQL BEGIN ATOMIC UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1; END#
A CREATE PROCEDURE statement with embedded semicolons looks like the following statement:
CREATE PROCEDURE PROC1 (IN PARM1 INT, OUT SCODE INT) LANGUAGE SQL BEGIN DECLARE SQLCODE INT; DECLARE EXIT HANDLER FOR SQLEXCEPTION SET SCODE = SQLCODE; UPDATE TBL1 SET COL1 = PARM1; END #
Be careful to choose a character for the SQL terminator that is not used within the statement.

You can also set or change the SQL terminator within a SPUFI input data set by using the --#SET TERMINATOR statement.

2 ISOLATION LEVEL
   Specify the isolation level for your SQL statements.
3 MAX SELECT LINES
   The maximum number of rows that a SELECT statement can return. To limit the number of rows retrieved, enter another maximum number greater than 1.
4 ALLOW SQL WARNINGS
   Enter YES or NO to indicate whether SPUFI will continue to process an SQL statement after receiving SQL warnings:
   YES  If a warning occurs when SPUFI executes an OPEN or FETCH for a SELECT statement, SPUFI continues to process the SELECT statement.
   NO   If a warning occurs when SPUFI executes an OPEN or FETCH for a SELECT statement, SPUFI stops processing the SELECT statement. If SQLCODE +802 occurs when SPUFI executes a FETCH for a SELECT statement, SPUFI continues to process the SELECT statement.
   You can also control whether SPUFI tolerates SQL warnings within a SPUFI input data set by using the --#SET TOLWARN statement.
5 CHANGE PLAN NAMES If you enter YES in this field, you can change plan names on a subsequent SPUFI defaults panel, DSNESP07. Enter YES in this field only if you are certain that you want to change the plan names that are used by SPUFI. Consult with your DB2 system administrator if you are uncertain whether you want to change the plan names. Using an invalid or incorrect plan name might cause SPUFI to experience operational errors or it might cause data contamination. | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 6 SQL FORMAT Specify how SPUFI pre-processes the SQL input before passing it to DB2. Select one of the following options: SQL This is the preferred mode for SQL statements other than SQL procedural language. When you use this option, which is the default, SPUFI collapses each line of an SQL statement into a single line before passing the statement to DB2. SPUFI also discards all SQL comments.
SQLCOMNT This mode is suitable for all SQL, but it is intended primarily for SQL procedural language processing. When this option is in effect, behavior is similar to SQL mode, except that SPUFI does not discard SQL comments. Instead, it automatically terminates each SQL comment with a line feed character (hex 25), unless the comment is already terminated by one or more line formatting characters. Use this option to process SQL procedural language with minimal modification by SPUFI. SQLPL This mode is suitable for all SQL, but it is intended primarily for SQL procedural language processing. When this option is in effect, SPUFI retains SQL comments and terminates each line of an SQL statement with a line feed character (hex 25) before passing the statement to DB2. Lines that end with a split token are not terminated with a line feed character. Use this mode to obtain improved diagnostics and debugging of SQL procedural language. You can also specify how SPUFI pre-processes the SQL input by using the --#SET SQLFORMAT statement. 7 SPACE UNIT Specify how space for the SPUFI output data set is to be allocated. TRK CYL Track Cylinder
8 PRIMARY SPACE Specify how many tracks or cylinders of primary space are to be allocated. 9 SECONDARY SPACE Specify how many tracks or cylinders of secondary space are to be allocated. 10 RECORD LENGTH The record length must be at least 80 bytes. The maximum record length depends on the device type you use. The default value allows a 32756-byte record.
Each record can hold a single line of output. If a line is longer than a record, the output is truncated, and SPUFI discards fields that extend beyond the record length. | 11 BLOCKSIZE Follow the normal rules for selecting the block size. For record format F, the block size is equal to the record length. For FB and FBA, choose a block size that is an even multiple of LRECL. For VB and VBA only, the block size must be 4 bytes larger than the block size for FB or FBA. 12 RECORD FORMAT Specify F, FB, FBA, V, VB, or VBA. FBA and VBA formats insert a printer control character after the number of lines specified in the LINES/PAGE OF LISTING field on the DB2I Defaults panel. The record format default is VB (variable-length blocked). 13 DEVICE TYPE Specify a standard z/OS name for direct-access storage device types. The default is SYSDA. SYSDA specifies that z/OS is to select an appropriate direct access storage device. 14 MAX NUMERIC FIELD The maximum width of a numeric value column in your output. Choose a value greater than 0. The default is 33. 15 MAX CHAR FIELD The maximum width of a character value column in your output. DATETIME and GRAPHIC data strings are externally represented as characters, and SPUFI includes their defaults with the default values for character fields. Choose a value greater than 0. The IBM-supplied default is 250. 16 COLUMN HEADING You can specify NAMES, LABELS, ANY, or BOTH for column headings. v NAMES uses column names only. v LABELS (default) uses column labels. Leave the title blank if no label exists. v ANY uses existing column labels or column names. v BOTH creates two title lines, one with names and one with labels. Column names are the column identifiers that you can use in SQL statements. If an SQL statement has an AS clause for a column, SPUFI displays the contents of the AS clause in the heading, rather than the column name. You define column labels with LABEL statements. Related concepts: Output from SPUFI on page 1081 Related tasks: Changing SPUFI defaults on page 1074 Executing SQL by using SPUFI on page 1067
DSNESP07              CURRENT SPUFI DEFAULTS - PANEL 2            SSID: DSN
===>
Enter the following to control your SPUFI session:
 1  CS ISOLATION PLAN ===> DSNESPCS   (Name of plan for CS isolation level)
 2  RR ISOLATION PLAN ===> DSNESPRR   (Name of plan for RR isolation level)
 3  UR ISOLATION PLAN ===> DSNESPUR   (Name of plan for UR isolation level)
 4  BLANK CCSID ALERT ===> YES

PRESS: ENTER to process   END to exit   HELP for more information
The following descriptions explain the information on the CURRENT SPUFI DEFAULTS - PANEL 2 panel.
1 CS ISOLATION PLAN
   Specify the name of the plan that SPUFI uses when you specify an isolation level of cursor stability (CS). By default, this name is DSNESPCS.
2 RR ISOLATION PLAN
   Specify the name of the plan that SPUFI uses when you specify an isolation level of repeatable read (RR). By default, this name is DSNESPRR.
3 UR ISOLATION PLAN
   Specify the name of the plan that SPUFI uses when you specify an isolation level of uncommitted read (UR). By default, this name is DSNESPUR.
4 BLANK CCSID ALERT
   Indicate whether to receive message DSNE345I when the terminal CCSID setting is blank. A blank terminal CCSID setting occurs when the terminal code page and character set cannot be queried or if they are not supported by ISPF.
   Recommendation: To avoid possible data contamination, use the default setting of YES, unless you are specifically directed by your DB2 system administrator to use NO.
To set the SQL terminator character in a SPUFI input data set, specify the text --#SET TERMINATOR character before that SQL statement to which you want this character to apply. This text specifies that SPUFI is to interpret character as a statement terminator. You can specify any single-byte character except the characters that are listed in Table 167. Choose a character for the SQL terminator that is not used within the statement. The terminator that you specify overrides a terminator that you specified in option 1 of the CURRENT SPUFI DEFAULTS panel or in a previous --#SET TERMINATOR statement.
Table 167. Invalid special characters for the SQL terminator
Name               Character  Hexadecimal representation
blank                         X'40'
comma              ,          X'5E'
double quote       "          X'7F'
left parenthesis   (          X'4D'
right parenthesis  )          X'5D'
single quote       '          X'7D'
underscore         _          X'6D'
Use a character other than a semicolon if you plan to execute a statement that contains embedded semicolons. For example, suppose that you choose the character # as the statement terminator. In this case, a CREATE TRIGGER statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END#
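To see how the override interacts with the default terminator in a single input data set, the following minimal sketch sets # as the terminator for the trigger and then switches back to the semicolon; the final SELECT statement is illustrative only:

--#SET TERMINATOR #
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END#
--#SET TERMINATOR ;
SELECT * FROM COMPANY_STATS;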
Example
The following example activates and then deactivates toleration of SQL warnings:
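A minimal sketch of such an input data set follows; the table names MY.T1 and YR.T2 are illustrative only and are not part of the sample database:

SELECT * FROM MY.T1;
--#SET TOLWARN YES
-- the next statement is processed with toleration of SQL warnings active
SELECT * FROM YR.T2;
--#SET TOLWARN NO
-- toleration of SQL warnings is now off again
SELECT * FROM MY.T1;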
Formatting rules for SELECT statement results in SPUFI:
The results of SELECT statements follow these rules:
v If numeric or character data of a column cannot be displayed completely:
  Character values and binary values that are too wide truncate on the right.
  Numeric values that are too wide display as asterisks (*).
  For columns other than LOB and XML columns, if truncation occurs, the output data set contains a warning message. Because LOB and XML columns are generally longer than the value you choose for field MAX CHAR FIELD on panel CURRENT SPUFI DEFAULTS, SPUFI displays no warning message when it truncates LOB or XML column output.
  You can change the amount of data that is displayed for numeric and character columns by changing values on the CURRENT SPUFI DEFAULTS panel, as described in Changing SPUFI defaults on page 1074.
v A null value is displayed as a series of hyphens (-).
v A ROWID, BLOB, BINARY, or VARBINARY column value is displayed in hexadecimal.
v A CLOB column value is displayed in the same way as a VARCHAR column value.
v A DBCLOB column value is displayed in the same way as a VARGRAPHIC column value.
v An XML column is displayed in the same way as a LOB column.
v A heading identifies each selected column, and is repeated at the top of each output page. The contents of the heading depend on the value that you specified in the COLUMN HEADING field of the CURRENT SPUFI DEFAULTS panel.

Content of the messages from SPUFI:
Each SPUFI message contains the following:
v The SQLCODE, if the statement executes successfully.
v The formatted SQLCA, if the statement executes unsuccessfully.
v The character positions of the input data set that SPUFI scanned to find SQL statements. This information helps you check the assumptions that SPUFI made about the location of line numbers (if any) in your input data set.
v Some overall statistics:
  Number of SQL statements that are processed
  Number of input records that are read (from the input data set)
  Number of output records that are written (to the output data set)
Other messages that you could receive from the processing of SQL statements include:
v The number of rows that DB2 processed, that either:
  Your select operation retrieved
  Your update operation modified
  Your insert operation added to a table
  Your delete operation deleted from a table
v Which columns display truncated data because the data was too wide
Procedure

To test a user-defined function by using the Debug Tool for z/OS, choose one of the following approaches:
v To use the Debug Tool interactively:
  1. Compile the user-defined function with the TEST option. This places information in the program that the Debug Tool uses.
  2. Invoke the Debug Tool. One way to do that is to specify the Language Environment run time TEST option. The TEST option controls when and how the Debug Tool is invoked. The most convenient place to specify run time options is with the RUN OPTIONS clause of CREATE FUNCTION or ALTER FUNCTION. For example, suppose that you code this option:
TEST(ALL,*,PROMPT,JBJONES%SESSNA:)
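As an illustration, a sketch of how that option string might be placed on an existing function definition follows; the function name and signature (MYSCHEMA.MYFUNC(INTEGER)) are hypothetical:

ALTER FUNCTION MYSCHEMA.MYFUNC(INTEGER)
  RUN OPTIONS 'TEST(ALL,*,PROMPT,JBJONES%SESSNA:)';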
The parameter values cause the following things to happen:
ALL
   The Debug Tool gains control when an attention interrupt, abend, or program or Language Environment condition of Severity 1 and above occurs.
*
   Debug commands will be entered from the terminal.
PROMPT
   The Debug Tool is invoked immediately after Language Environment initialization.
JBJONES%SESSNA:
   The Debug Tool initiates a session on a workstation identified to APPC as JBJONES with a session ID of SESSNA.
  3. If you want to save the output from your debugging session, issue a command that names a log file. For example, the following command starts logging to a file on the workstation called dbgtool.log:
SET LOG ON FILE dbgtool.log;
     This should be the first command that you enter from the terminal or include in your commands file.
v To use the Debug Tool in batch mode:
  1. If you plan to use the Language Environment run time TEST option to invoke the Debug Tool, compile the user-defined function with the TEST option. This places information in the program that the Debug Tool uses during a debugging session.
  2. Allocate a log data set to receive the output from the Debug Tool. Put a DD statement for the log data set in the startup procedure for the stored procedures address space.
  3. Enter commands in a data set that you want the Debug Tool to execute. Put a DD statement for that data set in the startup procedure for the stored procedures address space. To define the data set that contains the commands to the Debug Tool, specify its data set name or DD name in the TEST run time option. For example, this option tells the Debug Tool to look for the commands in the data set that is associated with DD name TESTDD:
TEST(ALL,TESTDD,PROMPT,*)
     In the commands data set, include a command that directs output from your debugging session to the log data set that you defined in step 2. For example, if you defined a log data set with DD name INSPLOG in the start-up procedure for the stored procedures address space, the first command should be:
SET LOG ON FILE INSPLOG;
  4. Invoke the Debug Tool. The following are two possible methods for invoking the Debug Tool:
     Specify the Language Environment run time TEST option. The most convenient place to do that is in the RUN OPTIONS parameter of CREATE FUNCTION or ALTER FUNCTION.
     Put CEETEST calls in the user-defined function source code. If you use this approach for an existing user-defined function, you must compile, link-edit, and bind the user-defined function again. Then you must issue the STOP FUNCTION SPECIFIC and START FUNCTION SPECIFIC commands to reload the user-defined function.
     You can combine the Language Environment run time TEST option with CEETEST calls. For example, you might want to use TEST to name the commands data set but use CEETEST calls to control when the Debug Tool takes control.
Related reference:
Components of a user-defined function definition on page 498
Debug Tool for z/OS
Procedure
To debug a stored procedure, perform one or more of the following actions:
v Take one or more of the following general actions, which are appropriate in many situations with stored procedures:
  Ensure that all stored procedures are written to handle any SQL errors.
  Debug stored procedures as stand-alone programs on a workstation. If you have debugging tools on a workstation, consider doing most of your development and testing on a workstation before installing a stored procedure on z/OS. This technique results in very little debugging activity on z/OS.
  Record stored procedure debugging messages to a disk file or JES spool file.
  Store debugging information in a table. This technique is especially useful for remote stored procedures.
  Use the DISPLAY command to view information about particular stored procedures, including statistics and thread information. In the stored procedure that you are debugging, issue DISPLAY commands. You can view the DISPLAY results in the SDSF output. The DISPLAY results can help you find information about the started task that is associated with the address space for the WLM application environment.
  If necessary, use the STOP PROCEDURE command to stop calls to one or more problematic stored procedures. You can restart them later.
v If your stored procedures address space has the CEEDUMP data set allocated, look at the diagnostic information in the CEEDUMP output.
v For COBOL, C, and C++ stored procedures, use the Debug Tool for z/OS.
v For COBOL stored procedures, compile the stored procedure with the option TEST(SYM) if you want a formatted local variable dump to be included in the CEEDUMP output.
v For native SQL procedures, external SQL procedures, and Java stored procedures, use the Unified Debugger.
v For external stored procedures, consider taking one or both of the following actions:
  Use a driver application.
  Create or alter the stored procedure definition to include the PARAMETER STYLE SQL option. This option enables the stored procedure to share any error information with the calling application. Ensure that your procedure follows linkage conventions for stored procedures.
v If you changed a stored procedure or a startup JCL procedure for a WLM application environment, determine whether you need to refresh the WLM environment. You must refresh the WLM environment before certain stored procedure changes take effect.
Related tasks:
Handling SQL conditions in an SQL procedure on page 556
Displaying information about stored procedures with DB2 commands (DB2 Administration Guide)
Refreshing WLM application environments for stored procedures (DB2 Administration Guide)
Implementing DB2 stored procedures (DB2 Administration Guide)
Related reference:
Linkage conventions for external stored procedures on page 599
-START PROCEDURE (DB2) (DB2 Commands)
-STOP PROCEDURE (DB2) (DB2 Commands)
Debugging COBOL, PL/I, and C/C++ procedures on z/OS (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
Debugging SQL procedures on z/OS, Linux, UNIX, and Windows (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
Debugging stored procedures with the Debug Tool and IBM VisualAge COBOL
If you have VisualAge COBOL installed on your workstation and the Debug Tool installed on your z/OS system, you can use the VisualAge COBOL Edit/Compile/Debug component with the Debug Tool to debug COBOL stored procedures that run in a WLM-established stored procedures address space.
Procedure
To debug with the Debug Tool and IBM VisualAge COBOL:
1. When you compile the stored procedure, specify the TEST and SOURCE options. Ensure that the source listing is stored in a permanent data set. VisualAge COBOL displays the source listing during the debug session.
2. When you define the stored procedure, include run time option TEST with the suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
   VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that runs VisualAge COBOL and is configured for TCP/IP communication with your z/OS system. ipaddr is the IP address of the workstation on which you display your debug information. For example, the RUN OPTIONS value in the following stored procedure definition indicates that debug information should go to the workstation with IP address 9.63.51.17:
CREATE PROCEDURE WLMCOB
       (IN INTEGER, INOUT VARCHAR(3000), INOUT INTEGER)
       MODIFIES SQL DATA
       LANGUAGE COBOL EXTERNAL
       PROGRAM TYPE MAIN
       WLM ENVIRONMENT WLMENV1
       RUN OPTIONS 'POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)'
3. In the JCL startup procedure for the WLM-established stored procedures address space, add the data set name of the Debug Tool load library to the STEPLIB concatenation. For example, suppose that ENV1PROC is the JCL procedure for application environment WLMENV1. The modified JCL for ENV1PROC might look like this:
//DSNWLM   PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
//IEFPROC  EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
//         PARM='&DB2SSN,&NUMTCB,&APPLENV'
//STEPLIB  DD DISP=SHR,DSN=DSN910.RUNLIB.LOAD
//         DD DISP=SHR,DSN=CEE.SCEERUN
//         DD DISP=SHR,DSN=DSN910.SDSNLOAD
//         DD DISP=SHR,DSN=EQAW.SEQAMOD      <== DEBUG TOOL
4. On the workstation, start the VisualAge Remote Debugger daemon. This daemon waits for incoming requests from TCP/IP.
5. Call the stored procedure. When the stored procedure starts, a window that contains the debug session is displayed on the workstation. You can then execute Debug Tool commands to debug the stored procedure.
Related reference:
Debug Tool for z/OS
Debugging a C language stored procedure with the Debug Tool and C/C++ Productivity Tools for z/OS
You can debug a C or C++ stored procedure that runs in a WLM-established stored procedures address space. You must have the C/C++ Productivity Tools for z/OS installed on your workstation and the Debug Tool installed on your z/OS system.
Procedure
To test the stored procedure with the Distributed Debugger feature of the C/C++ Productivity Tools for z/OS and the Debug Tool:
1. When you define the stored procedure, include run time option TEST with the suboption VADTCPIP&ipaddr in your RUN OPTIONS argument.
   VADTCPIP& tells the Debug Tool that it is interfacing with a workstation that runs VisualAge C++ and is configured for TCP/IP communication with your z/OS system. ipaddr is the IP address of the workstation on which you display your debug information. For example, this RUN OPTIONS value in a stored procedure definition indicates that debug information should go to the workstation with IP address 9.63.51.17:
RUN OPTIONS 'POSIX(ON),TEST(,,,VADTCPIP&9.63.51.17:*)'
2. Precompile the stored procedure. Ensure that the modified source program that is the output from the precompile step is in a permanent, catalogued data set.
3. Compile the output from the precompile step. Specify the TEST, SOURCE, and OPT(0) compiler options.
4. In the JCL startup procedure for the stored procedures address space, add the data set name of the Debug Tool load library to the STEPLIB concatenation. For example, suppose that ENV1PROC is the JCL procedure for application environment WLMENV1. The modified JCL for ENV1PROC might look like this:
//DSNWLM   PROC RGN=0K,APPLENV=WLMENV1,DB2SSN=DSN,NUMTCB=8
//IEFPROC  EXEC PGM=DSNX9WLM,REGION=&RGN,TIME=NOLIMIT,
//         PARM='&DB2SSN,&NUMTCB,&APPLENV'
//STEPLIB  DD DISP=SHR,DSN=DSN910.RUNLIB.LOAD
//         DD DISP=SHR,DSN=CEE.SCEERUN
//         DD DISP=SHR,DSN=DSN910.SDSNLOAD
//         DD DISP=SHR,DSN=EQAW.SEQAMOD      <== DEBUG TOOL
5. On the workstation, start the Distributed Debugger daemon. This daemon waits for incoming requests from TCP/IP.
6. Call the stored procedure. When the stored procedure starts, a window that contains the debug session is displayed on the workstation. You can then execute Debug Tool commands to debug the stored procedure.
Related reference:
Debug Tool for z/OS
Procedure
To debug stored procedures by using the Unified Debugger:
1. Set up the Unified Debugger by performing the following steps:
   a. Customize and run the post-installation job DSNTIJSD in data set DSN910.SDSNSAMP. This JCL defines to DB2 the stored procedures that provide server support for the Unified Debugger.
   b. Define the debug mode characteristics for the stored procedure that you want to debug by completing one of the following actions:
      v For a native SQL procedure, define the procedure with the ALLOW DEBUG MODE option and the WLM ENVIRONMENT FOR DEBUG MODE option. If the procedure already exists, you can use the ALTER PROCEDURE statement to specify these options.
      v For an external SQL procedure, use DSNTPSMP or IBM Optim Development Studio to build the SQL procedure with the BUILD_DEBUG option.
      v For a Java stored procedure, define the procedure with the ALLOW DEBUG MODE option, select an appropriate WLM environment for Java debugging, and compile the Java code with the -G option.
   c. Grant the DEBUGSESSION privilege to the user who runs the debug client.
2. Include breakpoints in your routines or executable files.
3. Follow the instructions for debugging stored procedures in the information for IBM Optim Development Studio.
Related concepts:
Java stored procedures and user-defined functions (DB2 Application Programming for Java)
Related tasks:
Creating an external SQL procedure by using DSNTPSMP on page 580
Developing database routines (IBM Data Studio, IBM Optim Database Administrator, IBM InfoSphere Data Architect, IBM Optim Development Studio)
Related reference:
Sample programs to help you prepare and run external SQL procedures on page 594
ALTER PROCEDURE (SQL - native) (DB2 SQL)
CREATE PROCEDURE (SQL - native) (DB2 SQL)
The Unified Debugger (DB2 9 for z/OS Stored Procedures: Through the CALL and Beyond)
Procedure
To debug your stored procedure using the Debug Tool:
1. Compile the stored procedure with option TEST. This places information in the program that the Debug Tool uses during a debugging session.
2. Invoke the Debug Tool. One way to do that is to specify the Language Environment run time option TEST. The TEST option controls when and how the Debug Tool is invoked. The most convenient place to specify run time options is in the RUN OPTIONS parameter of the CREATE PROCEDURE or ALTER PROCEDURE statement for the stored procedure. For example, you can code the TEST option using the following parameters:
TEST(ALL,*,PROMPT,JBJONES%SESSNA:)
The following table lists the effects that each parameter has on the Debug Tool:
Table 168. Effects of the TEST option parameters on the Debug Tool
Parameter value    Effect on the Debug Tool
ALL                The Debug Tool gains control when an attention interrupt, ABEND, or program or Language Environment condition of Severity 1 and above occurs.
*                  Debug commands will be entered from the terminal.
PROMPT             The Debug Tool is invoked immediately after Language Environment initialization.
JBJONES%SESSNA:    The Debug Tool initiates a session on a workstation identified to APPC/MVS as JBJONES with a session ID of SESSNA.
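As an illustration, a sketch of how these parameters might be supplied on an existing external stored procedure definition follows; the procedure name MYSCHEMA.MYPROC is hypothetical:

ALTER PROCEDURE MYSCHEMA.MYPROC
  RUN OPTIONS 'TEST(ALL,*,PROMPT,JBJONES%SESSNA:)';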
3. If you want to save the output from your debugging session, issue the following command:
SET LOG ON FILE dbgtool.log;
This command saves a log of your debugging session to a file on the workstation called dbgtool.log. This should be the first command that you enter from the terminal or include in your commands file.
Results
Using Debug Tool in batch mode: To test your stored procedure in batch mode, you must have the Debug Tool installed on the z/OS system where the stored procedure runs. To debug your stored procedure in batch mode using the Debug Tool, do the following:
v Compile the stored procedure with option TEST, if you plan to use the Language Environment run time option TEST to invoke the Debug Tool. This places information in the program that the Debug Tool uses during a debugging session.
v Allocate a log data set to receive the output from the Debug Tool. Put a DD statement for the log data set in the start-up procedure for the stored procedures address space.
v Enter commands in a data set that you want the Debug Tool to execute. Put a DD statement for that data set in the start-up procedure for the stored procedures address space. To define the commands data set to the Debug Tool, specify the commands data set name or DD name in the TEST run time option. For example, to specify that the Debug Tool use the commands that are in the data set that is associated with the DD name TESTDD, include the following parameter in the TEST option:
TEST(ALL,TESTDD,PROMPT,*)
  In the commands data set, include a command that directs output from your debugging session to the log data set that you defined in the previous step. For example, if you defined a log data set with DD name INSPLOG in the stored procedures address space start-up procedure, the first command should be the following command:
SET LOG ON FILE INSPLOG;
v Invoke the Debug Tool. The following are two possible methods for invoking the Debug Tool:
  Specify the run time option TEST. The most convenient place to do that is in the RUN OPTIONS parameter of the CREATE PROCEDURE or ALTER PROCEDURE statement for the stored procedure.
  Put CEETEST calls in the stored procedure source code. If you use this approach for an existing stored procedure, you must recompile, re-link, and bind it, and issue the STOP PROCEDURE and START PROCEDURE commands to reload the stored procedure.
  You can combine the run time option TEST with CEETEST calls. For example, you might want to use TEST to name the commands data set but use CEETEST calls to control when the Debug Tool takes control.
Related reference:
Debug Tool for z/OS
Procedure
To record stored procedure debugging messages in a file:
1. Specify the Language Environment (LE) MSGFILE runtime option for the stored procedure. This option identifies where LE is to write the debugging messages. To specify this option, include the RUN OPTIONS clause in either the CREATE PROCEDURE statement or an ALTER PROCEDURE statement. Specify the following MSGFILE parameters:
   v Use the first MSGFILE parameter to specify the JCL DD statement that identifies the data set for the debugging messages. You can direct debugging messages to a disk file or JES spool file. To prevent multiple procedures from sharing a data set, ensure that you specify a unique DD statement.
   v Use the ENQ option to serialize I/O to the message file. This action is necessary because multiple TCBs can be active in the stored procedure address space. Alternatively, if you debug your applications infrequently or on a DB2 test system, you can serialize I/O by temporarily running the stored procedures address space with NUMTCB=1 in the stored procedures address space start-up procedure.
2. For each instance of MSGFILE that you specify, add a DD statement to the JCL procedure that is used to start the stored procedures address space.
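As an illustration of step 1, the following sketch assumes a hypothetical external procedure MYSCHEMA.MYPROC and a DD name OUTDD that you would also add to the address space startup procedure in step 2:

ALTER PROCEDURE MYSCHEMA.MYPROC
  RUN OPTIONS 'MSGFILE(OUTDD,,,,ENQ)';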
Related reference: ALTER PROCEDURE (external) (DB2 SQL) ALTER PROCEDURE (SQL - external) (DB2 SQL) CREATE PROCEDURE (external) (DB2 SQL) CREATE PROCEDURE (SQL - external) (DB2 SQL) GRANT (system privileges) (DB2 SQL) Using Language Environment MSGFILE (z/OS Language Environment Programming Guide)
Related reference: ISPF User's Guide Vol II (z/OS V1R7.0 ISPF User's Guide Vol II)
the ENTER key (after the error message and READY message), the system requests a dump. You then need to use the FREE command to deallocate the dump data set.
DB2 SQL PRECOMPILER STATISTICS
 SOURCE STATISTICS (3)
   SOURCE LINES READ: 36
   NUMBER OF SYMBOLS: 15
   SYMBOL TABLE BYTES EXCLUDING ATTRIBUTES: 1848
 THERE WERE 1 MESSAGES FOR THIS PROGRAM. (4)
 THERE WERE 0 MESSAGES SUPPRESSED BY THE FLAG OPTION. (5)
 111664 BYTES OF STORAGE WERE USED BY THE PRECOMPILER. (6)
 RETURN CODE IS 8 (7)
Notes:
1. Error message.
2. Source SQL statement.
3. Summary statements of source statistics.
4. Summary statement of the number of errors that were detected.
5. Summary statement that indicates the number of errors that were detected but not printed. This situation might occur if you specify a FLAG option other than I.
6. Storage requirement statement that indicates how many bytes of working storage the DB2 precompiler actually used to process your source statements. That value helps you determine the storage allocation requirements for your program.
7. Return code: 0 = success, 4 = warning, 8 = error, 12 = severe error, and 16 = unrecoverable error.
Notes:
1. This section lists the options that are specified at precompilation time. This list does not appear if one of the precompiler options is NOOPTIONS.
2. This section lists the options that are in effect, including defaults, forced values, and options that you specified. The DB2 precompiler overrides or ignores any options that you specify that are inappropriate for the host language.
The following figure shows an example list of source statements as it is displayed in the SYSPRINT output.
DB2 SQL PRECOMPILER      TMN5P40:PROCEDURE OPTIONS (MAIN):                 PAGE 2

     1
     2
     3
   .
   .
   .
  1324   /*************************************************/           00132400
  1325   /* GET INFORMATION ABOUT THE PROJECT FROM THE     */           00132500
  1326   /* PROJECT TABLE.                                 */           00132600
  1327   /*************************************************/           00132700
  1328   EXEC SQL SELECT ACTNO, PREQPROJ, PREQACT                       00132800
  1329            INTO PROJ_DATA                                        00132900
  1330            FROM TPREREQ                                          00133000
  1331            WHERE PROJNO = :PROJ_NO;                              00133100
  1332                                                                  00133200
  1333   /*************************************************/           00133300
  1334   /* PROJECT IS FINISHED. DELETE IT.                */           00133400
  1335   /*************************************************/           00133500
  1336                                                                  00133600
  1337   EXEC SQL DELETE FROM PROJ                                      00133700
  1338            WHERE PROJNO = :PROJ_NO;                              00133800
   .
   .
   .
  1523   END;                                                           00152300
Notes:
v The left column of sequence numbers, which the DB2 precompiler generates, is for use with the symbol cross-reference listing, the precompiler error messages, and the BIND error messages.
v The right column shows sequence numbers that come from the sequence numbers that are supplied with your source statements.
The following figure shows an example list of symbolic names as it is displayed in the SYSPRINT output.
SYMBOL CROSS-REFERENCE LISTING                                        PAGE 29

DATA NAMES      DEFN    REFERENCE
 ...            ****    FIELD
                        1328
 ...            ****    FIELD
                        1328
 ...            ****    FIELD
                        1328
 ...            ****    FIELD
                        1331 1338
PROJ_DATA        495    CHARACTER(35)
                        1329
PROJ_NO          496    CHARACTER(3)
                        1331 1338
"TPREREQ"       ****    TABLE
                        1330 1337
Notes:
DATA NAMES
   Identifies the symbolic names that are used in source statements. Names enclosed in double quotation marks (") or apostrophes (') are names of SQL entities such as tables, columns, and authorization IDs. Other names are host variables.
DEFN
   Is the number of the line that the precompiler generates to define the name. **** means that the object was not defined, or the precompiler did not recognize the declarations.
REFERENCE
   Contains two kinds of information: what the source program defines the symbolic name to be, and which lines refer to the symbolic name. If the symbolic name refers to a valid host variable, the list also identifies the data type or the word STRUCTURE.
The following code shows an example summary report of errors as it is displayed in the SYSPRINT output.
DB2 SQL PRECOMPILER STATISTICS

 SOURCE STATISTICS
   SOURCE LINES READ: 1523 (1)
   NUMBER OF SYMBOLS: 128 (2)
   SYMBOL TABLE BYTES EXCLUDING ATTRIBUTES: 6432 (3)
 THERE WERE 1 MESSAGES FOR THIS PROGRAM. (4)
 THERE WERE 0 MESSAGES SUPPRESSED. (5)
 65536 BYTES OF STORAGE WERE USED BY THE PRECOMPILER. (6)
 RETURN CODE IS 8. (7)
 DSNH104I E LINE 590 COL 64 ILLEGAL SYMBOL: X; VALID SYMBOLS ARE:,FROM (8)

Notes:
1. Summary statement that indicates the number of source lines.
2. Summary statement that indicates the number of symbolic names in the symbol table (SQL names and host names).
3. Storage requirement statement that indicates the number of bytes for the symbol table.
4. Summary statement that indicates the number of messages that are printed.
5. Summary statement that indicates the number of errors that are detected but not printed. You might get this statement if you specify the option FLAG.
6. Storage requirement statement that indicates the number of bytes of working storage that are actually used by the DB2 precompiler to process your source statements.
7. Return code: 0 = success, 4 = warning, 8 = error, 12 = severe error, and 16 = unrecoverable error.
8. Error messages (this example detects only one error).
v The input message that is being processed
v The name of the originating logical terminal
v The failing statement and its function
v The contents of the SQLCA (SQL communication area) and, if your program accepts dynamic SQL statements, the SQLDA (SQL descriptor area)
v The date and time of day
v The PSB name for the program
v The transaction code that the program was processing
v The call function (that is, the name of a DL/I function)
v The contents of the PCB that the program call refers to
v If a DL/I database call was running, the SSAs, if any, that the call used
v The abend completion code, abend reason code, and any dump error messages
When your program encounters an error, it can pass all the required error information to a standard error routine. Online programs can also send an error message to the originating logical terminal. An interactive program also can send a message to the master terminal operator giving information about the termination of the program. To do that, the program places the logical terminal name of the master terminal in an express PCB and issues one or more ISRT calls.
Some organizations run a BMP at the end of the day to list all the errors that occurred during the day. If your organization does this, you can send a message by using an express PCB that has its destination set for that BMP.

Batch Terminal Simulator:
The Batch Terminal Simulator (BTS) enables you to test IMS application programs. BTS traces application program DL/I calls and SQL statements, and it simulates data communication functions. It can make a TSO terminal appear as an IMS terminal to the terminal operator, which enables the end user to interact with the application as though it were an online application. The user can use any application program that is under the user's control to access any database (whether DL/I or DB2) that is under the user's control. Access to DB2 databases requires BTS to operate in batch BMP or TSO BMP mode.
Using CICS facilities, you can have a printed error record; you can also print the SQLCA and SQLDA contents.
TRANSACTION: XC05   PROGRAM: TESTC05   TASK NUMBER: 0000668   DISPLAY: 00
STATUS: ABOUT TO EXECUTE COMMAND
CALL TO RESOURCE MANAGER DSNCSQL
EXEC SQL INSERT
 DBRM=TESTC05, STMT=00368, SECT=00004
 IVAR 001: TYPE=CHAR,     LEN=00007, IND=000  AT X'03C92810'
   DATA=X'F0F0F9F4F3F4F2'
 IVAR 002: TYPE=CHAR,     LEN=00007, IND=000  AT X'03C92817'
   DATA=X'F0F1F3F3F7F5F1'
 IVAR 003: TYPE=CHAR,     LEN=00004, IND=000  AT X'03C9281E'
   DATA=X'E7C3F0F5'
 IVAR 004: TYPE=CHAR,     LEN=00040, IND=000  AT X'03C92822'
   DATA=X'E3C5E2E3C3F0F540E2C9D4D7D3C540C4C2F240C9D5E2C5D9E3404040...'
 IVAR 005: TYPE=SMALLINT, LEN=00002, IND=000  AT X'03C9284A'
   DATA=X'0001'
OFFSET:X'001ECE'   LINE:UNKNOWN   EIBFN=X'1002'
ENTER: CONTINUE PF1 : UNDEFINED PF4 : SUPPRESS DISPLAYS PF7 : SCROLL BACK PF10: PREVIOUS DISPLAY
The DB2 SQL information in this screen is as follows:
v EXEC SQL statement type
  This is the type of SQL statement to execute. The SQL statement can be any valid SQL statement.
v DBRM=dbrm name
  The name of the database request module (DBRM) that is currently processing. The DBRM, created by the DB2 precompiler, contains information about an SQL statement.
v STMT=statement number
  This is the DB2 precompiler-generated statement number. The source and error message listings from the precompiler use this statement number, and you can use the statement number to determine which statement is processing. This number is a source line counter that includes host language statements. A statement number that is greater than 32 767 displays as 0.
v SECT=section number
  The section number of the plan that the SQL statement uses.
v LEN=length
  Specifies the length of the host variable.
v IND=indicator variable status number
  Specifies the indicator variable that is associated with this particular host variable. A value of zero indicates that no indicator variable exists. If the value for the selected column is null, DB2 puts a negative value in the indicator variable for this host variable.
v DATA=host variable data
  Specifies the data, displayed in hexadecimal format, that is associated with this host variable. If the data exceeds what can display on a single line, three periods (...) appear at the far right to indicate that more data is present.
END EDF SESSION USER DISPLAY STOP CONDITIONS ABEND USER TASK
The DB2 SQL information in this screen is as follows:
v P.AUTH=primary authorization ID
  The primary DB2 authorization ID.
v S.AUTH=secondary authorization ID
  The secondary authorization ID. If the RACF list of group options is not active, DB2 uses the connected group name that the CICS attachment facility supplies as the secondary authorization ID. If the RACF list of group options is active, DB2 ignores the connected group name that the CICS attachment facility supplies, but the value is displayed in the DB2 list of secondary authorization IDs.
v PLAN=plan name
  The name of the plan that is currently running. The PLAN represents the control structure that is produced during the bind process and that is used by DB2 to process SQL statements that are encountered while the application is running.
v SQL Communication Area (SQLCA)
Information in the SQLCA. The SQLCA contains information about errors, if any occur. DB2 uses the SQLCA to give an application program information about the executing SQL statements. Plus signs (+) on the left of the screen indicate that you can see additional EDF output by using PF keys to scroll the screen forward or back. The OVAR (output host variables) section and its attendant fields are displayed only when the executing statement returns output host variables. The following figure contains the rest of the EDF output for this example.
TRANSACTION: XC05   PROGRAM: TESTC05   TASK NUMBER: 0000698   DISPLAY: 00
STATUS: COMMAND EXECUTION COMPLETE
CALL TO RESOURCE MANAGER DSNCSQL
+ OVAR 002: TYPE=CHAR, LEN=00008, IND=000  AT X'03C920B0'
    DATA=X'C8F3E3E3C1C2D3C5'
  OVAR 003: TYPE=CHAR, LEN=00040, IND=000  AT X'03C920B8'
    DATA=X'C9D5C9E3C9C1D340D3D6C1C440404040404040404040404040404040...'
OFFSET:X'001D14'   LINE:UNKNOWN   EIBFN=X'1802'
ENTER: CONTINUE
PF1 : UNDEFINED   PF4 : SUPPRESS DISPLAYS   PF7 : SCROLL BACK   PF10: PREVIOUS DISPLAY
END EDF SESSION   USER DISPLAY   STOP CONDITIONS   ABEND USER TASK
The attachment facility automatically displays SQL information while in the EDF mode. (You can start EDF as outlined in the appropriate CICS application programmer's reference manual.) If this information is not displayed, contact the person who is responsible for installing and migrating DB2.
Related concepts:
Data types on page 436
Indicator variables, arrays, and structures on page 140
Related information:
CICS debugging aids (CICS Transaction Server for z/OS)
Answer: When you receive an SQL error because of a constraint violation, print out the SQLCA. You can use the DSNTIAR routine to format the SQLCA for you. Check the SQL error message insertion text (SQLERRM) for the name of the constraint. For information about possible violations, see SQLCODEs -530 through -548. Related concepts: SQL error codes (DB2 Codes) Related tasks: Displaying SQLCA fields by calling DSNTIAR on page 203
The sample storage group, databases, table spaces, tables, and views are created when you run the installation sample jobs DSNTEJ1 and DSNTEJ7. DB2 sample objects that include LOBs are created in job DSNTEJ7. All other sample objects are created in job DSNTEJ1. The CREATE INDEX statements for the sample tables are not shown here; they, too, are created by the DSNTEJ1 and DSNTEJ7 sample jobs. Authorization on all sample objects is given to PUBLIC in order to make the sample programs easier to run. You can review the contents of any table by executing an SQL statement, for example SELECT * FROM DSN8910.PROJ. For convenience in interpreting the examples, the department and employee tables are listed in full.
The activity table resides in database DSN8D91A and is created with the following statement:
CREATE TABLE DSN8910.ACT
      (ACTNO    SMALLINT     NOT NULL,
       ACTKWD   CHAR(6)      NOT NULL,
       ACTDESC  VARCHAR(20)  NOT NULL,
       PRIMARY KEY (ACTNO))
   IN DSN8D91A.DSN8S91P
   CCSID EBCDIC;
The department table resides in table space DSN8D91A.DSN8S91D and is created with the following statement:
CREATE TABLE DSN8910.DEPT
      (DEPTNO    CHAR(3)      NOT NULL,
       DEPTNAME  VARCHAR(36)  NOT NULL,
       MGRNO     CHAR(6)              ,
       ADMRDEPT  CHAR(3)      NOT NULL,
       LOCATION  CHAR(16)             ,
       PRIMARY KEY (DEPTNO))
   IN DSN8D91A.DSN8S91D
   CCSID EBCDIC;
Because the department table is self-referencing, and also is part of a cycle of dependencies, its foreign keys must be added later with the following statements:
ALTER TABLE DSN8910.DEPT
   FOREIGN KEY RDD (ADMRDEPT) REFERENCES DSN8910.DEPT
   ON DELETE CASCADE;

ALTER TABLE DSN8910.DEPT
   FOREIGN KEY RDE (MGRNO) REFERENCES DSN8910.EMP
   ON DELETE SET NULL;

LOCATION
   The LOCATION column contains null values until sample job DSNTEJ6 updates this column with the location name.
Table 174. Columns of the employee table (continued)
Column  Column name  Description
4       LASTNAME     Last name of employee
5       WORKDEPT     ID of department in which the employee works
6       PHONENO      Employee telephone number
7       HIREDATE     Date of hire
8       JOB          Job held by the employee
9       EDLEVEL      Number of years of formal education
10      SEX          Sex of the employee (M or F)
11      BIRTHDATE    Date of birth
12      SALARY       Yearly salary in dollars
13      BONUS        Yearly bonus in dollars
14      COMM         Yearly commission in dollars
The following table shows the first half (left side) of the content of the employee table. (Table 177 on page 1110 shows the remaining content (right side) of the employee table.)
Table 176. Left half of DSN8910.EMP: employee table. Note that a blank in the MIDINIT column is an actual value of " " rather than null.
EMPNO   FIRSTNME   MIDINIT  LASTNAME    WORKDEPT  PHONENO  HIREDATE
000010  CHRISTINE  I        HAAS        A00       3978     1965-01-01
000020  MICHAEL    L        THOMPSON    B01       3476     1973-10-10
000030  SALLY      A        KWAN        C01       4738     1975-04-05
000050  JOHN       B        GEYER       E01       6789     1949-08-17
000060  IRVING     F        STERN       D11       6423     1973-09-14
000070  EVA        D        PULASKI     D21       7831     1980-09-30
000090  EILEEN     W        HENDERSON   E11       5498     1970-08-15
000100  THEODORE   Q        SPENSER     E21       0972     1980-06-19
000110  VINCENZO   G        LUCCHESSI   A00       3490     1958-05-16
000120  SEAN                O'CONNELL   A00       2167     1963-12-05
000130  DOLORES    M        QUINTANA    C01       4578     1971-07-28
000140  HEATHER    A        NICHOLLS    C01       1793     1976-12-15
000150  BRUCE               ADAMSON     D11       4510     1972-02-12
000160  ELIZABETH  R        PIANKA      D11       3782     1977-10-11
000170  MASATOSHI  J        YOSHIMURA   D11       2890     1978-09-15
000180  MARILYN    S        SCOUTTEN    D11       1682     1973-07-07
000190  JAMES      H        WALKER      D11       2986     1974-07-26
000200  DAVID               BROWN       D11       4501     1966-03-03
000210  WILLIAM    T        JONES       D11       0942     1979-04-11
000220  JENNIFER   K        LUTZ        D11       0672     1968-08-29
Table 176. Left half of DSN8910.EMP: employee table (continued). Note that a blank in the MIDINIT column is an actual value of " " rather than null.
EMPNO   FIRSTNME   MIDINIT  LASTNAME    WORKDEPT  PHONENO  HIREDATE
000230  JAMES      J        JEFFERSON   D21       2094     1966-11-21
000240  SALVATORE  M        MARINO      D21       3780     1979-12-05
000250  DANIEL     S        SMITH       D21       0961     1969-10-30
000260  SYBIL      P        JOHNSON     D21       8953     1975-09-11
000270  MARIA      L        PEREZ       D21       9001     1980-09-30
000280  ETHEL      R        SCHNEIDER   E11       8997     1967-03-24
000290  JOHN       R        PARKER      E11       4502     1980-05-30
000300  PHILIP     X        SMITH       E11       2095     1972-06-19
000310  MAUDE      F        SETRIGHT    E11       3332     1964-09-12
000320  RAMLAL     V        MEHTA       E21       9990     1965-07-07
000330  WING                LEE         E21       2103     1976-02-23
000340  JASON      R        GOUNOT      E21       5698     1947-05-05
200010  DIAN       J        HEMMINGER   A00       3978     1965-01-01
200120  GREG                ORLANDO     A00       2167     1972-05-05
200140  KIM        N        NATZ        C01       1793     1976-12-15
200170  KIYOSHI             YAMAMOTO    D11       2890     1978-09-15
200220  REBA       K        JOHN        D11       0672     1968-08-29
200240  ROBERT     M        MONTEVERDE  D21       3780     1979-12-05
200280  EILEEN     R        SCHWARTZ    E11       8997     1967-03-24
200310  MICHELLE   F        SPRINGER    E11       3332     1964-09-12
200330  HELENA              WONG        E21       2103     1976-02-23
200340  ROY        R        ALONZO      E21       5698     1947-05-05

(Table 176 on page 1109 shows the first half, the left side, of the content of the employee table.)
Table 177. Right half of DSN8910.EMP: employee table
(EMPNO)   JOB       EDLEVEL  SEX  BIRTHDATE   SALARY    BONUS    COMM
(000010)  PRES      18       F    1933-08-14  52750.00  1000.00  4220.00
(000020)  MANAGER   18       M    1948-02-02  41250.00   800.00  3300.00
(000030)  MANAGER   20       F    1941-05-11  38250.00   800.00  3060.00
(000050)  MANAGER   16       M    1925-09-15  40175.00   800.00  3214.00
(000060)  MANAGER   16       M    1945-07-07  32250.00   600.00  2580.00
(000070)  MANAGER   16       F    1953-05-26  36170.00   700.00  2893.00
(000090)  MANAGER   16       F    1941-05-15  29750.00   600.00  2380.00
(000100)  MANAGER   14       M    1956-12-18  26150.00   500.00  2092.00
(000110)  SALESREP  19       M    1929-11-05  46500.00   900.00  3720.00
(000120)  CLERK     14       M    1942-10-18  29250.00   600.00  2340.00
(000130)  ANALYST   16       F    1925-09-15  23800.00   500.00  1904.00
(000140)  ANALYST   18       F    1946-01-19  28420.00   600.00  2274.00
(000150)  DESIGNER  16       M    1947-05-17  25280.00   500.00  2022.00
(000160)  DESIGNER  17       F    1955-04-12  22250.00   400.00  1780.00
(000170)  DESIGNER  16       M    1951-01-05  24680.00   500.00  1974.00
(000180)  DESIGNER  17       F    1949-02-21  21340.00   500.00  1707.00
(000190)  DESIGNER  16       M    1952-06-25  20450.00   400.00  1636.00
(000200)  DESIGNER  16       M    1941-05-29  27740.00   600.00  2217.00
(000210)  DESIGNER  17       M    1953-02-23  18270.00   400.00  1462.00
(000220)  DESIGNER  18       F    1948-03-19  29840.00   600.00  2387.00
(000230)  CLERK     14       M    1935-05-30  22180.00   400.00  1774.00
(000240)  CLERK     17       M    1954-03-31  28760.00   600.00  2301.00
Table 177. Right half of DSN8910.EMP: employee table (continued)
(EMPNO)   JOB       EDLEVEL  SEX  BIRTHDATE   SALARY    BONUS    COMM
(000250)  CLERK     15       M    1939-11-12  19180.00   400.00  1534.00
(000260)  CLERK     16       F    1936-10-05  17250.00   300.00  1380.00
(000270)  CLERK     15       F    1953-05-26  27380.00   500.00  2190.00
(000280)  OPERATOR  17       F    1936-03-28  26250.00   500.00  2100.00
(000290)  OPERATOR  12       M    1946-07-09  15340.00   300.00  1227.00
(000300)  OPERATOR  14       M    1936-10-27  17750.00   400.00  1420.00
(000310)  OPERATOR  12       F    1931-04-21  15900.00   300.00  1272.00
(000320)  FIELDREP  16       M    1932-08-11  19950.00   400.00  1596.00
(000330)  FIELDREP  14       M    1941-07-18  25370.00   500.00  2030.00
(000340)  FIELDREP  16       M    1926-05-17  23840.00   500.00  1907.00
(200010)  SALESREP  18       F    1933-08-14  46500.00  1000.00  4220.00
(200120)  CLERK     14       M    1942-10-18  29250.00   600.00  2340.00
(200140)  ANALYST   18       F    1946-01-19  28420.00   600.00  2274.00
(200170)  DESIGNER  16       M    1951-01-05  24680.00   500.00  1974.00
(200220)  DESIGNER  18       F    1948-03-19  29840.00   600.00  2387.00
(200240)  CLERK     17       M    1954-03-31  28760.00   600.00  2301.00
(200280)  OPERATOR  17       F    1936-03-28  26250.00   500.00  2100.00
(200310)  OPERATOR  12       F    1931-04-21  15900.00   300.00  1272.00
(200330)  FIELDREP  14       F    1941-07-18  25370.00   500.00  2030.00
(200340)  FIELDREP  16       M    1926-05-17  23840.00   500.00  1907.00
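As a quick check of this sample data, a query such as the following (illustrative only, not part of the sample jobs) lists the designers in department D11 with their salaries:

SELECT EMPNO, FIRSTNME, LASTNAME, SALARY
  FROM DSN8910.EMP
  WHERE WORKDEPT = 'D11'
    AND JOB = 'DESIGNER'
  ORDER BY EMPNO;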
DB2 requires an auxiliary table for each LOB column in a table. The following statements define the auxiliary tables for the three LOB columns in DSN8910.EMP_PHOTO_RESUME:
CREATE AUX TABLE DSN8910.AUX_BMP_PHOTO
   IN DSN8D91L.DSN8S91M
   STORES DSN8910.EMP_PHOTO_RESUME
   COLUMN BMP_PHOTO;

CREATE AUX TABLE DSN8910.AUX_PSEG_PHOTO
   IN DSN8D91L.DSN8S91L
   STORES DSN8910.EMP_PHOTO_RESUME
   COLUMN PSEG_PHOTO;

CREATE AUX TABLE DSN8910.AUX_EMP_RESUME
   IN DSN8D91L.DSN8S91N
   STORES DSN8910.EMP_PHOTO_RESUME
   COLUMN RESUME;
The following table shows the indexes for the employee photo and resume table.
Table 179. Indexes of the employee photo and resume table
Name                        On column  Type of index
DSN8910.XEMP_PHOTO_RESUME   EMPNO      Primary, ascending
The following table shows the indexes for the auxiliary tables that support the employee photo and resume table.
Table 180. Indexes of the auxiliary tables for the employee photo and resume table
Name                      On table                  Type of index
DSN8910.XAUX_BMP_PHOTO    DSN8910.AUX_BMP_PHOTO     Unique
DSN8910.XAUX_PSEG_PHOTO   DSN8910.AUX_PSEG_PHOTO    Unique
DSN8910.XAUX_EMP_RESUME   DSN8910.AUX_EMP_RESUME    Unique
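The CREATE INDEX statements for these auxiliary indexes are created by the sample jobs and are not shown here. Because an index on an auxiliary table is defined without a column list, the first of them might be created with a statement like the following sketch (storage clauses omitted):

CREATE UNIQUE INDEX DSN8910.XAUX_BMP_PHOTO
  ON DSN8910.AUX_BMP_PHOTO;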
CREATE TABLE DSN8910.PROJ
      (PROJNO   CHAR(6)       PRIMARY KEY NOT NULL,
       PROJNAME VARCHAR(24)   NOT NULL WITH DEFAULT 'PROJECT NAME UNDEFINED',
       DEPTNO   CHAR(3)       NOT NULL REFERENCES DSN8910.DEPT ON DELETE RESTRICT,
       RESPEMP  CHAR(6)       NOT NULL REFERENCES DSN8910.EMP ON DELETE RESTRICT,
       PRSTAFF  DECIMAL(5, 2)         ,
       PRSTDATE DATE                  ,
       PRENDATE DATE                  ,
       MAJPROJ  CHAR(6))
   IN DSN8D91A.DSN8S91P
   CCSID EBCDIC;
Because the project table is self-referencing, the foreign key for that constraint must be added later with the following statement:
ALTER TABLE DSN8910.PROJ
   FOREIGN KEY RPP (MAJPROJ) REFERENCES DSN8910.PROJ
   ON DELETE CASCADE;
The following table shows the indexes for the project table:
Table 182. Indexes of the project table
Name             On column  Type of index
DSN8910.XPROJ1   PROJNO     Primary, ascending
DSN8910.XPROJ2   RESPEMP    Ascending
CREATE TABLE DSN8910.PROJACT
      (PROJNO   CHAR(6)      NOT NULL,
       ACTNO    SMALLINT     NOT NULL,
       ACSTAFF  DECIMAL(5,2)         ,
       ACSTDATE DATE         NOT NULL,
       ACENDATE DATE                 ,
       PRIMARY KEY (PROJNO, ACTNO, ACSTDATE),
       FOREIGN KEY RPAP (PROJNO) REFERENCES DSN8910.PROJ ON DELETE RESTRICT,
       FOREIGN KEY RPAA (ACTNO) REFERENCES DSN8910.ACT ON DELETE RESTRICT)
   IN DSN8D91A.DSN8S91P
   CCSID EBCDIC;
The following table shows the index of the project activity table:
Table 184. Index of the project activity table
Name               On columns                Type of index
DSN8910.XPROJAC1   PROJNO, ACTNO, ACSTDATE   Primary, ascending
The employee-to-project activity table resides in database DSN8D91A. Because this table has foreign keys that reference EMP and PROJACT, those tables and the indexes on their primary keys must be created first. Then EMPPROJACT is created with the following statement:
CREATE TABLE DSN8910.EMPPROJACT
      (EMPNO    CHAR(6)      NOT NULL,
       PROJNO   CHAR(6)      NOT NULL,
       ACTNO    SMALLINT     NOT NULL,
       EMPTIME  DECIMAL(5,2)         ,
       EMSTDATE DATE                 ,
       EMENDATE DATE                 ,
       FOREIGN KEY REPAPA (PROJNO, ACTNO, EMSTDATE)
          REFERENCES DSN8910.PROJACT ON DELETE RESTRICT,
       FOREIGN KEY REPAE (EMPNO)
          REFERENCES DSN8910.EMP ON DELETE RESTRICT)
   IN DSN8D91A.DSN8S91P
   CCSID EBCDIC;

Table 185. Columns of the employee-to-project activity table (continued)
Column  Column name  Description
2       PROJNO       Project ID of the project
3       ACTNO        ID of the activity within the project
4       EMPTIME      A proportion of the employee's full time (between 0.00 and 1.00) that is to be spent on the activity
5       EMSTDATE     Date the activity starts
6       EMENDATE     Date the activity ends
The following table shows the indexes for the employee-to-project activity table:
Table 186. Indexes of the employee-to-project activity table
Name                    On columns                       Type of index
DSN8910.XEMPPROJACT1    PROJNO, ACTNO, EMSTDATE, EMPNO   Unique, ascending
DSN8910.XEMPPROJACT2    EMPNO                            Ascending
The table resides in database DSN8D91A, and is defined with the following statement:
CREATE TABLE DSN8910.DEMO_UNICODE
      (LOWER_A_TO_Z  CHAR(26)                  ,
       UPPER_A_TO_Z  CHAR(26)                  ,
       ZERO_TO_NINE  CHAR(10)                  ,
       X00_TO_XFF    VARCHAR(256) FOR BIT DATA)
   IN DSN8D81E.DSN8S81U
   CCSID UNICODE;
Figure: Relationships among the sample tables DEPT, EMP, EMP_PHOTO_RESUME, PROJ, PROJACT, EMPPROJACT, and ACT, showing the delete rules (CASCADE, SET NULL, and RESTRICT) of the referential constraints that connect them.
Related reference: Activity table (DSN8910.ACT) on page 1105 Department table (DSN8910.DEPT) on page 1106 Employee photo and resume table (DSN8910.EMP_PHOTO_RESUME) on page 1111 Employee table (DSN8910.EMP) on page 1108 Employee-to-project activity table (DSN8910.EMPPROJACT) on page 1115 Project activity table (DSN8910.PROJACT) on page 1114 Project table (DSN8910.PROJ) on page 1113 Unicode sample table (DSN8910.DEMO_UNICODE) on page 1116
Table 188. Views on sample tables (continued)
View name   On tables or views      Used in application
VFORPLA     VPROJRE1, EMPPROJACT    Project
VSTAFAC1    PROJACT, ACT            Project
VSTAFAC2    EMPPROJACT, ACT, EMP    Project
VPHONE      EMP, DEPT               Phone
VEMPLP      EMP                     Phone
CREATE VIEW DSN8910.VACT
  AS SELECT ALL ACTNO,
                ACTKWD,
                ACTDESC
     FROM DSN8910.ACT;
The following figure shows the SQL statement that creates the view named VPROJRE1.
CREATE VIEW DSN8910.VPROJRE1
  (PROJNO, PROJNAME, PROJDEP, RESPEMP, FIRSTNME, MIDINIT, LASTNAME, MAJPROJ)
  AS SELECT ALL
     PROJNO, PROJNAME, DEPTNO, EMPNO, FIRSTNME, MIDINIT, LASTNAME, MAJPROJ
  FROM DSN8910.PROJ, DSN8910.EMP
  WHERE RESPEMP = EMPNO;

Figure 88. VPROJRE1
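A simple query against this view (illustrative only) returns each project with the name of the employee who is responsible for it:

SELECT PROJNO, PROJNAME, FIRSTNME, LASTNAME
  FROM DSN8910.VPROJRE1
  ORDER BY PROJNO;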
CREATE VIEW DSN8910.VSTAFAC2
  (PROJNO, ACTNO, ACTDESC, EMPNO, FIRSTNME, MIDINIT, LASTNAME,
   EMPTIME, STDATE, ENDATE, TYPE)
  AS SELECT ALL
     EP.PROJNO, EP.ACTNO, AC.ACTDESC, EP.EMPNO, EM.FIRSTNME,
     EM.MIDINIT, EM.LASTNAME, EP.EMPTIME, EP.EMSTDATE,
     EP.EMENDATE, 2
  FROM DSN8910.EMPPROJACT EP, DSN8910.ACT AC, DSN8910.EMP EM
  WHERE EP.ACTNO = AC.ACTNO AND EP.EMPNO = EM.EMPNO;
Figure: The sample application storage group DSN8Gvr0 and the databases that belong to it.
In addition to the storage group and databases that are shown in the preceding figure, the storage group DSN8G91U and database DSN8D91U are created when you run DSNTEJ2A.
The storage group that is used to store sample application data is defined by the following statement:
CREATE STOGROUP DSN8G910 VOLUMES (DSNV01) VCAT DSNC910;
CREATE DATABASE DSN8D91L
   STOGROUP DSN8G910
   BUFFERPOOL BP0
   CCSID EBCDIC;

CREATE DATABASE DSN8D91E
   STOGROUP DSN8G910
   BUFFERPOOL BP0
   CCSID UNICODE;

CREATE DATABASE DSN8D91U
   STOGROUP DSN8G91U
   CCSID EBCDIC;
CREATE LOB TABLESPACE DSN8S91M IN DSN8D91L LOG NO;
CREATE LOB TABLESPACE DSN8S91L IN DSN8D91L LOG NO;
CREATE LOB TABLESPACE DSN8S91N IN DSN8D91L LOG NO;

CREATE TABLESPACE DSN8S91C
   IN DSN8D91P
   USING STOGROUP DSN8G910 PRIQTY 160 SECQTY 80
   SEGSIZE 4 LOCKSIZE TABLE
   BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC;

CREATE TABLESPACE DSN8S91P
   IN DSN8D91A
   USING STOGROUP DSN8G910 PRIQTY 160 SECQTY 80
   SEGSIZE 4 LOCKSIZE ROW
   BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC;

CREATE TABLESPACE DSN8S91R
   IN DSN8D91A
   USING STOGROUP DSN8G910 PRIQTY 20 SECQTY 20 ERASE NO
   LOCKSIZE PAGE LOCKMAX SYSTEM
   BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC;

CREATE TABLESPACE DSN8S91S
   IN DSN8D91A
   USING STOGROUP DSN8G910 PRIQTY 20 SECQTY 20 ERASE NO
   LOCKSIZE PAGE LOCKMAX SYSTEM
   BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC;

CREATE TABLESPACE DSN8S81Q
   IN DSN8D81P
   USING STOGROUP DSN8G810 PRIQTY 160 SECQTY 80
   SEGSIZE 4 LOCKSIZE PAGE
   BUFFERPOOL BP0 CLOSE NO CCSID EBCDIC;

CREATE TABLESPACE DSN8S81U
   IN DSN8D81E
   USING STOGROUP DSN8G810 PRIQTY 5 SECQTY 5 ERASE NO
   LOCKSIZE PAGE LOCKMAX SYSTEM
   BUFFERPOOL BP0 CLOSE NO CCSID UNICODE;
DSNTEP4 A sample dynamic SQL program that is written in the PL/I language. This program is identical to DSNTEP2 except DSNTEP4 uses multi-row fetch for increased performance. You can use the source version of DSNTEP4 and modify it to meet your needs, or, if you do not have a PL/I compiler at your installation, you can use the object code version of DSNTEP4. Because these four programs also accept the static SQL statements CONNECT, SET CONNECTION, and RELEASE, you can use the programs to access DB2 tables at remote locations.
To run the sample programs, use the DSN RUN command. The following table lists the load module name and plan name that you must specify, and the parameters that you can specify when you run each program. See the following topics for the meaning of each parameter.
Table 190. DSN RUN option values for DSNTIAUL, DSNTIAD, DSNTEP2, and DSNTEP4
Program name  Load module  Plan      Parameters
DSNTIAUL      DSNTIAUL     DSNTIB91  SQL
                                     number of rows per fetch
                                     TOLWARN(NO|YES)
DSNTIAD       DSNTIAD      DSNTIA91  RC0
                                     SQLTERM(termchar)
DSNTEP2       DSNTEP2      DSNTEP91  ALIGN(MID) or ALIGN(LHS)
                                     NOMIXED or MIXED
                                     SQLTERM(termchar)
                                     TOLWARN(NO|YES)
                                     PREPWARN
DSNTEP4       DSNTEP4      DSNTP491  ALIGN(MID) or ALIGN(LHS)
                                     NOMIXED or MIXED
                                     SQLTERM(termchar)
                                     TOLWARN(NO|YES)
                                     PREPWARN
The remainder of this section contains the following information about running each program:
v Descriptions of the input parameters
v Data sets that you must allocate before you run the program
v Return codes from the program
v Examples of invocation
Related reference:
RUN (DSN) (DB2 Commands)
DB2 for z/OS Exchange
Organization application:
The organization application manages the following company information:
v Department administrative structure
v Individual departments
v Individual employees
Management of information about department administrative structures involves how departments relate to other departments. You can view or change the organizational structure of an individual department, and the information about individual employees in any department. The organization application runs interactively in the ISPF/TSO, IMS, and CICS environments and is available in PL/I and COBOL.
Project application:
The project application manages information about a company's project activities, including the following:
v Project structures
v Project activity listings
v Individual project processing
v Individual project activity estimate processing
v Individual project staffing processing
Each department works on projects that contain sets of related activities. Information available about these activities includes staffing assignments, completion-time estimates for the project as a whole, and individual activities within a project. The project application runs interactively in IMS and CICS and is available in PL/I only.
Phone application:
The phone application lets you view or update individual employee phone numbers. There are different versions of the application for ISPF/TSO, CICS, IMS, and batch:
v ISPF/TSO applications use COBOL and PL/I.
v CICS and IMS applications use PL/I.
v Batch applications use C, C++, COBOL, FORTRAN, and PL/I.
System parameter reporting application
   This application is a client program that calls the DB2-supplied stored procedure DSNWZP to display the current settings of system parameters. This program is written in C.
All stored procedure applications run in the TSO batch environment.
v Return the table name for a table, view, or alias
v Return the qualifier for a table, view, or alias
v Return the location for a table, view, or alias
v Return a table of weather information
All programs are written in C or C++ and run in the TSO batch environment.
LOB application:
The LOB application demonstrates how to perform the following tasks:
v Define DB2 objects to hold LOB data
v Populate DB2 tables with LOB data using the LOAD utility, or using INSERT and UPDATE statements when the data is too large for use with the LOAD utility
v Manipulate the LOB data using LOB locators
The programs that create and populate the LOB objects use DSNTIAD and run in the TSO batch environment. The program that manipulates the LOB data is written in C and runs under ISPF/TSO.
Table 191. Application languages and environments (continued): the table lists the languages (assembler, COBOL, PL/I, FORTRAN, C, and C++) in which the sample exit routines, Organization, Phone, Project, SQLCA formatting routines, stored procedure, user-defined function, LOB, and SPUFI programs are supplied for the ISPF/TSO, IMS, CICS, and batch environments.
Table 192. Sample DB2 applications for TSO (continued)

Application: Phone
Description: This COBOL batch program lists employee telephone numbers and updates them if requested.

Application: Phone
Description: This C batch program lists employee telephone numbers and updates them if requested.
Program: DSN8BD3   Preparation JCL member: DSNTEJ2D   Attachment facility: DSNELI

Application: Phone
Description: This C++ batch program lists employee telephone numbers and updates them if requested.
Program: DSN8BE3   Preparation JCL member: DSNTEJ2E   Attachment facility: DSNELI

Application: Phone
Description: This PL/I batch program lists employee telephone numbers and updates them if requested.
Program: DSN8BP3   Preparation JCL member: DSNTEJ2P   Attachment facility: DSNELI

Application: Phone
Description: This FORTRAN program lists employee telephone numbers and updates them if requested.
Program: DSN8BF3   Preparation JCL member: DSNTEJ2F   Attachment facility: DSNELI

Application: Organization
Description: This COBOL ISPF program displays and updates information about a local department. It can also display and update information about an employee at a local or remote location.
Program: DSN8HC3   Preparation JCL member: DSNTEJ3C or DSNTEJ6   Attachment facility: DSNALI

Application: Phone
Description: This COBOL ISPF program lists employee telephone numbers and updates them if requested.
Program: DSN8SC3   Preparation JCL member: DSNTEJ3C   Attachment facility: DSNALI
Table 192. Sample DB2 applications for TSO (continued) Preparation JCL member name DSNTEJ3P Attachment facility DSNALI
Application Phone
Description This PL/I ISPF program lists employee telephone numbers and updates them if requested. This assembler language program unloads the data from a table or view and to produce LOAD utility control statements for the data. This assembler language program dynamically executes non-SELECT statements read in from SYSIN; that is, it uses dynamic SQL to execute non-SELECT SQL statements. This PL/I program dynamically executes SQL statements read in from SYSIN. Unlike DSNTIAD, this application can also execute SELECT statements. The jobs DSNTEJ6P and DSNTEJ6S prepare a PL/I version of the application. This sample executes DB2 commands using the instrumentation facility interface (IFI). The sample that is prepared by job DSNTEJ6U invokes the utilities stored procedure. The jobs DSNTEJ6D and DSNTEJ6T prepare a C version of the application. The C stored procedure uses result sets to return commands to the client. This sample executes DB2 commands using the instrumentation facility interface (IFI). The sample that is prepared by jobs DSNTEJ61 and DSNTEJ62 demonstrates a stored procedure that accesses IMS databases through the ODBA interface. The sample that is prepared by jobs DSNTEJ63 and DSNTEJ64 demonstrates how to prepare an SQL procedure using JCL.
UNLOAD
DSNTIAUL
DSNTEJ2A
DSNELI
Dynamic SQL
DSNTIAD
DSNTIJTM
DSNELI
Dynamic SQL
DSNTEP2
DSNTEJ1P or DSNTEJ1L
DSNELI
Stored procedures1
Stored procedures1
Stored procedures1
Stored procedures1
Stored procedures1
Stored procedures1
Stored procedures1
Stored procedures1
The sample that is prepared by job DSNTEJ65 demonstrates how to prepare an SQL procedure using the SQL procedure processor. The sample that is prepared by job DSNTEJ6W demonstrates how to prepare and run a client program that calls a DB2-supplied stored procedure to refresh a WLM environment. The sample that is prepared by job DSNTEJ6Z demonstrates how to prepare and run a client program that calls a DB2-supplied stored procedure to display the current settings of system parameters.
DSN8ED6
DSNTEJ6W
DSNELI
Stored procedures1
DSN8ED7
DSNTEJ6Z
DSNELI
Table 192. Sample DB2 applications for TSO (continued)
DSN8ES3 and DSN8ED9 (both prepared by DSNTEJ66; attachment facility DSNELI for DSN8ED9, not applicable for DSN8ES3): The sample that is prepared by job DSNTEJ66 demonstrates how to prepare and run a client program that calls a native SQL procedure, manages versions of that procedure, and optionally, deploys that procedure to a remote server. DSN8ES3 is the sample native SQL procedure and DSN8ED9 is the sample C language caller of DSN8ES3.
User-defined functions (DSN8DUAD, DSN8DUAT, DSN8DUCD, DSN8DUCT, DSN8DUCY, DSN8DUTI, DSN8DUWC, and DSN8DUWF; prepared by DSNTEJ2U; attachment facility DSNRLI): These C applications consist of a set of user-defined scalar functions that can be invoked through SPUFI or DSNTEP2. The user-defined table function DSN8DUWF can be invoked by the C client program DSN8DUWC.
User-defined functions (DSN8EUDN and DSN8EUMN; prepared by DSNTEJ2U; attachment facility DSNRLI): These C++ applications consist of a set of user-defined scalar functions that can be invoked through SPUFI or DSNTEP2.
LOBs (DSN8DLPL and DSN8DLCT, prepared by DSNTEJ71; DSN8DLRV, prepared by DSNTEJ73; DSN8DLPV, prepared by DSNTEJ75; attachment facility DSNELI): These applications demonstrate how to populate a LOB column that is greater than 32 KB, manipulate the data using the POSSTR and SUBSTR built-in functions, and display the data in ISPF using GDDM.
Note: 1. All of the stored procedure applications consist of a calling program, a stored procedure program, or both.
Related reference: Data sets that the precompiler uses on page 947
Organization: DSNTEJ4P
Project: DSNTEJ4P
Phone: DSNTEJ4P
Related reference: Data sets that the precompiler uses on page 947
Organization: DSNTEJ5P
Project: DSNTEJ5P
Phone: DSNTEJ5P
Related reference: Data sets that the precompiler uses on page 947
DSNTIAUL
Use the DSNTIAUL program to unload data from DB2 tables into sequential data sets.

To retrieve data from a remote site by using the multi-row fetch capability for enhanced performance, bind DSNTIAUL with the DBPROTOCOL(DRDA) option. To run DSNTIAUL remotely when it is bound with the DBPROTOCOL(PRIVATE) option, switch DSNTIAUL to single-row fetch mode by specifying 1 for the number of rows per fetch parameter.

When multi-row fetch is used, parallelism might be disabled in the last parallel group in the top-level query block for a query. For very simple queries, parallelism might be disabled for the entire query when multi-row fetch is used. To obtain full parallelism when running DSNTIAUL, switch DSNTIAUL to single-row fetch mode by specifying 1 for the number of rows per fetch parameter.
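For illustration, a minimal rebind sketch that requests DRDA protocol for the DSNTIAUL plan. The plan name DSNTIB91 matches the sample jobs later in this section, but your site might use a different plan or package name:

//REBIND   EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 DSN SYSTEM(DSN)
 REBIND PLAN(DSNTIB91) DBPROTOCOL(DRDA)
 END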
DSNTIAUL parameters:
SQL
    Specify SQL to indicate that your input data set contains one or more complete SQL statements, each of which ends with a semicolon. You can include any SQL statement that can be executed dynamically in your input data set. In addition, you can include the static SQL statements CONNECT, SET CONNECTION, or RELEASE. Static SQL statements must be uppercase. DSNTIAUL uses the SELECT statements to determine which tables to unload and dynamically executes all other statements except CONNECT, SET CONNECTION, and RELEASE. DSNTIAUL executes CONNECT, SET CONNECTION, and RELEASE statically to connect to remote locations.
number of rows per fetch
    Specify a number from 1 to 32767 to indicate the number of rows per fetch that DSNTIAUL retrieves. If you do not specify this number, DSNTIAUL retrieves 100 rows per fetch. This parameter can be specified with the SQL parameter. Specify 1 to retrieve data from a remote site when DSNTIAUL is bound with the DBPROTOCOL(PRIVATE) option.
TOLWARN
    Specify NO (the default) or YES to indicate whether DSNTIAUL continues to retrieve rows after receiving an SQL warning:
    (NO)  If a warning occurs when DSNTIAUL executes an OPEN or FETCH to retrieve rows, DSNTIAUL stops retrieving rows. However, if the SQLWARN1, SQLWARN2, SQLWARN6, or SQLWARN7 flag is set when DSNTIAUL executes a FETCH to retrieve rows, DSNTIAUL continues to retrieve rows.
    (YES) If a warning occurs when DSNTIAUL executes an OPEN or FETCH to retrieve rows, DSNTIAUL continues to retrieve rows.
LOBFILE(prefix) Specify LOBFILE to indicate that you want DSNTIAUL to dynamically allocate data sets, each to receive the full content of a LOB cell. (A LOB cell is the intersection of a row and a LOB column.) If you do not specify the LOBFILE option, you can unload up to only 32 KB of data from a LOB column.
prefix
    Specify a high-level qualifier for these dynamically allocated data sets. You can specify up to 17 characters. The qualifier must conform with the rules for TSO data set names.
    DSNTIAUL uses a naming convention for these dynamically allocated data sets of prefix.Qiiiiiii.Cjjjjjjj.Rkkkkkkk, where these qualifiers have the following values:
    prefix    The high-level qualifier that you specify in the LOBFILE option.
    Qiiiiiii  The sequence number (starting from 0) of a query that returns one or more LOB columns.
    Cjjjjjjj  The sequence number (starting from 0) of a column in a query that returns one or more LOB columns.
    Rkkkkkkk  The sequence number (starting from 0) of a row of a result set that has one or more LOB columns.
    The generated LOAD statement contains LOB file reference variables that can be used to load data from these dynamically allocated data sets.

If you do not specify the SQL parameter, your input data set must contain one or more single-line statements (without a semicolon) that use the following syntax:
table or view name [WHERE conditions] [ORDER BY columns]
Each input statement must be a valid SQL SELECT statement with the clause SELECT * FROM omitted and with no ending semicolon. DSNTIAUL generates a SELECT statement for each input statement by appending your input line to SELECT * FROM, then uses the result to determine which tables to unload. For this input format, the text for each table specification can be a maximum of 72 bytes and must not span multiple lines. You can use the input statements to specify SELECT statements that join two or more tables or select specific columns from a table. If you specify columns, you need to modify the LOAD statement that DSNTIAUL generates.
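For illustration, here is how one of the table-specification input lines from the sample jobs in this section maps to the statement that DSNTIAUL generates; the table and the WHERE clause are only an example:

Input line in the SYSIN data set:
DSN8910.PROJ WHERE DEPTNO='D01'

Statement that DSNTIAUL generates and uses to unload the data:
SELECT * FROM DSN8910.PROJ WHERE DEPTNO='D01'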
SYSPUNCH
    Output data set. DSNTIAUL writes the LOAD utility control statements in this data set.
SYSRECnn
    Output data sets. The value nn ranges from 00 to 99. You can have a maximum of 100 output data sets for a single execution of DSNTIAUL. Each data set contains the data that is unloaded when DSNTIAUL processes a SELECT statement from the input data set. Therefore, the number of output data sets must match the number of SELECT statements (if you specify parameter SQL) or table specifications in your input data set.
Define all data sets as sequential data sets. You can specify the record length and block size of the SYSPUNCH and SYSRECnn data sets. The maximum record length for the SYSPUNCH and SYSRECnn data sets is 32760 bytes.
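For illustration, a hedged sketch of the output DD statements for a run whose input data set (with the SQL parameter) contains two SELECT statements; the output data set names are hypothetical:

//SYSREC00 DD DSN=USER01.UNLOAD.EMP,DISP=(NEW,CATLG),
//         UNIT=SYSDA,SPACE=(TRK,(15,15))
//SYSREC01 DD DSN=USER01.UNLOAD.DEPT,DISP=(NEW,CATLG),
//         UNIT=SYSDA,SPACE=(TRK,(15,15))
//SYSIN    DD *
SELECT * FROM DSN8910.EMP;
SELECT * FROM DSN8910.DEPT;

The rows that the first SELECT statement returns go to SYSREC00, and the rows that the second SELECT statement returns go to SYSREC01.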
// VOL=SER=SCR03
//SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
// VOL=SER=SCR03,RECFM=FB,LRECL=120,BLKSIZE=1200
//SYSIN DD *
DSN8910.PROJ WHERE DEPTNO='D01'
The following call to DSNTIAUL unloads the sample LOB table. The parameters for DSNTIAUL indicate the following options:
v The input data set (SYSIN) contains SQL.
v DSNTIAUL is to retrieve 2 rows per fetch.
v DSNTIAUL places the LOB data in data sets with a high-level qualifier of DSN8UNLD.
//UNLOAD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAUL) PLAN(DSNTIB91) PARMS(SQL,2,LOBFILE(DSN8UNLD)) LIB(DSN910.RUNLIB.LOAD)
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSREC00 DD DSN=DSN8UNLD.SYSREC00,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
// VOL=SER=SCR03,RECFM=FB
//SYSPUNCH DD DSN=DSN8UNLD.SYSPUNCH,
// UNIT=SYSDA,SPACE=(800,(15,15)),DISP=(,CATLG),
// VOL=SER=SCR03,RECFM=FB
//SYSIN DD *
SELECT * FROM DSN8910.EMP_PHOTO_RESUME;
Given that the sample LOB table has 4 rows of data, DSNTIAUL produces the following output:
v Data for columns EMPNO and EMP_ROWID are placed in the data set that is allocated according to the SYSREC00 DD statement. The data set name is DSN8UNLD.SYSREC00.
v A generated LOAD statement is placed in the data set that is allocated according to the SYSPUNCH DD statement. The data set name is DSN8UNLD.SYSPUNCH.
v The following data sets are dynamically created to store LOB data:
  DSN8UNLD.Q0000000.C0000002.R0000000
  DSN8UNLD.Q0000000.C0000002.R0000001
  DSN8UNLD.Q0000000.C0000002.R0000002
  DSN8UNLD.Q0000000.C0000002.R0000003
  DSN8UNLD.Q0000000.C0000004.R0000001
  DSN8UNLD.Q0000000.C0000004.R0000002
  DSN8UNLD.Q0000000.C0000004.R0000003
For example, DSN8UNLD.Q0000000.C0000004.R0000001 means that the data set contains data that is unloaded from the second row (R0000001) and the fifth column (C0000004) of the result set for the first query (Q0000000).
DSNTIAD
Use the DSNTIAD program to execute SQL statements other than SELECT statements dynamically.
DSNTIAD parameters:
RC0
    If you specify this parameter, DSNTIAD ends with return code 0, even if the program encounters SQL errors. If you do not specify RC0, DSNTIAD ends with a return code that reflects the severity of the errors that occur. Without RC0, DSNTIAD terminates if more than 10 SQL errors occur during a single execution.
SQLTERM(termchar)
    Specify this parameter to indicate the character that you use to end each SQL statement. You can use any special character except one of those listed in the following table. SQLTERM(;) is the default.
Table 196. Invalid special characters for the SQL terminator
Name                     Character   Hexadecimal representation
blank                                X'40'
comma                    ,           X'6B'
double quotation mark    "           X'7F'
left parenthesis         (           X'4D'
right parenthesis        )           X'5D'
single quotation mark    '           X'7D'
underscore               _           X'6D'
Use a character other than a semicolon if you plan to execute a statement that contains embedded semicolons.
Example: Suppose that you specify the parameter SQLTERM(#) to indicate that the character # is the statement terminator. Then a CREATE TRIGGER statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END#
A CREATE PROCEDURE statement with embedded semicolons looks like the following statement:
CREATE PROCEDURE PROC1 (IN PARM1 INT, OUT SCODE INT)
  LANGUAGE SQL
  BEGIN
    DECLARE SQLCODE INT;
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
      SET SCODE = SQLCODE;
    UPDATE TBL1 SET COL1 = PARM1;
  END #
Be careful to choose a character for the statement terminator that is not used within the statement.
//RUNTIAD EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(DSN)
RUN PROGRAM(DSNTIAD) PLAN(DSNTIA91) PARMS(RC0) LIB(DSN910.RUNLIB.LOAD)
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD *
UPDATE DSN8910.PROJ SET DEPTNO='J01' WHERE DEPTNO='A01';
UPDATE DSN8910.PROJ SET DEPTNO='J02' WHERE DEPTNO='A02';
. . .
UPDATE DSN8910.PROJ SET DEPTNO='J20' WHERE DEPTNO='A20';
Important: When you allocate a new data set with the SYSPRINT DD statement, either specify a DCB with RECFM=FBA and LRECL=133, or do not specify the DCB parameter.
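For illustration, a hedged sketch of a SYSPRINT DD statement that allocates a new data set with these attributes; the data set name is hypothetical:

//SYSPRINT DD DSN=USER01.DSNTIAD.SYSPRINT,DISP=(NEW,CATLG),
//         UNIT=SYSDA,SPACE=(TRK,(5,5)),
//         DCB=(RECFM=FBA,LRECL=133)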
PREPWARN
    Specifies that DSNTEP2 or DSNTEP4 is to display details about any SQL warnings that are encountered at PREPARE time. Regardless of whether you specify PREPWARN, when an SQL warning is encountered at PREPARE time, the program displays the message SQLWARNING ON PREPARE and sets the return code to 4. When you specify PREPWARN, the program also displays the details about any SQL warnings.
SQLFORMAT
    Specifies how DSNTEP2 or DSNTEP4 pre-processes SQL statements before passing them to DB2. Select one of the following options:
    SQL
        This is the preferred mode for SQL statements other than SQL procedural language. When you use this option, which is the default, DSNTEP2 or DSNTEP4 collapses each line of an SQL statement into a single line before passing the statement to DB2. DSNTEP2 or DSNTEP4 also discards all SQL comments.
    SQLCOMNT
        This mode is suitable for all SQL, but it is intended primarily for SQL procedural language processing. When this option is in effect, behavior is similar to SQL mode, except that DSNTEP2 or DSNTEP4 does not discard SQL comments. Instead, it automatically terminates each SQL comment with a line feed character (hex 25), unless the comment is already terminated by one or more line formatting characters. Use this option to process SQL procedural language with minimal modification by DSNTEP2 or DSNTEP4.
    SQLPL
        This mode is suitable for all SQL, but it is intended primarily for SQL procedural language processing. When this option is in effect, DSNTEP2 or DSNTEP4 retains SQL comments and terminates each line of an SQL statement with a line feed character (hex 25) before passing the statement to DB2. Lines that end with a split token are not terminated with a line feed character. Use this mode to obtain improved diagnostics and debugging of SQL procedural language.
SQLTERM(termchar)
    Specifies the character that you use to end each SQL statement. You can use any character except one of those that are listed in Table 196 on page 1140. SQLTERM(;) is the default.
    Use a character other than a semicolon if you plan to execute a statement that contains embedded semicolons.
    Example: Suppose that you specify the parameter SQLTERM(#) to indicate that the character # is the statement terminator. Then a CREATE TRIGGER statement with embedded semicolons looks like this:
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END#
A CREATE PROCEDURE statement with embedded semicolons looks like the following statement:
CREATE PROCEDURE PROC1 (IN PARM1 INT, OUT SCODE INT)
  LANGUAGE SQL
  BEGIN
    DECLARE SQLCODE INT;
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
      SET SCODE = SQLCODE;
    UPDATE TBL1 SET COL1 = PARM1;
  END #
Be careful to choose a character for the statement terminator that is not used within the statement.

If you want to change the SQL terminator within a series of SQL statements, you can use the --#SET TERMINATOR control statement.

Example: Suppose that you have an existing set of SQL statements to which you want to add a CREATE TRIGGER statement that has embedded semicolons. You can use the default SQLTERM value, which is a semicolon, for all of the existing SQL statements. Before you execute the CREATE TRIGGER statement, include the --#SET TERMINATOR # control statement to change the SQL terminator to the character #:
SELECT * FROM DEPT;
SELECT * FROM ACT;
SELECT * FROM EMPPROJACT;
SELECT * FROM PROJ;
SELECT * FROM PROJACT;
--#SET TERMINATOR #
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMP
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1;
  END#
See the following discussion of the SYSIN data set for more information about the --#SET control statement.
TOLWARN
    Indicates whether DSNTEP2 or DSNTEP4 continues to process SQL SELECT statements after receiving an SQL warning. You can specify one of the following values:
    NO
        Indicates that the program stops processing the SELECT statement if a warning occurs when the program executes an OPEN or FETCH for a SELECT statement. NO is the default value for TOLWARN. The following exceptions exist:
        v If SQLCODE +445 or SQLCODE +595 occurs when DSNTEP2 or DSNTEP4 executes a FETCH for a SELECT statement, the program continues to process the SELECT statement.
        v If SQLCODE +802 occurs when DSNTEP2 or DSNTEP4 executes a FETCH for a SELECT statement, the program continues to process the SELECT statement if the TOLARTHWRN control statement is set to YES.
    YES
        Indicates that the program continues to process the SELECT statement if a warning occurs when the program executes an OPEN or FETCH for a SELECT statement.
SYSIN
    Input data set. In this data set, you can enter any number of SQL statements, each terminated with a semicolon. A statement can span multiple lines, but DSNTEP2 or DSNTEP4 reads only the first 72 bytes of each line.
    You can enter comments in DSNTEP2 or DSNTEP4 input with an asterisk (*) in column 1 or two hyphens (--) anywhere on a line. Text that follows the asterisk is considered to be comment text. Text that follows two hyphens can be comment text or a control statement. Comments are not considered in dynamic statement caching. Comments and control statements cannot span lines.
    You can enter control statements of the following form in the DSNTEP2 and DSNTEP4 input data set:
--#SET control-option value
You can specify the following control options. If you specify a value of NO for any of the options in this list, the program behaves as if you did not specify the parameter.
TERMINATOR
    The SQL statement terminator. value is any single-byte character other than one of those that are listed in Table 196 on page 1140. The default is the value of the SQLTERM parameter.
ROWS_FETCH
    The number of rows that are to be fetched from the result table. value is a numeric literal between -1 and the number of rows in the result table. -1 means that all rows are to be fetched. The default is -1.
ROWS_OUT
    The number of fetched rows that are to be sent to the output data set. value is a numeric literal between -1 and the number of fetched rows. -1 means that all fetched rows are to be sent to the output data set. The default is -1.
MULT_FETCH
    This option is valid only for DSNTEP4. Use MULT_FETCH to specify the number of rows that are to be fetched at one time from the result table. The default fetch amount for DSNTEP4 is 100 rows, but you can specify from 1 to 32676 rows.
TOLWARN
    Indicates whether DSNTEP2 or DSNTEP4 continues to process SQL SELECT statements after receiving an SQL warning. You can specify one of the following values:
    NO
        Indicates that the program stops processing the SELECT statement if a warning occurs when the program executes an OPEN or FETCH for a SELECT statement. NO is the default value for TOLWARN. The following exceptions exist:
        v If SQLCODE +445 or SQLCODE +595 occurs when DSNTEP2 or DSNTEP4 executes a FETCH for a SELECT statement, the program continues to process the SELECT statement.
        v If SQLCODE +802 occurs when DSNTEP2 or DSNTEP4 executes a FETCH for a SELECT statement, the program continues to process the SELECT statement if the TOLARTHWRN control statement is set to YES.
    YES
        Indicates that the program continues to process the SELECT statement if a warning occurs when the program executes an OPEN or FETCH for a SELECT statement.
TOLARTHWRN
    Indicates whether DSNTEP2 and DSNTEP4 continue to process an SQL SELECT statement after an arithmetic SQL warning (SQLCODE +802) is returned. value is either NO (the default) or YES.
PREPWARN
    Specifies that DSNTEP2 or DSNTEP4 is to display details about any SQL warnings that are encountered at PREPARE time. Regardless of whether you specify PREPWARN, when an SQL warning is encountered at PREPARE time, the program displays the message SQLWARNING ON PREPARE and sets the return code to 4. When you specify PREPWARN, the program also displays the details about any SQL warnings.
SQLFORMAT
    Specifies how DSNTEP2 or DSNTEP4 pre-processes SQL statements before passing them to DB2. Select one of the following options:
    SQL
        This is the preferred mode for SQL statements other than SQL procedural language. When you use this option, which is the default, DSNTEP2 or DSNTEP4 collapses each line of an SQL statement into a single line before passing the statement to DB2. DSNTEP2 or DSNTEP4 also discards all SQL comments.
    SQLCOMNT
        This mode is suitable for all SQL, but it is intended primarily for SQL procedural language processing. When this option is in effect, behavior is similar to SQL mode, except that DSNTEP2 or DSNTEP4 does not discard SQL comments. Instead, it automatically terminates each SQL comment with a line feed character (hex 25), unless the comment is already terminated by one or more line formatting characters. Use this option to process SQL procedural language with minimal modification by DSNTEP2 or DSNTEP4.
    SQLPL
        This mode is suitable for all SQL, but it is intended primarily for SQL procedural language processing. When this option is in effect, DSNTEP2 or DSNTEP4 retains SQL comments and terminates each line of an SQL statement with a line feed character (hex 25) before passing the statement to DB2. Lines that end with a split token are not terminated with a line feed character. Use this mode to obtain improved diagnostics and debugging of SQL procedural language.
MAXERRORS
    Specifies the number of errors that DSNTEP2 and DSNTEP4 handle before processing stops. The default is 10.
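For illustration, a short SYSIN sketch that combines several of these control statements; the tables come from the DB2 sample database, and the specific values are only examples:

--#SET ROWS_FETCH 50
--#SET ROWS_OUT 10
SELECT * FROM DSN8910.EMP;
--#SET TOLWARN YES
SELECT * FROM DSN8910.DEPT;

The first two control statements limit the first query to fetching 50 rows and writing 10 of those rows to the output data set; the TOLWARN setting then lets the second query continue past SQL warnings that occur on an OPEN or FETCH.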
SYSPRINT
    Output data set. DSNTEP2 and DSNTEP4 write informational and error messages in this data set. DSNTEP2 and DSNTEP4 write output records of no more than 133 bytes.
Define all data sets as sequential data sets.
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A. For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: Intellectual Property Licensing Legal and Intellectual Property Law IBM Japan, Ltd. 19-21, Nihonbashi-Hakozakicho, Chuo-ku Tokyo 103-8510, Japan The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Corporation J46A/G4 555 Bailey Avenue San Jose, CA 95141-1003 U.S.A. Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement, or any equivalent agreement between us. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs. If you are viewing this information softcopy, the photographs and color illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered marks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at http://www.ibm.com/legal/copytrade.shtml. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
This Software Offering does not use cookies or other technologies to collect personally identifiable information. If the configurations deployed for this Software Offering provide you as customer the ability to collect personally identifiable information from end users via cookies and other technologies, you should seek your own legal advice about any laws applicable to such data collection, including any requirements for notice and consent. For more information about the use of various technologies, including cookies, for these purposes, see IBM's Privacy Policy at http://www.ibm.com/privacy and IBM's Online Privacy Statement at http://www.ibm.com/privacy/details, in the section entitled "Cookies, Web Beacons and Other Technologies," and the IBM Software Products and Software-as-a-Service Privacy Statement at http://www.ibm.com/software/info/product-privacy.
Glossary
The glossary is available in the Information Management Software for z/OS Solutions Information Center. See the Glossary topic for definitions of DB2 for z/OS terms.
Numerics
31-bit addressing 1035
A
abend effect on cursor position 708 for synchronization calls 757 IMS U0102 1056 system X"04E" 757 abend recovery routine in CAF 46 access path direct row access 738 accessibility keyboard xiv shortcut keys xiv accessing data from an application program 659 activity sample table 1105 adding data 637 ALL quantified predicate 695 ALTER PROCEDURE statement external stored procedure 635 AMODE link-edit option 968, 1035 ANY quantified predicate 695 APOST precompiler option 959 application rebinding 991 application 991 Application Messaging Interface (AMI) policies 900 services 900 WebSphere MQ 899 application plan binding 975 creating 969 dynamic plan selection for CICS applications invalidated 999 listing packages 975 rebinding 993 application program bill of materials 464 checking success of SQL statements 136 coding SQL statements 123 coding conventions 226 data entry 637 dynamic SQL 158, 162 selecting rows using a cursor 704
990
1157
AS clause (continued) naming result columns 665 ORDER BY name 663 ASCII data, retrieving 166 assembler application program assembling 968 data type compatibility 238 declaring tables 243 declaring views 243 defining the SQLDA 137, 231 host variable naming convention 243 host variable, declaring 231 INCLUDE statement 243 including SQLCA 229 indicator variable declaration 237 reentrant 243 SQLCODE host variable 229 SQLSTATE host variable 229 variable declaration 232 assignment, compatibility rules 436 ATTACH precompiler option 959 attachment facility options in z/OS environment 39 AUTH SIGNON (connection function of RRSAF) language examples 95 syntax 95 authority authorization ID 1059 creating test tables 1065 SYSIBM.SYSTABAUTH table 659 authorization cache determining size 534 AUTOCOMMIT field of SPUFI panel 1072 automatic query rewrite 448 automatic rebind conditions for 999 invalid package 999 invalid plan or package 999 SQLCA not available 999
B
batch processing access to DB2 and DL/I together binding a plan 984 checkpoint calls 757 commits 757 precompiling 946 batch DB2 application running 1059 starting with a CLIST 1060 bill of materials applications 464 binary host variable assembler 232 C/C++ 251 COBOL 302 PL/I 385 binary host variable array C/C++ 263 PL/I 391 binary large object (BLOB) 440 BIND command line processor command BIND COPY for native SQL procedures 570
BIND COPY REPLACE for native SQL procedures 571 bind options planning for 19 BIND PACKAGE subcommand of DSN options CURRENTDATA 983 DBPROTOCOL 983 ENCODING 983 KEEPDYNAMIC 197 location-name 982 OPTIONS 983 SQLERROR 983 options associated with DRDA access 981, 984 remote 970 BIND PLAN subcommand of DSN options CACHESIZE 989 CURRENTDATA 982 DBPROTOCOL 982 DISCONNECT 981 ENCODING 982 KEEPDYNAMIC 197 SQLRULES 982, 1001 options associated with DRDA access 981 remote 970 binding application plans 969 changes that require 17 checking BIND PACKAGE options 984 DBRMs precompiled elsewhere 946 options associated with DRDA access 981 packages remote 970 planning for 14 plans 975 remote package requirements 970 specify SQL rules 1001 binding method deciding 14 block fetch preventing 705 with cursor stability 705 BMP (batch message processing) program checkpoints 28 bounded character pointer host variable declaring 276 description 276 referencing in SQL statements 275 BTS (batch terminal simulator) 1099
C
C application program declaring tables 283 sample application 1128 C/C++ creating stored procedure 596 C/C++ application program data type compatibility 278 DCLGEN support 131 declaring views 283 defining the SQLDA 137, 250 host structure 271 INCLUDE statement 283 including SQLCA 249 indicator variable array declaration
973
273
1158
C/C++ application program (continued) indicator variable declaration 273 naming convention 283 precompiler option defaults 966 SQLCODE host variable 249 SQLSTATE host variable 249 variable array declaration 263 variable declaration 251 with classes, preparing 946 C/C++ application programs pointer host variables 276 cache (dynamic SQL) statements 192 CACHERAC determining value 534 CACHESIZE option of BIND PLAN subcommand 989 REBIND subcommand 989 CAF (call attachment facility) description 44 CAF functions summary of behavior 52 calculated values groups with conditions 676 summarizing group values 675 call attachment facility (CAF) application program examples 66 preparation 48 attention exit routines 46 authorization IDs 44 behavior summary 52 connection functions 53 connection name 44 connection properties 44 connection type 44 DB2 abends 44 description 44 error messages 64 implicit connections to 49 invoking 40 parameters for CALL DSNALI 50 program requirements 48 recovery routines 46 register changes 49 return codes example of checking 66 return codes and reason codes 64 sample scenarios 65 scope 44 terminated task 44 trace 64 call attachment language interface loading 47 making available 47 CALL DSNALI parameter list 50 required parameters 50 CALL DSNRLI parameter list 82 required parameters 82 CALL statement command line processor 1058 examples 775 multiple 788 syntax for invoking DSNTPSMP 584
catalog table SYSIBM.LOCATIONS 892 SYSIBM.SYSCOLUMNS 659 SYSIBM.SYSTABAUTH 659 CCSID (coded character set identifier) controlling in COBOL programs 328 precompiler option 959 setting for host variables 141 SQLDA 166 CEEDUMP using to debug stored procedures 1085 character host variable assembler 232 C/C++ 251 COBOL 302 Fortran 373 PL/I 385 character host variable array C/C++ 263 COBOL 312 PL/I 391 character input data REXX program 420 character large object (CLOB) 440 character string literals 226 mixed data 436 width of column in results 1075, 1081 check constraint check integrity 445 considerations 444 CURRENT RULES special register effect 445 defining 444 description 444 determining violations 1103 enforcement 444 programming considerations 1103 CHECK-pending status 445 checkpoint calls 26, 28 specifying frequency 28 CHKP call, IMS 26 CICS DSNTIAC subroutine assembler 243 C 283 COBOL 334 PL/I 403 environment planning 1060 facilities command language translator 955 control areas 1047 EDF (execution diagnostic facility) 1099 language interface module (DSNCLI) use in link-editing an application 968 operating running a program 1047 preparing with JCL procedures 1008 programming DFHEIENT macro 243 sample applications 1130, 1134 SYNCPOINT command 22 storage handling assembler 243 C 283 COBOL 334 PL/I 403 Index
1159
CICS (continued) sync point 22 unit of work 22 CICS applications thread reuse 122 CICS attachment facility controlling from applications 120 detecting whether it is operational 121 starting 120 stopping 120 client 34 client program preparing for calling a remote stored procedure 785 CLOSE statement description 714 recommendation 720 WHENEVER NOT FOUND clause 164, 166 CLOSE (connection function of CAF) description 53 language examples 60 program example 66 syntax 60 COALESCE function 685 COBOL creating stored procedure 596 COBOL application program compiling 968 controlling CCSID 328 data type compatibility 329 DB2 precompiler option defaults 966 DCLGEN support 131 declaring tables 334 declaring views 334 defining the SQLDA 137, 301 dynamic SQL 162 host structure 322 host variable use of hyphens 334 host variable array, declaring 301 host variable, declaring 301 INCLUDE statement 334 including SQLCA 299 indicator variable array declaration 326 indicator variable declaration 326 naming convention 334 object-oriented extensions 340 options 334 preparation 968 resetting SQL-INIT-FLAG 334 sample program 340 SQLCODE host variable 299 SQLSTATE host variable 299 variable array declaration 312 variable declaration 302 WHENEVER statement 334 with classes, preparing 946 coding SQL statements dynamic 158 collection, package identifying 978 SET CURRENT PACKAGESET statement 978 colon preceding a host variable 147 preceding a host variable array 155 column data types 436
column (continued) default value system-defined 435 user-defined 436 displaying, list of 659 heading created by SPUFI 1082 labels, usage 166 name, with UPDATE statement 653 retrieving, with SELECT 660 specified in CREATE TABLE 435 width of results 1075, 1081 COMMA precompiler option 959 command line processor binding 972 CALL statement 1058 stored procedures 1058 commit point description 22 IMS unit of work 26 COMMIT statement description 1072 in a stored procedure 549 when to issue 22 with RRSAF 75 common table expressions description 464 examples 464 in a CREATE VIEW statement 462 in a SELECT statement 462 in an INSERT statement 462 infinite loops 690 recursion 464 comparison compatibility rules 436 HAVING clause subquery 695 operator, subquery 695 WHERE clause subquery 695 compatibility data types 436 rules 436 composite key 450 compound statement example dynamic SQL 546 nested IF and WHILE statements 544 EXIT handler 557 labels 543 compound statements nested 553 within the declaration of a condition handler 558 condition handlers empty 566 conditions ignoring 566 CONNECT statement SPUFI 1072 CONNECT (connection function of CAF) description 53 language examples 54 program example 66 syntax 54 CONNECT LOCATION field of SPUFI panel 1072 CONNECT precompiler option 959
1160
CONNECT processing option enforcing restricted system rules 36 CONNECT statement, with DRDA access 892 connecting DB2 39 connection DB2 connecting from tasks 1051 function of CAF CLOSE 60 CONNECT 54 DISCONNECT 61 OPEN 58 TRANSLATE 63 function of RRSAF AUTH SIGNON 95 CONTEXT SIGNON 99 CREATE THREAD 107 FIND_DB2_SYSTEMS 114 IDENTIFY 85 SET_CLIENT_ID 105 SET_ID 104 SIGNON 91 SWITCH TO 89 TERMINATE IDENTIFY 111 TERMINATE THREAD 110 TRANSLATE 112 connection properties call attachment facility (CAF) 44 Resource Recovery Services attachment facility (RRSAF) 77 connection to DB2 environment requirements 39 constants, syntax C/C++ 251 Fortran 373 CONTEXT SIGNON (connection function of RRSAF) language examples 99 syntax 99 CONTINUE clause of WHENEVER statement 207 CONTINUE handler (SQL procedure) description 557 example 557 coordinating updates distributed data 36 correlated reference correlation name 698 SQL rules 677 usage 677 using in subquery 698 correlation name 698 create external SQL procedure by using DSNTPSMP 581 external SQL procedure by using JCL 592 external stored procedure 596 CREATE GLOBAL TEMPORARY TABLE statement 454 CREATE PROCEDURE statement external stored procedure 596 for external SQL procedures 592 CREATE TABLE statement DEFAULT clause 435 NOT NULL clause 435 PRIMARY KEY clause 449 relationship names 451 UNIQUE clause 435, 449 usage 435
CREATE THREAD (connection function of RRSAF) language examples 107 program example 118 syntax 107 CREATE TRIGGER activation order 481 description 468 example 468 timestamp 481 trigger naming 468 CREATE TYPE statement example 489 CREATE VIEW statement 460 created temporary table instances 455 use of NOT NULL 455 working with 456 creating objects in an application program 435 creating stored procedures external SQL procedures 580 CURRENT PACKAGESET special register dynamic plan switching 990 identify package collection 978 CURRENT RULES special register effect on check constraints 445 usage 1001 current server 34 CURRENT SERVER special register description 978 saving value in application program 894 CURRENT SQLID special register use in test 1063 value in INSERT statement 436 cursor attributes using GET DIAGNOSTICS 729 using SQLCA 729 closing 714 CLOSE statement 720 deleting a current row 716 description 704 dynamic scrollable 705 effect of abend on position 708 example retrieving backward with scrollable cursor 733 updating specific row with rowset-positioned cursor 736 updating with non-scrollable cursor 733 updating with rowset-positioned cursor 735 insensitive scrollable 705 maintaining position 708 non-scrollable 705 open state 708 OPEN statement 711 result table 704 row-positioned declaring 710 deleting a current row 712 description 704 end-of-data condition 712 retrieving a row of data 712 steps in using 709 updating a current row 712 rowset-positioned declaring 715 description 704 Index
1161
cursor (continued) rowset-positioned (continued) end-of-data condition 715 number of rows 716 number of rows in rowset 719 opening 715 retrieving a rowset of data 716 steps in using 714 updating a current rowset 716 scrollable description 705 dynamic 705 fetch orientation 720 INSENSITIVE 705 retrieving rows 720 SENSITIVE DYNAMIC 705 SENSITIVE STATIC 705 sensitivity 705 static 705 updatable 705 static scrollable 705 types 705 WITH HOLD description 708 cursors declaring in SQL procedures 556
D
data accessing from an application program 659 adding 637 adding to the end of a table 652 associated with WHERE clause 660 currency 705 distributed 34 modifying 637 not in a table 756 retrieval using SELECT * 692 retrieving a rowset 716 retrieving a set of rows 712 retrieving large volumes 754 scrolling backward through 730 security and integrity 21 updating during retrieval 691 updating previously retrieved data 732 data encryption 453 data integrity tables 443 data type built-in 436 comparisons 147 compatibility assembler application program 238 C application program 278 COBOL and SQL 329 Fortran and SQL 377 PL/I application program 399 REXX and SQL 414 data types compatibility 143 used by DCLGEN 131 DATE precompiler option 959 datetime data type 436 DB2 connection from a program 39
DB2 abend DL/I batch 757 DB2 coprocessor 953 processing SQL statements 945 DB2 MQ tables descriptions 908 DB2 private protocol access coding an application 889 compared to DRDA access 35 DB2_RETURN_STATUS using to get procedure status 791 DB2-established address spaces stored procedures 549 DB2-supplied stored procedures 797 DB2I default panels 943 invoking DCLGEN 126 DB2I (DB2 Interactive) background processing run time libraries 1016 EDITJCL processing run time libraries 1016 interrupting 1067 menu 1067 panels BIND PACKAGE 1022 BIND PLAN 1025 Compile, Link, and Run 1035 Current SPUFI Defaults 1074 DB2I Primary Option Menu 1010, 1067 Defaults for BIND PACKAGE 1029 Defaults for BIND PLAN 1031 Defaults for REBIND PACKAGE 1029 Defaults for REBIND PLAN 1031 Precompile 1020 Program Preparation 1012 System Connection Types 1033 preparing programs 941 program preparation example 1012 selecting SPUFI 1067 SPUFI 1067 DB2I defaults setting 943 DBCS (double-byte character set) translation in CICS 955 DBINFO passing to external stored procedure 596 user-defined function 513 DBRM (database request module) binding to a package 970 binding to a plan 975 deciding how to bind 14 description 951 DBRMs in HFS files binding 972 DCLGEN COBOL example 133 data types 131 declaring indicator variable arrays 126 generating table and view declarations 125 generating table and view declarations from DB2I INCLUDE statement 133 including declarations in a program 133 invoking 125 using from DB2I 126 variable declarations 131
126
1162
DCLGEN (declarations generator) description 125 DDITV02 input data set 1004 DDOTV02 output data set 1004 Debug Tool user-defined function 1083 debugging recording messages for stored procedures 1091 stored procedures 1085 debugging application programs 1092 DEC15 precompiler option 959 rules 691 DEC31 avoiding overflow 692 precompiler option 959 rules 691 decimal 15 digit precision 691 31 digit precision 691 arithmetic 691 DECIMAL data type C/C++ 251 declarations generator (DCLGEN) description 125 DECLARE CURSOR statement description, row-positioned 710 description, rowset-positioned 715 FOR UPDATE clause 710 multilevel security 710 prepared statement 164, 166 scrollable cursor 705 WITH HOLD clause 708 WITH RETURN option 622 WITH ROWSET POSITIONING clause 715 DECLARE GLOBAL TEMPORARY TABLE statement DECLARE TABLE statement assembler 243 C 283 COBOL 334 Fortran 379 in application programs 124 PL/I 403 declared temporary table including column defaults 457 including identity columns 457 instances 457 ON COMMIT clause 458 qualifier for 457 remote access using a three-part name 889 requirements 457 working with 456 declaring tables and views advantages 124 DELETE statement correlated subquery 698 description 643, 655 positioned FOR ROW n OF ROWSET clause 716 restrictions 712 WHERE CURRENT clause 712, 716 deleting current rows 712 data 655 every row from a table 655 with TRUNCATE 655 rows from a table 655
457
delimiter, SQL 146 DENSE_RANK specification 668 example 668 department sample table 1106 creating 453 DEPLOY bind option for native SQL procedures 574 DESCRIBE INPUT statement 191 DESCRIBE statement column labels 166 INTO clauses 166 designing applications 1 designing applications distributed data 33 DFHEIENT macro 243 DFSLI000 (IMS language interface module) 968 diagnostics area RESIGNAL affect on 570 SIGNAL affect on 570 direct row access 738 disability xiv DISCONNECT (connection function of CAF) description 53 language examples 61 program example 66 syntax 61 displaying table columns 659 table privileges 659 DISTINCT clause of SELECT statement 665 unique values 665 distinct type assigning values 641 comparing types 702 description 489 example argument of user-defined function (UDF) 490 arguments of infix operator 771 casting constants 772 casting function arguments 771 casting host variables 772 LOB data type 490 function arguments 771 strong typing 489 UNION with INTERSECT 701 with EXCEPT 701 with UNION 701 distinct types creating 489 distributed data 34 choosing an access method 35 coordinating updates 36 copying a remote table 889 DBPROTOCOL bind option 889, 980 designing applications for 33 encoding scheme of retrieved data 895 example accessing remote temporary table 891 calling stored procedure at remote location 980 connecting to remote server 892, 980 specifying location in table name 980 using alias for multiple sites 892 using RELEASE statement 893 using three-part table names 889 executing long SQL statements 895 Index
1163
distributed data (continued) identifying server at run time 894 maintaining data currency 705 planning DB2 private protocol access 980 DRDA access 980 program preparation 984 programming coding with DB2 private protocol access 889 coding with DRDA access 889 retrieving from ASCII or Unicode tables 895 savepoints 35 scrollable cursors 35 three-part table names 889 transmitting mixed data 894 two-phase commit 36 using alias for location 892 DL/I batch application programming 757 checkpoint ID 1057 DB2 requirements 757 DDITV02 input data set 1004 DSNMTV01 module 1054 features 757 SSM= parameter 1054 submitting an application 1054 double-byte character large object (DBCLOB) 440 DRDA access accessing remote temporary table 891 bind options 981 coding an application 889 compared to DB2 private protocol access 35 connecting to remote server 892 planning 980 precompiler options 968 preparing programs 981 programming hints 894 releasing connections 893 sample program 352 SQL limitations at different servers 894 DRDA access with CONNECT statements sample program 352 DRDA with three-part names sample program 358 DROP TABLE statement 460 DSN applications, running with CAF 40 DSN command of TSO return code processing 1047 RUN subcommands 1047 DSN_FUNCTION_TABLE table 767 DSN_STATEMENT_CACHE_TABLE 195 populating 195 DSN8BC3 sample program 334 DSN8BD3 sample program 283 DSN8BE3 sample program 283 DSN8BF3 sample program 379 DSN8BP3 sample program 403 DSNACCOR stored procedure description 826 option descriptions 827 output 842 syntax diagram 827 DSNACICS stored procedure 808 DSNACICX exit routine 814 DSNAIMS stored procedure 816 DSNAIMS2 stored procedure 821
DSNALI loading 47 making available 47 DSNALI (CAF language interface module) example of deleting 66 example of loading 66 DSNCLI (CICS language interface module) 968 DSNEBP10 1029 DSNEBP11 1029 DSNH command of TSO 1094 DSNHASM procedure 1007 DSNHC procedure 1007 DSNHCOB procedure 1007 DSNHCOB2 procedure 1007 DSNHCPP procedure 1007 DSNHCPP2 procedure 1007 DSNHFOR procedure 1007 DSNHICB2 procedure 1007 DSNHICOB procedure 1007 DSNHLI entry point to DSNALI program example 66 DSNHLI2 entry point to DSNALI program example 66 DSNHPLI procedure 1007 DSNMTV01 module 1054 DSNRLI loading 79 making available 79 DSNTEDIT CLIST 994 DSNTEP2 and DSNTEP4 sample program specifying SQL terminator 1135, 1142 DSNTEP2 sample program how to run 1126 parameters 1126 program preparation 1126 DSNTEP4 sample program how to run 1126 parameters 1126 program preparation 1126 DSNTIAC subroutine assembler 243 C 283 COBOL 334 PL/I 403 DSNTIAD sample program how to run 1126 parameters 1126 program preparation 1126 specifying SQL terminator 1140 DSNTIAR subroutine assembler 214 C 283 COBOL 334 description 203 Fortran 379 PL/I 403 return codes 205 using 204 DSNTIAUL sample program how to run 1126 parameters 1126 program preparation 1126 DSNTIJSD sample program using to set up the Unified Debugger 1088 DSNTIR subroutine 379 DSNTPSMP creating external SQL procedures 581
1164
DSNTPSMP (continued) required authorizations 581 syntax for invoking 584 DSNTRACE data set 64 DSNXDBRM 951 DSNXNBRM 951 DXXMQGEN stored procedure description 866 invocation example 868 invocation syntax 866 output 869 parameter descriptions 867 DXXMQGENCLOB stored procedure description 873 invocation example 875 invocation syntax 874 output 876 parameter descriptions 874 DXXMQINSERT stored procedure description 846 invocation example 848 invocation syntax 847 output 849 parameter descriptions 847 DXXMQINSERTALL stored procedure description 856 invocation example 858 invocation syntax 857 output 858 parameter descriptions 857 DXXMQINSERTALLCLOB stored procedure description 863 invocation example 865 invocation syntax 864 output 866 parameter descriptions 864 DXXMQINSERTCLOB stored procedure description 851 invocation example 853 invocation syntax 852 output 854 parameter descriptions 852 DXXMQRETRIEVE stored procedure description 869 invocation example 872 invocation syntax 870 output 873 parameter descriptions 870 DXXMQRETRIEVECLOB stored procedure description 877 invocation example 879 invocation syntax 877 output 880 parameter descriptions 877 DXXMQSHRED stored procedure description 849 invocation example 850 invocation syntax 849 output 851 parameter descriptions 850 DXXMQSHREDALL stored procedure description 859 invocation example 860 invocation syntax 859 output 861 parameter descriptions 859
DXXMQSHREDALLCLOB stored procedure description 861 invocation example 862 invocation syntax 861 output 863 parameter descriptions 862 DXXMQSHREDCLOB stored procedure description 854 invocation example 855 invocation syntax 854 output 856 parameter descriptions 855 DYNAM option of COBOL 334 dynamic buffer allocation FETCH WITH CONTINUE 727 dynamic plan selection restrictions with CURRENT PACKAGESET special register 990 using packages with 990 dynamic SQL advantages and disadvantages 159 assembler program 166 C program 166 caching prepared statements 192 COBOL application program 334 COBOL program 162 description 158 effect of bind option REOPT(ALWAYS) 166 effect of WITH HOLD cursor 186 EXECUTE IMMEDIATE statement 184 fixed-list SELECT statements 164 Fortran program 379 host languages 162 including in your program 158 non-SELECT statements 163, 186 PL/I 166 PREPARE and EXECUTE 186 programming 158 requirements 159 restrictions 159 sample C program 287 statement caching 192 varying-list SELECT statements 166 DYNAMICRULES bind option 987
E
ECB (event control block) CONNECT function of CAF 54 IDENTIFY function of RRSAF 85 EDIT panel, SPUFI SQL statements 1067 embedded semicolon embedded 1140 embedded SQL applications host variables, XML data 216 XML data 216 employee photo and resume sample table 1111 employee sample table 1108 employee-to-project activity sample table 1115 ENCRYPT_TDES function 453 END-EXEC delimiter 146 end-of-data condition 712, 715 error arithmetic expression 214 division by zero 214 handling 207 Index
1165
error (continued) messages generated by precompiler 1094 overflow 214 return codes 202 run 1093 errors when retrieving data into a host variable determining cause 142 EXCEPT eliminating duplicate rows 671 keeping duplicate rows with ALL 673 EXCEPT clause columns of result table 670 exception condition handling 207 EXEC SQL delimiter 146 EXECUTE IMMEDIATE statement 184 EXECUTE statement dynamic execution 186 parameter types 166 USING DESCRIPTOR clause 166 EXISTS predicate, subquery 695 EXIT handler (SQL procedure) 557 exit routine abend recovery with CAF 46 attention processing with CAF 46 exit routines DSNACICX 814 EXPLAIN automatic rebind 999 EXPLAIN tables 195 DSN_FUNCTION_TABLE 768 external SQL procedure creating 580 external SQL procedures 542 creating by using DSNTPSMP 581 creating by using JCL 592 debugging with the Unified Debugger 1088 migrating to native SQL procedures 576 external stored procedure creating 596 modifying the definition 635 package 618 package authorizations 618 plan 618 preparing 596 reentrant 624 running as authorized program 596
F
FETCH CURRENT CONTINUE 726 FETCH statement description, multiple rows 716 description, single row 712 fetch orientation 720 host variables 164 multiple-row assembler 243 description 716 FOR n ROWS clause 719 number of rows in rowset 719 using with descriptor 716 using with host variable arrays 716 row and rowset positioning 732 scrolling through data 730 USING DESCRIPTOR clause 166 using row-positioned cursor 712 FETCH WITH CONTINUE 726
file reference variable 751 DB2-generated construct 751 FIND_DB2_SYSTEMS (connection function of RRSAF) language examples 114 syntax 114 fixed buffer allocation FETCH WITH CONTINUE 728 FLAG precompiler option 959 FLOAT precompiler option 959 FOLD value for C and CPP 959 value of precompiler option HOST 959 FOR UPDATE clause 710 FOREIGN KEY clause description 451 usage 451 format SELECT statement results 1081 SQL in input data set 1067 formatting result tables 664 Fortran application program @PROCESS statement 379 byte data type 379 constant syntax 373 data type compatibility 377 declaring tables 379 declaring views 379 defining the SQLDA 137, 372 host variable, declaring 373 INCLUDE statement 379 including SQLCA 371 indicator variable declaration 376 naming convention 379 parallel option 379 precompiler option defaults 966 SQLCODE host variable 371 SQLSTATE host variable 371 statement labels 379 variable declaration 373 WHENEVER statement 379 FROM clause joining tables 677 SELECT statement 660 FRR (functional recovery routine) in CAF 46 FULL OUTER JOIN clause 685 function resolution 764 functional recovery routine (FRR) in CAF 46
G
GENERAL linkage convention 599, 602 GENERAL WITH NULLS linkage convention 599, 605 general-use programming information, described 1153 generating table and view declarations by using DCLGEN 125 with DCLGEN from DB2I 126 generating XML documents for MQ message queue 907 GET DIAGNOSTICS using to get procedure status 791 GET DIAGNOSTICS statement condition items 208 connection items 208 data types for items 208, 211 description 208
1166
GET DIAGNOSTICS statement (continued) multiple-row INSERT 208 RETURN_STATUS item 566 ROW_COUNT item 716 statement items 208 using in handler 565 GO TO clause of WHENEVER statement 207 governor (resource limit facility) 199 GRANT statement 1065 graphic host variable assembler 232 C/C++ 251 COBOL 302 PL/I 385 graphic host variable array C/C++ 263 COBOL 312 PL/I 391 GRAPHIC precompiler option 959 GROUP BY clause use with aggregate functions 675
host variable array (continued) description 139, 155 indicator variable array 140 inserting multiple rows 156 PL/I 385, 391 retrieving multiple rows 156 host variable processing errors 142 host variables 138 compatible data types 143 XML in assembler 217 XML in C language 218 XML in COBOL 218 XML in embedded SQL applications XML in PL/I 220
216
I
IBM Data Studio Developer creating external SQL procedures 580 creating native SQL procedures 550 IDENTIFY (connection function of RRSAF) language examples 85 program example 118 syntax 85 identity column defining 441, 641 IDENTITY_VAL_LOCAL function 441 inserting in table 459 inserting values into 641 trigger 468 using as parent key 441 IKJEFT01 terminal monitor program in TSO 1059 implicit CAF connection 49 implicit RRSAF connections 82 IMS checkpoint calls 26 checkpoints 28 CHKP call 26 commit point 26 environment planning 1060 language interface module (DFSLI000) 968 link-editing 968 recovery 23 ROLB call 23, 26 ROLL call 23, 26 SYNC call 26 unit of work 26 IMS programs recovery 30 IN predicate, subquery 695 index types foreign key 451 primary 459 unique 459 unique on primary key 450 indicator structure description 140 indicator variable description 140 inserting null values 154 indicator variable array description 140 inserting null values 154 indicator variable arrays declaring with DCLGEN 126 Index
H
handler, using in SQL procedure 557 HAVING clause selecting groups subject to conditions 676 HOST FOLD value for C and CPP 959 precompiler option 959 host language dynamic SQL 162 host language data types compatibility with SQL data types 143 host structure C/C++ 271 COBOL 322 description 139 indicator structure 140 PL/I 396 retrieving row of data 157 using SELECT INTO 157 host variable assembler 231, 232 C/C++ 251 COBOL 301, 302 description 138 FETCH statement 164 Fortran 373 indicator variable 140 inserting values into tables 154 LOB assembler 742 C 742 COBOL 743 Fortran 745 PL/I 745 PL/I 385 PREPARE statement 164 retrieving a single row 147 setting the CCSID 141 static SQL flexibility 159 updating values in tables 153 using 147 host variable array C/C++ 263 COBOL 301, 312
indicator variable arraysC/C++ syntax 273 indicator variable arraysCOBOL syntax 326 indicator variable arraysPL/I syntax 398 indicator variables using to pass large output parameters 780 indicator variablesassembler syntax 237 indicator variablesC/C++ syntax 273 indicator variablesCOBOL syntax 326 indicator variablesFortran syntax 376 indicator variablesPL/I syntax 398 infinite loop 690 informational referential constraint automatic query rewrite 448 description 448 INNER JOIN clause 682 input data set DDITV02 1004 input parameters stored procedures 538 INSERT statement description 637 multiple rows 639 single row 637 VALUES clause 637 with identity column 641 with ROWID column 640 inserting values from host variable arrays 156 inserting data by using host variables 154 Interactive System Productivity Facility (ISPF) 1067 internal resource lock manager (IRLM) 1054 INTERSECT eliminating duplicate rows 671 keeping duplicate rows with ALL 673 INTERSECT clause columns of result table 670 invalid SQL terminator characters 1140 invoking call attachment facility (CAF) 40 Resource Recovery Services attachment facility (RRSAF) 72 invoking stored procedures syntax for command line processor 1058 isolation level REXX 422 ISPF (Interactive System Productivity Facility) browse 1072, 1081 DB2 uses dialog management 1067 DB2I Primary Option Menu 1010 Program Preparation panel 1012 programming 1051 scroll command 1082 ISPLINK SELECT services 1051
join operation FULL OUTER JOIN 685 INNER JOIN 682 joining a table to itself 682 joining tables 677 LEFT OUTER JOIN 686 more than one join 680 more than one join type 681 operand nested table expression 677 user-defined table function 677 RIGHT OUTER JOIN 686 SQL rules 687
K
KEEPDYNAMIC option BIND PACKAGE subcommand 197 BIND PLAN subcommand 197 key composite 450 foreign 451 parent 450 primary choosing 450 defining 449 recommendations for defining 449 using timestamp 450 unique 459
L
label, column 166 language interface modules DSNCLI 968 large object (LOB) character conversion 749 declaring host variables 741 for precompiler 741 declaring LOB file reference variables 741 declaring LOB locators 741 defining and moving data into DB2 438 description 440 expression 749 file reference variable 751 indicator variable 748 locator 747 materialization 747 sample applications 740 LEFT OUTER JOIN clause 686 LEVEL precompiler option 959 libraries for table declarations and host-variable structures 133 LINECOUNT precompiler option 959 link-editing 968 AMODE option 1035 RMODE option 1035 linkage conventions GENERAL 599, 602 GENERAL WITH NULLS 599, 605 SQL 599, 609 stored procedures 599 LOAD z/OS macro used by CAF 48 LOAD z/OS macro used by RRSAF 81 LOB column, definition 438
J
Java stored procedures debugging with the Unified Debugger 1088 JCL (job control language) batch backout example 1056 DDNAME list format 951 page number format 951 precompilation procedures 1007 precompiler option list format 950 preparing a CICS program 1008 preparing an object-oriented program 946 starting a TSO batch application 1059
LOB file reference variable assembler 232 C/C++ 251, 263 COBOL 302, 312 PL/I 385, 391 LOB host variable array C/C++ 263 COBOL 312 PL/I 391 LOB locator assembler 232 C/C++ 251, 263 COBOL 312 Fortran 373 PL/I 385, 391 LOB values fetching 726 LOB variable assembler 232 C/C++ 251 COBOL 302 Fortran 373 PL/I 385 location name 34 lock escalation when retrieving large numbers of rows 754
M
mapping macro assembler applications 248 DSNXDBRM 951 DSNXNBRM 951 MARGINS precompiler option 959 Mashup Center creating a feed 37 materialization LOBs 747 merging data 643 message analyzing 1094 obtaining text assembler 214 C 283 COBOL 334 Fortran 379 PL/I 403 message data WebSphere MQ 897 message handling WebSphere MQ 897 Message Queue Interface (MQI) DB2 MQ tables 908 policies 899 services 898 WebSphere MQ 898 messages WebSphere MQ 897 migrating applications 1 mixed data converting 894 description 436 transmitting to remote location 894
MLS (multilevel security) referential constraints 447 triggers 484 modified source statements 951 modify external stored procedure definition 635 modifying data 637 MPP program checkpoints 28 MQ message queue sending table data 907 shredding XML documents 907 MQ XML composition stored procedures alternative method 907 MQ XML decomposition stored procedures alternative method 907 MQI DB2 MQ functions converting applications to use 916 MQSeries DB2 functions connecting applications 920 MQPUBLISH 901 MQPUBLISHXML 901 MQREAD 901 MQREADALL 901 MQREADALLCLOB 901 MQREADALLXML 901 MQREADCLOB 901 MQREADXML 901 MQRECEIVE 901 MQRECEIVEALL 901 MQRECEIVEALLCLOB 901 MQRECEIVEALLXML 901 MQRECEIVECLOB 901 MQRECEIVEXML 901 MQSEND 901 MQSENDXML 901 MQSENDXMLFILE 901 MQSENDXMLFILECLOB 901 MQSUBSCRIBE 901 MQUNSUBSCRIBE 901 programming considerations 901 retrieving messages 919 sending messages 918 DB2 scalar functions 901 DB2 stored procedures DXXMQINSERT 901 DXXMQINSERTALL 901 DXXMQINSERTALLCLOB 901 DXXMQINSERTCLOB 901 DXXMQRETRIEVE 901 DXXMQRETRIEVECLOB 901 DXXMQSHRED 901 DXXMQSHREDALL 901 DXXMQSHREDALLCLOB 901 DXXMQSHREDCLOB 901 DB2 table functions 901 DB2 XML-specific functions 901 MSGFILE runtime option using to debug stored procedures 1091 multilevel security (MLS) check referential constraints 447 triggers 484 multiple-row FETCH statement checking DB2_LAST_ROW 211 SQLCODE +100 202 Index
multiple-row INSERT statement dynamic execution 188 NOT ATOMIC CONTINUE ON SQLEXCEPTION 208 using GET DIAGNOSTICS 208
N
naming convention assembler 243 C 283 COBOL 334 Fortran 379 PL/I 403 REXX 415 tables you create 453 native SQL procedures 542 BIND COPY 570 BIND COPY REPLACE 571 creating 550 debugging with the Unified Debugger 1088 deploying to another server 574 deploying to production 574 migrating from external SQL procedures 576 packages for 570 replacing packages for 571 nested compound statements cursor declarations 556 definition 553 for controlling scope of conditions 559 scope of variables 552 statement labels 554 nested table expression correlated reference 677 correlation name 677 join operation 677 NEWFUN precompiler option 959 NODYNAM option of COBOL 334 NOFOR precompiler option 959 NOGRAPHIC precompiler option 959 non-DB2 resources accessing from stored procedure 619 nontabular data storage 652 NOOPTIONS precompiler option 959 NOPADNTSTR precompiler option 959 NOSOURCE precompiler option 959 NOT FOUND clause of WHENEVER statement 207 not logged table spaces recovering 32 NOXREF precompiler option 959 NUL character in C 283 null determining value of output host variable 150 NULL pointer in C 283 null value column value of UPDATE statement 653 determining column value 152 inserting into columns 154 Null, in REXX 415 numeric data width of column in results 1081 numeric data description 436 width of column in results 1075
numeric host variable assembler 232 C/C++ 251 COBOL 302 Fortran 373 PL/I 385 numeric host variable array C/C++ 263 COBOL 312 PL/I 391 NUMTCB parameter 790
O
object-oriented program, preparation 946 objects creating in a application program 435 ON clause, joining tables 677 ONEPASS precompiler option 959 OPEN statement opening a cursor 711 opening a rowset cursor 715 prepared SELECT 164 USING DESCRIPTOR clause 166 without parameter markers 166 OPEN (connection function of CAF) description 53 language examples 58 program example 66 syntax 58 syntax usage 58 OPTIONS precompiler option 959 ORDER BY clause SELECT statement 666 with ORDER OF clause 651 ORDER OF clause 651 organization application examples 1128 outer join FULL OUTER JOIN 685 LEFT OUTER JOIN 686 RIGHT OUTER JOIN 686 output host variable determining if null 150 determining if truncated 150 output host variable processing errors 142 output parameters stored procedures 538, 780
P
package advantages 14 binding DBRM to a package 969 remote 970 to plans 975 identifying at run time 977 invalid 17 invalidated 999 listing 975 location 978 rebinding examples 991 rebinding with pattern-matching characters 991
package (continued) selecting 977, 978 trigger 480 version, identifying 972 package authorization for external stored procedures 618 packages collection ID for stored procedure packages 623 for external procedures 618 for native SQL procedures 570 for nested routines 623 packages bound on DB2 Version 4 and before 1 PADNTSTR precompiler option 959 panel Current SPUFI Defaults 1074, 1078 DB2I Primary Option Menu 1067 DSNEPRI 1067 DSNESP01 1067 DSNESP02 1074 DSNESP07 1078 EDIT (for SPUFI input data set) 1067 SPUFI 1067 panels DB2I (DB2 Interactive) 133 DB2I DEFAULTS 133 DCLGEN 133 DSNEDP01 133 DSNEOP01 133 DSNEOP02 133 REBIND PACKAGE 1039 REBIND TRIGGER PACKAGE 1041 parameter list stored procedures 538 parameter marker casting in function invocation 772 dynamic SQL 186 more than one 186 values provided by OPEN 164 with arbitrary statements 166 parameter marker information obtaining by using an SQLDA 191 PARAMETER STYLE SQL option using to debug stored procedures 1085 parent key 450 PARMS option 1050 performance affected by application structure 1051 application programs 756 programming applications 19 PERIOD precompiler option 959 phone application, description 1128 PL/I creating stored procedure 596 PL/I application program coding considerations 403 data type compatibility 399 DBCS constants 403 DCLGEN support 131 declaring tables 403 declaring views 403 defining the SQLDA 137, 384 host structure 396 host variable array, declaring 385 host variable, declaring 385 INCLUDE statement 403 including SQLCA 383
PL/I application program (continued) indicator variable array declaration 398 indicator variable declaration 398 naming convention 403 SQLCODE host variable 383 SQLSTATE host variable 383 statement labels 403 variable array declaration 391 variable declaration 385 WHENEVER statement 403 plan invalid 17 planning applications 1 bind options 19 binding 14 planning application programs SQL processing options 14 planning applications recovery 21 plans bound on DB2 Version 4 and before 1 pointer host variables declaring 276 referencing in SQL statements 275 policies Application Messaging Interface (AMI) 900 Message Queue Interface (MQI) 899 WebSphere MQ 897 precompiler binding on another system 946 data sets used by 948 description 945 diagnostics 951 functions 946 input 949 maximum size of input 949 modified source statements 951 option descriptions 958 options CONNECT 968 defaults 966 DRDA access 968 SQL 968 output 951 precompiling programs 945 running 946 starting dynamically 950 JCL for procedures 1007 submitting jobs ISPF panels 1012 submitting jobs with ISPF panels 941 predicate general rules 660 predictive governing in a distributed environment 200 with DEFER(PREPARE) 200 writing an application for 200 PRELINK utility 1012 PREPARE statement dynamic execution 186 host variable 164 INTO clause 166 prepared SQL statement caching 197 preparing programs overview 1002 Index
PRIMARY KEY clause ALTER TABLE statement 459 CREATE TABLE statement 449 problem determination, guidelines 1093 procedure status retrieving 791 setting 791 procedures creating versions 572 inheriting special registers 524 WLM_SET_CLIENT_INFO 806 product-sensitive programming information, described 1153 production environment deploying native SQL procedures 574 program preparation 941 program problems checklist documenting error situations 1092 error messages 1098, 1099 programming applications performance 19 programming interface information, described 1153 project activity sample table 1114 project application, description 1128 project sample table 1113 PSPI symbols 1153
Q
queries in application programs 123 tuning in application programs 756 QUOTE precompiler option 959 QUOTESQL precompiler option 959
R
RANK specification 668 example 668 real-time statistics stored procedure 826 reason code CAF 64 RRSAF 115 X"00D44057" 757 REBIND PACKAGE subcommand of DSN generating list of 994 rebinding with wildcard characters 991 remote 970 REBIND PLAN subcommand of DSN generating list of 994 options NOPKLIST 993 PKLIST 993 remote 970 REBIND TRIGGER PACKAGE subcommand of DSN 480 rebinding automatically conditions for 999 changes that require 17 list of plans and packages 993 lists of plans or packages 994 packages with pattern-matching characters 991 planning for 999 plans 993 recovering table spaces that are not logged 32
recovery IMS programs 23, 30 planning for in your application 21 recursive SQL controlling depth 464 description 690 examples 464 infinite loops 690 rules 690 single level explosion 464 summarized explosion 464 reentrant code in stored procedures 624 referential constraint defining 446 description 446 determining violations 1103 informational 448 name 451 on tables with data encryption 453 on tables with multilevel security 447 referential integrity effect on subqueries 700 programming considerations 1103 register conventions RRSAF 81 registering XML schema XSR_REGISTER 880 registers changed by CAF (call attachment facility) 49 release incompatibilities applications and SQL 1 RELEASE SAVEPOINT statement 31 RELEASE statement, with DRDA access 893 remote stored procedure preparing client program 785 REPLACE statement (COBOL) 334 requester 34 resetting control blocks CAF 61 RESIGNAL statement raising a condition 566 setting SQLSTATE value 568 resource limit facility (governor) description 199 writing an application for predictive governing 200 Resource Recovery Services attachment facility (RRSAF) application program preparation 81 authorization IDs 77 behavior summary 83 connection functions 85 connection name 77 connection properties 77 connection type 77 DB2 abends 77 description 75 implicit connections 82 invoking 72 loading 79 making available 79 parameters for CALL DSNRLI 82 program examples 118 program requirements 81 register conventions 81 return codes and reason codes 115 sample JCL 118
Resource Recovery Services attachment facility (RRSAF) (continued) sample scenarios 116 scope 77 terminated task 77 restart, DL/I batch programs using JCL 1056 restricted system definition 36 forcing rules 36 update rules 36 restricted systems 33 result column join operation 684 naming with AS clause 665 result set locator assembler 232 C/C++ 251 COBOL 302 Fortran 373 PL/I 385 result sets receiving from a stored procedure 792 result table description 664 example 664 numbering rows 668 of SELECT statement 664 read-only 710 result tables formatting 664 retrieving data in ASCII from DB2 for z/OS 166 data in Unicode from DB2 for z/OS 166 data using SELECT * 692 data, changing the CCSID 166 large volumes of data 754 multiple rows into host variable arrays 156 retrieving a single row into host variables 147 return code CAF 64 DSN command 1047 RRSAF 115 RETURN statement returning SQL procedure status 791 REXX creating stored procedure 596 REXX application program including SQLCA 413 SQLCODE host variable 413 SQLSTATE host variable 413 REXX program application programming interface CONNECT 418 DISCONNECT 418 EXECSQL 418 character input data 420 data type conversion 414 DSNREXX 418 error handling 415 input data type 414 isolation level 422 naming convention 415 naming cursors 423 naming prepared statements 423 running 1050 SQLDA 137, 413
REXX program (continued) statement label 415 RIB (release information block) CONNECT function of CAF 54 IDENTIFY function of RRSAF 85 RID for direct row access 738 RID function 738 RIGHT OUTER JOIN clause 686 RMODE link-edit option 1035 ROLB call, IMS 23, 26 ROLL call, IMS 23, 26 rollback changes within a unit of work 31 ROLLBACK option CICS SYNCPOINT command 22 ROLLBACK statement description 1072 error in IMS 757 in a stored procedure 549 TO SAVEPOINT clause 31 when to issue 22 with RRSAF 75 routines inheriting special registers 524 row selecting with WHERE clause 660 updating 653 updating current 712 updating large volumes 655 ROW CHANGE TIMESTAMP 677 ROW_NUMBER 668 row-level security 447 ROWID data type 436 inserting in table 459 ROWID column defining 640, 740 defining LOBs 438 inserting values into 640 using for direct row access 738 ROWID host variable array C/C++ 263 COBOL 312 PL/I 391 ROWID variable assembler 232 C/C++ 251 COBOL 302 Fortran 373 PL/I 385 rowset deleting current 716 updating current 716 rowset cursor closing 720 DB2 for z/OS down-level requester 896 declaring 715 end-of-data condition 715 example 735 multiple-row FETCH 716 opening 715 using 714 RRSAF functions summary of behavior 83 RUN subcommand of DSN return code processing 1047
RUN subcommand of DSN (continued) running a program in TSO foreground 1047 run time libraries, DB2I background processing 1016 EDITJCL processing 1016 running application program CICS 1060 errors 1093 IMS 1060
S
sample application DRDA access 352 DRDA access with CONNECT statements 352 DRDA with three-part names 358 dynamic SQL 287 environments 1130 languages 1130 LOB 1128 organization 1128 phone 1128 programs 1126 project 1128 static SQL 287 stored procedure 1128 use 1126 user-defined function 1128 sample applications 1105 databases 1123 storage 1122 storage groups 1123 structure 1122 Sample applications TSO 1131 sample data 1105 Sample data joins 688 sample program DSN8BC3 334 DSN8BD3 283 DSN8BE3 283 DSN8BF3 379 DSN8BP3 403 sample tables 1105 DSN8910.ACT (activity) 1105 DSN8910.DEMO_UNICODE (Unicode sample ) 1116 DSN8910.DEPT (department) 1106 DSN8910.EMP (employee) 1108 DSN8910.EMP_PHOTO_RESUME (employee photo and resume) 1111 DSN8910.EMPPROJACT (employee-to-project activity) 1115 DSN8910.PROJ (project) 1113 PROJACT (project activity) 1114 relationships 1117 storage 1122 views 1118 samples provided by DB2 1105 savepoint distributed environment 35 SAVEPOINT statement 31 savepoints 31 scalar pointer host variable declaring 276 referencing in SQL statements 275
scrollable cursor comparison of types 721 DB2 for z/OS down-level requester 896 distributed environment 35 dynamic dynamic model 705 fetching current row 724 fetch orientation 720 retrieving rows 720 sensitive dynamic 705 sensitive static 705 sensitivity 721 static creating delete hole 724 creating update hole 724 holes in result table 724 number of rows 723 removing holes 723 static model 705 updatable 705 scrolling backward through data 730 backward using identity columns 730 backward using ROWIDs 730 in any direction 722 ISPF (Interactive System Productivity Facility) 1082 search condition comparison operators 660 NOT keyword 660 SELECT statement 693 WHERE clause 660 SELECT FROM DELETE statement description 657 retrieving multiple rows 657 with INCLUDE clause 657 SELECT FROM INSERT statement BEFORE trigger values 645 default values 645 description 645 inserting into view 645 multiple rows cursor sensitivity 645 effect of changes 645 effect of SAVEPOINT and ROLLBACK 645 effect of WITH HOLD 645 processing errors 645 result table of cursor 645 using cursor 645 using FETCH FIRST 645 using INPUT SEQUENCE 645 result table 645 retrieving BEFORE trigger values 645 default values 645 generated values 645 multiple rows 645 special registers 645 using SELECT INTO 645 SELECT FROM MERGE statement description 644 with INCLUDE clause 644 SELECT FROM UPDATE statement description 654 retrieving multiple rows 654 with INCLUDE clause 645, 654
SELECT INTO using with host variables 147 SELECT statement AS clause with ORDER BY clause 666 changing result format 1081 clauses DISTINCT 665 EXCEPT 670 FROM 660 GROUP BY 675 HAVING 676 INTERSECT 670 ORDER BY 666 UNION 670 WHERE 660 derived column with AS clause 663 filtering by time changed 677 fixed-list 164 named columns 660 ORDER BY clause derived columns 666 with AS clause 666 parameter markers 166 search condition 693 selecting a set of rows 704 subqueries 693 unnamed columns 663 using with * (to select all columns) 660 column-name list 660 DECLARE CURSOR statement 710, 715 varying-list 166 selecting all columns 660 named columns 660 rows 660 some columns 660 unnamed columns 663 semicolon default SPUFI statement terminator 1074 embedded 1140 sequence numbers COBOL application program 334 Fortran 379 PL/I 403 sequence object creating 487 referencing 754 using across multiple tables 487 server 34 services Application Messaging Interface (AMI) 900 Message Queue Interface (MQI) 898 WebSphere MQ 897 SET clause of UPDATE statement 653 SET CURRENT PACKAGESET statement 978 SET ENCRYPTION PASSWORD statement 453 SET_CLIENT_ID (connection function of RRSAF) language examples 105 syntax 105 SET_ID (connection function of RRSAF) language examples 104 syntax 104 setting SQL terminator DSNTIAD 1140 SPUFI 1079
shortcut keys keyboard xiv shredding XML documents from MQ messages 907 SIGNAL statement raising a condition 566 setting condition message text 567 SIGNON (connection function of RRSAF) language examples 91 program example 118 syntax 91 SOME quantified predicate 695 sort key ORDER BY clause 666 ordering 666 SOURCE precompiler option 959 special register behavior in stored procedures 549 behavior in user-defined functions and stored procedures 524 CURRENT PACKAGE PATH 976 CURRENT PACKAGESET 976 CURRENT RULES 1001 SPUFI browsing output 1081 changed column widths 1081 CONNECT LOCATION field 1072 created column heading 1082 DB2 governor 1067 default values 1074 entering comments 1072 panels allocates RESULT data set 1072 filling in 1067 format and display output 1081 previous values displayed on panel 1067 selecting on DB2I menu 1067 processing SQL statements 1067 retrieving Unicode data 1072 setting SQL terminator 1079 specifying SQL statement terminator 1074 SQLCODE returned 1081 SPUFI DEFAULTS panel 1075 SQL (Structured Query Language) checking execution 201 coding dynamic 162 Fortran program 146 object extensions 488 cursors 704 dynamic coding 158 sample C program 287 return codes checking 202 handling 203 statement terminator 1140 string delimiter 1018 syntax checking 894 varying-list 166 SQL communication area (SQLCA) description 202 using DSNTIAR to format 203 SQL data types compatibility with host language data types 143 SQL linkage convention 599, 609 SQL precompiler option 959
SQL procedure allowable statements 543 body 543 changing 578 conditions, handling 557 ignoring conditions 566 parameters 543 preparation using DSNTPSMP procedure 582 SQL variable 543 SQL procedure processor (DSNTPSMP) result set 591 SQL procedure statement CONTINUE handler 557 EXIT handler 557 handler 557 handling errors 557 SQL procedures 542 creating versions 572 declaring cursors 556 nested compound statements 553 SQL processing optionsplanning for 14 SQL statement nesting restrictions 703 stored procedures 703 user-defined functions 703 SQL statement terminator modifying in DSNTEP2 and DSNTEP4 1135, 1142 modifying in DSNTIAD 1140 modifying in SPUFI 1074 specifying in SPUFI 1074 SQL statements ALTER FUNCTION 493 checking for successful execution 136 CLOSE 164, 714, 720 COBOL program sections 334 coding REXX 146 comments assembler 243 C 283 COBOL 334 Fortran 379 PL/I 403 REXX 415 CONNECT, with DRDA access 892 continuation assembler 243 C 283 COBOL 334 Fortran 379 PL/I 403 REXX 415 CREATE FUNCTION 493 DECLARE CURSOR description 710, 715 example 164, 166 DELETE description 712 example 655 DESCRIBE 166 embedded 949 error return codes 203 EXECUTE 186 EXECUTE IMMEDIATE 184 FETCH description 712, 716 example 164 Fortran program sections 379
SQL statements (continued) in application programs 123 INSERT 637 labels assembler 243 C 283 COBOL 334 Fortran 379 PL/I 403 REXX 415 margins assembler 243 C 283 COBOL 334 Fortran 379 PL/I 403 REXX 415 MERGE example 643 OPEN description 711, 715 example 164 PL/I program sections 403 PREPARE 186 RELEASE, with DRDA access 893 REXX program sections 415 SELECT description 660 joining a table to itself 682 joining tables 677 SELECT FROM DELETE 657 SELECT FROM INSERT 645 SELECT FROM MERGE 644 SELECT FROM UPDATE 654 set symbols 243 UPDATE description 712, 716 example 653 WHENEVER 207 SQL terminator, specifying in DSNTEP2 and DSNTEP4 1142 SQL terminator, specifying in DSNTIAD 1140 SQL variable 543 SQL-INIT-FLAG, resetting 334 SQLCA (SQL communication area) checking SQLCODE 206 checking SQLERRD(3) 202 checking SQLSTATE 206 checking SQLWARN0 202 description 202 DSNTIAC subroutine assembler 243 C 283 COBOL 334 PL/I 403 DSNTIAR subroutine assembler 214 C 283 COBOL 334 Fortran 379 PL/I 403 sample C program 287 SQLCA (SQL communications area) assembler 229 C/C++ 249 COBOL 299 deciding whether to include 136
SQLCA (SQL communications area) (continued) Fortran 371 PL/I 383 REXX 413 SQLCODE -923 1004 -925 757 -926 757 +100 207 +802 214 values 206 SQLCODE host variable deciding whether to declare 136 SQLDA setting an XML host variable 166 XML column 166 SQLDA (SQL descriptor area) allocating storage 166, 716 assembler 137, 231 assembler program 166 C 166 C/C++ 137, 250 COBOL 137, 301 declaring 716 dynamic SELECT example 166 for LOBs and distinct types 166 Fortran 137, 372 multiple-row FETCH statement 716 no occurrences of SQLVAR 166 OPEN statement 164 parameter markers 166 PL/I 137, 166, 384 requires storage addresses 166 REXX 137, 413 setting output fields 716 storing parameter marker information 191 varying-list SELECT statement 166 SQLERROR clause of WHENEVER statement 207 SQLN field of SQLDA 166 SQLRULES, option of BIND PLAN subcommand 1001 SQLSTATE "01519" 214 "2D521" 757 "57015" 1004 values 206 SQLSTATE host variable deciding whether to declare 136 SQLSTATEs web service consumer 938 SQLVAR field of SQLDA 166 SQLWARNING clause of WHENEVER statement 207 SSID (subsystem identifier), specifying 1017 static SQL C/C++ application program examples 287 description 158 host variables 159 sample C program 287 statistics real-time stored procedure 826 STDSQL precompiler option 959 storage acquiring retrieved row 166 SQLDA 166 addresses in SQLDA 166
storage groups for sample applications 1123 storage shortages when calling stored procedures 788 stored procedure abend 775 accessing CICS 619 accessing IMS 619 accessing non-DB2 resources 619 accessing transition tables 527 authorization to run 775 CALL statement 775 calling from a REXX procedure 781 calling from an application 775 COMMIT statement 549 compatible data types 781 creating 535 creating external stored procedure 596 cursors 549 Data types 627 defining parameter lists 599 DSNACCOR 826 DXXMQGEN 866 DXXMQGENCLOB 873 DXXMQINSERT 846 DXXMQINSERTALL 856 DXXMQINSERTALLCLOB 863 DXXMQINSERTCLOB 851 DXXMQRETRIEVE 869 DXXMQRETRIEVECLOB 877 DXXMQSHRED 849 DXXMQSHREDALL 859 DXXMQSHREDALLCLOB 861 DXXMQSHREDCLOB 854 example 539 invoking from a trigger 476 languages supported 546 linkage conventions 599 preparation 535 real-time statistics 826 reentrant 624 returning non-relational data 622 returning result set 622 ROLLBACK statement 549 running multiple instances 788 types 535 use of special registers 549 using host variables with 539 using temporary tables in 622 WLM_REFRESH 804 writing 546 writing in REXX 632 stored procedure result sets receiving in a program 792 stored procedures >DSNACCOX 797 ADMIN_COMMAND_DB2 797 ADMIN_COMMAND_DSN 797 ADMIN_COMMAND_UNIX 797 ADMIN_DS_BROWSE 797 ADMIN_DS_DELETE 797 ADMIN_DS_LIST 797 ADMIN_DS_RENAME 797 ADMIN_DS_SEARCH 797 ADMIN_DS_WRITE 797 ADMIN_INFO_HOST 797 ADMIN_INFO_SSID 797 Index
stored procedures (continued) ADMIN_JOB_CANCEL 797 ADMIN_JOB_FETCH 797 ADMIN_JOB_QUERY 797 ADMIN_JOB_SUBMIT 797 ADMIN_TASK_ADD 797 ADMIN_TASK_REMOVE 797 ADMIN_UTL_SCHEDULE 797 ADMIN_UTL_SORT 797 calling other programs 623 creating native SQL procedures 550 DB2-supplied 797 debugging 1085 debugging with the Unified Debugger 1088 description 536 DSNACCOR 797 DSNACICS 797, 808 DSNAEXP 797 DSNAHVPM 797 DSNAIMS 797, 816 DSNAIMS2 797, 821 DSNLEUSR 797 DSNTBIND 797 DSNTPSMP 797 DSNUTILS 797 DSNUTILU 797 DSNWSPM 797 DSNWZP 797 from command line processor 1058 GET_CONFIG 797 GET_MESSAGE 797 GET_SYSTEM_INFO 797 inheriting special registers 524 migrating external SQL to native SQL 576 package collection ID 623 packages for nested routines 623 parameter list 538 passing large output parameters 780 recording debugging messages 1091 running concurrently 790 SQLJ.ALTER_JAVA_PATH 797 SQLJ.DB2_INSTALL_JAR 797 SQLJ.DB2_REMOVE_JAR 797 SQLJ.DB2_REPLACE_JAR 797 SQLJ.DB2_UPDATEJARINFO 797 SQLJ.INSTALL_JAR 797 SQLJ.REMOVE_JAR 797 SQLJ.REPLACE_JAR 797 syntax for invoking from command line processor WLM_REFRESH 797 XDBDECOMPXML 797 XSR_ADDSCHEMADOC 797 XSR_COMPLETE 797 XSR_REGISTER 797 XSR_REMOVE 797 stored proceduresmoving to a WLM-established environment 549 storm drain effect 121 string data type 436 structure array host variable declaring 276 referencing in SQL statements 275 subquery basic predicate 695 conceptual overview 693
subquery (continued) correlated DELETE statement 698 description 697 example 697 UPDATE statement 698 DELETE statement 698 description 693 EXISTS predicate 695 IN predicate 695 quantified predicate 695 referential constraints 700 restrictions with DELETE 700 UPDATE statement 698 subsystem identifier (SSID), specifying 1017 subsystem parameters 790 summarizing group values 675 SWITCH TO (connection function of RRSAF) language examples 89 syntax 89 SYNC call, IMS 26 synchronization call abends 757 SYNCPOINT command of CICS 22 syntax diagram how to read xv SYSIBM.MQPOLICY_TABLE column descriptions 908 SYSIBM.MQSERVICE_TABLE column descriptions 908 SYSLIB data sets 1007 SYSPRINT precompiler output options section 1095 source statements section, example 1095 summary section, example 1095 symbol cross-reference section 1095 used to analyze errors 1095 SYSTERM output to analyze errors 1094
T
table altering changing definitions 453 using CREATE and ALTER 215 copying from remote locations 889 declaring in a program 124 deleting rows 655 dependent, cycle restrictions 447 displaying, list of 659 DROP statement 460 filling with test data 1066 incomplete definition of 459 inserting multiple rows 639 inserting single row 637 loading, in referential structure 445 merging rows 643 populating 1066 referential structure 446 retrieving 704 selecting values as you delete rows 657 selecting values as you insert rows 645 selecting values as you merge rows 644 selecting values as you update rows 654 temporary 456 updating rows 653 using three-part table names 889
table and view declarations including in an application program 133 table and view declarationsgenerating with DCLGEN 125 table declarations adding to libraries 133 table locator assembler 232 C/C++ 251 COBOL 302 PL/I 385 table space not logged recovering 32 table spaces for sample applications 1124 tables creating for data integrity 443 supplied by DB2 DSN_FUNCTION_TABLE 768 TCB (task control block) capabilities with CAF 44 capabilities with RRSAF 75 temporary table advantages of 456 working with 456 terminal monitor program (TMP) 1047 TERMINATE IDENTIFY (connection function of RRSAF) language examples 111 program example 118 syntax 111 TERMINATE THREAD (connection function of RRSAF) language examples 110 program example 118 syntax 110 TEST command of TSO 1098 test environment, designing 1047 test tables 1063 test views of existing tables 1063 TIME precompiler option 959 time that row was changed determining 754 TMP (terminal monitor program) DSN command processor 1047 running under TSO 1059 transition table, trigger 468 transition variable, trigger 468 TRANSLATE (connection function of CAF) description 53 language example 63 program example 66 syntax 63 TRANSLATE (connection function of RRSAF) language examples 112 syntax 112 translating requests into SQL 215 trigger activation order 481 activation time 468 cascading 480 coding 468 data integrity 484 delete 468 description 468 FOR EACH ROW 468 FOR EACH STATEMENT 468 granularity 468 insert 468
trigger (continued) interaction with constraints 482 interaction with security label columns 484 invoking stored procedure 476 invoking user-defined function 476 naming 468 parts example 468 parts of 468 passing transition tables 476 subject table 468 transition table 468 transition variable 468 triggering event 468 update 468 using identity columns 468 with row-level security 484 troubleshooting errors for output host variables 142 TRUNCATE 655 example 655 truncated determining value of output host variable 150 TSO CLISTs calling application programs 1060 running in foreground 1060 TEST command 1098 TWOPASS precompiler option 959
U
Unicode data, retrieving from DB2 for z/OS 166 sample table 1116 Unified Debugger debugging stored procedures 1088 setting up 1088 UNION eliminating duplicate rows 671 keeping duplicate rows with ALL 673 UNION clause columns of result table 670 combining SELECT statements 670 UNIQUE clause 449 unit of work CICS 22 completion open cursors 708 description 21 IMS 26 TSO 22 undoing changes within 31 updatable cursor 710 UPDATE statement correlated subqueries 698 description 653 positioned FOR ROW n OF ROWSET 716 restrictions 712 WHERE CURRENT clause 712, 716 SET clause 653 updating during retrieval 691 large volumes 655 updating data by using host variables 153
USER special register value in INSERT statement 436 value in UPDATE statement 653 user-defined function Debug Tool 1083 user-defined function (UDF) abnormal termination 531 accessing transition tables 527 ALTER FUNCTION statement 493 authorization ID 763 call type 511 casting arguments 772 characteristics 498 coding guidelines 500 CREATE FUNCTION statement 493 data type promotion 764 DBINFO structure 513 definer 496 defining 498 description 496 diagnostic message 510 DSN_FUNCTION_TABLE 767 example external scalar 493, 532 external table 493 function resolution 764 overloading operator 493 sourced 493 SQL 493 function resolution 764 host data types assembler 503 C 503 COBOL 503 PL/I 503 implementer 496 implementing 497 indicators input 510 result 510 inheriting special registers 524 invoker 496 invoking 761 invoking from a trigger 476 invoking from predicate 761 main program 500 multiple programs 523 naming 510 nesting SQL statements 703 parallelism considerations 500 parameter conventions 503 assembler 516 C 517 COBOL 520 PL/I 522 preparing 530 reentrant 523 restrictions 500 samples 533 scratchpad 511, 531 scrollable cursor 761 setting result values 509 simplifying function resolution 763 specific name 510 steps in creating and using 496 subprogram 500
user-defined function (UDF) (continued) table locators assembler 528 C 529 COBOL 529 PL/I 530 testing 1083 types 496 user-defined functions SOAPHTTPNC 937 SOAPHTTPNV 937 USING DESCRIPTOR clause EXECUTE statement 166 FETCH statement 166 OPEN statement 166
V
VALUES clause, INSERT statement 637 varbinary host variable assembler 232 C/C++ 251 COBOL 302 PL/I 385 varbinary host variable array C/C++ 263 PL/I 391 variable assembler 232 C/C++ 251 COBOL 302 declaring in SQL procedure 543 Fortran 373 PL/I 385 variable array C/C++ 263 COBOL 312 PL/I 391 version changing for SQL procedure 578 version of a package 972 VERSION precompiler option 959, 972 versions procedures 572 view contents 461 declaring in a program 124 description 460 dropping 462 identity columns 461 join of two or more tables 461 referencing special registers 461 retrieving 704 summary data 461 union of two or more tables 461 using deleting rows 655 inserting rows 637 updating rows 653
W
web service consumer SQLSTATEs 938 WebSphere MQ APIs 896
WebSphere MQ (continued) Application Messaging Interface (AMI) 899 commit environment 906 description 896 interaction with DB2 896 message handling 897 Message Queue Interface (MQI) 898 messages 897 WHENEVER statement assembler 243 C 283 COBOL 334 CONTINUE clause 207 Fortran 379 GO TO clause 207 NOT FOUND clause 207, 712 PL/I 403 specifying 207 SQL error codes 207 SQLERROR clause 207 SQLWARNING clause 207 WHERE clause SELECT statement description 660 joining a table to itself 682 joining tables 677 WITH clause common table expressions 464 WITH HOLD clause and CICS 708 and IMS 708 DECLARE CURSOR statement 708 restrictions 708 WITH HOLD cursor effect on dynamic SQL 186 WLM environment moving stored procedures 549 WLM_REFRESH stored procedure description 804 option descriptions 805 sample JCL 806 syntax diagram 804 WLM_SET_CLIENT_INFO procedure 806 write-down privilege 484
XML host variable array C/C++ 263 COBOL 312 PL/I 391 XML schema registration XSR_ADDSCHEMADOC stored procedure 882 XSR_COMPLETE stored procedure 884 XSR_REMOVE stored procedure 885 XML values fetching 726 XML variable assembler 232 C/C++ 251 COBOL 302 PL/I 385 XMLEXISTS 755 description 755 example 755 XMLQUERY 663 description 663 example 663 XPath 663 XPath contexts 663 XPath contexts XMLEXISTS 755 XPath expressions 663 XREF precompiler option 959 XSR_COMPLETE stored procedure 884 XSR_REGISTER register XML schema 880
X
XDBDECOMPXML authorization 887 invocation syntax 887 parameter descriptions 887 stored procedure 886 XML data embedded SQL applications 216 retrieving from tables, embedded SQL applications 223 selecting 663 updating, embedded SQL applications 221 XML decomposition XDBDECOMPXML stored procedure 886 XML file reference variable assembler 232 C/C++ 251, 263 COBOL 302, 312 PL/I 385, 391 XML host variable SQLDA 166
Printed in USA
SC18-9841-11