IBM DB2 Universal Database
Application Development Guide
Version 7
SC09-2949-00
Before using this information and the product it supports, be sure to read the general information under
“Appendix G. Notices” on page 813.
This document contains proprietary information of IBM. It is provided under a license agreement and is protected by
copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
Order publications through your IBM representative or the IBM branch office serving your locality or by calling
1-800-879-2755 in the United States or 1-800-IBM-4YOU in Canada.
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 1993, 2000. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

Part 1. DB2 Application Development Concepts . . . 1
Part 2. Embedding SQL in Applications . . . 43

Associating Transforms with a Type . . . 318
Where Transform Groups Must Be Specified . . . 320
Creating the Mapping to the Host Language Program: Transform Functions . . . 321
Working with Structured Type Host Variables . . . 340

Chapter 13. Using Large Objects (LOBs) . . . 341
What are LOBs? . . . 341
Understanding Large Object Data Types (BLOB, CLOB, DBCLOB) . . . 342
Understanding Large Object Locators . . . 343
Example: Using a Locator to Work With a CLOB Value . . . 345
How the Sample LOBLOC Program Works . . . 345
C Sample: LOBLOC.SQC . . . 346
COBOL Sample: LOBLOC.SQB . . . 348
Example: Deferring the Evaluation of a LOB Expression . . . 351
How the Sample LOBEVAL Program Works . . . 352
C Sample: LOBEVAL.SQC . . . 353
COBOL Sample: LOBEVAL.SQB . . . 355
Indicator Variables and LOB Locators . . . 358
LOB File Reference Variables . . . 358
Example: Extracting a Document To a File . . . 360
How the Sample LOBFILE Program Works . . . 360
C Sample: LOBFILE.SQC . . . 361
COBOL Sample: LOBFILE.SQB . . . 362
Example: Inserting Data Into a CLOB Column . . . 364

Chapter 14. User-Defined Functions (UDFs) and Methods . . . 365
What are Functions and Methods? . . . 365
Why Use Functions and Methods? . . . 366
UDF And Method Concepts . . . 369
Implementing Functions and Methods . . . 370
Writing Functions and Methods . . . 371
Registering Functions and Methods . . . 371
Examples of Registering UDFs and Methods . . . 371
Example: Exponentiation . . . 372
Example: String Search . . . 372
Example: BLOB String Search . . . 373
Example: String Search over UDT . . . 373
Example: External Function with UDT Parameter . . . 374
Example: AVG over a UDT . . . 375
Example: Counting . . . 375
Example: Counting with an OLE Automation Object . . . 376
Example: Table Function Returning Document IDs . . . 376
Using Functions and Methods . . . 377
Referring to Functions . . . 377
Examples of Function Invocations . . . 378
Using Parameter Markers in Functions . . . 379
Using Qualified Function Reference . . . 379
Using Unqualified Function Reference . . . 380
Summary of Function References . . . 380

Chapter 15. Writing User-Defined Functions (UDFs) and Methods . . . 385
Description . . . 385
Interface between DB2 and a UDF . . . 387
The Arguments Passed from DB2 to a UDF . . . 387
Summary of UDF Argument Use . . . 400
How the SQL Data Types are Passed to a UDF . . . 402
Writing Scratchpads on 32-bit and 64-bit Platforms . . . 410
The UDF Include File: sqludf.h . . . 411
Creating and Using Java User-Defined Functions . . . 412
Coding a Java UDF . . . 412
Changing How a Java UDF Runs . . . 414
Table Function Execution Model for Java . . . 415
Writing OLE Automation UDFs . . . 416
Creating and Registering OLE Automation UDFs . . . 417
Object Instance and Scratchpad Considerations . . . 418
How the SQL Data Types are Passed to an OLE Automation UDF . . . 418
Implementing OLE Automation UDFs in BASIC and C++ . . . 420
OLE DB Table Functions . . . 423
Creating an OLE DB Table Function . . . 424
Fully Qualified Rowset Names . . . 426
Defining a Server Name for an OLE DB Provider . . . 427
Defining a User Mapping . . . 427
Supported OLE DB Data Types . . . 428
Scratchpad Considerations . . . 430
Table Function Considerations . . . 432
Table Function Error Processing . . . 433
Debugging . . . 560
Diagnosing a Looping or Suspended Application . . . 560

Chapter 19. Writing Programs for DB2 Federated Systems . . . 563
Introduction to DB2 Federated Systems . . . 563
Accessing Data Source Tables and Views . . . 564
Working with Nicknames . . . 564
Using Isolation Levels to Maintain Data Integrity . . . 568
Working with Data Type Mappings . . . 569
How DB2 Determines What Data Types to Define Locally . . . 569
Default Data Type Mappings . . . 569
How You Can Override Default Type Mappings and Create New Ones . . . 570
Using Distributed Requests to Query Data Sources . . . 571
Coding Distributed Requests . . . 571
Using Server Options to Facilitate Optimization . . . 572
Invoking Data Source Functions . . . 574
Enabling DB2 to Invoke Data Source Functions . . . 574
Reducing the Overhead of Invoking a Function . . . 574
Specifying Function Names in the CREATE FUNCTION MAPPING Statement . . . 576
Discontinuing Function Mappings . . . 576
Using Pass-Through to Query Data Sources Directly . . . 576
SQL Processing in Pass-Through Sessions . . . 576
Considerations and Restrictions . . . 577

Part 6. Language Considerations . . . 579

Chapter 20. Programming in C and C++ . . . 581
Programming Considerations for C and C++ . . . 581
Language Restrictions for C and C++ . . . 581
Trigraph Sequences for C and C++ . . . 581
C++ Type Decoration Consideration . . . 582
Input and Output Files for C and C++ . . . 582
Include Files for C and C++ . . . 583
Including Files in C and C++ . . . 585
Embedding SQL Statements in C and C++ . . . 586
Host Variables in C and C++ . . . 588
Naming Host Variables in C and C++ . . . 588
Declaring Host Variables in C and C++ . . . 589
Indicator Variables in C and C++ . . . 593
Graphic Host Variable Declarations in C or C++ . . . 593
LOB Data Declarations in C or C++ . . . 596
LOB Locator Declarations in C or C++ . . . 598
File Reference Declarations in C or C++ . . . 599
Initializing Host Variables in C and C++ . . . 600
C Macro Expansion . . . 600
Host Structure Support in C and C++ . . . 602
Indicator Tables in C and C++ . . . 603
Null-terminated Strings in C and C++ . . . 604
Pointer Data Types in C and C++ . . . 606
Using Class Data Members as Host Variables in C and C++ . . . 607
Using Qualification and Member Operators in C and C++ . . . 608
Handling Graphic Host Variables in C and C++ . . . 609
Japanese or Traditional Chinese EUC, and UCS-2 Considerations in C and C++ . . . 614
Supported SQL Data Types in C and C++ . . . 615
FOR BIT DATA in C and C++ . . . 620
SQLSTATE and SQLCODE Variables in C and C++ . . . 620

Chapter 21. Programming in Java . . . 623
Programming Considerations for Java . . . 623
Comparison of SQLJ to JDBC . . . 623
Advantages of Java over Other Languages . . . 624
SQL Security in Java . . . 624
Source and Output Files for Java . . . 624
Java Class Libraries . . . 625
Java Packages . . . 625
Supported SQL Data Types in Java . . . 625
SQLSTATE and SQLCODE Values in Java . . . 627
Trace Facilities in Java . . . 627
Creating Java Applications and Applets . . . 628
JDBC Programming . . . 630
How the DB2Appl Program Works . . . 630
Distributing a JDBC Application . . . 633
Distributing and Running a JDBC Applet . . . 633
JDBC 2.0 . . . 634
SQLJ Programming . . . 637
DB2 SQLJ Support . . . 637
Embedding SQL Statements in Java . . . 639
Host Variables in Java . . . 646
Calls to Stored Procedures and Functions in SQLJ . . . 646
Compiling and Running SQLJ Programs . . . 646
Contents ix
Part 7. Appendixes . . . . . . . 721 Large Object (LOB) Data Type . . . . 775
User Defined Types (UDTs) . . . . . 775
ROWID Data Type . . . . . . . . 776
Appendix A. Supported SQL Statements 723
64-bit Integer (BIGINT) data type . . . 776
Using Data Control Language (DCL) . . . 776
Appendix B. Sample Programs . . . . 729
Connecting and Disconnecting . . . . . 776
DB2 API Non-Embedded SQL Samples . . 733
Precompiling . . . . . . . . . . . 777
DB2 API Embedded SQL Samples . . . . 736
Blocking . . . . . . . . . . . . 777
Embedded SQL Samples With No DB2 APIs 738
Package Attributes . . . . . . . . 778
User-Defined Function Samples . . . . . 740
C Null-terminated Strings . . . . . . 779
DB2 Call Level Interface Samples . . . . 740
Standalone SQLCODE and SQLSTATE 779
Java Samples . . . . . . . . . . . 742
Defining a Sort Order . . . . . . . . 779
SQL Procedure Samples. . . . . . . . 744
Managing Referential Integrity . . . . . 779
ADO, RDO, and MTS Samples . . . . . 746
Locking . . . . . . . . . . . . . 780
Object Linking and Embedding Samples . . 747
Differences in SQLCODEs and SQLSTATEs 780
Command Line Processor Samples . . . . 748
Using System Catalogs . . . . . . . . 781
Log Management User Exit Samples . . . 749
Numeric Conversion Overflows on Retrieval
Assignments . . . . . . . . . . . 781
Appendix C. DB2DARI and DB2GENERAL Isolation Levels . . . . . . . . . . 781
Stored Procedures and UDFs . . . . . 751 Stored Procedures. . . . . . . . . . 782
DB2DARI Stored Procedures . . . . . . 751 Stored Procedure Builder . . . . . . 783
Using the SQLDA in a Client Application 751 NOT ATOMIC Compound SQL . . . . . 785
Using Host Variables in a DB2DARI Multisite Update with DB2 Connect. . . . 785
Client . . . . . . . . . . . . . 752 Host or AS/400 Server SQL Statements
Using the SQLDA in a Stored Procedure 752 Supported by DB2 Connect . . . . . . 786
Summary of Data Structure Usage . . . 753 Host or AS/400 Server SQL Statements
Input/Output SQLDA and SQLCA Rejected by DB2 Connect . . . . . . . 787
Structures . . . . . . . . . . . 754
Return Values for DB2DARI Stored
Appendix E. Simulating EBCDIC Binary
Procedures . . . . . . . . . . . 755
Collation . . . . . . . . . . . . 789
DB2GENERAL UDFs and Stored Procedures 755
Supported SQL Data Types . . . . . 756
Appendix F. Using the DB2 Library . . . 795
Classes for Java Stored Procedures and
DB2 PDF Files and Printed Books . . . . 795
UDFs . . . . . . . . . . . . . 757
DB2 Information . . . . . . . . . 795
NOT FENCED Stored Procedures . . . 763
Printing the PDF Books . . . . . . . 804
Example Input-SQLDA Programs . . . . 764
Ordering the Printed Books . . . . . 805
How the Example Input-SQLDA Client
DB2 Online Documentation . . . . . . 806
Application Works . . . . . . . . 765
Accessing Online Help . . . . . . . 806
C Example: V5SPCLI.SQC . . . . . . 767
Viewing Information Online . . . . . 808
How the Example Input-SQLDA Stored
Using DB2 Wizards . . . . . . . . 810
Procedure Works . . . . . . . . . 770
Setting Up a Document Server . . . . 811
C Example: V5SPSRV.SQC . . . . . . 771
Searching Information Online . . . . . 812
Appendix D. Programming in a Host or
Appendix G. Notices . . . . . . . . 813
AS/400 Environment . . . . . . . . 773
Trademarks . . . . . . . . . . . . 816
Using Data Definition Language (DDL) . . 774
Using Data Manipulation Language (DML) 775
Numeric Data Types . . . . . . . . 775 Index . . . . . . . . . . . . . 819
Mixed-Byte Data . . . . . . . . . 775
Long Fields . . . . . . . . . . . 775 Contacting IBM . . . . . . . . . . 847
Product Information . . . . . . . . . 847
To effectively use the information in this book to design, write, and test your
DB2 application programs, you need to refer to the SQL Reference along with
this book. If you are using the DB2 Call Level Interface (CLI) or Open
Database Connectivity (ODBC) interface in your applications to access DB2
databases, refer to the CLI Guide and Reference. To perform database manager
administration functions using the DB2 administration APIs in your
application programs, refer to the Administrative API Reference.
You can also develop applications where one part of the application runs on
the client and another part runs on the server.
You can use object-based extensions to DB2 to make your DB2 application
programs more powerful, flexible, and active than traditional DB2
applications. The extensions include large objects (LOBs), distinct types,
structured types, user-defined functions (UDFs), and triggers. These features
of DB2 are described in:
- “Chapter 10. Using the Object-Relational Capabilities” on page 267
- “Chapter 11. User-defined Distinct Types” on page 273
- “Chapter 12. Working with Complex Objects: User-Defined Structured Types” on page 283
- “Chapter 13. Using Large Objects (LOBs)” on page 341
- “Chapter 14. User-Defined Functions (UDFs) and Methods” on page 365
- “Chapter 15. Writing User-Defined Functions (UDFs) and Methods” on page 385
- “Chapter 16. Using Triggers in an Active DBMS” on page 473
The application development process described in this book assumes that you
have established the appropriate operating environment. This means that the
following are properly installed and configured:
- A supported compiler or interpreter for developing your applications.
For details on how to accomplish these tasks, refer to the Application Building
Guide and the Quick Beginnings books for your operating environment.
You can develop applications at a server, or on any client that has the DB2
Application Development Client installed. You can run applications with the
server, the DB2 Run-Time Client, or the DB2 Administrative Client. You can
also develop Java JDBC programs on one of these clients, provided that you
install the “Java Enablement” component when you install the client. That
means you can execute any DB2 application on these clients. However, unless
you also install the DB2 Application Development Client on these clients, you
can only develop JDBC applications on them.
DB2 supports the C, C++, Java (SQLJ), COBOL, and FORTRAN programming
languages through its precompilers. In addition, DB2 provides support for the
Perl, Java (JDBC), and REXX dynamically interpreted languages. For
information on the specific precompilers provided by DB2, and the languages
supported on your platform, refer to the Application Building Guide.
The body of every program contains the SQL statements that access and
manage data. These statements constitute transactions. Transactions must
include the following statements:
- The CONNECT statement, which establishes a connection to a database server
- One or more:
  - Data manipulation statements (for example, the SELECT statement)
  - Data definition statements (for example, the CREATE statement)
  - Data control statements (for example, the GRANT statement)
- Either the COMMIT or ROLLBACK statement to end the transaction
The end of the application program typically contains SQL statements that:
- Release the program’s connection to the database server
- Clean up any resources
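A minimal embedded SQL sketch of this overall structure follows. It is precompiler source rather than plain C, so it is illustrative only; the database name SAMPLE, the EMPLOYEE table, and the host variable nrows are assumptions for the example:

```
EXEC SQL INCLUDE SQLCA;

EXEC SQL CONNECT TO sample;              /* establish the connection       */

EXEC SQL SELECT COUNT(*) INTO :nrows     /* a data manipulation statement; */
         FROM employee;                  /* the transaction begins here    */

EXEC SQL COMMIT;                         /* end the transaction            */

EXEC SQL CONNECT RESET;                  /* release the connection         */
```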
Declaring and Initializing Variables
To code a DB2 application, you must first declare:
- the variables that interact with the database manager
- the SQLCA, if applicable
Declaring Variables that Represent SQL Objects: For DB2 Version 7, the
names of tables, aliases, views, and correlations have a maximum length of
128 bytes. Column names have a maximum length of 30 bytes. In DB2 Version
7, schema names have a maximum length of 30 bytes. Future releases of DB2
may increase the lengths of column names and other identifiers of SQL objects
up to 128 bytes. If you declare variables that represent SQL objects with
lengths of less than 128 bytes, future increases in SQL object identifier
lengths may affect the stability of your applications. For example, if you
declare the variable char schema_name[9] in a C++ application to hold a
schema name, your application functions properly for the allowed schema
names in DB2 Version 6, which have a maximum length of 8 bytes.
char schema_name[9]; /* holds a null-terminated schema name of up to 8 bytes;
works for DB2 Version 6, but may truncate schema names in future releases */
However, if you migrate the database to DB2 Version 7, which accepts schema
names with a maximum length of 30 bytes, your application cannot
differentiate between the schema names LONGSCHEMA1 and LONGSCHEMA2. The
variable truncates both schema names to the 8-byte limit, LONGSCHE, and any
statement in your application that depends on differentiating the schema
names fails. To increase the longevity of your application, declare the
schema name variable with a 128-byte length as follows:
char schema_name[129]; /* holds a null-terminated schema name of up to 128 bytes;
good for DB2 Version 7 and beyond */
To ease the use of this coding practice and increase the clarity of your C/C++
application code, consider using C macro expansion to declare the lengths of
these SQL object identifiers. Since the include file sql.h declares
SQL_MAX_IDENT to be 128, you can easily declare SQL object identifiers
with the SQL_MAX_IDENT macro. For example:
Relating Host Variables to an SQL Statement: You can use host variables to
receive data from the database manager or to transfer data to it from the host
program. Host variables that receive data from the database manager are
output host variables, while those that transfer data to it from the host program
are input host variables.
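The statement this paragraph describes did not survive extraction; based on the variable and column names discussed below, it was presumably similar to this embedded SQL sketch:

```
EXEC SQL SELECT HIREDATE, EDLEVEL
         INTO :hdate, :lvl
         FROM EMPLOYEE
         WHERE EMPNO = :idno;
```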
It contains two output host variables, hdate and lvl, and one input host
variable, idno. The database manager uses the data stored in the host variable
idno to determine the EMPNO of the row that is retrieved from the
EMPLOYEE table. If the database manager finds a row that meets the search
criteria, hdate and lvl receive the data stored in the columns HIREDATE and
EDLEVEL, respectively. This statement illustrates an interaction between the
host program and the database manager using columns of the EMPLOYEE
table.
In order to determine exactly how to define the host variable for use with a
column, you need to find out the SQL data type for that column. Do this by
querying the system catalog, which is a set of views containing information
about all tables created in the database. The SQL Reference describes this
catalog.
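For example, a query like the following (a sketch against the SYSCAT.COLUMNS catalog view; the schema and table names are illustrative) returns the data type and length of each column in a table:

```
SELECT COLNAME, TYPENAME, LENGTH
  FROM SYSCAT.COLUMNS
 WHERE TABSCHEMA = 'USER1'
   AND TABNAME = 'EMPLOYEE'
```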
After you have determined the data types, you can refer to the conversion
charts in the host language chapters and code the appropriate declarations.
The Declaration Generator utility (db2dclgn) is also available for generating
the appropriate declarations for a given table in a database. For more
information on db2dclgn, see “Declaration Generator - db2dclgn” on page 73
and refer to the Command Reference.
Table 4 also shows the BEGIN and END DECLARE SECTION statements.
Observe how the delimiters for SQL statements differ for each language. For
the exact rules of placement, continuation, and delimiting of these statements,
see the language-specific chapters of this book.
For Java applications: You do not explicitly use the SQLCA in Java. Instead,
use the SQLException instance methods to get the SQLSTATE and SQLCODE
values. See “SQLSTATE and SQLCODE Values in Java” on page 627 for more
details.
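A minimal sketch of that pattern follows. The SQLSTATE “42704” and SQLCODE -204 are constructed locally for illustration rather than returned by a real database; a JDBC driver would throw the exception for you:

```java
import java.sql.SQLException;

public class SqlcaDemo {
    // In Java the SQLSTATE and SQLCODE come from the SQLException
    // instance methods, not from an SQLCA structure.
    static String describe(SQLException e) {
        return "SQLSTATE=" + e.getSQLState() + " SQLCODE=" + e.getErrorCode();
    }

    public static void main(String[] args) {
        SQLException e = new SQLException("undefined name", "42704", -204);
        System.out.println(describe(e)); // prints SQLSTATE=42704 SQLCODE=-204
    }
}
```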
When you preprocess your program, the database manager inserts host
language variable declarations in place of the INCLUDE SQLCA statement.
The system communicates with your program using the variables for warning
flags, error codes, and diagnostic information.
After executing each SQL statement, the system returns a return code in both
SQLCODE and SQLSTATE. SQLCODE is an integer value that summarizes
the execution of the statement, and SQLSTATE is a character field that
provides common error codes across IBM’s relational database products.
SQLSTATE also conforms to the ISO/ANS SQL92 and FIPS 127-2 standard.
Note that if SQLCODE is less than 0, it means an error has occurred and the
statement has not been processed. If the SQLCODE is greater than 0, it means
a warning has been issued, but the statement is still processed. See the
Message Reference for a listing of SQLCODE and SQLSTATE error conditions.
If you want the system to control error checking after each SQL statement, use
the WHENEVER statement.
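For example (a sketch; the errchk label matches the convention discussed below), these embedded SQL declarations branch on errors and ignore warnings:

```
EXEC SQL WHENEVER SQLERROR GOTO errchk;   /* branch when SQLCODE < 0   */
EXEC SQL WHENEVER SQLWARNING CONTINUE;    /* ignore positive SQLCODEs  */
```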
Note: Embedded SQL for Java (SQLJ) applications cannot use the
WHENEVER statement. Use the SQLException methods described in
“SQLSTATE and SQLCODE Values in Java” on page 627 to handle
errors returned by SQL statements.
That is, whenever an SQL error occurs, program control is transferred to the
code that follows the label, such as errchk. This code should include logic to
analyze the error indicators in the SQLCA. Depending upon the logic at
errchk, the program may continue with the next sequential instruction,
perform some special function, or, as in most situations, roll back the
current transaction and terminate the program. See “Coding
Transactions” on page 17 for more information on transactions and
“Diagnostic Handling and the SQLCA Structure” on page 115 for more
information about how to control error checking in your application program.
If your application must be compliant with the ISO/ANS SQL92 or FIPS 127-2
standard, do not use the above statements or the INCLUDE SQLCA statement.
For more information on the ISO/ANS SQL92 and FIPS 127-2 standards, see
“Definition of FIPS 127-2 and ISO/ANS SQL92” on page 15. For the
alternative to coding the above statements, see the following:
- For C or C++ applications, see “SQLSTATE and SQLCODE Variables in C and C++” on page 620
- For COBOL applications, see “SQLSTATE and SQLCODE Variables in COBOL” on page 685
- For FORTRAN applications, see “SQLSTATE and SQLCODE Variables in FORTRAN” on page 700
After the connection has been established, your program can issue SQL
statements that:
- Manipulate data
- Define and maintain database objects
- Initiate control operations, such as granting user authority, or committing changes to the database
A connection lasts until a CONNECT RESET, CONNECT TO, or
DISCONNECT statement is issued. In a multisite update environment, a
connection also lasts until a DB2 RELEASE then DB2 COMMIT is issued. A
CONNECT TO statement does not terminate a connection when using
multisite update (see “Multisite Update” on page 525).
Coding Transactions
A transaction is a sequence of SQL statements (possibly with intervening host
language code) that the database manager treats as a whole. An alternative
term that is often used for transaction is unit of work.
To ensure the consistency of data at the transaction level, the system makes
sure that either all operations within a transaction are completed, or none are
completed. Suppose, for example, that the program is supposed to deduct
money from one account and add it to another. If you place both of these
updates in a single transaction, and a system failure occurs while they are in
progress, then when you restart the system, the database manager
automatically restores the data to the state it was in before the transaction
began. If a program error occurs, the database manager restores all changes
made by the statement in error. The database manager will not undo work
performed in the transaction prior to execution of the statement in error,
unless you specifically roll it back.
You can code one or more transactions within a single application program,
and it is possible to access more than one database from within a single
transaction. A transaction that accesses more than one database is called a
multisite update (see “Multisite Update” on page 525).
Beginning a Transaction
A transaction begins implicitly with the first executable SQL statement and
ends with either a COMMIT or a ROLLBACK statement, or when the
program ends.
Ending a Transaction
To end a transaction, you can use either:
- The COMMIT statement to save its changes
- The ROLLBACK statement to ensure that these changes are not saved
Using the COMMIT Statement: This statement ends the current transaction.
It makes the database changes performed during the current transaction
visible to other processes.
In the event of a severe error, you will receive a message indicating that you
cannot issue a ROLLBACK statement. Do not issue a ROLLBACK statement if
a severe error occurs such as the loss of communications between the client
and server applications, or if the database gets corrupted. After a severe error,
the only statement you can issue is a CONNECT statement.
Ending the Program
To properly end your program:
1. End the current transaction (if one is in progress) by explicitly issuing
either a COMMIT statement or a ROLLBACK statement.
2. Release your connection to the database server by using the CONNECT
RESET statement.
3. Clean up resources used by the program. For example, free any temporary
storage or data structures that are used.
Note: If the current transaction is still active when the program terminates,
DB2 implicitly ends the transaction. Since DB2’s behavior when it
implicitly ends a transaction is platform specific, you should explicitly
end all transactions by issuing a COMMIT or a ROLLBACK statement
before the program terminates. See Implicitly Ending a Transaction for
details on how DB2 implicitly ends a transaction.
Implicitly Ending a Transaction
If your program terminates without ending the current transaction, DB2
implicitly ends the current transaction (see “Ending the Program” for details
on how to properly end your program). DB2 implicitly terminates the current
transaction by issuing either a COMMIT or a ROLLBACK statement when the
application ends. Whether DB2 issues a COMMIT or ROLLBACK depends on
factors such as:
- Whether the application terminated normally (program logic)
- The platform on which the DB2 server runs
The capabilities you use, and the extent to which you use them, can vary
greatly. This section is an overview of the available capabilities that can
significantly affect your design, and provides some reasons why you might
choose one over another. Each capability described includes a reference to
more detailed information.
You will notice that this list mentions some capabilities more than once, such
as triggers. This reflects the flexibility of these capabilities to address
more than one design criterion.
Your first and most fundamental decision is whether or not to move the logic
that enforces application-related rules about the data into the database.
The key advantage in transferring logic focused on the data from the
application into the database is that your application becomes more
independent of the data. The logic surrounding your data is centralized in one
place, the database. This means that you can change data or data logic once
and affect all applications immediately.
This latter advantage is very powerful, but you must also consider that any
data logic put into the database affects all users of the data equally. You must
consider whether the rules and constraints that you wish to impose on the
data apply to all users of the data or just the users of your application.
Your application requirements may also affect whether to enforce rules at the
database or the application. For example, you may need to process validation
errors on data entry in a specific order. In general, you should do these types
of data validation in the application code.
You should also consider the computing environment where the application is
used. You need to weigh the difference between performing logic on the
client machines and running that logic on the usually more powerful
database server machines, using stored procedures, UDFs, or a combination of
both.
Embedded SQL
Embedded SQL has the advantage that it can consist of either static or
dynamic SQL or a mixture of both types. If the content and format of your
SQL statements will be frozen when your application is in use, you should
consider using embedded static SQL in your application. With static SQL, the
person who executes the application temporarily inherits the privileges of the
user that bound the application to the database. Unless you bind the
application with the DYNAMICRULES BIND option, dynamic SQL uses the
privileges of the person who executes the application. In general, you should
use embedded dynamic SQL where the executable statements are determined
at run time. This creates a more secure application program that can handle a
greater variety of input.
Note: Embedded SQL for Java (SQLJ) applications can only embed static SQL
statements. However, you can use JDBC to make dynamic SQL calls in
SQLJ applications.
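As a sketch of that difference (illustrative only; the connection `con`, the table, and the user-supplied string are assumptions), a static statement is embedded with an SQLJ clause, while a dynamic statement goes through JDBC:

```
// Static SQL, checked by the SQLJ translator at precompile time:
#sql { DELETE FROM employee WHERE workdept = 'D11' };

// Dynamic SQL, built and executed at run time through JDBC:
Statement stmt = con.createStatement();
stmt.executeUpdate(userSuppliedSqlString);
```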
For details of coding and building DB2 applications using REXX, see
“Chapter 25. Programming in REXX” on page 703.
DB2 Call Level Interface (DB2 CLI) and Open Database Connectivity
(ODBC)
The DB2 Call Level Interface (DB2 CLI) is IBM’s callable SQL interface to the
DB2 family of database servers. It is a C and C++ application programming
interface for relational database access that uses function calls to pass
dynamic SQL statements as function arguments. A callable SQL interface is an
alternative to embedded dynamic SQL, but unlike embedded SQL, it does not
require precompiling or binding.
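For example, a DB2 CLI application passes the SQL text directly to a function call. This is a fragment, not a complete program; hstmt is a statement handle assumed to have been allocated earlier with SQLAllocHandle on an open connection:

```
/* Execute a dynamic SQL statement in one call; no precompile or bind step */
SQLExecDirect(hstmt,
              (SQLCHAR *) "UPDATE employee SET edlevel = edlevel + 1",
              SQL_NTS);
```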
For more information on the ODBC support in DB2, see the CLI Guide and
Reference.
JDBC
DB2’s Java support includes JDBC, a vendor-neutral dynamic SQL interface
that provides data access to your application through standardized Java
methods. JDBC is similar to DB2 CLI in that you do not have to precompile or
bind a JDBC program. As a vendor-neutral standard, JDBC applications offer
increased portability.
Microsoft Specifications
You can write database applications that conform to the ActiveX Data
Objects (ADO) specification in Microsoft Visual Basic™ or Visual C++™. ADO
applications use the OLE DB Bridge. You can write database applications that conform to the
Remote Data Object (RDO) specifications in Visual Basic. You can also define
OLE DB table functions that return data from OLE DB providers. For more
information on OLE DB table functions, see “OLE DB Table Functions” on
page 423.
This book does not attempt to provide a tutorial on writing applications that
conform to the ADO and RDO specifications. For full samples of DB2
applications that use the ADO and RDO specifications, refer to the following
directories:
v For samples written in Visual Basic, refer to sqllib\samples\VB
v For samples written in Visual C++, refer to sqllib\samples\VC
v For samples that use the RDO specification, refer to sqllib\samples\RDO
v For samples that use the Microsoft Transaction Server™, refer to
sqllib\samples\MTS
Perl DBI
DB2 supports the Perl Database Interface (DBI) specification for data access
through the DBD::DB2 driver. For more information on creating applications
with the Perl DBI that access DB2 databases, see “Chapter 22. Programming in
Perl” on page 661. The DB2 Universal Database Perl DBI Web site at
http://www.ibm.com/software/data/db2/perl/ contains the latest DBD::DB2
driver and information on the support available for your platform.
Query Products
Query products, including IBM Query Management Facility (QMF) and Lotus
Notes, support query development and reporting. The products vary in how
SQL statements are developed and in the degree of logic that can be introduced.
Depending on your needs, these products may meet your data access
requirements. This book does not provide further information on query
products.
Data Value Control
One traditional area of application logic is validating and protecting data
integrity by controlling the values allowed in the database. Applications contain
logic that checks data values for validity as they are entered. (For
example, checking that the department number is a valid number and that it
refers to an existing department.)
Data Types
The database stores every data element in a column of a table, and defines
each column with a data type. This data type places certain limits on the
types of values for the column. For example, an integer must be a number
within a fixed range. The use of the column in SQL statements must conform
to certain behaviors; for instance, the database does not compare an integer to
a character string. DB2 includes a set of built-in data types with defined
characteristics and behaviors. DB2 also supports defining your own data
types, called user-defined distinct types, that are based on the built-in types but
do not automatically support all the behaviors of the built-in type. You can
also use data types, like binary large object (BLOB), to store data that may
consist of a set of related values, such as a data structure.
Unique Constraints
Unique constraints prevent occurrences of duplicate values in one or more
columns within a table. Unique and primary keys are the supported unique
constraints. For example, you can define a unique constraint on the DEPTNO
column in the DEPARTMENT table to ensure that the same department
number is not given to two departments.
Use unique constraints if you need to enforce a uniqueness rule for all
applications that use the data in a table. For additional information on unique
constraints, refer to the SQL Reference.
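As an illustrative sketch, a unique constraint on the DEPTNO column described above might be defined as follows (the DEPTNAME column and constraint form are assumptions, not taken from this book's schema):

```sql
-- Hypothetical DDL: ensure no two departments share a department number.
-- Defining DEPTNO as the primary key enforces uniqueness; a named
-- UNIQUE constraint on the column would serve the same purpose.
CREATE TABLE DEPARTMENT
  (DEPTNO   CHAR(3)     NOT NULL PRIMARY KEY,
   DEPTNAME VARCHAR(29) NOT NULL)
```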
Table Check Constraints
If the rule applies for all applications that use the data, use a table check
constraint to enforce your restriction on the data allowed in the table. Table
check constraints make the restriction generally applicable and easier to
maintain.
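A table check constraint might be sketched as follows (the table, column, and value range are assumptions for illustration):

```sql
-- Hypothetical check constraint: restrict JOBCODE to a valid range so
-- that every application storing rows in this table obeys the same rule.
ALTER TABLE EMPLOYEE
  ADD CONSTRAINT CHECK_JOBCODE CHECK (JOBCODE BETWEEN 100 AND 999)
```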
For additional information on the WITH CHECK OPTION, refer to the SQL
Reference.
Referential Integrity Constraints
Referential integrity (RI) constraints enforce your rules on the data across one
or more tables. If the rules apply for all applications that use the data, then RI
constraints centralize the rules in the database. This makes the rules generally
applicable and easier to maintain.
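An RI constraint between the tables used in the examples above might be sketched as follows (the constraint name and delete rule are assumptions):

```sql
-- Hypothetical RI constraint: every employee's WORKDEPT must match an
-- existing DEPTNO in DEPARTMENT; deleting a department sets the
-- employee's department reference to NULL.
ALTER TABLE EMPLOYEE
  ADD CONSTRAINT FK_WORKDEPT FOREIGN KEY (WORKDEPT)
      REFERENCES DEPARTMENT (DEPTNO)
      ON DELETE SET NULL
```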
Triggers
You can use triggers before or after an update to support logic that can also
be performed in an application. If the rules or operations supported by the
triggers apply for all applications that use the data, then triggers centralize the
rules or operations in the database, making them generally applicable and easier
to maintain.
Using Triggers Before an Update: Triggers that run before an update or
insert can modify the values being updated or inserted before the database is
actually changed. You can use these triggers to transform input from the
application (the user view of the data) to an internal database format where
desired. These before triggers can also activate other non-database
operations through user-defined functions.
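A before trigger might be sketched as follows (the trigger name, table, and transformation are assumptions; the REFERENCING and MODE DB2SQL clauses follow the CREATE TRIGGER syntax in the SQL Reference):

```sql
-- Hypothetical before trigger: normalize the department name to upper
-- case before the row is stored, regardless of which application
-- performed the insert.
CREATE TRIGGER UPPER_DEPT
  NO CASCADE BEFORE INSERT ON DEPARTMENT
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  SET N.DEPTNAME = UCASE(N.DEPTNAME)
```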
Using Triggers After an Update: Triggers that run after an update, insert or
delete can be used in several ways:
v Triggers can update, insert, or delete data in the same or other tables. This
is useful to maintain relationships between data or to keep audit trail
information.
v Triggers can check data against values of data in the rest of the table or in
other tables. This is useful when you cannot use RI constraints or check
constraints because of references to data from other rows from this or other
tables.
v Triggers can use user-defined functions to activate non-database operations.
This is useful, for example, for issuing alerts or updating information
outside the database.
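The audit-trail use above might be sketched as follows (the trigger name and the AUDIT table with its columns are assumptions):

```sql
-- Hypothetical after trigger: record every salary change in an audit
-- table, which is assumed to exist with matching column definitions.
CREATE TRIGGER AUDIT_SALARY
  AFTER UPDATE OF SALARY ON STAFF
  REFERENCING OLD AS O NEW AS N
  FOR EACH ROW MODE DB2SQL
  INSERT INTO AUDIT (ID, OLDSALARY, NEWSALARY, CHANGED)
    VALUES (O.ID, O.SALARY, N.SALARY, CURRENT TIMESTAMP)
```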
Stored Procedures
A stored procedure is a routine for your application that is called from the client
application logic but runs on the database server. The most common reason to
use a stored procedure is for database-intensive processing that produces only
small amounts of result data, which can save a large amount of network
communication during the execution of the stored procedure. You may
also consider using a stored procedure for a set of operations that are
common to multiple applications, so that all the applications use the
same logic to perform the operation.
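A client-side invocation might be sketched as follows (the procedure name and host variable are assumptions for illustration):

```sql
-- Hypothetical embedded SQL call: the MEDIAN_SALARY procedure performs
-- its database-intensive work on the server and returns only the final
-- result through the :medSalary host variable.
EXEC SQL CALL MEDIAN_SALARY (:medSalary)
```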
User-Defined Functions
You can write a user-defined function (UDF) for use in performing operations
within an SQL statement to return:
v A single scalar value (scalar function)
v A table from a non-DB2 data source, for example, an ASCII file or a Web
page (table function)
A UDF cannot contain SQL statements. UDFs are useful for tasks like
transforming data values, performing calculations on one or more data values,
or extracting parts of a value (such as extracting parts of a large object).
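Registering an external scalar UDF might be sketched as follows (the function name, library name, and option list are assumptions; see the SQL Reference for the full CREATE FUNCTION syntax):

```sql
-- Hypothetical scalar UDF registration: CTOF converts Celsius to
-- Fahrenheit by calling the C function 'ctof' in the library 'tempudf'.
CREATE FUNCTION CTOF (DOUBLE)
  RETURNS DOUBLE
  EXTERNAL NAME 'tempudf!ctof'
  LANGUAGE C
  PARAMETER STYLE DB2SQL
  NO SQL
  DETERMINISTIC
  NO EXTERNAL ACTION
```

Once registered, the function can be used anywhere a built-in scalar function can appear, for example SELECT CTOF(TEMPERATURE) FROM READINGS.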
Triggers
In “Triggers” on page 28, it is noted that triggers can be used to invoke
user-defined functions. This is useful when you always want a certain
non-SQL operation performed when specific statements occur or data values
change.
IBM DB2 Universal Database Project Add-In for Microsoft Visual C++
You can use the IBM DB2 Universal Database Project Add-In for Microsoft
Visual C++ to develop, package, and deploy:
v Stored procedures written in C/C++ for DB2 Universal Database on
Windows 32-bit operating systems
v Windows 32-bit C/C++ embedded SQL client applications that access DB2
Universal Database servers
v Windows 32-bit C/C++ client applications that invoke stored procedures
using C/C++ function call wrappers
The IBM DB2 Universal Database Project Add-In for Microsoft Visual C++
allows you to focus on the design and logic of your DB2 applications rather
than the details of building and deploying them.
Some of the tasks performed by the IBM DB2 Universal Database Project
Add-In for Microsoft Visual C++ include:
v Creating a new embedded SQL module
v Inserting SQL statements into an embedded SQL module using SQL Assist
v Adding imported stored procedures
v Creating an exported stored procedure
v Packaging the DB2 Project
v Deploying the DB2 project from within Visual C++
The IBM DB2 Universal Database Project Add-In for Microsoft Visual C++ is
presented in the form of a toolbar. The toolbar buttons include:
DB2 Project Properties
Manages the project properties (development database and
code-generation options)
New DB2 Object
Adds a new embedded SQL module, imported stored procedure, or
exported stored procedure
The IBM DB2 Universal Database Project Add-In for Microsoft Visual C++
also has the following three hidden buttons that can be made visible using the
standard Visual C++ tools customization options:
New DB2 Embedded SQL Module
Adds a new C/C++ embedded SQL module
New DB2 Imported Stored Procedure
Imports a new database stored procedure
New DB2 Exported Stored Procedure
Exports a new database stored procedure
The IBM DB2 Universal Database Project Add-In for Microsoft Visual C++ can
automatically generate the following code elements:
v Skeletal embedded SQL module files with optional sample SQL statements
v Standard database connect and disconnect embedded SQL functions
v Imported stored procedure call wrapper functions
v Exported stored procedure function templates
v Exported stored procedure data definition language (DDL) files
Activating the IBM DB2 Universal Database Project Add-In for Microsoft
Visual C++
To activate the IBM DB2 Universal Database Project Add-In for Microsoft
Visual C++, perform the following steps:
Step 1. Register the add-in, if you have not already done so, by entering:
db2vccmd register
Note: If the toolbar is accidentally closed, you can either deactivate then
reactivate the add-in or use the Microsoft Visual C++ standard
customization options to redisplay the toolbar.
Activating the IBM DB2 Universal Database Tools Add-In for Microsoft
Visual C++
The DB2 Tools Add-In is a toolbar that enables the launch of some of the DB2
administration and development tools from within the Visual C++ integrated
development environment.
Note: If the toolbar is accidentally closed, you can either deactivate then
reactivate the add-in or use the Visual C++ standard customization
options to redisplay the toolbar.
For more information on the IBM DB2 Universal Database Project Add-In for
Microsoft Visual C++, refer to:
v The online help for the IBM DB2 Universal Database Project Add-In for
Microsoft Visual C++.
v http://www.ibm.com/software/data/db2/udb/ide/index.html.
Authorization Considerations
An authorization allows a user or group to perform a general task such as
connecting to a database, creating tables, or administering a system. A privilege
gives a user or group the right to access one specific database object in a
specified way. DB2 uses a set of privileges to provide protection for the
information that you store in it. For more information about the different
privileges, refer to the Administration Guide: Planning.
Most SQL statements require some type of privilege on the database objects
that the statement uses. Most API calls do not require any
privilege on the database objects that the call uses; however, many APIs
require that you possess the necessary authority in order to invoke them. The
DB2 APIs enable you to perform DB2 administrative functions from
within your application.
For information on the required privilege to issue each SQL statement, refer to
the SQL Reference. For information on the required privilege and authority to
issue each API call, refer to the Administrative API Reference.
When you design your application, consider the privileges your users will
need to run the application. The privileges required by your users depend on:
v whether your application uses dynamic SQL, including JDBC and DB2 CLI,
or static SQL
v which APIs the application uses
Dynamic SQL
To use dynamic SQL in a package bound with DYNAMICRULES RUN
(default), the person that runs a dynamic SQL application must have the
privileges necessary to issue each SQL request performed, as well as the
EXECUTE privilege on the package. The privileges may be granted to the
user’s authorization ID, to any group of which the user is a member, or to
PUBLIC.
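Granting the privileges for a DYNAMICRULES RUN package might be sketched as follows (the package, table, and user names are assumptions):

```sql
-- Hypothetical grants: the user needs EXECUTE on the package plus the
-- privileges for each dynamic SQL request, here SELECT on a table.
GRANT EXECUTE ON PACKAGE PAYROLL.PKG1 TO USER CLERK1
GRANT SELECT ON TABLE STAFF TO USER CLERK1
```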
If you bind the application with the DYNAMICRULES BIND option, DB2
associates your authorization ID with the application packages. This allows
any user that runs the application to inherit the privileges associated with your
authorization ID.
The person binding the application (for embedded dynamic SQL applications)
only needs the BINDADD authority on the database, if the program contains
no static SQL. Again, this privilege can be granted to the user’s authorization
ID, to a group of which the user is a member, or to PUBLIC.
When you bind a dynamic SQL package with the DYNAMICRULES BIND
option, the user that runs the application only needs the EXECUTE privilege
on the package. To bind a dynamic SQL application with the
DYNAMICRULES BIND option, you must have the privileges necessary to
perform all the dynamic and static SQL statements in the application. If you
have SYSADM or DBADM authority and bind packages with
DYNAMICRULES BIND, consider using the OWNER BIND option to
designate a different authorization ID. OWNER BIND prevents the package
from automatically inheriting SYSADM or DBADM privileges on dynamic
SQL statements. For more information on DYNAMICRULES BIND and
OWNER BIND, refer to the BIND command in the Command Reference.
Static SQL
Unless you specify the VALIDATE RUN option when binding the application,
the authorization ID you use to bind the application must have the privileges
necessary to perform all the statements in the application. If VALIDATE RUN
is specified at bind time, authorization failures for static SQL statements
within the package do not cause the bind to fail; those statements are
revalidated at run time. The person binding the application must always
have BINDADD authority. The privileges needed to execute the statements
must be granted to the user’s authorization ID or to PUBLIC. Group
privileges are not used when binding static SQL statements. As with dynamic
SQL, the BINDADD privilege can be granted to the user authorization ID, to a
group of which the user is a member, or to PUBLIC.
These properties of static SQL give you very precise control over access to
information in DB2. See the example at the end of this section for a possible
application of this.
Using APIs
Most of the APIs provided by DB2 do not require the use of privileges;
however, many do require some kind of authority to invoke. For the APIs that
do require a privilege, the privilege must be granted to the user running the
application. The privilege may be granted to the user’s authorization ID, to
any group of which the user is a member, or to PUBLIC. For information on
the required privilege and authority to issue each API call, see the
Administrative API Reference.
Example
Consider two users, PAYROLL and BUDGET, who need to perform queries
against the STAFF table. PAYROLL is responsible for paying the employees of
the company, so it needs to issue a variety of SELECT statements when
issuing paychecks. PAYROLL needs to be able to access each employee’s
salary. BUDGET is responsible for determining how much money is needed to
pay the salaries. BUDGET should not, however, be able to see any particular
employee’s salary.
Since PAYROLL issues many different SELECT statements, the application you
design for PAYROLL could probably make good use of dynamic SQL. This
would require that PAYROLL have SELECT privilege on the STAFF table. This
is not a problem since PAYROLL needs full access to the table anyhow.
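The grant for this scenario might be sketched as follows (the BUDGET package name is an assumption, illustrating the static SQL approach suggested by the preceding discussion of precise access control):

```sql
-- Hypothetical grants for the scenario above: PAYROLL gets full SELECT
-- access for its dynamic SQL queries, while BUDGET gets only EXECUTE on
-- a static SQL package whose statements never expose individual salaries.
GRANT SELECT ON TABLE STAFF TO USER PAYROLL
GRANT EXECUTE ON PACKAGE BUDGET.SALARYTOT TO USER BUDGET
```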
The database manager includes APIs for language vendors who want to write
their own precompiler, and other APIs useful for developing applications.
For complete details on the APIs available with the database manager and
how to call them, see the examples in the Administrative API Reference.
List the data the application accesses and describe how each data item is
accessed. For example, suppose the application being developed accesses the
TEST.TEMPL, TEST.TDEPT, and TEST.TPROJ tables. You could record the type
of accesses as shown in Table 1.
Table 1. Description of the Application Data
Table or View    Insert  Delete  Column Name  Data Type     Update
Name             Rows    Rows                               Access
TEST.TEMPL       No      No      EMPNO        CHAR(6)       Yes
                                 LASTNAME     VARCHAR(15)   Yes
                                 WORKDEPT     CHAR(3)       Yes
                                 PHONENO      CHAR(4)
                                 JOBCODE      DECIMAL(3)
TEST.TDEPT       No      No      DEPTNO       CHAR(3)
                                 MGRNO        CHAR(6)
TEST.TPROJ       Yes     Yes     PROJNO       CHAR(6)       Yes
                                 DEPTNO       CHAR(3)       Yes
                                 RESPEMP      CHAR(6)       Yes
                                 PRSTAFF      DECIMAL(5,2)  Yes
                                 PRSTDATE     DECIMAL(6)    Yes
                                 PRENDATE     DECIMAL(6)
If the database schema is being developed along with the application, the
definitions of the test tables might be refined repeatedly during the
development process. Usually, the primary application cannot both create the
tables and access them because the database manager cannot bind statements
that refer to tables and views that do not exist. To make the process of
creating and changing tables less time-consuming, consider developing a
separate application to create the tables. Of course you can always create test
tables interactively using the Command Line Processor (CLP).
Generating Test Data
Use any of the following methods to insert data into a table:
v INSERT...VALUES (an SQL statement) puts one or more rows into a table
each time the command is issued.
v INSERT...SELECT obtains data from an existing table (based on a SELECT
clause) and puts it into the table identified with the INSERT statement.
v The IMPORT or LOAD utility inserts large amounts of new or existing data
from a defined source.
v The RESTORE utility can be used to duplicate the contents of an existing
database into an identical test database by using a BACKUP copy of the
original database.
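The first two methods might be sketched as follows against the TEST.TPROJ table from Table 1 (the values and the PROD.TPROJ source table are assumptions):

```sql
-- Hypothetical INSERT...VALUES: put one row of test data into the table.
INSERT INTO TEST.TPROJ (PROJNO, DEPTNO, RESPEMP)
  VALUES ('PRJ001', 'A00', '000010')

-- Hypothetical INSERT...SELECT: copy rows from an existing table
-- (PROD.TPROJ is assumed to exist) into the test table.
INSERT INTO TEST.TPROJ (PROJNO, DEPTNO, RESPEMP)
  SELECT PROJNO, DEPTNO, RESPEMP FROM PROD.TPROJ
```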
For information about the INSERT statement, refer to the SQL Reference. For
information about the IMPORT, LOAD, and RESTORE utilities, refer to the
Administration Guide.
For sample programs that are helpful in generating random test data, please
see the fillcli.sqc and fillsrv.sqc sample programs in the
sqllib/samples/c subdirectory.
For the Java language, the SQLJ translator converts SQLJ clauses into JDBC
statements. The SQLJ translator is invoked with the SQLJ command.
When the precompiler processes a source file, it looks specifically for SQL
statements and ignores the non-SQL host language. It can find SQL statements
because they are surrounded by special delimiters. For the syntax information
necessary to embed SQL statements in the language you are using, see the
following:
v for C/C++, “Embedding SQL Statements in C and C++” on page 586
v for Java (SQLJ), “Embedding SQL Statements in Java” on page 639
v for COBOL, “Embedding SQL Statements in COBOL” on page 668
v for FORTRAN, “Embedding SQL Statements in FORTRAN” on page 691
v for REXX, “Embedding SQL Statements in REXX” on page 705
Table 3 shows how to use delimiters and comments to create valid embedded
SQL statements in the supported compiled host languages.
Table 3. Embedding SQL Statements in a Host Language
Language Sample Source Code
C/C++ /* Only C or C++ comments allowed here */
EXEC SQL
-- SQL comments or
/* C comments or */
// C++ comments allowed here
DECLARE C1 CURSOR FOR sname;
/* Only C or C++ comments allowed here */
SQLJ /* Only Java comments allowed here */
#sql c1 = {
-- SQL comments or
/* Java comments or */
// Java comments allowed here
SELECT name FROM employee
};
/* Only Java comments allowed here */
Note: Not all platforms support all host languages. See the Application
Building Guide for specific information.
For this discussion, assume that you have already written the source code.
If you have written your application using a compiled host language, you
must follow additional steps to build your application. Along with compiling
and linking your program, you must precompile and bind it.
Binding is the process of creating a package from a bind file and storing it in a
database. If your application accesses more than one database, you must
create a package for each database.
Figure 1 on page 48 shows the order of these steps, along with the various
modules of a typical compiled DB2 application. You may wish to refer to it as
you read through the following sections.
[Figure 1. Preparing programs written in compiled host languages. Source
files containing SQL statements are processed by the precompiler (db2 PREP),
which creates a package, a bind file, or both, plus modified source files
without SQL statements. The modified source files and any source files
without SQL statements are compiled into object files, which are linked with
libraries into an executable program. The binder (db2 BIND) creates a package
in the database from the bind file.]
To create the packages needed by SQLJ applications, you must use both the
SQLJ translator and the db2profc command. For more information on using the
SQLJ translator, see “SQLJ Programming” on page 637.
Precompiling
After you create the source files, you must precompile each host language file
containing SQL statements with the PREP command. The precompiler
converts the SQL statements contained in the source file to
comments, and generates the DB2 run-time API calls for those statements.
The precompiler also creates the information the database manager needs to
process the SQL statements against a database. This information is stored in a
package, in a bind file, or in both, depending on the precompiler options
selected.
For detailed information on precompiler syntax and options, see the Command
Reference.
If your application uses a code page that is not the same as your database
code page, you need to consider which code page to use when precompiling.
See “Conversion Between Different Code Pages” on page 504.
To precompile an application program that accesses more than one server, you
can do one of the following:
v Split the SQL statements for each database into separate source files. Do not
mix SQL statements for different databases in the same file. Each source file
can be precompiled against the appropriate database. This is the
recommended method.
v Code your application using dynamic SQL statements only, and bind
against each database your program will access.
v If all the databases look the same, that is, they have the same definition,
you can group the SQL statements together into one source file.
The same procedures apply if your application will access a host or AS/400
application server through DB2 Connect. Precompile it against the server to
which it will be connecting, using the PREP options available for that server.
For details about the PREP command, refer to the Command Reference.
Compiling and Linking
Compile the modified source files and any additional source files that do not
contain SQL statements using the appropriate host language compiler. The
language compiler converts each modified source file into an object module.
A typical example of using the BIND command follows. To bind a bind file
named filename.bnd to the database, you can issue the following command:
DB2 BIND filename.bnd
For detailed information on BIND command syntax and options, refer to the
Command Reference.
One package is created for each separately precompiled source code module.
If an application has five source files, of which three require precompilation,
three packages or bind files are created. By default, each package is given a
name that is the same as the name of the source module from which the .bnd
file originated, but truncated to 8 characters. If the name of this newly created
package is the same as a package that currently exists in the target database,
the new package replaces the previously existing package. To explicitly specify
a different package name, you must use the PACKAGE USING option on the
PREP command. See the Command Reference for details.
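The default naming and the PACKAGE USING override might be sketched with the following CLP commands (the source file name is an assumption):

```
db2 prep accountsreport.sqc bindfile
    (the package is named ACCOUNTS, the first 8 characters
     of the source module name)

db2 prep accountsreport.sqc bindfile package using ACCTRPT
    (the package is explicitly named ACCTRPT instead)
```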
Renaming Packages
When creating multiple versions of an application, you should avoid
conflicting names by renaming your package. For example, if you have an
application called foo (compiled from foo.sqc), you precompile it and send it
to all the users of your application. The users bind the application to the
database, and then run the application. To make subsequent changes, create a
new version of foo and send this application and its bind file to the users that
require the new version. The new users bind foo.bnd and the new application
runs without any problem. However, when users attempt to run the old
version of the application, they receive a timestamp conflict on the FOO
package (which indicates that the package in the database does not match the
application being run) so they rebind the client. (See “Timestamps” on page 58
for more information on package timestamps.) Now the users of the new
application receive a timestamp conflict. This problem is caused because both
applications use packages with the same name.
To avoid this conflict, precompile the first version with the command:
DB2 PREP FOO.SQC BINDFILE PACKAGE USING FOO1
After you distribute this application, users can bind and run it without any
problem. When you build the new version, you precompile it with the
command:
DB2 PREP FOO.SQC BINDFILE PACKAGE USING FOO2
After you distribute the new application, it will also bind and run without
any problem. Since the package name for the new version is FOO2 and the
package name for the first version is FOO1, there is no naming conflict and
both versions of the application can be used.
In the above example, db_name is the name of the database, user_name is the
name of the user, and file_name is the name of the application that will be
bound. Note that user_name and schema_name are usually the same value.
Then use the SET CURRENT PACKAGESET statement to specify which
package to use, and therefore, which qualifiers will be used. The default
qualifier is the authorization identifier that is used when binding the
package. For an example of how to use the SET CURRENT PACKAGESET
statement, refer to the SQL Reference.
If your application issues calls to any of the database manager utility APIs,
such as IMPORT or EXPORT, you must bind the supplied utility bind files to
the database. For details, refer to the Quick Beginnings guide for your
platform.
You can use bind options to control certain operations that occur during
binding, as in the following examples:
v The QUERYOPT bind option takes advantage of a specific optimization
class when binding.
v The EXPLSNAP bind option stores Explain Snapshot information for
eligible SQL statements in the Explain tables.
v The FUNCPATH bind option properly resolves user-defined distinct types
and user-defined functions in static SQL.
For information on bind options, refer to the section on the BIND command in
the Command Reference.
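A bind command combining these options might be sketched as follows (the option values and schema name are assumptions; check the BIND command in the Command Reference for the exact syntax on your platform):

```
db2 bind filename.bnd queryopt 5 explsnap all funcpath myschema
```

Here queryopt selects an optimization class, explsnap requests Explain Snapshot information for eligible statements, and funcpath names the schema searched when resolving distinct types and UDFs.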
If the bind process starts but never returns, it may be that other applications
connected to the database hold locks that you require. In this case, ensure that
no applications are connected to the database. If they are, disconnect all
applications on the server and the bind process will continue.
If your application will access a server using DB2 Connect, you can use the
BIND options available for that server. For details on the BIND command and
its options, refer to the Command Reference.
Bind files are not backward compatible with previous versions of DB2
Universal Database. In mixed-level environments, DB2 can only use the
functions available to the lowest level of the database environment. For
example, if a V5.2 client connects to a V5.0 server, the client will only be able
to use V5.0 functions. As bind files express the functionality of the database,
they are subject to the mixed-level restriction.
If you need to rebind higher-level bind files on lower-level systems, you can:
Using the BIND API during execution allows an application to bind itself,
perhaps as part of an installation procedure or before an associated module is
executed. For example, an application can perform several tasks, only one of
which requires the use of SQL statements. You can design the application to
bind itself to a database only when the application calls the task requiring
SQL statements, and only if an associated package does not already exist.
Another advantage of the deferred binding method is that it lets you create
packages without providing source code to end users. You can ship the
associated bind files with the application.
DB2 Bind File Description Utility - db2bfd
With the DB2 Bind File Description (db2bfd) utility, you can easily display the
contents of a bind file to examine and verify the SQL statements within it, as
well as display the precompile options used to create the bind file. This may
be useful in problem determination related to your application’s bind file.
The db2bfd utility is located in the bin subdirectory of the sqllib directory of
the instance.
Notes:
1. Display the help information.
2. Display the bind file header.
3. Display the SQL statements.
4. Display the host variable declarations.
5. The name of the bind file.
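Assuming the flags correspond to the numbered notes above (see the Command Reference for the authoritative syntax), typical invocations might look like:

```
db2bfd -b filename.bnd     display the bind file header
db2bfd -s filename.bnd     display the SQL statements
db2bfd -v filename.bnd     display the host variable declarations
```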
Database applications use packages for some of the same reasons that
applications are compiled: improved performance and compactness. By
precompiling an SQL statement, the statement is compiled into the package
when the application is built, instead of at run time. Each statement is parsed,
and a more efficiently interpreted operand string is stored in the package. At
run time, the code generated by the precompiler calls run-time services
database manager APIs with any variable information required for input or
output data, and the information stored in the package is executed.
Remember that when you bind an application to a database, the first eight
characters of the application name are used as the package name unless you
override the default by using the PACKAGE USING option on the PREP command.
This means that if you precompile and bind two programs using the same
name, the second will override the package of the first. When you run the
first program, you will get a timestamp error because the timestamp for the
modified source file no longer matches that of the package in the database.
The application and package timestamps match because the bind file contains
the same timestamp as the one that was stored in the modified source file
during precompilation.
Rebinding
Rebinding is the process of recreating a package for an application program
that was previously bound. You must rebind packages if they have been
marked invalid or inoperative. In some situations, however, you may want to
rebind packages that are valid. For example, you may want to take advantage
of a newly created index, or make use of updated statistics after executing the
RUNSTATS command.
Because the authorization of the person binding the application is used, the
end user does not require direct privileges to execute the statements in the
package. For example, an application could allow a user to update parts of a
table without granting an update privilege on the entire table. This can be
achieved by restricting the static SQL statements to allow updates only to
certain columns or a range of values.
Static SQL statements are persistent, meaning that the statements last for as
long as the package exists. Dynamic SQL statements are cached until they are
either invalidated, freed for space management reasons, or the database is
shut down. If required, the dynamic SQL statements are recompiled implicitly
by the DB2 SQL compiler whenever a cached statement becomes invalid. For
information on caching and the reasons for invalidation of a cached statement,
refer to the SQL Reference.
The key advantage of static SQL, with respect to persistence, is that the static
statements exist after a particular database is shut down, whereas dynamic
SQL statements cease to exist when this occurs. In addition, static SQL does
not have to be compiled by the DB2 SQL compiler at run time, while dynamic
SQL must be explicitly compiled at run time (for example, by using the
PREPARE statement). Because DB2 caches dynamic SQL statements, the
statements do not need to be compiled often by DB2, but they must be
compiled at least once when you execute the application.
Note: The performance of static SQL depends on the statistics of the database
the last time the application was bound. However, if these statistics
change, the performance of equivalent dynamic SQL can be very
different. If, for example, an index is added to a database at a later
time, an application using static SQL cannot take advantage of the
index unless it is re-bound to the database. In addition, if you are using
host variables in a static SQL statement, the optimizer will not be able
to take advantage of any distribution statistics for the table.
The REXX language does not support static SQL, so a sample is not provided.
This sample program contains a query that selects a single row. Such a query
can be performed using the SELECT INTO statement.
The SELECT INTO statement selects one row of data from tables in a
database, and the values in this row are assigned to host variables specified in
the statement. Host variables are discussed in detail in “Using Host Variables”
on page 71. For example, the following statement will deliver the salary of
the employee with the last name of 'HAAS' into the host variable empsal:
SELECT SALARY
INTO :empsal
FROM EMPLOYEE
WHERE LASTNAME='HAAS'
A SELECT INTO statement must return no more than one row; finding
more than one row results in an error, SQLCODE -811 (SQLSTATE
21000). If several rows can be the result of a query, a cursor must be used to
process the rows. See “Selecting Multiple Rows Using a Cursor” on page 81
for more information.
For more details on the SELECT INTO statement, refer to the SQL Reference.
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for
the source code for this error checking utility.
char dbAlias[15] ;
char user[15] ;
char pswd[15] ;
class Static
{ static
{ try
{ Class.forName ("COM.ibm.db2.jdbc.app.DB2Driver").newInstance ();
}
catch (Exception e)
{ System.out.println ("\n Error loading DB2 Driver...\n");
System.out.println (e);
System.exit(1);
}
}
Data Division.
Working-Storage Section.
copy "sql.cbl".
copy "sqlca.cbl". 1
Procedure Division.
Main Section.
display "Sample COBOL program: STATIC".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
End-Prog.
stop run.
After you have written a select-statement, you code the SQL statements that
define how information will be passed to your application.
You can think of the result of a select-statement as being a table having rows
and columns, much like a table in the database. If only one row is returned,
you can deliver the results directly into host variables specified by the
SELECT INTO statement.
If more than one row is returned, you must use a cursor to fetch them one at a
time. A cursor is a named control structure used by an application program to
point to a specific row within an ordered set of rows. For information about
how to code and use cursors, see the following sections:
v “Declaring and Using the Cursor” on page 81,
v “Selecting Multiple Rows Using a Cursor” on page 81,
v “Example: Cursor Program” on page 84.
Note: Java JDBC and SQLJ programs do not use declare sections. Host
variables in Java follow the normal Java variable declaration syntax.
Host variables are declared using a subset of the host language. For a
description of the supported syntax for your host language, see:
v “Chapter 20. Programming in C and C++” on page 581
v “Chapter 21. Programming in Java” on page 623
v “Chapter 23. Programming in COBOL” on page 665
v “Chapter 24. Programming in FORTRAN” on page 687
v “Chapter 25. Programming in REXX” on page 703.
The following rules apply to host variable declaration sections:
v All host variables must be declared in the source file before they are
referenced, except for host variables referring to SQLDA structures.
v Multiple declare sections may be used in one source file.
v The precompiler is unaware of host language variable scoping rules.
With respect to SQL statements, all host variables have a global scope
regardless of where they are actually declared in a single source file.
Therefore, host variable names must be unique within a source file.
This does not mean that the DB2 precompiler changes the scope of host
variables to global so that they can be accessed outside the scope in which
they are defined. Consider the following example:
foo1(){
.
.
.
BEGIN SQL DECLARE SECTION;
int x;
END SQL DECLARE SECTION;
x=10;
.
.
.
}
foo2(){
.
.
.
y=x;
.
.
.
}
Depending on the host language, the reference to x in foo2 either fails to
compile because x is not declared in foo2, or does not yield the value
assigned in foo1. To avoid this problem, either declare x as a global
variable, or pass it as a parameter to foo2 as follows:
foo2(int x){
.
.
.
y=x;
.
.
.
}
For example, to generate the declarations for the STAFF table in the SAMPLE
database in C in the output file staff.h, issue the following command:
db2dclgn -d sample -t staff -l C
In the figure, cmind is examined for a negative value. If it is not negative, the
application can use the returned value of cm. If it is negative, the fetched
value is NULL and cm should not be used. The database manager does not
change the value of the host variable in this case.
If the data type can handle NULLs, the application must provide a NULL
indicator. Otherwise, an error may occur. If a NULL indicator is not used, an
SQLCODE -305 (SQLSTATE 22002) is returned.
The SQLWARN1 field in the SQLCA structure may contain an ’X’ or ’W’ if the
value of a string column is truncated when it is assigned to a host variable. It
contains an ’N’ if a null terminator is truncated.
A value of ’X’ is returned by the database manager only if all of the following
conditions are met:
v A mixed code page connection exists where conversion of character string
data from the database code page to the application code page involves a
change in the length of the data.
v A cursor is blocked.
v An indicator variable is provided by your application.
The value returned in the indicator variable will be the length of the resultant
character string in the application’s code page.
In all other cases involving data truncation (as opposed to NULL terminator
truncation), the database manager returns a ’W’. In this case, the database
manager returns a value in the indicator variable to the application that is the
length of the resultant character string in the code page of the select list item
(either the application code page, the database code page, or nothing). For
related information, refer to the SQL Reference.
Data Types
Each column of every DB2 table is given an SQL data type when the column is
created. For information about how these types are assigned to columns, refer
to the CREATE TABLE statement in the SQL Reference. The database manager
supports the following column data types:
SMALLINT
16-bit signed integer.
The following data types are supported only in double-byte character set
(DBCS) and Extended UNIX Code (EUC) character set environments:
GRAPHIC
Fixed-length graphic string of length 1 to 127 double-byte characters.
VARGRAPHIC
Variable-length graphic string of length 1 to 16336 double-byte
characters.
Supported host languages have data types that correspond to the majority of
the database manager data types. Only these host language data types can be
used in host variable declarations. When the precompiler finds a host variable
declaration, it determines the appropriate SQL data type value. The database
manager uses this value to convert the data exchanged between itself and the
application.
The general rule for data type compatibility is that all supported host-language
numeric data types are comparable and assignable with all database manager
numeric data types, and all host-language character types are compatible with
all database manager character types; numeric types are incompatible with
character types. However, there are also some exceptions to this general rule
depending on host language idiosyncrasies and limitations imposed when
working with large objects.
Note that the execution of the above statement includes conversion between
DECIMAL and DOUBLE data types. To make the query results more readable
on your screen, you could use the following SELECT statement:
To convert data within your application, contact your compiler vendor for
additional routines, classes, built-in types, or APIs that support this
conversion.
For the list of supported SQL data types and the corresponding host language
data types, see the following:
v for C/C++, “Supported SQL Data Types in C and C++” on page 615
v for Java, “Supported SQL Data Types in Java” on page 625
v for COBOL, “Supported SQL Data Types in COBOL” on page 681
v for FORTRAN, “Supported SQL Data Types in FORTRAN” on page 698
v for REXX, “Supported SQL Data Types in REXX” on page 712.
For more information about SQL data types, the rules of assignments and
comparisons, and data conversion and conversion errors, refer to the SQL
Reference.
Using an Indicator Variable in the STATIC program
The following code segments show the modification to the corresponding
segments in the C version of the sample STATIC program, listed in “C
Example: STATIC.SQC” on page 66. They show the implementation of
indicator variables on data columns that are nullable. In this example, the
STATIC program is extended to select another column, WORKDEPT. This column
can have a null value. An indicator variable needs to be declared as a host
variable before being used.
.
.
.
To help understand the concept of a cursor, assume that the database manager
builds a result table to hold all the rows retrieved by executing a SELECT
statement. A cursor makes rows from the result table available to an
application, by identifying or pointing to a current row of this table. When a
cursor is used, an application can retrieve each row sequentially from the
result table until an end-of-data condition is reached, that is, the NOT
FOUND condition, SQLCODE +100 (SQLSTATE 02000). The set of rows
obtained as a result of executing the SELECT statement can consist of zero,
one, or more rows, depending on the number of rows that satisfy the search
condition.
The application assigns a name for the cursor. This name is referred to in
subsequent OPEN, FETCH, and CLOSE statements. The query is any valid
select statement.
If the unit of work ends with a COMMIT statement, open cursors defined
WITH HOLD remain OPEN. The cursor is positioned before the next logical
row of the result table. In addition, prepared statements referencing OPEN
cursors defined WITH HOLD are retained. Only FETCH and CLOSE requests
associated with these cursors are permitted.
If the unit of work ends with a ROLLBACK statement, all open cursors are
closed, all locks acquired during the unit of work are released, and all
prepared statements that are dependent on work done in that unit are
dropped.
For example, suppose that the TEMPL table contains 1000 entries. You want to
update the salary column for all employees, and you expect to issue a
COMMIT statement every time you update 100 rows.
1. Declare the cursor using the WITH HOLD option:
EXEC SQL DECLARE EMPLUPDT CURSOR WITH HOLD FOR
SELECT EMPNO, LASTNAME, PHONENO, JOBCODE, SALARY
FROM TEMPL FOR UPDATE OF SALARY
2. Open the cursor and fetch data from the result table one row at a time:
EXEC SQL OPEN EMPLUPDT
.
.
.
Since REXX does not support static SQL, a sample is not provided. See
“Example: Dynamic SQL Program” on page 133 for a REXX example that
processes a cursor dynamically.
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
if (argc == 1)
{
EXEC SQL CONNECT TO sample;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else if (argc == 3)
{
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else
{
printf ("\nUSAGE: cursor [userid passwd]\n\n");
return 1;
} /* endif */
do
{
EXEC SQL FETCH c1 INTO :pname, :dept; 3
if (SQLCODE != 0) break;
class Cursor
{ static
{ try
{ Class.forName ("COM.ibm.db2.jdbc.app.DB2Driver").newInstance ();
}
catch (Exception e)
{ System.out.println ("\n Error loading DB2 Driver...\n");
System.out.println (e);
System.exit(1);
}
}
// Enable transactions
con.setAutoCommit(false);
// Using cursors
try
{ CursorByName cursorByName;
CursorByPos cursorByPos;
#sql cursorByName = {
SELECT name, dept FROM staff WHERE job='Mgr' }; 1
while (cursorByName.next()) 2
{ name = cursorByName.name(); 3
dept = cursorByName.dept();
#sql cursorByPos = {
SELECT name, dept FROM staff WHERE job='Mgr' }; 1 2
while (true)
{ #sql { FETCH :cursorByPos INTO :name, :dept }; 3
if (cursorByPos.endFetch()) break;
Data Division.
Working-Storage Section.
copy "sqlenv.cbl".
copy "sql.cbl".
copy "sqlca.cbl".
Procedure Division.
Main Section.
display "Sample COBOL program: CURSOR".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
Fetch-Loop Section.
EXEC SQL FETCH c1 INTO :PNAME, :DEPT END-EXEC. 3
if SQLCODE not equal 0
go to End-Fetch-Loop.
display pname, " in dept. ", dept,
" will be demoted to Clerk".
End-Fetch-Loop. exit.
End-Prog.
stop run.
The DELETE statement causes the row being referenced by the cursor to be
deleted. This leaves the cursor positioned before the next row, and a FETCH
statement must be issued before additional WHERE CURRENT OF operations
can be performed against the cursor.
Types of Cursors
Cursors fall into three categories:
Read only
The rows in the cursor can only be read, not updated. Read-only
cursors are used when an application will only read data, not modify
it. A cursor is considered read only if it is based on a read-only
select-statement. See the rules in “Updating Retrieved Data” for
select-statements which define non-updatable result tables.
There can be performance advantages for read-only cursors. For more
information on read-only cursors, refer to the Administration Guide:
Implementation.
The REXX language does not support static SQL, so a sample is not provided.
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
if (argc == 1)
{
EXEC SQL CONNECT TO sample;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else if (argc == 3)
{
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else
{
printf ("\nUSAGE: openftch [userid passwd]\n\n");
return 1;
} /* endif */
do
{
EXEC SQL FETCH c1 INTO :pname, :dept; 3
if (SQLCODE != 0) break;
import sqlj.runtime.ForUpdate;
#sql public iterator OpF_Curs implements ForUpdate (String, short);
Openftch.sqlj
import java.sql.*;
import sqlj.runtime.*;
import sqlj.runtime.ref.*;
class Openftch
{ static
{ try
{ Class.forName ("COM.ibm.db2.jdbc.app.DB2Driver").newInstance ();
}
catch (Exception e)
{ System.out.println ("\n Error loading DB2 Driver...\n");
System.out.println (e);
System.exit(1);
}
}
// Enable transactions
#sql forUpdateCursor =
{ SELECT name, dept
FROM staff
WHERE job='Mgr'
}; // #sql 12
while (true)
{ #sql
{ FETCH :forUpdateCursor
INTO :name, :dept
}; // #sql 3
if (forUpdateCursor.endFetch()) break;
Data Division.
Working-Storage Section.
copy "sqlca.cbl".
Procedure Division.
Main Section.
display "Sample COBOL program: OPENFTCH".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
Fetch-Loop Section.
EXEC SQL FETCH c1 INTO :pname, :dept END-EXEC. 3
if SQLCODE not equal 0
go to End-Fetch-Loop.
Delete-Staff.
display pname, " in dept. ", dept,
" will be DELETED!".
go to End-Fetch-Loop.
Update-Staff.
display pname, " in dept. ", dept,
" will be demoted to Clerk".
End-Fetch-Loop. exit.
End-Prog.
stop run.
With an isolation level of repeatable read, the data you retrieve in a
transaction can be retrieved again by closing and reopening a cursor. Other
applications are prevented from updating the data in your result set. Isolation
levels and locking can affect how users update data.
Retrieving the Data a Second Time
This technique depends on the order in which you want to see the data again:
v Retrieving from the Beginning
v Retrieving from the Middle
v Order of Rows in the Second Result Table
v Retrieving in Reverse Order
Now, suppose that you want to return to the rows that start with
DEPTNO = 'M95' and fetch sequentially from that point. Code the following:
SELECT * FROM DEPARTMENT
WHERE LOCATION = 'CALIFORNIA'
AND DEPTNO >= 'M95'
ORDER BY DEPTNO
The difference in ordering could occur even if you were to execute the same
SQL statement, with the same host variables, a second time. For example, the
statistics in the catalog could be updated between executions, or indexes
could be created or dropped. You could then execute the SELECT statement
again.
The ordering is more likely to change if the second SELECT has a predicate
that the first did not have; the database manager could choose to use an index
on the new predicate. For example, it could choose an index on LOCATION for
the first statement in our example and an index on DEPTNO for the second.
Because rows are fetched in order by the index key, the second order need not
be the same as the first.
Because of the subtle relationships between the form of an SQL statement and
the values in this statement, never assume that two different SQL statements
will return rows in the same order unless the order is uniquely determined by
an ORDER BY clause.
To retrieve the same rows in reverse order, specify that the order is
descending, as in the following statement:
SELECT * FROM DEPARTMENT
WHERE LOCATION = 'CALIFORNIA'
ORDER BY DEPTNO DESC
A cursor on the second statement retrieves rows in exactly the opposite order
from a cursor on the first statement. Order of retrieval is guaranteed only if
the first statement specifies a unique ordering sequence.
For retrieving rows in reverse order, it can be useful to have two indexes on
the DEPTNO column, one in ascending order and the other in descending order.
Establishing a Position at the End of a Table
The database manager does not guarantee an order to data stored in a table;
therefore, the end of a table is not defined. However, order is defined on the
result of an SQL statement:
SELECT * FROM DEPARTMENT
ORDER BY DEPTNO DESC
For this example, the following statement positions the cursor at the row with
the highest DEPTNO value:
Note, however, that if several rows have the same value, the cursor is
positioned on the first of them.
Updating Previously Retrieved Data
To scroll backward and update data that was retrieved previously, you can
use a combination of the techniques discussed in “Scrolling Through Data that
has Already Been Retrieved” on page 102 and “Updating Retrieved Data” on
page 92. You can do one of two things:
1. If you have a second cursor on the data to be updated and if the SELECT
statement uses none of the restricted elements, you can use a
cursor-controlled UPDATE statement. Name the second cursor in the
WHERE CURRENT OF clause.
2. In other cases, use UPDATE with a WHERE clause that names all the
values in the row or specifies the primary key of the table. You can
execute one statement many times with different values of the variables.
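The second approach can be sketched in embedded SQL. The statement and
host variable names below are illustrative, and the sketch assumes that the
ID column uniquely identifies a row of the STAFF table:

```
EXEC SQL UPDATE staff
            SET salary = :newsalary
          WHERE id = :staffid;    /* key value identifies the row */
```

By changing the values of :staffid and :newsalary, the same statement can be
executed once for each row to be updated.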
Example: UPDAT Program
The UPDAT program uses static SQL to access the STAFF table in the
SAMPLE database and changes all managers to clerks. Then the program
reverses the changes by rolling back the unit of work. The sample is available
in the following programming languages:
C updat.sqc
Java Updat.sqlj
COBOL updat.sqb
REXX updat.cmd
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
if (argc == 1)
{
EXEC SQL CONNECT TO sample;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else if (argc == 3)
{
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd; 3
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else
{
printf ("\nUSAGE: updat [userid passwd]\n\n");
return 1;
} /* endif */
class Updat
{ static
{ try
{ Class.forName ("COM.ibm.db2.jdbc.app.DB2Driver").newInstance ();
}
catch (Exception e)
{ System.out.println ("\n Error loading DB2 Driver...\n");
System.out.println (e);
System.exit(1);
}
}
// Enable transactions
con.setAutoCommit(false);
// UPDATE/DELETE/INSERT
try
{ String jobUpdate = null;
jobUpdate="Clerk";
#sql {UPDATE staff SET job = :jobUpdate WHERE job = 'Mgr'}; 4
jobUpdate="Sales";
#sql {DELETE FROM staff WHERE job = :jobUpdate};
System.out.println("All 'Sales' people have been deleted!"); 5
Data Division.
Working-Storage Section.
copy "sql.cbl".
copy "sqlenv.cbl".
copy "sqlca.cbl". 1
* Local variables
77 errloc pic x(80).
77 error-rc pic s9(9) comp-5.
77 state-rc pic s9(9) comp-5.
Procedure Division.
Main Section.
display "Sample COBOL program: UPDAT".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
EXEC SQL INSERT INTO staff VALUES (999, 'Testing', 99, 6
:job-update, 0, 0, 0) END-EXEC.
move "INSERT INTO STAFF" to errloc.
call "checkerr" using SQLCA errloc.
End-Prog.
stop run.
Note: REXX programs cannot contain static SQL. This program is written
with dynamic SQL.
/* REXX program UPDAT.CMD */
exit -1
end
/* connect to database */
SAY
SAY 'Connect to' dbname
IF password= "" THEN
CALL SQLEXEC 'CONNECT TO' dbname
ELSE
CALL SQLEXEC 'CONNECT TO' dbname 'USER' userid 'USING' password
jobupdate = "'Clerk'"
st = "UPDATE staff SET job =" jobupdate "WHERE job = 'Mgr'"
call SQLEXEC 'EXECUTE IMMEDIATE :st' 4
call CHECKERR 'UPDATE'
say "All 'Mgr' have been demoted to 'Clerk'!"
st = "INSERT INTO staff VALUES (999, 'Testing', 99," jobupdate ", 0, 0, 0)"
call SQLEXEC 'EXECUTE IMMEDIATE :st' 6
call CHECKERR 'INSERT'
say 'New data has been inserted'
CHECKERR:
arg errloc
if ( SQLCA.SQLCODE = 0 ) then
return 0
else do
say '--- error report ---'
say 'ERROR occurred :' errloc
say 'SQLCODE :' SQLCA.SQLCODE
/******************************\
* GET ERROR MESSAGE API called *
\******************************/
call SQLDBS 'GET MESSAGE INTO :errmsg LINEWIDTH 80'
say errmsg
say '--- end error report ---'
A source file containing executable SQL statements must provide at least one
SQLCA structure with the name sqlca. The SQLCA structure is defined in the
SQLCA include file. Source files without embedded SQL statements, but
calling database manager APIs, can also provide one or more SQLCA
structures, but their names are arbitrary.
If your application is compliant with the FIPS 127-2 standard, you can declare
the SQLSTATE and SQLCODE as host variables instead of using the SQLCA
structure. For information on how to do this, see “SQLSTATE and SQLCODE
Variables in C and C++” on page 620 for C or C++ applications, “SQLSTATE
and SQLCODE Variables in COBOL” on page 685 for COBOL applications, or
“SQLSTATE and SQLCODE Variables in FORTRAN” on page 700 for
FORTRAN applications.
Note: If you want to develop applications that access various IBM RDBMS
servers you should:
v Where possible, have your applications check the SQLSTATE rather
than the SQLCODE.
v If your applications will use DB2 Connect, consider using the
mapping facility provided by DB2 Connect to map SQLCODE
conversions between unlike databases.
Token Truncation in SQLCA Structure
Since tokens may be truncated in the SQLCA structure, you should not use
the token information for diagnostic purposes. While you can define table and
column names with lengths of up to 128 bytes, the SQLCA tokens will be
truncated to 17 bytes plus a truncation terminator (>). Application logic
should not depend on actual values of the sqlerrmc field. Refer to the SQL
Reference for a description of the SQLCA structure, and a discussion of token
truncation.
Handling Errors using the WHENEVER Statement
The WHENEVER statement causes the precompiler to generate source code
that directs the application to go to a specified label if an error or warning
occurs, or if no rows are found, during execution. The WHENEVER statement affects all
subsequent executable SQL statements until another WHENEVER statement
alters the situation.
The WHENEVER statement must appear before the SQL statements you want
to affect. Otherwise, the precompiler does not know that additional
error-handling code should be generated for the executable SQL statements.
You can have any combination of the three basic forms active at any time. The
order in which you declare the three forms is not significant. To avoid an
infinite looping situation, ensure that you undo the WHENEVER handling
before any SQL statements are executed inside the handler. You can do this
using the WHENEVER SQLERROR CONTINUE statement.
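A minimal embedded-SQL sketch of this pattern follows; the label and the
statements inside the handler are illustrative:

```
EXEC SQL WHENEVER SQLERROR GOTO handle_error;

EXEC SQL UPDATE staff SET job = 'Clerk' WHERE job = 'Mgr';
/* ... further SQL statements, all covered by the handler ... */

handle_error:
    /* Undo the WHENEVER handling first, so that an error raised
       inside the handler cannot branch back here indefinitely. */
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK;
```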
For other operating systems that are not in the above list, refer to the
Application Building Guide.
Note that you should exercise caution when coding a COMMIT and
ROLLBACK in exception/signal/interrupt handlers. If you call either of these
When using APPC to access a remote database server (DB2 for AIX or host
database system using DB2 Connect), the application may receive a SIGUSR1
signal. This signal is generated by SNA Services/6000 when an unrecoverable
error occurs and the SNA connection is stopped. You may want to install a
signal handler in your application to handle SIGUSR1.
You can find information on building these examples in the README files, or in
the header section of these sample programs.
/*#############################################################################
** 1. SQL_CHECK section
**
** 1.1 - SqlInfoPrint - prints on the screen everything that
** goes unexpected.
** 1.2 - TransRollback - rolls back the transaction
#############################################################################*/
/******************************************************************************
** 1.1 - SqlInfoPrint - prints on the screen everything that
** goes unexpected.
******************************************************************************/
int SqlInfoPrint( char * appMsg,
struct sqlca * pSqlca,
int line,
char * file )
{ int rc = 0;
char sqlInfo[1024];
char sqlInfoToken[1024];
char sqlstateMsg[1024];
char errorMsg[1024];
printf("%s", sqlInfo);
return 1;
}
else
{ sprintf( sqlInfoToken, "--- end warning report ---\n");
strcat( sqlInfo, sqlInfoToken);
printf("%s", sqlInfo);
return 0;
} /* endif */
} /* endif */
return 0;
}
/******************************************************************************
** 1.2 - TransRollback - rolls back the transaction
******************************************************************************/
void TransRollback( )
{ int rc = 0;
Data Division.
Working-Storage Section.
copy "sql.cbl".
* Local variables
77 error-rc pic s9(9) comp-5.
77 state-rc pic s9(9) comp-5.
Linkage Section.
copy "sqlca.cbl" replacing ==VALUE "SQLCA "== by == ==
==VALUE 136== by == ==.
01 errloc pic x(80).
********************************
* GET ERROR MESSAGE API called *
********************************
call "sqlgintp" using
by value buffer-size
by value line-width
by reference sqlca
by reference error-buffer
returning error-rc.
************************
* GET SQLSTATE MESSAGE *
************************
call "sqlggstt" using
by value buffer-size
by value line-width
by reference sqlstate
by reference state-buffer
CHECKERR:
arg errloc
if ( SQLCA.SQLCODE = 0 ) then
return 0
else do
say '--- error report ---'
say 'ERROR occurred :' errloc
say 'SQLCODE :' SQLCA.SQLCODE
/******************************\
* GET ERROR MESSAGE API called *
\******************************/
call SQLDBS 'GET MESSAGE INTO :errmsg LINEWIDTH 80'
say errmsg
say '--- end error report ---'
Note: The content of dynamic SQL statements follows the same syntax as
static SQL statements, but with the following exceptions:
v Comments are not allowed.
v The statement cannot begin with EXEC SQL.
v The statement cannot end with the statement terminator. An
exception to this is the CREATE TRIGGER statement, which can
contain a semicolon (;).
Comparing Dynamic SQL with Static SQL
The question of whether to use static or dynamic SQL for performance is
usually of great interest to programmers. The answer, of course, is that it all
depends on your situation. Refer to Table 6 on page 129 to help you decide
whether to use static or dynamic SQL. There may be certain considerations
such as security which dictate static SQL, or your environment (such as
whether you are using DB2 CLI or the CLP) which dictates dynamic SQL.
In general, an application using dynamic SQL has a higher start-up (or initial)
cost per SQL statement due to the need to compile the SQL statements prior
to using them. Once compiled, the execution time for dynamic SQL compared
to static SQL should be equivalent and, in some cases, faster due to better
access plans being chosen by the optimizer. Each time a dynamic statement is
executed, the initial compilation cost becomes less of a factor. If multiple users
are running the same dynamic application with the same statements, only the
first application to issue the statement realizes the cost of statement
compilation.
Note: Static and dynamic SQL each come in two types that make a difference
to the DB2 optimizer. These are:
1. Static SQL containing no host variables
This is an unlikely situation that you may see only for:
v Initialization code
v Novice training examples
Note: Java applications do not use the SQLDA structure, and therefore do not
use the PREPARE or DESCRIBE statements. In JDBC applications you
can use a PreparedStatement object and the executeQuery() method to
generate a ResultSet object, which is the equivalent of a host language
cursor. In SQLJ applications you can also declare an SQLJ iterator
object with a CursorByPos or CursorByName cursor to return data from
FETCH statements.
For an example of a simple dynamic SQL program that uses the PREPARE,
DESCRIBE, and FETCH statements without using an SQLDA, see “Example:
Dynamic SQL Program” on page 133. For an example of a dynamic SQL
program that uses the PREPARE, DESCRIBE, and FETCH statements and an
SQLDA to process interactive SQL statements, see “Example: ADHOC
Program” on page 154.
Declaring and Using Cursors
Processing a cursor dynamically is nearly identical to processing it using static
SQL. When a cursor is declared, it is associated with a query.
In the static SQL case, the query is a SELECT statement in text form, as
shown in “Declare Cursor Statement” on page 82.
The main difference between a static and a dynamic cursor is that a static
cursor is prepared at precompile time, and a dynamic cursor is prepared at
run time. Additionally, host variables referenced in the query are represented
by parameter markers, which are replaced by run-time host variables when
the cursor is opened.
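The sequence can be sketched in embedded SQL. The statement text, cursor
name, and host variable names below are illustrative:

```
char st[200];
char parm_var[20];

strcpy(st, "SELECT tabname FROM syscat.tables WHERE tabschema = ?");
EXEC SQL PREPARE s1 FROM :st;        /* statement compiled at run time   */
EXEC SQL DECLARE c1 CURSOR FOR s1;   /* cursor associated with the query */
strcpy(parm_var, "SYSIBM");
EXEC SQL OPEN c1 USING :parm_var;    /* value replaces the ? marker      */
EXEC SQL FETCH c1 INTO :table_name;
EXEC SQL CLOSE c1;
```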
For more information about how to use cursors, see the following sections:
v “Selecting Multiple Rows Using a Cursor” on page 81
v “Example: Cursor Program” on page 84
v “Using Cursors in REXX” on page 714
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
if (argc == 1) {
EXEC SQL CONNECT TO sample;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else if (argc == 3) {
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else {
printf ("\nUSAGE: dynamic [userid passwd]\n\n");
return 1;
} /* endif */
class Dynamic
{ static
{ try
{ Class.forName ("COM.ibm.db2.jdbc.app.DB2Driver").newInstance ();
}
catch (Exception e)
{ System.out.println ("\n Error loading DB2 Driver...\n");
System.out.println (e);
System.exit(1);
}
}
if (argv.length == 0)
{ // connect with default id/password
con = DriverManager.getConnection(url);
}
else if (argv.length == 2)
{ String userid = argv[0];
String passwd = argv[1];
// Enable transactions
con.setAutoCommit(false);
System.out.print("\n");
while( rs.next() ) 5
rs.close();
pstmt1.close(); 7
}
catch( Exception e )
{ throw e;
}
finally
{ // Rollback the transaction
System.out.println("\nRollback the transaction...");
con.rollback();
System.out.println("Rollback done.");
}
}
catch( Exception e )
{ System.out.println(e);
}
}
}
Data Division.
Working-Storage Section.
copy "sqlenv.cbl".
copy "sql.cbl".
copy "sqlca.cbl".
Procedure Division.
Main Section.
display "Sample COBOL program: DYNAMIC".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
End-Main.
go to End-Prog.
Fetch-Loop Section.
EXEC SQL FETCH c1 INTO :table-name END-EXEC. 5
if SQLCODE not equal 0
go to End-Fetch-Loop.
display "TABLE = ", table-name.
End-Fetch-Loop. exit.
End-Prog.
stop run.
exit -1
end
/* connect to database */
SAY
SAY 'Connect to' dbname
IF password= "" THEN
CALL SQLEXEC 'CONNECT TO' dbname
ELSE
CALL SQLEXEC 'CONNECT TO' dbname 'USER' userid 'USING' password
parm_var = "STAFF"
call SQLEXEC 'OPEN c1 USING :parm_var' 4
CHECKERR:
arg errloc
if ( SQLCA.SQLCODE = 0 ) then
return 0
else do
say '--- error report ---'
say 'ERROR occurred :' errloc
say 'SQLCODE :' SQLCA.SQLCODE
/******************************\
* GET ERROR MESSAGE API called *
\******************************/
call SQLDBS 'GET MESSAGE INTO :errmsg LINEWIDTH 80'
say errmsg
say '--- end error report ---'
[Figure: layout of the SQLDA structure. The HEADER contains the sqln and
sqld SMALLINT fields; each SQLVAR entry (one per field) contains the
sqldata and sqlind POINTER fields; additional SQLVAR entries follow.]
For the above methods, the question arises as to how many initial SQLVAR
entries you should allocate. Each SQLVAR element occupies 44 bytes of storage
(not counting storage allocated for the SQLDATA and SQLIND fields). If
memory is plentiful, the first method, providing an SQLDA of maximum
size, is easier to implement.
The SQLVARs in the SQLDA are NOT set (requiring allocation of additional
space and another DESCRIBE) in the following cases:
The SQLWARN option of the BIND command is used to control whether the
DESCRIBE (or PREPARE...INTO) will return the following warnings:
v SQLCODE +236 (SQLSTATE 01005)
v SQLCODE +237 (SQLSTATE 01594)
v SQLCODE +239 (SQLSTATE 01005).
It is recommended that your application code always consider that these
SQLCODEs could be returned. The warning SQLCODE +238 (SQLSTATE
01005) is always returned when there are LOB columns in the select list and
there are insufficient SQLVARs in the SQLDA. This is the only way the
application can know that the number of SQLVARs must be doubled because
of a LOB column in the result set.
Allocating an SQLDA with Sufficient SQLVAR Entries
After the number of columns in the result table is determined, storage can be
allocated for a second, full-size SQLDA. For example, if the result table
contains 20 columns (none of which are LOB columns), a second SQLDA
structure, fulsqlda, must be allocated with at least 20 SQLVAR elements (or
40 elements if the result table contains any LOBs or distinct types). For the
rest of this example, assume that no LOBs or distinct types are in the result
table.
The number of SQLVAR entries needed for fulsqlda was specified in the
SQLD field of minsqlda. This value was 20. Therefore, the storage allocation
required for fulsqlda used in this example is:
16 + (20 * sizeof(struct sqlvar))
This value represents the size of the header plus 20 times the size of each
SQLVAR entry, giving a total of 896 bytes.
You can use the SQLDASIZE macro to avoid doing your own calculations and
to avoid any version-specific dependencies.
Describing the SELECT Statement
Having allocated sufficient space for fulsqlda, an application must take the
following steps:
1. Store the value 20 in the SQLN field of fulsqlda.
2. Obtain information about the SELECT statement using the second SQLDA
structure, fulsqlda. Two methods are available:
v Use another PREPARE statement specifying fulsqlda instead of
minsqlda.
v Use the DESCRIBE statement specifying fulsqlda.
Using the DESCRIBE statement is preferred because the costs of preparing the
statement a second time are avoided. The DESCRIBE statement simply reuses
information previously obtained during the prepare operation to fill in the
new SQLDA structure. The following statement can be issued:
EXEC SQL DESCRIBE STMT INTO :fulsqlda
In addition, if the specified column allows nulls, then the application must
replace the content of the SQLIND field with the address of an indicator
variable for the column.
Processing the Cursor
After the SQLDA structure is properly allocated, the cursor associated with
the SELECT statement can be opened and rows can be fetched by specifying
the USING DESCRIPTOR clause of the FETCH statement.
When finished, the cursor should be closed and any dynamically allocated
memory should be released.
Allocating an SQLDA Structure
To create an SQLDA structure with C, either embed the INCLUDE SQLDA
statement in the host language or include the SQLDA include file to get the
structure definition. Then, because the size of an SQLDA is not fixed, the
application must declare a pointer to an SQLDA structure and allocate storage
for it. The actual size of the SQLDA structure depends on the number of
distinct data items being passed using the SQLDA. (For an example of how to
code an application to process the SQLDA, see “Example: ADHOC Program”
on page 154.)
The effect of this macro is to calculate the required storage for an SQLDA
with n SQLVAR elements.
The following table shows the declaration and use of an SQLDA structure
with one SQLVAR element.
integer*2 sqlvar1
parameter ( sqlvar1 = sqlda_header_sz + 0*sqlvar_struct_sz )
In either case, the database manager places a value in the SQLD field of the
SQLDA structure, indicating the number of columns in the result table
generated by the SQL statement. If the SQLD field contains a zero (0), the
statement is not a SELECT statement. Since the statement is already prepared,
it can immediately be executed using the EXECUTE statement.
You must save the source SQL statements, not the prepared versions. This
means that you must retrieve and then prepare each statement before
executing it.
Note that this example uses a number of additional procedures that are
provided as utilities in the file utilemb.sqc. These include:
init_da
Allocates memory for a prepared SQL statement. An internally
described macro called SQLDASIZE is used to calculate the proper
amount of memory.
#ifdef DB268K
/* Need to include ASLM for 68K applications */
#include <LibraryManager.h>
#endif
int rc ;
char sqlInput[256] ;
char st[1024] ;
#ifdef DB268K
/*
Before making any API calls for 68K environment,
need to initialize the Library Manager
*/
InitLibraryManager(0,kCurrentZone,kNormalMemory) ;
atexit(CleanupLibraryManager) ;
#endif
printf( "Enter 'c' to COMMIT or Any Other key to ROLLBACK the transaction :\n" ) ;
fgets( sqlInput, sizeof( sqlInput ), stdin ) ;
if ( ( *sqlInput == 'c' ) || ( *sqlInput == 'C' ) ) {
printf( "COMMITTING the transactions.\n" ) ;
EXEC SQL COMMIT ; 7
EMB_SQL_CHECK( "COMMIT" ) ;
}
else { /* assume that the transaction is to be rolled back */
printf( "ROLLING BACK the transactions.\n" ) ;
EXEC SQL ROLLBACK ; 8
EMB_SQL_CHECK( "ROLLBACK" ) ;
}
return( 0 ) ;
/******************************************************************************
* FUNCTION : process_statement
* This function processes the input statement and then prepares the
* procedural SQL implementation to take place.
int counter = 0 ;
struct sqlda * sqldaPointer ;
short sqlda_d ;
sqlda_d = sqldaPointer->sqld ;
free( sqldaPointer ) ;
if ( SQLCODE == SQL_RC_W237 ||
SQLCODE == SQL_RC_W238 ||
SQLCODE == SQL_RC_W239 )
/* this output contains columns that need a DOUBLED SQLDA */
init_da( &sqldaPointer, sqlda_d * 2 ) ;
return( 0 ) ;
If the data type of a parameter marker is not obvious from the context of the
statement in which it is used, the type can be specified using a CAST. Such a
parameter marker is considered a typed parameter marker. Typed parameter
markers are treated like host variables of the given type. For example, the
statement SELECT ? FROM SYSCAT.TABLES is invalid because DB2 does not
know the type of the result column. However, the statement
SELECT CAST(? AS INTEGER) FROM SYSCAT.TABLES is valid because the cast
promises that the parameter marker represents an INTEGER, so DB2 knows the
type of the result column.
If the SQL statement contains more than one parameter marker, then the
USING clause of the EXECUTE statement must either specify a list of host
variables (one for each parameter marker), or it must identify an SQLDA that
has an SQLVAR entry for each parameter marker. (Note that for LOBs, there
are two SQLVARs per parameter marker.) The host variable list or SQLVAR
entries are matched according to the order of the parameter markers in the
statement, and they must have compatible data types.
The rules that apply to parameter markers are listed under the PREPARE
statement in the SQL Reference.
Example: VARINP Program
This is an example of an UPDATE that uses a parameter marker in the search
and update conditions. The sample is available in the following programming
languages:
C varinp.sqc
Java Varinp.java
COBOL varinp.sqb
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
if (argc == 1)
{
EXEC SQL CONNECT TO sample;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else if (argc == 3)
{
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else
{
printf ("\nUSAGE: varinp [userid passwd]\n\n");
return 1;
} /* endif */
class Varinp
{ static
{ try
{ Class.forName ("COM.ibm.db2.jdbc.app.DB2Driver").newInstance ();
}
catch (Exception e)
{ System.out.println ("\n Error loading DB2 Driver...\n");
System.out.println (e);
System.exit(1);
}
}
if (argv.length == 0)
{ // connect with default id/password
con = DriverManager.getConnection(url);
}
else if (argv.length == 2)
{ String userid = argv[0];
String passwd = argv[1];
// Enable transactions
con.setAutoCommit(false);
pstmt2.executeUpdate(); 6
};
rs.close();
pstmt1.close(); 7
pstmt2.close();
}
catch( Exception e )
{ throw e;
}
finally
{ // Rollback the transaction
System.out.println("\nRollback the transaction...");
con.rollback();
System.out.println("Rollback done.");
}
}
catch( Exception e )
{ System.out.println(e);
}
}
}
Data Division.
Working-Storage Section.
copy "sqlca.cbl".
Procedure Division.
Main Section.
display "Sample COBOL program: VARINP".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
Fetch-Loop Section.
EXEC SQL FETCH c1 INTO :pname, :dept END-EXEC. 5
if SQLCODE not equal 0
go to End-Fetch-Loop.
display pname, " in dept. ", dept,
" will be demoted to Clerk".
End-Fetch-Loop. exit.
End-Prog.
stop run.
Note: DB2 CLI can also accept some SQL statements that cannot be prepared
dynamically, such as compound SQL statements.
Table 37 on page 723 lists each SQL statement, and indicates whether or not it
can be executed using DB2 CLI. The table also indicates if the command line
processor can be used to execute the statement interactively (useful for
prototyping SQL statements).
Each DBMS may have additional statements that you can dynamically
prepare. In this case, DB2 CLI passes the statements to the DBMS. There is
one exception: the COMMIT and ROLLBACK statements can be dynamically
prepared by some DBMSs but are not passed. In this case, use the
SQLEndTran() function to specify either the COMMIT or ROLLBACK
statement.
Advantages of Using DB2 CLI
The DB2 CLI interface has several key advantages over embedded SQL.
v It is ideally suited for a client-server environment, in which the target
database is not known when the application is built. It provides a consistent
interface for executing SQL statements, regardless of which database server
the application is connected to.
v It increases the portability of applications by removing the dependence on
precompilers.
v Individual DB2 CLI applications do not need to be bound to each database;
only the bind files shipped with DB2 CLI need to be bound, once, for all
DB2 CLI applications. This can significantly reduce the amount of
management required for the application once it is in general use.
v DB2 CLI applications can connect to multiple databases, including multiple
connections to the same database, all from the same application. Each
connection has its own commit scope. This is much simpler with CLI than
with embedded SQL, where the application must use multi-threading to
achieve the same result.
v DB2 CLI eliminates the need for application controlled, often complex data
areas, such as the SQLDA and SQLCA, typically associated with embedded
SQL applications. Instead, DB2 CLI allocates and controls the necessary
data structures, and provides a handle for the application to reference them.
DB2 CLI is ideally suited for query-based graphical user interface (GUI)
applications that require portability. The advantages listed above may make
using DB2 CLI seem like the obvious choice for any application. There is,
however, one factor that must be considered: the comparison between static
and dynamic SQL. It is much easier to use static SQL in embedded
applications.
For more information on using static SQL in CLI applications, refer to the Web
page at:
http://www.ibm.com/software/data/db2/udb/staticcli
It is also possible to write a mixed application that uses both DB2 CLI and
embedded SQL, taking advantage of their respective benefits. In this case,
DB2 CLI is used to provide the base application, with key modules written
using static SQL for performance or security reasons. This complicates the
application design, and should only be used if stored procedures do not meet
the application's requirements. For more information, refer to the section on
Mixing Embedded SQL and DB2 CLI in the CLI Guide and Reference.
Identity Columns
Identity columns provide DB2 application developers with an easy way of
automatically generating a unique primary key value for every row in a table.
To create an identity column, include the IDENTITY clause in your CREATE
TABLE or ALTER TABLE statement.
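For example, a sketch of a CREATE TABLE statement with an identity column (the table and column names here are illustrative, not from a shipped sample; check the CREATE TABLE entry in the SQL Reference for the full clause list):

```sql
CREATE TABLE orders (
    orderno  INTEGER GENERATED ALWAYS AS IDENTITY
             (START WITH 1, INCREMENT BY 1),
    custname VARCHAR(30)
)
```

Each inserted row then receives the next counter value in orderno without the application supplying a key.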
An identity column may appear to have generated gaps in the counter, as the
result of a transaction that was rolled back, or because the database cached a
range of values and was deactivated (normally or abnormally) before all of
the cached values were assigned.
If you develop applications written for concurrent users, your applications can
take advantage of declared temporary tables. Unlike regular tables, declared
temporary tables are not subject to name collision. For each instance of the
application, DB2 can create a declared temporary table with an identical
name. For example, to write an application for concurrent users that uses
regular tables to process large amounts of temporary data, you must ensure
that each instance of the application uses a unique name for the regular table
that holds the temporary data. Typically, you would create another table that
tracks the names of the tables that are in use at any given time. With
declared temporary tables, this extra bookkeeping is unnecessary.
To select the contents of the column1 column from the declared temporary
table created in the previous example, use the following statement:
SELECT column1 FROM SESSION.TT1;
Note that DB2 also enables you to create persistent tables with the SESSION
schema. If you create a persistent table with the qualified name SESSION.TT3,
you can then create a declared temporary table with the qualified name
SESSION.TT3. In this situation, DB2 always resolves references to persistent
and declared temporary tables with identical qualified names to the declared
temporary table. To avoid confusing persistent tables with declared temporary
tables, you should not create persistent tables using the SESSION schema.
The default behavior of a declared temporary table is to delete all rows from
the table when you commit a transaction. However, if one or more WITH
HOLD cursors are still open on the declared temporary table, DB2 does not
delete the rows from the table when you commit a transaction. To avoid
deleting all rows when you commit a transaction, create the temporary table
using the ON COMMIT PRESERVE ROWS clause in the DECLARE GLOBAL
TEMPORARY TABLE statement.
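Putting these clauses together, a sketch of a declaration that keeps its rows across commits (the name TT1 follows the examples in this section; see the DECLARE GLOBAL TEMPORARY TABLE entry in the SQL Reference for the required and optional clauses):

```sql
DECLARE GLOBAL TEMPORARY TABLE SESSION.TT1
    (column1 INTEGER)
    ON COMMIT PRESERVE ROWS
    NOT LOGGED
```

Without ON COMMIT PRESERVE ROWS, the default ON COMMIT DELETE ROWS behavior described above applies.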
The following SQL statements enable you to create and control savepoints:
SAVEPOINT
To set a savepoint, issue a SAVEPOINT SQL statement. To improve
the clarity of your code, you can choose a meaningful name for the
savepoint. For example:
SAVEPOINT savepoint1 ON ROLLBACK RETAIN CURSORS
RELEASE SAVEPOINT
To release a savepoint, issue a RELEASE SAVEPOINT SQL statement.
If you do not explicitly release a savepoint with a RELEASE
SAVEPOINT SQL statement, it is released at the end of the
transaction. For example:
RELEASE SAVEPOINT savepoint1
ROLLBACK TO SAVEPOINT
To roll back to a savepoint, issue a ROLLBACK TO SAVEPOINT SQL
statement. For example:
ROLLBACK TO SAVEPOINT
You can issue a CLOSE statement to close invalid cursors. If you issue a
FETCH against an invalid cursor, DB2 returns SQLCODE −910. If you issue an
OPEN statement against an invalid cursor, DB2 returns SQLCODE −502. If
you issue an UPDATE or DELETE WHERE CURRENT OF statement against
an invalid cursor, DB2 returns SQLCODE −910.
Within savepoints, DB2 treats tables with the NOT LOGGED INITIALLY
property and temporary tables as follows:
NOT LOGGED INITIALLY tables
Within a savepoint, you can create a table with the NOT LOGGED
INITIALLY property, or alter a table to have the NOT LOGGED
INITIALLY property. For these savepoints, however, DB2 treats
If you do not precompile the application using BLOCKING NO, and your
application issues a FETCH statement after a ROLLBACK TO SAVEPOINT
has occurred, the FETCH statement may retrieve deleted data. For example,
assume that the application containing the following SQL is precompiled
without the BLOCKING NO option:
CREATE TABLE t1(c1 INTEGER);
DECLARE CURSOR c1 AS 'SELECT c1 FROM t1 ORDER BY c1';
INSERT INTO t1 VALUES (1);
SAVEPOINT showFetchDelete;
INSERT INTO t1 VALUES (2);
INSERT INTO t1 VALUES (3);
OPEN CURSOR c1;
FETCH c1; --get first value and cursor block
ALTER TABLE t1... --add constraint
ROLLBACK TO SAVEPOINT;
FETCH c1; --retrieves second value from cursor block
When your application issues the first FETCH on table "t1", the DB2 server
sends a block of column values (1, 2, and 3) to the client application.
You can write stored procedures using SQL, called SQL procedures. For more
information on writing SQL procedures, see “Chapter 8. Writing SQL
Procedures” on page 239. You can also write stored procedures using
languages such as C or Java. You do not have to write client applications in
the same language as the stored procedure. When the language of the client
application and the stored procedure differ, DB2 transparently passes the
values between the client and the stored procedure.
You can use the DB2 Stored Procedure Builder (SPB) to help develop Java or
SQL stored procedures. You can integrate SPB with popular application
development tools, including Microsoft Visual Studio and IBM Visual Age for
Java, or you can use it as a standalone utility. To help you create your stored
For more information on the DB2 Stored Procedure Builder, see “Chapter 9.
IBM DB2 Stored Procedure Builder” on page 261.
[Figure: a client application on a database client accessing the database on a database server across a network]
All database access must go across the network, which in some cases results
in poor performance.
[Figure: a client application on a DB2 client invoking a DB2 stored procedure that runs on the database server, next to the database]
This chapter describes how to write stored procedures with the following
parameter styles:
DB2SQL The stored procedure receives parameters that you declare in
the CREATE PROCEDURE statement as host variables from
the CALL statement in the client application. DB2 allocates
additional parameters for DB2SQL stored procedures.
GENERAL The stored procedure receives parameters as host variables
from the CALL statement in the client application. The stored
procedure does not directly pass null indicators to the client
application. GENERAL is the equivalent of SIMPLE stored
procedures for DB2 Universal Database for OS/390.
GENERAL WITH NULLS
For each parameter declared by the user, DB2 allocates a
corresponding INOUT parameter null indicator. Like
GENERAL, parameters are passed as host variables.
GENERAL WITH NULLS is the equivalent of SIMPLE WITH
NULLS stored procedures for DB2 Universal Database for
OS/390.
JAVA The stored procedure uses a parameter passing convention
that conforms to the SQLJ Routines specification. The stored
procedure receives IN parameters as host variables, and
receives OUT and INOUT parameters as single entry arrays.
You must register each stored procedure for the previously listed parameter
styles with a CREATE PROCEDURE statement. The CREATE PROCEDURE
statement specifies the procedure name, arguments, location, and parameter
style of each stored procedure. These parameter styles offer increased
portability and scalability of your stored procedure code across the DB2
family.
When writing the client portion of your stored procedure, you should attempt
to overload as many of the host variables as possible by using them for both
input and output. This will increase the efficiency of handling multiple host
variables. For example, when returning an SQLCODE to the client from the
stored procedure, try to use an input host variable that is declared as an
INTEGER to return the SQLCODE.
Note: Do not allocate storage for these structures on the database server. The
database manager automatically allocates duplicate storage based upon
the storage allocated by the client application. Do not alter any storage
pointers for the input/output parameters on the stored procedure side.
Attempting to replace a pointer with a locally created storage pointer
will cause an error with SQLCODE -1133 (SQLSTATE 39502).
Procedure Names: You can overload stored procedures only by using the
same name for procedures that accept different numbers of parameters. Since
DB2 does not distinguish between parameter data types, you cannot overload
stored procedures based on parameter data types.
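For example, DB2 can register both of the following procedures under one name, because they accept different numbers of parameters (the name OVERLOADOK is illustrative):

```sql
CREATE PROCEDURE OVERLOADOK (IN VAR1 INTEGER) ...
CREATE PROCEDURE OVERLOADOK (IN VAR1 INTEGER, IN VAR2 VARCHAR(15)) ...
```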
However, DB2 will fail to register the second stored procedure in the
following example because it has the same number of parameters as the first
stored procedure with the same name:
CREATE PROCEDURE OVERLOADFAIL (IN VAR1 INTEGER) ...
CREATE PROCEDURE OVERLOADFAIL (IN VAR2 VARCHAR(15)) ...
Note: You should give your library a name that is different from the stored
procedure name. If DB2 locates the library in the search path, DB2
executes any stored procedure that has the same name as the library
containing it as a FENCED DB2DARI procedure.
The following list defines the EXTERNAL keywords for Java stored
procedures:
jar-file-name
If a jar file installed in the database contains the stored procedure
method, you must include this value. The keyword represents the
name of the jar file, and is delimited by a colon (:). If you do not
specify a jar file name, the database manager looks for the class in the
function directory. For more information on installing jar files, see
“Java Stored Procedures and UDFs” on page 654.
class-name
The name of the class that contains the stored procedure method. If
the class is part of a package, you must include the complete package
name as a prefix.
method-name
The name of the stored procedure method.
The following code for the stored procedure copies the value of argv[1] into
the CHAR(8) host variable injob, then copies the value of the DOUBLE host
variable outsalary into argv[2] and returns the SQLCODE as argv[3]:
EXEC SQL BEGIN DECLARE SECTION;
char injob[9];
double outsalary;
EXEC SQL END DECLARE SECTION;
return (0);
GENERAL
The stored procedure receives parameters as host variables from the
CALL statement in the client application. The stored procedure does
not directly pass null indicators to the client application. You can only
use GENERAL when you also specify the LANGUAGE C or
LANGUAGE COBOL option.
DB2 Universal Database for OS/390 compatibility note: GENERAL is
the equivalent of SIMPLE.
PARAMETER STYLE GENERAL stored procedures accept parameters
in the manner indicated by the value of the PROGRAM TYPE clause.
The following example demonstrates a PARAMETER STYLE
GENERAL stored procedure that accepts two parameters using
PROGRAM TYPE SUBROUTINE:
SQL_API_RC SQL_API_FN one_result_set_to_client
(double *insalary, sqlint32 *out_sqlerror)
{
EXEC SQL INCLUDE SQLCA;
l_insalary = *insalary;
*out_sqlerror = 0;
return (0);
The following C code demonstrates how to declare and use the null
indicators required by a GENERAL WITH NULLS stored procedure:
SQL_API_RC SQL_API_FN inout_param (double *inoutMedian,
sqlint32 *out_sqlerror, char buffer[33], sqlint16 nullinds[3])
{
EXEC SQL INCLUDE SQLCA;
if (nullinds[0] < 0)
{
/* NULL value was received as input, so return NULL output */
nullinds[0] = -1;
nullinds[1] = -1;
nullinds[2] = -1;
}
else
{
int counter = 0;
*out_sqlerror = 0;
medianSalary = *inoutMedian;
nullinds[1] = 0;
nullinds[2] = 0;
if (numRecords != 0)
/* At least one record was found */
{
strcpy(buffer, "OPEN inout");
EXEC SQL OPEN inout USING :medianSalary;
*inoutMedian = medianSalary;
counter = counter + 1;
}
return (0);
if (nullinds[0] < 0)
{
/* NULL value was received as input, so return NULL output */
nullinds[1] = -1;
}
else
{
EXEC SQL SELECT (CAST(AVG(salary) AS DOUBLE))
INTO :outsalary INDICATOR :outsalaryind
FROM employee
WHERE job = :injob;
*salary = outsalary;
nullinds[1] = outsalaryind;
}
return (0);
} /* end db2sql_example function */
/********************************************************\
* Call DB2SQL_EXAMPLE stored procedure *
\********************************************************/
strcpy(in_job, job_name);
return 0;
}
Using IN and OUT Parameters: Assume that you want to create a Java
stored procedure GET_LASTNAME that, given empno (SQL type CHAR), returns
lastname (SQL type VARCHAR) from the EMPLOYEE table in the SAMPLE
database. You will create the procedure as the getname method of the Java
class StoredProcedure, contained in the JAR installed as myJar. Finally, you
will call the stored procedure with a client application coded in C.
1. Declare two host variables in your stored procedure source code:
String empid;
String name;
...
#sql { SELECT lastname INTO :name FROM employee WHERE empno=:empid }
2. Register the stored procedure with the following CREATE PROCEDURE
statement:
CREATE PROCEDURE GET_LASTNAME (IN EMPID CHAR(6), OUT NAME VARCHAR(15))
EXTERNAL NAME 'myJar:StoredProcedure.getname'
LANGUAGE JAVA PARAMETER STYLE JAVA FENCED
READS SQL DATA
3. Call the stored procedure from your client application written in C:
EXEC SQL BEGIN DECLARE SECTION;
struct { short length; char data[15]; } name;
char empid[7];
EXEC SQL END DECLARE SECTION;
...
EXEC SQL CALL GET_LASTNAME (:empid, :name);
Using INOUT Parameters: For the following example, assume that you want
to create a C stored procedure GET_MANAGER that, given deptnumb (SQL
type SMALLINT), returns manager (SQL type SMALLINT) from the ORG table
in the SAMPLE database.
Nested embedded SQL stored procedures written in C and nested CLI stored
procedures cannot return result sets to the client application or calling stored
procedure. If a nested embedded SQL stored procedure or a nested CLI stored
procedure leaves cursors open when the stored procedure exits, DB2 closes
the cursors. For more information on returning result sets from stored
procedures, see “Returning Result Sets from Stored Procedures” on page 225.
After you code an OLE automation object, you must register the methods of
the object as stored procedures using the CREATE PROCEDURE statement. To
register an OLE automation stored procedure, issue a CREATE PROCEDURE
statement with the LANGUAGE OLE clause. The external name consists of
the OLE progID identifying the OLE automation object and the method name
separated by ! (exclamation mark).
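A sketch of such a registration follows. The progID db2smpl.salary and method name median are illustrative, not a shipped sample; check the CREATE PROCEDURE entry in the SQL Reference for the clauses OLE procedures require:

```sql
CREATE PROCEDURE MEDIAN (OUT medianSalary DOUBLE)
    EXTERNAL NAME 'db2smpl.salary!median'
    LANGUAGE OLE
    FENCED
    PARAMETER STYLE DB2SQL
```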
The calling conventions for OLE method implementations are identical to the
conventions for procedures written in C or C++.
DB2 automatically handles the type conversions between SQL types and OLE
automation types. For a list of the DB2 mappings between supported OLE
automation types and SQL types, see Table 16 on page 419. For a list of the
DB2 mappings between SQL types and the data types of the OLE
programming language, such as BASIC or C/C++, see Table 17 on page 420.
Data passed between DB2 and OLE automation stored procedures is passed as
call by reference. DB2 does not support SQL types such as DECIMAL or
LOCATORS, or OLE automation types such as boolean or CURRENCY, that
are not listed in the previously referenced tables. Character and graphic data
mapped to BSTR is converted from the database code page to the UCS-2 (also
known as Unicode, IBM code page 13488) encoding scheme. Upon return, the data is
converted back to the database code page. These conversions occur regardless
of the database code page. If code page conversion tables to convert from the
database code page to UCS-2 and from UCS-2 to the database code page are
not installed, you receive an SQLCODE -332 (SQLSTATE 57017).
Example OUT Parameter Stored Procedure
Following is a sample program demonstrating the use of an OUT host
variable. The client application invokes a stored procedure that determines the
median salary for employees in the SAMPLE database. (The definition of the
median is that half the values lie above it, and half below it.) The median
salary is then passed back to the client application using an OUT host
variable.
An application that uses neither the stored procedures technique, nor blocking
cursors, must FETCH each salary across the network as shown in Figure 5.
[Figure 5: the client application on the workstation fetches each salary row from the database server across the network]
Since only the salary at row n/2 + 1 is needed, the application discards all
the additional data, but only after it is transmitted across the network.
You can design an application using the stored procedures technique that
allows the stored procedure to process and discard the unnecessary data,
returning only the median salary to the client application. Figure 6 shows this
feature.
[Figure 6: the stored procedure determines the median salary on the database server and returns only that value to the client]
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
class Spclient
{
static String sql = "";
static String procName = "";
static String inLanguage = "";
static CallableStatement callStmt;
static int outErrorCode = 0;
static String outErrorLabel = "";
static double outMedian = 0;
static
{
try
{
System.out.println();
System.out.println("Java Stored Procedure Sample");
Class.forName("COM.ibm.db2.jdbc.app.DB2Driver").newInstance();
}
catch (Exception e)
{
System.out.println("\nError loading DB2 Driver...\n");
e.printStackTrace();
}
}
try
{
// connect to sample database
// connect with default id/password
con = DriverManager.getConnection(url); 2
outLanguage(con);
outParameter(con);
inParameters(con);
inoutParam(con, outMedian);
resultSet(con);
twoResultSets(con);
allDataTypes(con);
if (outErrorCode == 0) { 7
System.out.println(procName + " completed successfully");
System.out.println ("Median salary returned from OUT_PARAM = "
+ outMedian);
}
else { // stored procedure failed
System.out.println(procName + " failed with SQLCODE "
+ outErrorCode);
System.out.println(procName + " failed at " + outErrorLabel);
}
}
}
outparameter();
int outparameter() {
/********************************************************\
* Call OUT_PARAM stored procedure *
\********************************************************/
EXEC SQL BEGIN DECLARE SECTION;
/* Declare host variables for passing data to OUT_PARAM */
double out_median;
EXEC SQL END DECLARE SECTION;
strcpy(procname, "OUT_PARAM");
printf("\nCALL stored procedure named %s\n", procname);
/***********************************************************\
* Display the median salary returned as an output parameter *
\***********************************************************/
}
else
{ /* print the error message, roll back the transaction */
printf("Stored procedure returned SQLCODE %d\n", out_sqlcode);
printf("from procedure section labelled \"%s\".\n", out_buffer);
}
return 0;
}
// clean up resources
rs2.close();
}
catch (SQLException sqle)
{
errorCode[0] = sqle.getErrorCode();
}
}
}
int counter = 0;
*out_sqlerror = 0;
strcpy(buffer, "SELECT");
EXEC SQL SELECT COUNT(*) INTO :numRecords FROM staff; 3
strcpy(buffer, "OPEN");
EXEC SQL OPEN c1;
strcpy(buffer, "FETCH");
while (counter < (numRecords / 2 + 1)) {
EXEC SQL FETCH c1 INTO :medianSalary; 4
When a client program (using, for example, code page A) calls a remote
stored procedure that accesses a database using a different code page (for
example, code page Z), the following events occur:
1. Input character string parameters (whether defined as host variables or in
an SQLDA in the client application) are converted from the application
code page (A) to the one associated with the database (Z). Conversion
does not occur for data defined in the SQLDA as FOR BIT DATA.
2. Once the input parameters are converted, the database manager does not
perform any more code page conversions.
Therefore, you must run the stored procedure using the same code page
as the database, in this example, code page Z. It is a good practice to
prep, compile, and bind the server procedure using the same code page as
the database.
3. When the stored procedure finishes, the database manager converts the
output character string parameters (whether defined as host variables or in
an SQLDA in the client application) and the SQLCA character fields from
the database code page (Z) back to the application code page (A).
Conversion does not occur for data defined in the SQLDA as FOR BIT
DATA.
Note: If the parameter of the stored procedure is defined as FOR BIT DATA
at the server, conversion does not occur for a CALL statement to DB2
Universal Database for OS/390 or DB2 Universal Database for AS/400,
regardless of whether it is explicitly specified in the SQLDA. (Refer to
the section on the SQLDA in the SQL Reference for details.)
For more information on this topic, see “Conversion Between Different Code
Pages” on page 504.
C++ Consideration
When writing a stored procedure in C++, you may want to consider declaring
the procedure name using extern "C", as in the following example:
extern "C" SQL_API_RC SQL_API_FN proc_name( short *parm1, char *parm2)
The extern "C" prevents type decoration (or mangling) of the function name
by the C++ compiler. Without this declaration, you have to include all the
type decorations for the function name when you call the stored procedure.
Graphic Host Variable Considerations
Any stored procedure written in C or C++ that receives or returns graphic
data through its input or output parameters should generally be precompiled
with the WCHARTYPE NOCONVERT option, because graphic data passed
across the stored procedure interface is not converted. The WCHARTYPE
CONVERT option can be used in FENCED stored procedures, and it affects
the graphic data in SQL statements within the stored procedure, but not the
data passed through the stored procedure's interface. NOT FENCED stored
procedures must be built using the NOCONVERT option.
A NOT FENCED stored procedure runs in the same address space as the
database manager (the DB2 Agent’s address space). Running your stored
procedure as NOT FENCED results in increased performance when compared
with running it as FENCED, because FENCED stored procedures, by default,
run in a special DB2 process whose address space is distinct from that of
the DB2 System Controller.
DB2 does not support the use of any of the following features in NOT
FENCED stored procedures:
v 16-bit
v Multi-threading
v Nested calls: calling or being called by another stored procedure
v Result sets: returning result sets to a client application or caller
v REXX
The following DB2 APIs and any DB2 CLI API are not supported in a NOT
FENCED stored procedure:
v BIND
v EXPORT
v IMPORT
v PRECOMPILE PROGRAM
v ROLLFORWARD DATABASE
This sample stored procedure accepts one IN parameter and returns one OUT
parameter and one result set. The stored procedure uses the IN parameter to
create a result set containing the values of the NAME, JOB, and SALARY
columns from the STAFF table for rows where SALARY is greater than the IN
parameter.
1 Register the stored procedure using the DYNAMIC RESULT SETS
clause of the CREATE PROCEDURE statement. For example, to
register the stored procedure written in embedded SQL for C, issue
the following statement:
CREATE PROCEDURE RESULT_SET_CLIENT
(IN salValue DOUBLE, OUT sqlCode INTEGER)
DYNAMIC RESULT SETS 1
LANGUAGE C
PARAMETER STYLE GENERAL
NO DBINFO
FENCED
READS SQL DATA
PROGRAM TYPE SUB
EXTERNAL NAME 'spserver!one_result_set_to_client'
l_insalary = *insalary;
*out_sqlerror = 0;
try {
// Get caller's connection to the database
Connection con =
DriverManager.getConnection("jdbc:default:connection");
SQLCHAR stmt[50];
SQLINTEGER out_sqlcode;
char out_buffer[33];
SQLINTEGER indicator;
struct sqlca sqlca;
SQLRETURN rc,rc1 ;
char procname[254];
SQLHANDLE henv; /* environment handle */
SQLHANDLE hdbc; /* connection handle */
SQLHANDLE hstmt1; /* statement handle */
SQLHANDLE hstmt2; /* statement handle */
SQLRETURN sqlrc = SQL_SUCCESS;
double out_median;
int oneresultset1(SQLHANDLE);
char dbAlias[SQL_MAX_DSN_LENGTH + 1] ;
char user[MAX_UID_LENGTH + 1] ;
char pswd[MAX_PWD_LENGTH + 1] ;
/********************************************************\
* Call oneresultsettocaller stored procedure *
\********************************************************/
rc = oneresultset1(hstmt_oneresult);
rc = SQLFreeHandle( SQL_HANDLE_STMT, hstmt_oneresult ) ;
HANDLE_CHECK( SQL_HANDLE_DBC, hdbc, rc, &henv, &hdbc ) ;
int oneresultset1(hstmt)
SQLHANDLE hstmt; /* statement handle */
{
/********************************************************\
* Call one_result_set_to_client stored procedure *
\********************************************************/
rc = SQLFetch( hstmt );
}
return(rc);
}
if (outErrorCode == 0) {
System.out.println(procName + " completed successfully");
ResultSet rs = callStmt.getResultSet(); 2
while (rs.next()) {
fetchAll(rs); 3
}
// close ResultSet
rs.close();
}
else { // stored procedure failed
System.out.println(procName + " failed with SQLCODE "
+ outErrorCode);
}
You can use the debugger supplied with your compiler to debug a local
FENCED stored procedure as you would any other application. Consult your
compiler documentation for information on using the supplied debugger.
For example, to use the debugger supplied with Visual Studio™ on Windows
NT, perform the following steps:
Step 1. Set the DB2_STPROC_ALLOW_LOCAL_FENCED registry variable to true.
Step 2. Compile the source file for the stored procedure DLL with the -Zi
and -Od flags, and then link the DLL using the -DEBUG option.
Step 3. Copy the resulting DLL to the instance_name\function directory of
the server.
Step 4. Invoke the client application on the server with the Visual Studio
debugger. For the client application outcli.exe, enter the following
command:
Within the SQL procedure body, you cannot use OUT parameters as a value
in any expression. You can assign values to OUT parameters only with the
assignment statement, or by using them as the target variable in the INTO
clause of SELECT, VALUES, and FETCH statements. You cannot use IN
parameters as the target of assignment or INTO clauses.
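As a sketch of these rules (the procedure, parameter, and table names are
illustrative, not from the sample programs), the following fragment shows the
permitted and forbidden uses:

```sql
CREATE PROCEDURE OUT_EXAMPLE (IN in_dept CHAR(3), OUT out_count INTEGER)
LANGUAGE SQL
BEGIN
  -- Valid: assignment statement targeting the OUT parameter
  SET out_count = 0;
  -- Valid: OUT parameter as the target variable of an INTO clause
  SELECT COUNT(*) INTO out_count
    FROM employee
    WHERE workdept = in_dept;
  -- Invalid: an OUT parameter cannot appear as a value in an expression,
  -- for example SET out_count = out_count + 1;
  -- Invalid: an IN parameter cannot be the target of an assignment,
  -- for example SET in_dept = 'A00';
END
```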
For a complete list of the SQL statements allowed within an SQL procedure
body, see “Appendix A. Supported SQL Statements” on page 723. For detailed
descriptions and syntax of each of these statements, refer to the SQL Reference.
IF (rating = 1)
THEN UPDATE employee
SET salary = salary * 1.10, bonus = 1000
WHERE empno = employee_number;
ELSEIF (rating = 2)
THEN UPDATE employee
SET salary = salary * 1.05, bonus = 500
WHERE empno = employee_number;
ELSE UPDATE employee
SET salary = salary * 1.03, bonus = 0
WHERE empno = employee_number;
END IF;
END
@
To process the DB2 CLP script from the command line, use the following
syntax, where term-char represents the statement termination character (@ in
the preceding example) and script-name represents the name of the script file:
db2 -tdterm-char -vf script-name
When DB2 raises a condition that matches condition, DB2 passes control to the
condition handler. The condition handler performs the action indicated by
handler-type, and then executes SQL-procedure-statement.
handler-type
CONTINUE
Specifies that after SQL-procedure-statement completes, execution
continues with the statement after the statement that caused the
error.
You can also use the DECLARE statement to define your own condition
for a specific SQLSTATE. For more information on defining your own
condition, refer to the SQL Reference.
SQL-procedure-statement
You can use a single SQL procedure statement to define the behavior of
the condition handler. DB2 accepts a compound statement delimited by a
BEGIN...END block as a single SQL procedure statement. If you use a
compound statement to define the behavior of a condition handler, and
you want the handler to retain the value of either the SQLSTATE or
SQLCODE variables, you must assign the value of the variable to a local
variable or parameter in the first statement of the compound block. If the
first statement of a compound block does not assign the value of
SQLSTATE or SQLCODE to a local variable or parameter, SQLSTATE and
SQLCODE cannot retain the value that caused DB2 to invoke the
condition handler.
Note: You cannot define another condition handler within the condition
handler.
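For example, the following handler (the local variable names are illustrative
and must be declared in the SQL procedure body) saves the SQLSTATE value in
the first statement of its compound block, so the value that caused DB2 to
invoke the handler is retained:

```sql
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
  BEGIN
    -- Must be the first statement, or the original SQLSTATE is lost
    SET saved_sqlstate = SQLSTATE;
    SET error_count = error_count + 1;
  END;
```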
Example: CONTINUE handler: This handler assigns the value of 1 to the local
variable at_end when DB2 raises a NOT FOUND condition. DB2 then passes
control to the statement following the one that raised the NOT FOUND
condition.
DECLARE not_found CONDITION FOR SQLSTATE '02000';
DECLARE CONTINUE HANDLER FOR not_found SET at_end=1;
For more information on the SIGNAL and RESIGNAL statements, refer to the
SQL Reference.
SQLCODE and SQLSTATE Variables in SQL Procedures
To help debug your SQL procedures, you might find it useful to insert the
value of the SQLCODE and SQLSTATE into a table at various points in the
SQL procedure, or to return the SQLCODE and SQLSTATE values in a
diagnostic string as an OUT parameter. To use the SQLCODE and SQLSTATE
values, you must declare the following SQL variables in the SQL procedure
body:
DECLARE SQLCODE INTEGER DEFAULT 0;
DECLARE SQLSTATE CHAR(5) DEFAULT '00000';
You can also use CONTINUE condition handlers to assign the value of the
SQLSTATE and SQLCODE variables to local variables in your SQL procedure
body. You can then use these local variables to control your procedural logic,
or pass the value back as an output parameter. In the following example, the
SQL procedure returns control to the statement following each SQL statement
with the SQLCODE set in a local variable called RETCODE.
DECLARE SQLCODE INTEGER DEFAULT 0;
DECLARE retcode INTEGER DEFAULT 0;
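A condition handler along the following lines (a sketch consistent with the
surrounding description) copies the SQLCODE into the local variable RETCODE
after each SQL statement, then returns control to the statement that follows:

```sql
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION, SQLWARNING, NOT FOUND
  SET retcode = SQLCODE;
```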
For example, you can create an SQL procedure that calls a target SQL
procedure with the name “SALES_TARGET” and that accepts a single OUT
parameter of type INTEGER with the following SQL:
CREATE PROCEDURE NEST_SALES(OUT budget DECIMAL(11,2))
LANGUAGE SQL
BEGIN
DECLARE total INTEGER DEFAULT 0;
SET total = 6;
CALL SALES_TARGET(total);
SET budget = total * 10000;
END
For more information on returning result sets from nested SQL procedures,
see “Returning Result Sets to Caller or Client” on page 250.
Restrictions on Nested SQL Procedures
Keep the following restrictions in mind when designing your application
architecture:
LANGUAGE
SQL procedures can only call stored procedures written in SQL or C.
You cannot call other host language stored procedures from within an
SQL procedure.
16 levels of nesting
You may only include a maximum of 16 levels of nested calls to SQL
procedures. A scenario where SQL procedure A calls SQL procedure B,
and SQL procedure B calls SQL procedure C, is an example of three
levels of nested calls.
Recursion
You can create an SQL procedure that calls itself recursively. Recursive
SQL procedures must comply with the previously described restriction
on the maximum levels of nesting.
Security
An SQL procedure cannot call a target SQL procedure that is
cataloged with a higher SQL data access level. For example, an SQL
procedure created with the CONTAINS SQL clause can call SQL
procedures created with either the CONTAINS SQL clause or the NO
SQL clause, and cannot call SQL procedures created with either the
READS SQL DATA clause or the MODIFIES SQL DATA clause.
An SQL procedure created with the NO SQL clause cannot issue a
CALL statement.
CALL targetProcedure();
ASSOCIATE RESULT SET LOCATORS(result1, result2, result3)
WITH PROCEDURE targetProcedure;
ALLOCATE CURSOR
Use the ALLOCATE CURSOR statement in a calling SQL procedure to
open a result set returned from a target SQL procedure. To use the
ALLOCATE CURSOR statement, the result set must already be
associated with a result set locator through the ASSOCIATE RESULT
SET LOCATORS statement. Once the SQL procedure issues an
ALLOCATE CURSOR statement, you can fetch rows from the result
set using the cursor name declared in the ALLOCATE CURSOR
statement. To extend the previously described ASSOCIATE
LOCATORS example, the SQL procedure could fetch rows from the
first of the returned result sets using the following SQL:
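For example (the cursor name and target variables are illustrative), the
calling SQL procedure could open the first result set and fetch a row from it
as follows:

```sql
ALLOCATE rsCur CURSOR FOR RESULT SET result1;
FETCH rsCur INTO temp_name, temp_job, temp_salary;
```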
Once you display the message, try modifying your SQL procedure following
the suggestions in the "User Response" section.
Displaying Error Messages for SQL Procedures
When you issue a CREATE PROCEDURE statement for an SQL procedure,
DB2 may accept the syntax of the SQL procedure body but fail to create the
SQL procedure at the precompile or compile stage. In these situations, DB2
records the resulting error messages in a message log file.
To retrieve the error messages generated by DB2 and the C compiler for an
SQL procedure, display the message log file in the following directory on
your database server:
UNIX $DB2PATH/function/routine/sqlproc/$DATABASE/$SCHEMA/tmp
where $DB2PATH represents the location of the instance directory,
$DATABASE represents the database name, and $SCHEMA represents
the schema name used to create the SQL procedure.
Windows NT
%DB2PATH%\function\routine\sqlproc\%DB%\%SCHEMA%\tmp
where %DB2PATH% represents the location of the instance directory,
%DB% represents the database name, and %SCHEMA% represents
the schema name used to create the SQL procedure.
You can also issue a CALL statement in an application to call the sample
stored procedure db2udp!get_error_messages using the following syntax:
CALL db2udp!get_error_messages(schema-name, file-name, message-text)
For example, you could use the following Java application to display the error
messages for an SQL procedure:
public static String getErrorMessages(Connection con,
String procschema, String filename) throws Exception
{
String filecontents = null;
// prepare the CALL statement
CallableStatement stmt = null;
try
{
String sql = "Call db2udp!get_error_messages(?, ?, ?) ";
stmt = con.prepareCall (sql);
You could use the following C application to display the error messages for
an SQL procedure:
int getErrors(char inputSchema[9], char inputFilename[9],
char outputFilecontents[32000])
{
EXEC SQL BEGIN DECLARE SECTION;
char procschema[100] = "";
char filename[100] = "";
char filecontents[32000] = "";
EXEC SQL END DECLARE SECTION;
Note: Before you can display the error messages for an SQL procedure that
DB2 failed to create, you must know both the procedure name and the
generated file name of the SQL procedure. If the procedure schema
name is not specified as part of the CREATE PROCEDURE statement,
DB2 uses the value of the CURRENT SCHEMA special register. To
display the value of the CURRENT SCHEMA special register, issue the
following statement at the CLP:
VALUES CURRENT SCHEMA
On UNIX systems, DB2 uses the following base directory to keep intermediate
files: instance/function/routine/sqlproc/dbAlias/schema, where instance
represents the path of the DB2 instance, dbAlias represents the database alias,
and schema represents the schema with which the CREATE PROCEDURE
statement was issued.
On OS/2 and Windows 32-bit operating systems, DB2 uses the following base
directory to keep intermediate files:
instance\function\routine\sqlproc\dbAlias\schema, where instance represents
the path of the DB2 instance, dbAlias represents the database alias, and schema
represents the schema with which the CREATE PROCEDURE statement was
issued.
If the SQL procedure was created successfully, but does not return the
expected results from your CALL statements, you may want to examine the
intermediate files. To prevent DB2 from removing the intermediate files, set
the DB2_SQLROUTINE_KEEP_FILES DB2 registry variable to "yes" using the
following command:
db2set DB2_SQLROUTINE_KEEP_FILES="yes"
Before DB2 can use the new value of the registry variable, you must restart
the database.
CASE rating
WHEN 1 THEN
UPDATE employee
SET salary = salary * 1.10, bonus = 1000
WHERE empno = employee_number;
WHEN 2 THEN
UPDATE employee
SET salary = salary * 1.05, bonus = 500
WHERE empno = employee_number;
ELSE
UPDATE employee
SET salary = salary * 1.03, bonus = 0
WHERE empno = employee_number;
END CASE;
END
OPEN C1;
FETCH C1 INTO v_id, v_salary, v_years;
WHILE at_end = 0 DO
IF (v_salary < 2000 * v_years)
THEN UPDATE staff
SET salary = 2150 * v_years
WHERE id = v_id;
ELSEIF (v_salary < 5000 * v_years)
THEN IF (v_salary < 3000 * v_years)
THEN UPDATE staff
SET salary = 3000 * v_years
WHERE id = v_id;
ELSE UPDATE staff
SET salary = v_salary * 1.10
WHERE id = v_id;
END IF;
ELSE UPDATE staff
SET job = 'PREZ'
WHERE id = v_id;
END IF;
FETCH C1 INTO v_id, v_salary, v_years;
END WHILE;
CLOSE C1;
END
Example 3: Using Nested SQL Procedures with Global Temporary Tables and Result
Sets:
The following example shows how to use the ASSOCIATE RESULT SET
LOCATOR and ALLOCATE CURSOR statements to return a result set from
the called SQL procedure, temp_table_insert, to the calling SQL procedure,
temp_table_create. The example also shows how a called SQL procedure can
use a global temporary table that is created by a calling SQL procedure.
To return a result set from a global temporary table that was created by a
different SQL procedure, temp_table_insert must issue the DECLARE
CURSOR statement within a new scope. temp_table_insert issues the
DECLARE CURSOR and OPEN CURSOR statements within a compound SQL
block, which satisfies the requirement for a new scope. The cursor is not
closed before the SQL procedure exits, so DB2 passes the result set back to the
caller, temp_table_create.
To accept the result set from the called SQL procedure, temp_table_create
issues an ASSOCIATE RESULT SET LOCATOR statement that identifies
temp_table_insert as the originator of the result set. temp_table_create then
issues an ALLOCATE CURSOR statement for the result set locator to open the
result set. If the ALLOCATE CURSOR statement succeeds, the SQL procedure
can work with the result set as usual. In this example, temp_table_create
fetches every row from the result set, adding the values of the columns to its
output parameters.
where ts1 represents the name of the user temporary tablespace, and
ts1file represents the name of the container used by the tablespace.
CREATE PROCEDURE temp_table_create(IN parm1 INTEGER, IN parm2 INTEGER,
OUT parm3 INTEGER, OUT parm4 INTEGER)
LANGUAGE SQL
BEGIN
DECLARE loc1 RESULT_SET_LOCATOR VARYING;
DECLARE total3,total4 INTEGER DEFAULT 0;
DECLARE rcolumn1, rcolumn2 INTEGER DEFAULT 0;
DECLARE result_set_end INTEGER DEFAULT 0;
DECLARE CONTINUE HANDLER FOR NOT FOUND, SQLEXCEPTION, SQLWARNING
BEGIN
SET result_set_end = 1;
END;
--Create the temporary table that is used in both this SQL procedure
--and in the SQL procedure called by this SQL procedure.
DECLARE GLOBAL TEMPORARY TABLE ttt(column1 INT, column2 INT)
NOT LOGGED;
--Insert rows into the temporary table.
--The result set includes these rows.
You can use Stored Procedure Builder on your client to build and deploy Java
stored procedures and SQL procedures on DB2 Universal Database servers for
the following platforms:
Stored Procedure Language Supported DB2 UDB Platforms
Java OS/2, OS/390, AIX, HP-UX, Linux, Solaris
Operating Environment, and Windows 32-bit
operating systems
SQL OS/2, OS/390, OS/400, AIX, HP-UX, Linux,
Solaris Operating Environment, and Windows
32-bit operating systems
You can export SQL stored procedures and create Java stored procedures from
existing Java class files. To provide a comfortable development
environment, Stored Procedure Builder lets you create highly portable stored
procedures written in Java or SQL. Using the stored procedure wizards, you
create your
written in Java or SQL. Using the stored procedure wizards, you create your
basic SQL structure and then use the source code editor to modify the stored
procedure to contain sophisticated stored procedure logic.
Running a stored procedure from within Stored Procedure Builder allows you
to test the procedure to make sure that it is correctly installed. When you run
a stored procedure, it can return result sets based on test input parameter
values that you enter, depending on how you set up the stored procedure.
Testing stored procedures makes programming the client application easier
because you know that the stored procedure is correctly installed on the DB2
database server. You can then focus on writing and debugging the client
application.
From the Project window in Stored Procedure Builder, you can also easily
drop a stored procedure or copy it to another database connection.
Creating Stored Procedure Builder Projects
When you open a new or existing Stored Procedure Builder project, the
Project window shows all the stored procedures that reside on the DB2
database to which you are connected. You can choose to filter stored
procedures to view the procedures based on their name or schema. A Stored
Procedure Builder project saves only connection information and stored
procedure objects that have not been successfully built to the database.
Debugging Stored Procedures
Using Stored Procedure Builder and the IBM Distributed Debugger (available
separately), you can remotely debug a stored procedure installed on a DB2
server. To debug a stored procedure, you build the stored procedure in debug
mode, add a debug entry for your client IP address, and run the stored
procedure.
Using Stored Procedure Builder, you can view all the stored procedures for
which you have the authority to change, add, or remove debug entries in the
stored procedure debug table. If you are a database administrator or the
creator of the selected stored procedure, you can grant authorization to other
users to debug the stored procedure.
The object extensions of DB2 enable you to realize many of the benefits of
object technology while building on the strengths of relational technology. In a
relational system, data types are used to describe the data in columns of
tables where the instances (or objects) of these data types are stored.
Operations on these instances are supported by means of operators or
functions that can be invoked anywhere that expressions are allowed.
With the object extensions of DB2, you can incorporate object-oriented (OO)
concepts and methodologies into your relational database.
Object-Relational Features of DB2
Some object-relational features that help you model your data in an
object-oriented fashion include the following:
Data types for very large objects
The data you may need to model in your system may be very large
and complex, such as text, audio, engineering data, or video. The
VARCHAR or VARGRAPHIC data types may not be large enough for
objects of this size. DB2 provides three data types to store these data
objects as strings of up to 2 gigabytes (GB) in size. The three data
types are: Binary Large Objects (BLOBs), single-byte Character Large
Objects (CLOBs), and Double-Byte Character Large Objects
(DBCLOBs).
For more information about the object-relational features of DB2, refer to:
v “Chapter 12. Working with Complex Objects: User-Defined Structured
Types” on page 283
v “Chapter 11. User-defined Distinct Types” on page 273
v “Chapter 13. Using Large Objects (LOBs)” on page 341
v “Chapter 14. User-Defined Functions (UDFs) and Methods” on page 365
v “Chapter 15. Writing User-Defined Functions (UDFs) and Methods” on
page 385
v “Chapter 16. Using Triggers in an Active DBMS” on page 473
Use the CREATE DISTINCT TYPE statement to define your new distinct type.
Detailed explanations for the statement syntax and all its options are found in
the SQL Reference.
Because DB2 does not support comparisons on CLOBs, you do not specify the
clause WITH COMPARISONS. You have specified a schema name different
from your authorization ID since you have DBADM authority, and you would
like to keep all distinct types and UDFs dealing with applicant forms in the
same schema.
The distinct types in the above examples are created using the same CREATE
DISTINCT TYPE statements in “Example: Money” on page 275. Note that the
above examples use check constraints. For information on check constraints
refer to the SQL Reference.
Example: Application Forms
Suppose you need to define a table where you keep the forms filled out by
applicants as follows:
CREATE TABLE APPLICATIONS
(ID SYSIBM.INTEGER,
NAME VARCHAR (30),
APPLICATION_DATE SYSIBM.DATE,
FORM PERSONAL.APPLICATION_FORM)
You have fully qualified the distinct type name because its qualifier is not the
same as your authorization ID and you have not changed the default function
path. Remember that whenever type and function names are not fully
qualified, DB2 searches through the schemas listed in the current function
path and looks for a type or function name matching the given unqualified
name. Because SYSIBM is always considered (if it has been omitted) in the
current function path, you can omit the qualification of built-in data types.
For example, if you execute SET CURRENT FUNCTION PATH = cheryl, the value of
the current function path special register is "CHERYL", which does not
include "SYSIBM". Even if the type CHERYL.INTEGER is not defined, the
statement CREATE TABLE FOO(COL1 INTEGER) still succeeds, because SYSIBM is
always searched when it is omitted from the path; COL1 is therefore of type
SYSIBM.INTEGER.
You are, however, allowed to fully qualify the built-in data types if you wish
to do so. Details about the use of the current function path are discussed in
the SQL Reference.
Strong typing is important to ensure that the instances of your distinct types
are correct. For example, if you have defined a function to convert US dollars
to Canadian dollars according to the current exchange rate, you do not want
this same function to be used to convert euros to Canadian dollars because it
will certainly return the wrong amount.
As a consequence of strong typing, DB2 does not allow you to write queries
that compare, for example, distinct type instances with instances of the source
type of the distinct type. For the same reason, DB2 will not let you apply
functions defined on other types to distinct types. If you want to compare
instances of distinct types with instances of another type, you have to cast the
instances of one or the other type. In the same sense, you have to cast the
distinct type instance to the type of the parameter of a function that is not
defined on a distinct type if you want to apply this function to a distinct type
instance.
Because you cannot compare US dollars with instances of the source type of
US dollars (that is, DECIMAL) directly, you have used the cast function
provided by DB2 to cast from DECIMAL to US dollars. You can also use the
other cast function provided by DB2 (that is, the one to cast from US dollars
back to DECIMAL).
At first glance, such a UDF may appear easy to write. However, C does not
support DECIMAL values. The distinct types representing different currencies
have been defined as DECIMAL. Your UDF will need to receive and return
DOUBLE values, since this is the only data type provided by C that allows
the representation of a DECIMAL value without losing the decimal precision.
Thus, your UDF should be defined as follows:
CREATE FUNCTION CDN_TO_US_DOUBLE(DOUBLE) RETURNS DOUBLE
EXTERNAL NAME '/u/finance/funcdir/currencies!cdn2us'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
NOT DETERMINISTIC
NO EXTERNAL ACTION
FENCED
The exchange rate between Canadian and U.S. dollars may change between
two invocations of the UDF, so you declare it as NOT DETERMINISTIC.
The question now is, how do you pass Canadian dollars to this UDF and get
U.S. dollars from it? The Canadian dollars must be cast to DECIMAL values.
The DECIMAL values must be cast to DOUBLE. You also need to have the
returned DOUBLE value cast to DECIMAL and the DECIMAL value cast to
U.S. dollars.
Such casts are performed automatically by DB2 anytime you define sourced
UDFs, whose parameter and return type do not exactly match the parameter
and return type of the source function. Therefore, you need to define two
sourced UDFs. The first brings the DOUBLE values to a DECIMAL
representation. The second brings the DECIMAL values to the distinct type.
That is, you define the following:
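The two sourced UDF definitions might look like the following sketch (the
function name CDN_TO_US_DEC and the DECIMAL precision are illustrative):

```sql
CREATE FUNCTION CDN_TO_US_DEC (DECIMAL(9,2)) RETURNS DECIMAL(9,2)
  SOURCE CDN_TO_US_DOUBLE (DOUBLE);

CREATE FUNCTION US_DOLLAR (CANADIAN_DOLLAR) RETURNS US_DOLLAR
  SOURCE CDN_TO_US_DEC (DECIMAL());
```

The first sourced UDF converts between DOUBLE and DECIMAL; the second maps
the DECIMAL values to and from the distinct types, so an invocation such as
US_DOLLAR(C1) triggers the whole chain of casts automatically.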
That is, C1 (in Canadian dollars) is cast to decimal which in turn is cast to a
double value that is passed to the CDN_TO_US_DOUBLE function. This function
accesses the exchange rate file and returns a double value (representing the
amount in U.S. dollars) that is cast to decimal, and then to U.S. dollars.
You want to know the total of sales in Germany for each product in the year
of 1994. You would like to obtain the total sales in US dollars:
SELECT PRODUCT_ITEM, US_DOLLAR (SUM (TOTAL))
FROM GERMAN_SALES
WHERE YEAR = 1994
GROUP BY PRODUCT_ITEM
You could not write SUM (us_dollar (total)), unless you had defined a SUM
function on US dollar in a manner similar to the above.
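Such a SUM function can be defined as a UDF sourced on the built-in SUM
function, along these lines (a sketch, assuming US_DOLLAR was created with
DECIMAL as its source type):

```sql
CREATE FUNCTION SUM (US_DOLLAR) RETURNS US_DOLLAR
  SOURCE SYSIBM.SUM (DECIMAL());
```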
Example: Assignments Involving Distinct Types
Suppose you want to store the form filled by a new applicant into the
database. You have defined a host variable containing the character string
value used to represent the filled form:
EXEC SQL BEGIN DECLARE SECTION;
SQL TYPE IS CLOB(32K) hv_form;
EXEC SQL END DECLARE SECTION;
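The INSERT statement can then assign the host variable directly to the
distinct type column, for example (the ID, name, and date values are
illustrative):

```sql
EXEC SQL INSERT INTO APPLICATIONS
  VALUES (134523, 'Peter Smith', CURRENT DATE, :hv_form);
```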
You do not explicitly invoke the cast function to convert the character string
to the distinct type personal.application_form because DB2 lets you assign
instances of the source type of a distinct type to targets having that distinct
type.
Example: Assignments in Dynamic SQL
If you want to use the same statement given in “Example: Assignments
Involving Distinct Types” in dynamic SQL, you can use parameter markers as
follows:
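A dynamic version of the statement can use a cast specification on the
parameter marker for the FORM column, along these lines:

```sql
INSERT INTO APPLICATIONS
  VALUES (?, ?, CURRENT DATE, CAST (? AS CLOB(32K)))
```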
You made use of DB2’s cast specification to tell DB2 that the type of the
parameter marker is CLOB(32K), a type that is assignable to the distinct type
column. Remember that you cannot declare a host variable of a distinct type
type, since host languages do not support distinct types. Therefore, you
cannot specify that the type of a parameter marker is a distinct type.
Example: Assignments Involving Different Distinct Types
Suppose you have defined two sourced UDFs on the built-in SUM function to
support SUM on US and Canadian dollars, similar to the UDF sourced on
euros in “Example: Sourced UDFs Involving Distinct Types” on page 280:
Now suppose your supervisor requests that you maintain the annual total
sales in US dollars of each product and in each country, in separate tables:
You explicitly cast the amounts in Canadian dollars and euros to US dollars
since different distinct types are not directly assignable to each other. You
cannot use the cast specification syntax because distinct types can only be cast
to their own source type.
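As a sketch of such an insert (the table name CANADIAN_SALES_TOTAL and its
column layout are illustrative), the US_DOLLAR cast function is applied
explicitly to the Canadian dollar totals:

```sql
INSERT INTO CANADIAN_SALES_TOTAL
  SELECT YEAR, PRODUCT_ITEM, US_DOLLAR (SUM (TOTAL))
  FROM CANADIAN_SALES
  GROUP BY YEAR, PRODUCT_ITEM;
```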
Example: Use of Distinct Types in UNION
Suppose you would like to provide your American users with a view
containing all the sales of every product of your company:
CREATE VIEW ALL_SALES AS
SELECT PRODUCT_ITEM, MONTH, YEAR, TOTAL
FROM US_SALES
UNION
SELECT PRODUCT_ITEM, MONTH, YEAR, US_DOLLAR (TOTAL)
FROM CANADIAN_SALES
UNION
SELECT PRODUCT_ITEM, MONTH, YEAR, US_DOLLAR (TOTAL)
FROM GERMAN_SALES
To create a type, you must specify the name of the type, its attribute names
and their data types, and, optionally, how you want the reference type for this
type to be represented in the system. Here is the SQL to create the
BusinessUnit_t type:
CREATE TYPE BusinessUnit_t AS
(Name VARCHAR(20),
Headcount INT)
REF USING INT
MODE DB2SQL;
The AS clause provides the attribute definitions associated with the type.
BusinessUnit_t is a type with two attributes: Name and Headcount. To create a
structured type, you must include the MODE DB2SQL clause in the CREATE
TYPE statement. For more information on the REF USING clause, see
“Reference Types and Their Representation Types” on page 287.
Structured types offer two major extensions beyond traditional relational data
types: the property of inheritance, and the capability of storing instances of a
structured type either as rows in a table, or as values in a column. The
following section briefly describes these features:
Inheritance
It is certainly possible to model objects such as people using
traditional relational tables and columns. However, structured types
offer an additional property of inheritance. That is, a structured type
can have subtypes that reuse all of its attributes and contain additional
attributes specific to the subtype. For example, the structured type
Person_t might contain attributes for Name, Age, and Address. A
subtype of Person_t might be Employee_t, that contains all of the
attributes Name, Age, and Address and in addition contains attributes
for SerialNum, Salary, and BusinessUnit.
v As a row in a table. To store objects as rows in a table, you define
the table using the structured type. Each column in the table derives
its name and data type from one of the attributes of the indicated
structured type. Such tables are known as typed tables.
v As a value in a column. To store objects in table columns, the
column is defined using the structured type as its type. The
following statement creates a Properties table that has a structured
type Address that is of the Address_t structured type:
CREATE TABLE Properties
(ParcelNum INT,
Photo BLOB(2K),
Address Address_t)
...
For example, a data model may need to represent a special type of employee
called a manager. Managers have more attributes than employees who are not
managers. The Manager_t type inherits the attributes defined for an employee,
but also is defined with some additional attributes of its own, such as a
special bonus attribute that is only available to managers. The type hierarchies
used for examples in this book are shown in Figure 8 on page 286. The type
hierarchy for Address_t is defined in “Inserting Structured Type Instances into
a Column” on page 313.
Figure 8. Type hierarchies: Person_t is the root type of one hierarchy, with subtypes Employee_t and Student_t; Employee_t in turn has subtypes Manager_t and Architect_t. BusinessUnit_t is the root of a separate hierarchy.
In Figure 8, the person type Person_t is the root type of the hierarchy. Person_t
is also the supertype of the types below it--in this case, the type named
Employee_t and the type named Student_t. The relationships among subtypes
and supertypes are transitive; in other words, the relationship between
subtype and supertype exists throughout the entire type hierarchy. So,
Person_t is also a supertype of types Manager_t and Architect_t.
The CREATE TYPE statement for type Person_t declares that Person_t is
INSTANTIABLE. For more information on declaring structured types using
the INSTANTIABLE or NOT INSTANTIABLE clauses, see “Additional
Properties of Structured Types” on page 295.
Person_t has three attributes: Name, Age and Address. Its two subtypes,
Employee_t and Student_t, each inherit the attributes of Person_t and also
have several additional attributes that are specific to their particular types. For
example, although both employees and students have serial numbers, the
format used for student serial numbers is different from the format used for
employee serial numbers.
Note: A typed table created from the Person_t type includes the column
Address of structured type Address_t. As with any structured type
column, you must define transform functions for the structured type of
that column. For information on defining transform functions, see
“Creating the Mapping to the Host Language Program: Transform
Functions” on page 321.
DB2 uses the reference type as the type of the object identifier column in
typed tables. The object identifier uniquely identifies a row object in the typed
table hierarchy. DB2 also uses reference types to store references to rows in
typed tables. You can use reference types to refer to each row object in the
table. For more information about using references, see “Using Reference
Types” on page 300. For more information on typed tables, see “Storing
Objects in Typed Tables” on page 291.
Chapter 12. Working with Complex Objects: User-Defined Structured Types 287
References are strongly typed. Therefore, you must have a way to use the
type in expressions. When you create the root type of a type hierarchy, you
can specify the base type for a reference with the REF USING clause of the
CREATE TYPE statement. The base type for a reference is called the
representation type. If you do not specify the representation type with the REF
USING clause, DB2 uses the default data type of VARCHAR(16) FOR BIT
DATA. The representation type of the root type is inherited by all its subtypes.
The REF USING clause is only valid when you define the root type of a
hierarchy. In the examples used throughout this section, the representation
type for the BusinessUnit_t type is INTEGER, while the representation type
for Person_t is VARCHAR(13).
DB2 also creates the function that does the inverse operation:
CREATE FUNCTION Person_t(VARCHAR(13))
RETURNS REF(Person_t)
You will use these cast functions whenever you need to insert a new value
into the typed table or when you want to compare a reference value to
another value.
DB2 also creates functions that let you compare reference types using the
following comparison operators: =, <>, <, <=, >, and >=. For more information
on comparison operators for reference types, refer to the SQL Reference.
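For example, because the Dept column of the Employee table holds references to BusinessUnit rows, and the representation type for BusinessUnit_t references is INTEGER, you might compare that column against a cast constant. The following is a sketch based on the tables used in this chapter:

SELECT Name
  FROM Employee
  WHERE Dept = BusinessUnit_t(1);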
To invoke a method on a structured type, use the method invocation operator:
'..'. For more information about method invocation, refer to the SQL Reference.
The method specification must be associated with the type before you issue
the CREATE METHOD statement. The following statement adds the method
specification for a method called calc_bonus to the Employee_t type:
ALTER TYPE Employee_t
ADD METHOD calc_bonus (rate DOUBLE)
RETURNS DECIMAL(7,2)
LANGUAGE SQL
CONTAINS SQL
NO EXTERNAL ACTION
DETERMINISTIC;
Once you have associated the method specification with the type, you can
define the behavior for the type by creating the method as either an external
method or an SQL-bodied method, according to the method specification. For
example, the following statement registers an SQL method called calc_bonus
that resides in the same schema as the type Employee_t:
CREATE METHOD calc_bonus (rate DOUBLE)
FOR Employee_t
RETURN SELF..salary * rate;
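Once the method body exists, you can invoke it with the method invocation operator. The following sketch assumes that DEREF(Oid) is used to obtain the Employee_t instance for each row of the Employee typed table; the bonus rate shown is purely illustrative:

SELECT Name, DEREF(Oid)..calc_bonus(0.05)
  FROM Employee;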
You can create as many methods named calc_bonus as you like, as long as
they have different numbers or types of parameters, or are defined for types
in different type hierarchies. In other words, you cannot create another
method named calc_bonus for Architect_t that has the same parameter types
and same number of parameters.
Note: DB2 does not currently support dynamic dispatch. This means that you
cannot declare a method for a type, and then redefine the method for a
subtype using the same number of parameters. As a workaround, you
can use the TYPE predicate to determine the dynamic type and then
use the TREAT AS clause to call a different method for each dynamic
type. For an example of transform functions that handle subtypes, see
“Retrieving Subtype Data from DB2 (Bind Out)” on page 332.
For more information about registering, writing, and invoking methods, see
“Chapter 14. User-Defined Functions (UDFs) and Methods” on page 365 and
“Chapter 15. Writing User-Defined Functions (UDFs) and Methods” on
page 385.
When objects are stored as rows in a table, each column of the table contains
one attribute of the object. You could store an instance of a person, for
example, in a table that contains a column for name and a column for age.
Here is an example of a CREATE TABLE statement for storing instances of
Person.
CREATE TABLE Person OF Person_t
(REF IS Oid USER GENERATED)
To insert an instance of Person into the table, you could use the following
syntax:
INSERT INTO Person (Oid, Name, Age)
VALUES(Person_t('a'), 'Andrew', 29);
Table 10. Person typed table
Oid Name Age Address
a Andrew 29
Your program accesses attributes of the object by accessing the columns of the
typed table:
CREATE TABLE Employee OF Employee_t UNDER Person
INHERIT SELECT PRIVILEGES
(SerialNum WITH OPTIONS NOT NULL,
Dept WITH OPTIONS SCOPE BusinessUnit);
And, again, an insert into the Employee table looks like this:
INSERT INTO Employee (Oid, Name, Age, SerialNum, Salary)
VALUES (Employee_t('s'), 'Susan', 39, 24001, 37000.48)
Table 12. Employee typed subtable
Oid Name Age Address SerialNum Salary Dept
s Susan 39 24001 37000.48
If you execute the following query, the information for Susan is returned:
SELECT *
FROM Employee
WHERE Name='Susan';
The interesting thing about these two tables is that you can access instances of
both employees and people just by executing your SQL statement on the
Person table. This feature is called substitutability, and is discussed in
“Additional Properties of Structured Types” on page 295. By executing a query
on the table that contains instances that are higher in the type hierarchy, you
automatically get instances of types that are lower in the hierarchy. In other
words, the Person table logically looks like this to SELECT, UPDATE, and
DELETE statements:
Table 13. Person table contains Person and Employee instances
Oid Name Age Address
a Andrew 30 (null)
s Susan 39 (null)
If you execute the following query, you get an object identifier and Person_t
information about both Andrew (a person) and Susan (an employee):
SELECT *
FROM Person;
(ref) 1 Toy
(ref) 2 Shoe
(ref) 3 Finance
(ref) 4 Quality
...
For example, the following query on the Employee table uses the dereference
operator to tell DB2 to follow the path from the Dept column to the
BusinessUnit table. The dereference operator returns the value of the Name
column:
SELECT Name, Salary, Dept->Name
FROM Employee;
employees, departments, and so on) in typed tables, but those objects might
also have attributes that are best modelled using a structured type.
For example, assume that your application has the need to access certain parts
of an address. Rather than store the address as an unstructured character
string, you can store it as a structured object as shown in Figure 10.
When objects are stored as column values, the attributes are not externally
represented as they are with objects stored in rows of tables. Instead, you
must use methods to manipulate their attributes. DB2 generates both observer
methods to return attributes, and mutator methods to change attributes. The
following example uses one observer method and two mutator methods, one
for the Number attribute and one for the Street attribute, to change an address:
UPDATE Employee
SET Address=Address..Number('4869')..Street('Appletree')
WHERE Name='Franky'
AND Address..State='CA';
In the preceding example, the SET clause of the UPDATE statement invokes
the Number and Street mutator methods to update attributes of the instances
of type Address_t. The WHERE clause restricts the operation of the update
statement with two predicates: an equality comparison for the Name column,
and an equality comparison that invokes the State observer method of the
Address column.
Using Structured Types in Typed Tables
Creating a Typed Table
Typed tables are used to actually store instances of objects whose
characteristics are defined with the CREATE TYPE statement. You can create a
typed table using a variant of the CREATE TABLE statement. You can also
create a hierarchy of typed tables that is based on a hierarchy of structured
types. To store instances of subtypes in database tables, you must create a
corresponding table hierarchy.
Here is the SQL to create the tables in the Person table hierarchy:
CREATE TABLE Person OF Person_t
(REF IS Oid USER GENERATED);
Rows in the Employee subtable, therefore, will have a total of seven columns:
Oid, Name, Age, Address, SerialNum, Salary, and Dept.
Revoking SELECT privileges from the subtable limits users who have SELECT
privileges only on the supertable to seeing the supertable columns of the rows of
the subtable. Users can only operate directly on a subtable if they hold the
necessary privilege on that subtable. So, to prevent users from selecting the
bonuses of the managers in the subtable, revoke the SELECT privilege on that
table and grant it only to those users for whom this information is necessary.
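Applied to the examples in this chapter, that might look like the following sketch, in which hr_admin is a hypothetical authorization ID:

REVOKE SELECT ON Manager FROM PUBLIC;
GRANT SELECT ON Manager TO USER hr_admin;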
where column-name represents the name of the column in the CREATE TABLE
or ALTER TABLE statement, and column-options represents the options defined
for the column.
For example, to prevent users from inserting nulls into a SerialNum column,
specify the NOT NULL column option as follows:
(SerialNum WITH OPTIONS NOT NULL)
The clause (Dept WITH OPTIONS SCOPE BusinessUnit) in the CREATE TABLE
statement for the Employee table declares that the Dept column of this table
and its subtables have a scope of BusinessUnit. This means that the reference
values in this column of the Employee table are intended to refer to objects
in the BusinessUnit table.
For example, the following query on the Employee table uses the dereference
operator to tell DB2 to follow the path from the Dept column to the
BusinessUnit table. The dereference operator returns the value of the Name
column:
SELECT Name, Salary, Dept->Name
FROM Employee;
For more information about references and scoping references, see “Using
Reference Types” on page 300.
Populating a Typed Table
After creating the structured types in the previous examples, and after
creating the corresponding tables and subtables, the structure of your database
looks like Figure 11 on page 299:
Figure 11. The Person table hierarchy: the root table Person has columns (Oid, Name, Age, Address); its subtables are Employee, which adds (SerialNum, Salary, Dept), and Student, which adds (SerialNum, GPA); Employee in turn has the subtables Manager, which adds (Bonus), and Architect, which adds (StockOption).
When the hierarchy is established, you can use the INSERT statement, as
usual, to populate the tables. The only difference is that you must remember
to populate the object identifier columns and, optionally, any additional
attributes of the objects in each table or subtable. Because the object identifier
column is a REF type, which is strongly typed, you must cast the
user-provided object identifier values, using the cast function that the system
generated for you when you created the structured type.
INSERT INTO BusinessUnit (Oid, Name, Headcount)
VALUES(BusinessUnit_t(1), 'Toy', 15);
INSERT INTO Student (Oid, Name, Age, SerialNum, GPA)
VALUES(Student_t('h'), 'Helen', 20, '10357', 3.5);
INSERT INTO Manager (Oid, Name, Age, SerialNum, Salary, Dept, Bonus)
VALUES(Manager_t('i'), 'Iris', 35, 251, 55000, BusinessUnit_t(1), 12000);
INSERT INTO Manager (Oid, Name, Age, SerialNum, Salary, Dept, Bonus)
VALUES(Manager_t('k'), 'Ken', 55, 482, 105000, BusinessUnit_t(2), 48000);
INSERT INTO Architect (Oid, Name, Age, SerialNum, Salary, Dept, StockOption)
VALUES(Architect_t('l'), 'Leo', 35, 661, 92000, BusinessUnit_t(2), 20000);
The previous example does not insert any addresses. For information about
how to insert structured type values into columns, see “Inserting Rows that
Contain Structured Type Values” on page 314.
When you insert rows into a typed table, the first value in each inserted row
must be the object identifier for the data being inserted into the tables. Also,
just as with non-typed tables, you must provide data for all columns that are
defined as NOT NULL. Finally, notice that any reference-valued expression of
the appropriate type can be used to initialize a reference attribute. In the
previous examples, the Dept reference of the employees is input as an
appropriately type-cast constant. However, you can also obtain the reference
using a subquery, as shown in the following example:
INSERT INTO Architect (Oid, Name, Age, SerialNum, Salary, Dept, StockOption)
VALUES(Architect_t('m'), 'Brian', 7, 882, 112000,
(SELECT Oid FROM BusinessUnit WHERE name = 'Toy'), 30000);
create one typed table for parts and one typed table for suppliers. To show
the reference type definitions, the sample also includes the statements used to
create the types:
CREATE TYPE Company_t AS
(name VARCHAR(30),
location VARCHAR(30))
MODE DB2SQL ;
The Parts typed table is based on the Part_t type, and the Supplier typed table is based on the Company_t type.
You can use scoped references to write queries that, without scoped
references, would have to be written as outer joins or correlated subqueries.
For more information, see “Queries that Dereference References” on page 307.
The OF clause in the CREATE VIEW statement tells DB2 to base the columns
of the view on the attributes of the indicated structured type. In this case, DB2
bases the columns of the view on the BusinessUnit_t structured type.
The MODE DB2SQL clause specifies the mode of the typed view. This is the
only valid mode currently supported.
The REF IS... clause is identical to that of the typed CREATE TABLE
statement. It provides a name for the object identifier column of the view
(VObjectID in this case), which is the first column of the view. If you create a
typed view on a root type, you must specify an object identifier column for
the view. If you create a typed view on a subtype, your view can inherit the
object identifier column.
The USER GENERATED clause specifies that the initial value for the object
identifier column must be provided by the user when inserting a row. Once
inserted, the object identifier column cannot be updated.
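Putting these clauses together, a typed view definition might look like the following sketch. The untyped base table BU and its columns are hypothetical; the object identifier is produced by the BusinessUnit_t cast function, because the representation type for BusinessUnit_t references is INTEGER:

CREATE VIEW VBusinessUnit OF BusinessUnit_t
  MODE DB2SQL
  (REF IS VObjectID USER GENERATED)
  AS SELECT BusinessUnit_t(Id), Name, Headcount
     FROM BU;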
The body of the view, which follows the keyword AS, is a SELECT statement
that determines the content of the view. The column-types returned by this
SELECT statement must be compatible with the column-types of the typed
view, including the initial object identifier column.
The two CREATE TYPE statements create the structured types that are needed
to create the object view hierarchy for this example.
The first typed CREATE VIEW statement above creates the root view of the
hierarchy, VPerson, and is very similar to the VBusinessUnit view definition.
The difference is the use of ONLY(Person) to ensure that only the rows in the
Person table hierarchy that are in the Person table, and not in any subtable,
are included in the VPerson view. This ensures that the Oid values in VPerson
are unique compared with the Oid values in VEmployee. The second CREATE
VIEW statement creates a subview VEmployee under the view VPerson. As was
the case for the UNDER clause in the CREATE TABLE...UNDER statement,
the UNDER clause establishes the view hierarchy. You must create a subview
in the same schema as its superview. Like typed tables, subviews inherit
columns from their superview. Rows in the VEmployee view inherit the
columns VObjectID and Name from VPerson and have the additional columns
Salary and Dept associated with the type VEmployee_t.
The INHERIT SELECT PRIVILEGES clause has the same effect when you
issue a CREATE VIEW statement as when you issue a typed CREATE TABLE
statement. For more information on the INHERIT SELECT PRIVILEGES
clause, see “Indicating that SELECT Privileges are Inherited” on page 297.
If a view has a reference column, like the Dept column of the VEmployee view,
you must associate a scope with the column to use the column in SQL
dereference operations. If you do not specify a scope for the reference column
of the view and the underlying table or view column is scoped, then the
scope of the underlying column is passed on to the reference column of the
view. You can explicitly assign a scope to the reference column of the view by
using the WITH OPTIONS clause. In the previous example, the Dept column
of the VEmployee view receives the VBusinessUnit view as its scope. If the
underlying table or view column does not have a scope, and no scope is
explicitly assigned in the view definition, or no scope is assigned with an
ALTER VIEW statement, the reference column remains unscoped.
There are several important rules associated with restrictions on the queries
for typed views found in the SQL Reference that you should read carefully
before attempting to create and use a typed view.
Dropping a User-Defined Type (UDT) or Type Mapping
You can drop a user-defined type (UDT) or type mapping using the DROP
statement. For more information on type mappings, see “Working with Data
Type Mappings” on page 569. You cannot drop a UDT if it is used:
v In a column definition for an existing table or view.
v As the type of an existing typed table or typed view (structured type).
v As the supertype of another structured type.
You cannot drop a default type mapping; you can only override it by creating
another type mapping.
If you have created a transform for a UDT, and you plan to drop that UDT,
consider dropping the associated transform. To drop a transform, issue a
DROP TRANSFORM statement. For the complete syntax of the DROP
TRANSFORM statement, refer to the SQL Reference. Note that you can only
drop user-defined transforms. You cannot drop built-in transforms or their
associated group definitions.
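For example, assuming Address_t is no longer used by any table, view, column, or subtype, you might drop its user-defined transform group and then the type itself. This is a sketch only; refer to the SQL Reference for the complete syntax of both statements:

DROP TRANSFORM func_group FOR Address_t;
DROP TYPE Address_t;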
Altering or Dropping a View
The ALTER VIEW statement modifies an existing view by altering a reference
type column to add a scope. Any other changes you make to a view require
that you drop and then re-create the view.
When altering the view, the scope must be added to an existing reference type
column that does not already have a scope defined. Further, the column must
not be inherited from a superview.
The data type of the column-name in the ALTER VIEW statement must be
REF (type of the typed table name or typed view name).
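For example, to add a scope to the Dept reference column of the VEmployee view, you might issue a statement like the following sketch:

ALTER VIEW VEmployee
  ALTER COLUMN Dept
  ADD SCOPE VBusinessUnit;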
Refer to the SQL Reference for additional information on the ALTER VIEW
statement.
Any views that are dependent on the dropped view become inoperative. For
more information on inoperative views, refer to the “Recovering Inoperative
Views” section of the Administration Guide.
Other database objects such as tables and indexes will not be affected
although packages and cached dynamic statements are marked invalid. For
more information, refer to the “Statement Dependencies” section of the
Administration Guide.
For more information on dropping and creating views, refer to the SQL
Reference.
Querying a Typed Table
If you have the required SELECT authority, you can query a typed table in the
same way that you query non-typed tables. The query returns the requested
columns from the qualifying rows from the target of the SELECT and all of its
subtables. For example, the following query on the data in the Person table
hierarchy returns the names and ages of all people; that is, all rows in the
Person table and its subtables. For information on writing a similar query if
one of the columns is a structured type column, see “Retrieving and
Modifying Structured Type Values” on page 316.
SELECT Name, Age
FROM Person;
The following query uses the dereference operator to obtain the Name column
from the BusinessUnit table:
SELECT Name, Salary, Dept->Name
FROM Employee
Ken 105000 Shoe
Leo 92000 Shoe
Brian 112000 Toy
Susan 37000.48 ---
The preceding example uses the dereference operator to return the value of
Name from the Employee table, and invokes the DEREF function to return the
dynamic type for the instance of Employee_t.
Authorization requirement: To use the DEREF function, you must have SELECT
authority on every table and subtable in the referenced portion of the table
hierarchy. In the above query, for example, you need SELECT privileges on
the Employee, Manager, and Architect typed tables.
Additional Query Specification Techniques
To protect the security of the data, the use of ONLY requires the SELECT
privilege on every subtable of Employee.
You can also use the ONLY clause to restrict the operation of an UPDATE or
DELETE statement to the named table. That is, the ONLY clause ensures that
the operation does not occur on any subtables of that named table.
For example, the following query returns people who are greater than 35
years old, and who are either managers or architects:
SELECT Name
FROM Employee E
WHERE E.Age > 35 AND
DEREF(E.Oid) IS OF (Manager_t, Architect_t);
You might use OUTER, for example, when you want to see information about
people who tend to achieve above the norm. The following query returns
information about people in the Person table hierarchy who have either a
high salary (Salary) or a high grade point average (GPA):
SELECT *
FROM OUTER(Person) P
WHERE P.Salary > 200000
OR P.GPA > 3.95 ;
The use of OUTER requires the SELECT privilege on every subtable or view
of the referenced table because all of their information is exposed through its
usage.
Suppose that your application needs to see not just the attributes of these high
achievers, but also the most specific type of each one. You can do this in a
single query that returns the name of the type along with the attributes.
Because the Address column of the Person typed table contains structured
types, you would have to define additional functions and issue additional
SQL to return the data from that column. For more information on returning
data from a structured type column, see “Retrieving and Modifying
Structured Type Values” on page 316. Assuming you perform these additional
steps, the preceding query returns the following output, where Additional
Attributes includes GPA and Salary:
1 OID NAME Additional Attributes
------------------ ------------- -------------------- ...
PERSON_T a Andrew ...
PERSON_T b Bob ...
PERSON_T c Cathy ...
EMPLOYEE_T d Dennis ...
EMPLOYEE_T e Eva ...
EMPLOYEE_T f Franky ...
MANAGER_T i Iris ...
ARCHITECT_T l Leo ...
EMPLOYEE_T s Susan ...
Note that you must always provide the clause USER GENERATED.
An INSERT statement to insert a row into the typed table, then, might look
like this:
INSERT INTO BusinessUnit (Oid, Name, Headcount)
VALUES(BusinessUnit_t(GENERATE_UNIQUE( )), 'Toy', 15);
To insert an employee that belongs to the Toy department, you can use a
statement like the following, which issues a subselect to retrieve the value of
the object identifier column from the BusinessUnit table, casts the value to the
BusinessUnit_t type, and inserts that value into the Dept column:
INSERT INTO Employee (Oid, Name, Age, SerialNum, Salary, Dept)
VALUES(Employee_t('d'), 'Dennis', 26, 105, 30000,
BusinessUnit_t(SELECT Oid FROM BusinessUnit WHERE Name='Toy'));
Figure 15 shows the type hierarchy used as an example in this section. The
root type is Address_t, which has three subtypes, each with an additional
attribute that reflects some aspect of how addresses are formed in that
country.
Figure 15. The address type hierarchy: Address_t, with attributes (Street, Number, City, State), is the root type.
Defining Tables with Structured Type Columns
Unless you are concerned with how structured types are laid out in the data
record, there is no additional syntax for creating tables with columns of
structured types. For example, the following statement adds a column of
Address_t type to a Customer_List untyped table:
ALTER TABLE Customer_List
ADD COLUMN Address Address_t;
If you are concerned with how structured types are laid out in the data
record, you can use the INLINE LENGTH clause in the CREATE TYPE
statement to indicate the maximum size of an instance of a structured type
column to store inline with the rest of the values in the row. For more
information on the INLINE LENGTH clause, refer to the CREATE TYPE
(Structured) statement in the SQL Reference.
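A sketch of the clause follows. The attribute data types are taken from the US_addr_t constructor example later in this chapter, and the length value is purely illustrative:

CREATE TYPE Address_t AS
     (Street VARCHAR(30),
      Number CHAR(15),
      City VARCHAR(30),
      State VARCHAR(20))
     INLINE LENGTH 300
     MODE DB2SQL;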
Defining Types with Structured Type Attributes
A type can be created with a structured type attribute, or it can be altered
(before it is used) to add or drop such an attribute. For example, the following
CREATE TYPE statement contains an attribute of type Address_t:
CREATE TYPE Person_t AS
(Name VARCHAR(20),
Age INT,
Address Address_t)
REF USING VARCHAR(13)
MODE DB2SQL;
Person_t can be used as the type of a table, the type of a column in a regular
table, or as an attribute of another structured type.
Inserting Rows that Contain Structured Type Values
When you create a structured type, DB2 automatically generates a constructor
method for the type, and generates mutator and observer methods for the
attributes of the type. You can use these methods to create instances of
structured types, and insert these instances into a column of a table.
Assume that you want to add a new row to the Employee typed table, and
that you want that row to contain an address. Just as with built-in data types,
you can add this row using INSERT with the VALUES clause. However, when
you specify the value to insert into the address, you must invoke the
system-provided constructor function to create the value:
INSERT INTO Employee (Oid, Name, Age, SerialNum, Salary, Dept, Address)
VALUES(Employee_t('m'), 'Marie', 35, 005, 55000, BusinessUnit_t(2),
US_addr_t( ) ... );
To avoid having to explicitly call the mutator methods for each attribute of a
structured type every time you create an instance of the type, consider
defining your own SQL-bodied constructor function that initializes all of the
attributes. The following example contains the declaration for an SQL-bodied
constructor function for the US_addr_t type:
CREATE FUNCTION US_addr_t
(street Varchar(30),
number Char(15),
city Varchar(30),
state Varchar(20),
zip Char(10))
RETURNS US_addr_t
LANGUAGE SQL
RETURN US_addr_t()..street(street)..number(number)
..city(city)..state(state)..zip(zip);
Retrieving and Modifying Structured Type Values
There are several ways that applications and user-defined functions can access
data in structured type columns. If you want to treat an object as a single
value, you must first define transform functions, which are described in
“Creating the Mapping to the Host Language Program: Transform Functions”
on page 321. Once you define the correct transform functions, you can select
a structured object much as you can any other value:
SELECT Name, Dept, Address
FROM Employee
WHERE Salary > 20000;
Retrieving Attributes
To explicitly access individual attributes of an object, invoke the DB2 built-in
observer methods on those attributes. Using the observer methods, you can
retrieve the attributes individually rather than treating the object as a single
value.
The following example accesses data in the Address column by invoking the
observer methods on Address_t, the defined static type for the Address
column:
SELECT Name, Dept, Address..street, Address..number, Address..city,
Address..state
FROM Employee
WHERE Salary > 20000;
Note: DB2 enables you to invoke methods that take no parameters using
either <type-name>..<method-name>() or <type-name>..<method-name>,
where type-name represents the name of the structured type, and
method-name represents the name of the method that takes no
parameters.
You can also use observer methods to select each attribute into a host variable,
as follows:
SELECT Name, Dept, Address..street, Address..number, Address..city,
Address..state
INTO :name, :dept, :street, :number, :city, :state
FROM Employee
WHERE Empno = '000250';
Note: You can only use the preceding approach to determine the subtype of a
structured type when the attributes of the subtype are all of the same
type, or can be cast to the same type. In the previous example, zip,
family_name, and neighborhood are all VARCHAR or CHAR types, and
can be cast to the same type.
For more information about the syntax of the TREAT expression or the TYPE
predicate, refer to the SQL Reference.
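The following sketch combines the two; it assumes, as in the address examples later in this chapter, that US_addr_t is a subtype of Address_t with an additional zip attribute:

SELECT Name, TREAT(Address AS US_addr_t)..zip
  FROM Employee
  WHERE Address IS OF (US_addr_t);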
Modifying Attributes
To change an attribute of a structured column value, invoke the mutator
method for the attribute you want to change. For example, to change the
street attribute of an address, you can invoke the mutator method for street
with the value to which it will be changed. The returned value is an address
with the new value for street. The following example invokes a mutator
method for the attribute named street to update an address type in the
Employee table:
UPDATE Employee
SET Address = Address..street('Bailey')
WHERE Address..street = 'Bakely';
The following example performs the same update as the previous example,
but instead of naming the structured column for the update, the SET clause
directly accesses the mutator method for the attribute named street:
UPDATE Employee
SET Address..street = 'Bailey'
WHERE Address..street = 'Bakely';
Returning Information About the Type
As described in “Other Type-related Built-in Functions” on page 308, you can
use built-in functions to return the name, schema, or internal type ID of a
particular type. The following statement returns the exact type of the address
value associated with the employee named 'Iris':
SELECT TYPE_NAME(Address)
FROM Employee
WHERE Name='Iris';
Before you can use a transform function, you must use the CREATE
TRANSFORM statement to associate the transform function with a group
name and a type. The CREATE TRANSFORM statement identifies one or
more existing functions and causes them to be used as transform functions.
The following example names two pairs of functions to be used as transform
functions for the type Address_t. The statement creates two transform groups,
func_group and client_group, each of which consists of a FROM SQL
transform and a TO SQL transform.
CREATE TRANSFORM FOR Address_t
func_group ( FROM SQL WITH FUNCTION addresstofunc,
TO SQL WITH FUNCTION functoaddress )
client_group ( FROM SQL WITH FUNCTION stream_to_client,
TO SQL WITH FUNCTION stream_from_client ) ;
You can associate additional functions with the Address_t type by adding
more groups on the CREATE TRANSFORM statement. To alter the transform
definition, you must reissue the CREATE TRANSFORM statement with the
additional functions. For example, you might want to customize your client
functions for different host language programs, such as having one for C and
one for Java. To optimize the performance of your application, you might
want your transforms to work only with a subset of the object attributes. Or
you might want one transform that uses VARCHAR as the client
representation for an object and one transform that uses BLOB.
The names of your transform groups should generally reflect the function
they perform without relying on type names or in any way reflecting the logic
of the transform functions, which will likely be very different across the
different types. For example, you could use the name func_group or
object_functions for any group in which your TO and FROM SQL function
transforms are defined. You could use the name client_group or
program_group for a group that contains TO and FROM SQL client transforms.
In the following example, the Address_t and Polygon types use very different
transforms, but they use the same function group names:
CREATE TRANSFORM FOR Address_t
func_group (TO SQL WITH FUNCTION functoaddress,
FROM SQL WITH FUNCTION addresstofunc );
Once you set the transform group to func_group in the appropriate situation,
as described in “Where Transform Groups Must Be Specified” on page 320,
DB2 invokes the correct transform function whenever you bind in or bind out
an address or polygon.
Restriction: Do not begin a transform group name with the string 'SYS'; this
prefix is reserved for use by DB2.
When you define an external function or method and you do not specify a
transform group name, DB2 attempts to use the name DB2_FUNCTION, and
assumes that that group name was specified for the given structured type. If
you do not specify a group name when you precompile a client program that
references a given structured type, DB2 attempts to use a group name called
DB2_PROGRAM, and again assumes that the group name was defined for
that type.
For more information on the PRECOMPILE and BIND commands, refer to the
Command Reference.
Creating the Mapping to the Host Language Program: Transform
Functions
An application cannot directly select an entire object, although, as described in
“Retrieving Attributes” on page 316, you can select individual attributes of an
object into an application. An application usually does not directly insert an
entire object, although it can insert the result of an invocation of the
constructor function:
INSERT INTO Employee(Address) VALUES (Address_t());
Most likely, you will use different transforms for passing objects to external
routines (UDFs and methods) than for passing objects to client applications.
When you pass an object to an external routine, you decompose the object
and pass it to the routine as a list of parameters. For client applications,
you must instead turn the object into a single built-in type, such as a
BLOB; this process is called encoding the object. Often these two types of
transforms are used together.
Note: The following topics cover the simple case in which the application
always receives a known exact type, such as Address_t. These topics do
not describe the likely scenario in which an external routine or a client
program may receive Address_t, Brazil_addr_t, Germany_addr_t, or
US_addr_t. However, you must understand the basic process before
attempting to apply that basic process to the more complex case, in
which the external routine or client needs to handle dynamically any
type or its subtypes. For information about how to dynamically handle
subtype instances, see “Retrieving Subtype Data from DB2 (Bind Out)”
on page 332.
The following example issues an SQL statement that invokes an external UDF
called MYUDF that takes an address as an input parameter, modifies the address
(to reflect a change in street names, for example), and returns the modified
address:
SELECT MYUDF(Address)
FROM PERSON;
1. Your FROM SQL transform function decomposes the structured object into
an ordered set of its base attributes. This enables the routine to receive the
object as a simple list of parameters whose types are basic built-in data
types. For example, assume that you want to pass an address object to an
external routine. The attributes of Address_t are VARCHAR, CHAR,
VARCHAR, and VARCHAR, in that order. The FROM SQL transform for
passing this object to a routine must accept this object as an input and
return VARCHAR, CHAR, VARCHAR, and VARCHAR. These outputs are
then passed to the external routine as four separate parameters, with four
corresponding null indicator parameters, and a null indicator for the
structured type itself. The order of parameters in the FROM SQL function
does not matter, as long as all functions that return Address_t types use
the same order. For more information, see “Passing Structured Type
Parameters to External Routines” on page 325.
2. Your external routine accepts the decomposed address as its input
parameters, does its processing on those values, and then returns the
attributes as output parameters.
3. Your TO SQL transform function must turn the VARCHAR, CHAR,
VARCHAR, and VARCHAR parameters that are returned from MYUDF back
into an object of type Address_t. In other words, the TO SQL transform
function must take the four parameters, and all of the corresponding null
indicator parameters, as output values from the routine. The TO SQL
function constructs the structured object and then mutates the attributes
with the given values.
Note: If MYUDF also returns a structured type, another transform function must
transform the resultant structured type when the UDF is used in a
SELECT clause. To avoid creating another transform function, you can
use SELECT statements with observer methods, as in the following
example:
SELECT Name
FROM Employee
WHERE MYUDF(Address)..city LIKE 'Tor%';
The TO SQL transform simply does the opposite of the FROM SQL function.
It takes as input the list of parameters from a routine and returns an instance
of the structured type. To construct the object, the following TO SQL
function invokes the constructor function for the Address_t type:
CREATE FUNCTION functoaddress (street VARCHAR(30), number CHAR(15),
city VARCHAR(30), state VARCHAR(10)) 1
RETURNS Address_t 2
LANGUAGE SQL
CONTAINS SQL
RETURN
Address_t()..street(street)..number(number)
..city(city)..state(state) 3
parameter and a null indicator for the structured type itself. The following
example accepts the structured type Address_t and returns a base type:
CREATE FUNCTION stream_to_client (Address_t)
RETURNS VARCHAR(150) ...
The external routine must accept the null indicator for the instance of the
Address_t type (address_ind) and one null indicator for each of the attributes
of the Address_t type. There is also a null indicator for the VARCHAR output
parameter. The following code represents the C language function headers for
the functions which implement the UDFs:
void SQL_API_FN stream_to_client(
/*decomposed address*/
SQLUDF_VARCHAR *street,
SQLUDF_CHAR *number,
SQLUDF_VARCHAR *city,
SQLUDF_VARCHAR *state,
SQLUDF_VARCHAR *output,
/*null indicators for type attributes*/
SQLUDF_NULLIND *street_ind,
SQLUDF_NULLIND *number_ind,
SQLUDF_NULLIND *city_ind,
SQLUDF_NULLIND *state_ind,
/*null indicator for instance of the type*/
SQLUDF_NULLIND *address_ind,
/*null indicator for the VARCHAR output*/
SQLUDF_NULLIND *out_ind,
SQLUDF_TRAIL_ARGS)
The following code represents the C language headers for routines which
implement the UDFs. The arguments include variables and null indicators for
the attributes of the decomposed structured type and a null indicator for each
instance of a structured type, as follows:
void SQL_API_FN myudf(
SQLUDF_INTEGER *INT,
/* Decompose st1 input */
For example, assume that you want to execute the following SQL statement:
...
SQL TYPE IS Address_t AS VARCHAR(150) addhv;
...
Figure 17 on page 328 shows the process of binding out that address to the
client program.
SELECT Address INTO :addhv FROM Person WHERE ...;

[Figure 17: the address is transformed on the server and received by the
client as a single VARCHAR host variable.]
1. The object must first be passed to the FROM SQL function transform to
decompose it into its base type attributes.
2. Your FROM SQL client transform must encode the value into a single
built-in type, such as a VARCHAR or BLOB. This enables the client
program to receive the entire value in a single host variable.
This encoding can be as simple as copying the attributes into a contiguous
area of storage (providing for required alignments as necessary). Because
the encoding and decoding of attributes cannot generally be achieved with
SQL, client transforms are usually written as external UDFs.
For information about processing data between platforms, see “Data
Conversion Considerations” on page 330.
3. The client program processes the value.
Figure 18 on page 329 shows the reverse process of passing the address back
to the database.
[Figure 18: the client's encoded value is transformed back into an Address_t
on the server by the TO SQL transforms.]
1. The client application encodes the address into a format expected by the
TO SQL client transform.
2. The TO SQL client transform decomposes the single built-in type into a set
of its base type attributes, which is used as input to the TO SQL function
transform.
3. The TO SQL function transform constructs the address and returns it to
the database.
TRANSFORM GROUP func_group
EXTERNAL NAME 'addressudf!address_from_sql_to_client'
NOT VARIANT
NO EXTERNAL ACTION
NOT FENCED
NO SQL
PARAMETER STYLE DB2SQL;
Client Transform for Binding in from a Client: The following DDL registers a
function that takes the VARCHAR-encoded object from the client, decomposes
it into its various base type attributes, and passes it to the TO SQL function
transform.
CREATE FUNCTION to_sql_from_client (VARCHAR (150))
RETURNS Address_t
LANGUAGE C
TRANSFORM GROUP func_group
EXTERNAL NAME 'addressudf!address_to_sql_from_client'
NOT VARIANT
NO EXTERNAL ACTION
NOT FENCED
NO SQL
PARAMETER STYLE DB2SQL;
How does DB2 know which function transform to invoke? Notice that the DDL for
both to_sql_from_client and from_sql_to_client includes a clause called
TRANSFORM GROUP. This clause tells DB2 which set of transforms to use in
processing the address type in those functions. For more information, see
“Associating Transforms with a Type” on page 318.
Note: As much as possible, you should write transform functions so that they
correctly handle all of the complexities associated with the transfer of
data between server and client. When you design your application,
consider the specific requirements of your environment and evaluate
the tradeoffs between complete generality and simplicity. For example,
if you know that both the database server and all of its clients run in
an AIX environment and use the same code page, you could decide to
ignore the previously discussed considerations, because no conversions
are currently required. However, if your environment changes in the
future, you may have to exert considerable effort to revise your original
design to correctly handle data conversion.
Transform Function Summary
Table 15 is intended to help you determine what transform functions you
need, depending on whether you are binding out to an external routine or a
client application.
Table 15. Characteristics of transform functions

Characteristic        Exchanging values with an        Exchanging values with a
                      external routine                 client application
--------------------  -------------------------------  -------------------------------
Transform direction   FROM SQL        TO SQL           FROM SQL        TO SQL
What is being         Routine         Routine result   Output host     Input host
transformed           parameter                        variable        variable
Behavior              Decomposes      Constructs       Encodes         Decodes
Transform function    Structured      Row of built-in  Structured      One built-in
parameters            type            types            type            type
Transform function    Row of built-in Structured type  One built-in    Structured type
result                types (probably                  type
                      attributes)
Dependent on another  No              No               FROM SQL        TO SQL
UDF transform?                                         transform       transform
When is the           At the time the UDF is           Static: precompile time
transform group       registered                       Dynamic: special register
specified?
Are there data        No                               Yes
conversion
considerations?
Note: Although not generally the case, client type transforms can actually be
written in SQL if any of the following are true:
- The structured type contains only one attribute.
- The encoding and decoding of the attributes into a built-in type can
be achieved by some combination of SQL operators or functions.
In these cases, you do not have to depend on function transforms to
exchange the values of a structured type with a client application.
void SQL_API_FN address_to_client(
/* decomposed address */
SQLUDF_VARCHAR *street,
SQLUDF_CHAR *number,
SQLUDF_VARCHAR *city,
SQLUDF_VARCHAR *state,
SQLUDF_VARCHAR *output,
/* Null indicators */
SQLUDF_NULLIND *street_ind,
SQLUDF_NULLIND *number_ind,
SQLUDF_NULLIND *city_ind,
SQLUDF_NULLIND *state_ind,
SQLUDF_NULLIND *address_ind,
SQLUDF_NULLIND *output_ind,
SQLUDF_TRAIL_ARGS)
{
  sprintf (output, "[address_t] [Street:%s] [number:%s] [city:%s] [state:%s]",
           street, number, city, state);
  *output_ind = 0;
}

void SQL_API_FN USaddr_to_client(
/* decomposed US address (zip is the extra US_addr_t attribute) */
SQLUDF_VARCHAR *street,
SQLUDF_CHAR *number,
SQLUDF_VARCHAR *city,
SQLUDF_VARCHAR *state,
SQLUDF_CHAR *zip,
SQLUDF_VARCHAR *output,
/* Null indicators */
SQLUDF_NULLIND *street_ind,
SQLUDF_NULLIND *number_ind,
SQLUDF_NULLIND *city_ind,
SQLUDF_NULLIND *state_ind,
SQLUDF_NULLIND *zip_ind,
SQLUDF_NULLIND *us_address_ind,
SQLUDF_NULLIND *output_ind,
SQLUDF_TRAIL_ARGS)
{
  sprintf (output, "[US_addr_t] [Street:%s] [number:%s] [city:%s] [state:%s] [zip:%s]",
           street, number, city, state, zip);
  *output_ind = 0;
}
The transform group contains the transform function Addr_stream associated
with the root type Address_t in 5 on page 335. Addr_stream is a SQL-bodied
function, defined in 4 on page 335, so it has no dependency on any other
transform function. The Addr_stream function returns VARCHAR(150), the
data type required by the :hvaddr host variable.
The Addr_stream function takes an input value of type Address_t, which can
be substituted with US_addr_t in this example, and determines the dynamic
type of the input value. When Addr_stream determines the dynamic type, it
invokes the corresponding external UDF on the value: address_to_client if
the dynamic type is Address_t; or USaddr_to_client if the dynamic type is
US_addr_t. These two UDFs are defined in 3 on page 333. Each UDF
decomposes its respective structured type to VARCHAR(150), the type
required by the Addr_stream transform function.
To accept the structured types as input, each UDF needs a FROM SQL
transform function to decompose the input structured type instance into
individual attribute parameters. The CREATE FUNCTION statements in 3 on
page 333 name the TRANSFORM GROUP that contains these transforms.
The CREATE FUNCTION statements for the transform functions are issued in
1 on page 333. The CREATE TRANSFORM statements that associate the
transform functions with their transform groups are issued in 2 on page 333.
To execute the INSERT statement for a structured type, your application must
perform the following steps:
Step 1. Create a TO SQL function transform for each variation of address.
The following example shows SQL-bodied UDFs that transform the
Address_t and US_addr_t types:
CREATE FUNCTION functoaddress
(str VARCHAR(30), num CHAR(15), cy VARCHAR(30), st VARCHAR (10))
RETURNS Address_t
LANGUAGE SQL
RETURN Address_t()..street(str)..number(num)..city(cy)..state(st);
/* Null indicators */
SQLUDF_NULLIND *encoding_ind,
SQLUDF_NULLIND *street_ind,
SQLUDF_NULLIND *number_ind,
SQLUDF_NULLIND *city_ind,
SQLUDF_NULLIND *state_ind,
SQLUDF_NULLIND *address_ind,
SQLUDF_TRAIL_ARGS )
{
char c[150];
char *pc;
strcpy(c, encoding);
pc = strtok (c, ":]");    /* skip the type name token */
pc = strtok (NULL, ":]"); /* skip the attribute label */
pc = strtok (NULL, ":]");
strcpy (street, pc);
pc = strtok (NULL, ":]");
pc = strtok (NULL, ":]");
strcpy (number, pc);
pc = strtok (NULL, ":]");
pc = strtok (NULL, ":]");
strcpy (city, pc);
pc = strtok (NULL, ":]");
pc = strtok (NULL, ":]");
strcpy (state, pc);
/* Null indicators */
SQLUDF_NULLIND *encoding_ind,
SQLUDF_NULLIND *street_ind,
SQLUDF_NULLIND *number_ind,
SQLUDF_NULLIND *city_ind,
SQLUDF_NULLIND *state_ind,
SQLUDF_NULLIND *zip_ind,
SQLUDF_NULLIND *us_addr_ind,
SQLUDF_TRAIL_ARGS)
{
char c[150];
char *pc;
strcpy(c, encoding);
stream_address parses the VARCHAR(150) input parameter for a substring
that names the dynamic type: in this case, either 'Address_t' or 'US_addr_t'.
stream_address then invokes the corresponding external UDF to parse the
VARCHAR(150) and returns an object of the specified type. There are two
client_to_address() UDFs, one to return each possible type. These UDFs are
defined in 3 on page 337. Each UDF takes the input VARCHAR(150), and
internally constructs the attributes of the appropriate structured type, thus
returning the structured type.
To return the structured types, each UDF needs a TO SQL transform function
to construct the output attribute values into an instance of the structured type.
The CREATE FUNCTION statements in 3 on page 337 name the
TRANSFORM GROUP that contains the transforms.
The SQL-bodied transform functions from 1 on page 336, and the associations
with the transform groups from 2 on page 337, are named in the CREATE
FUNCTION statements of 3 on page 337.
Working with Structured Type Host Variables
The actual name of the structured type is returned in SQLVAR2. For more
information about the structure of the SQLDA, refer to the SQL Reference.
Along with storing large objects (LOBs), you also need a way to refer to,
use, and modify each LOB in the database. Each DB2 table may have a
large amount of associated LOB data. Although any single LOB value may not
exceed 2 gigabytes, a single row may contain as much as 24 gigabytes of LOB
data, and a table may contain as much as 4 terabytes of LOB data. The
content of the LOB column of a particular row at any point in time is called
a large object value.
You can refer to and manipulate LOBs using host variables just as you would
any other data type. However, host variables use the client memory buffer
which may not be large enough to hold LOB values. Other means are
necessary to manipulate these large values. Locators are useful to identify and
manipulate a large object value at the database server and for extracting
pieces of the LOB value. File reference variables are useful for physically
moving a large object value (or a large part of it) to and from the client.
The subsections that follow discuss in more detail those topics introduced
above.
The three large object data types have the following definitions:
- Character Large Objects (CLOBs) — A character string made up of
single-byte characters with an associated code page. This data type is best
for holding text-oriented information where the amount of information
could grow beyond the limits of a regular VARCHAR data type (upper
limit of 4K bytes). Code page conversion of the information is supported as
well as compatibility with the other character types.
- Double-Byte Character Large Objects (DBCLOBs) — A character string
made up of double-byte characters with an associated code page. This data
type is best for holding text-oriented information where double-byte
character sets are used. Again, code page conversion of the information is
supported as well as compatibility with the other character types.
- Binary Large Objects (BLOBs) — A binary string made up of bytes with no
associated code page. This data type may be the most useful because it can
store binary data, making it a perfect source type for use by User-defined
Distinct Types (UDTs). UDTs using BLOBs as the source type are created to
store image, voice, graphical, and other types of business or application
specific data. For more information on UDTs, see “Chapter 11. User-defined
Distinct Types” on page 273.
A separate database location stores all large object values outside their records
in the table. There is a large object descriptor for each large object in each row
in a table. The large object descriptor contains control information used to
access the large object data stored elsewhere on disk. It is the storing of large
object data outside their records that allows LOBs to be 2 GB in size.
Accessing the large object descriptor causes a small amount of overhead when
manipulating LOBs. (For storage and performance reasons you would likely
not want to put small data items into LOBs.)
The maximum size for each large object column is part of the declaration of
the large object type in the CREATE TABLE statement. The maximum size of
a large object column determines the maximum size of any LOB descriptor in
that column. As a result, it also determines how many columns of all data
The lob-options-clause on CREATE TABLE gives you the choice to log (or
not) the changes made to the LOB column(s). This clause also allows for a
compact representation for the LOB descriptor (or not). This means you can
allocate only enough space to store the LOB or you can allocate extra space
for future append operations to the LOB. The tablespace-options-clause
allows you to identify a LONG table space to store the column values of long
field or LOB data types. For more information on the CREATE TABLE and
ALTER TABLE statements, refer to the SQL Reference.
With their potentially very large size, LOBs can slow down the performance
of your database system significantly when moved into or out of a database.
Even though DB2 does not allow logging of a LOB value greater than 1 GB,
LOB values with sizes near several hundred megabytes can quickly push the
database log to near capacity. An error, SQLCODE -355 (SQLSTATE 42993),
results from attempting to log a LOB greater than 1 GB in size. The
lob-options-clause in the CREATE TABLE and ALTER TABLE statements allows
users to turn off logging for a particular LOB column. Although setting the
option to NOT LOGGED improves performance, changes to the LOB values after
the most recent backup are lost during roll-forward recovery. For more
information on these topics, refer to the Administration Guide.
The LOB locator is associated with a LOB value or LOB expression, not a row
or physical storage location in the database. Therefore, after selecting a LOB
value into a locator, there is no operation that you could perform on the
original row(s) or table(s) that would have any effect on the value referenced
by the locator. The value associated with the locator is valid until the unit of
work ends, or the locator is explicitly freed, whichever comes first. The
FREE LOCATOR statement releases a locator from its associated value. In a
similar way, a commit or roll-back operation frees all LOB locators associated
with the transaction.
The use of the LOB value within the program can help the programmer
determine which method is best. If the LOB value is very large and is needed
only as an input value for one or more subsequent SQL statements, then it is
best to keep the value in a locator. The use of a locator eliminates any
client/server communication traffic needed to transfer the LOB value to the
host variable and back to the server.
If the program needs the entire LOB value regardless of the size, then there is
no choice but to transfer the LOB. Even in this case, there are still three
options available to you. You can select the entire value into a regular or file
host variable, but it may also work out better to select the LOB value into a
locator and read it piecemeal from the locator into a regular host variable, as
suggested in the following example.
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
if (argc == 1) {
EXEC SQL CONNECT TO sample;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else if (argc == 3) {
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else {
printf ("\nUSAGE: lobloc [userid passwd]\n\n");
return 1;
} /* endif */
do {
EXEC SQL FETCH c1 INTO :number, :resume :lobind; 2
if (SQLCODE != 0) break;
if (lobind < 0) {
printf ("NULL LOB indicated\n");
printf ("%s\n",buffer);
Data Division.
Working-Storage Section.
copy "sqlenv.cbl".
copy "sql.cbl".
copy "sqlca.cbl".
Procedure Division.
Main Section.
display "Sample COBOL program: LOBLOC".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
Move 0 to buffer-length.
Fetch-Loop Section.
EXEC SQL FETCH c1 INTO :empnum, :resume :lobind 2
END-EXEC.
go to End-Fetch-Loop.
NULL-lob-indicated.
display "NULL LOB indicated".
End-Fetch-Loop. exit.
End-Prog.
stop run.
Locators permit the assembly and examination of the new resume without
actually moving or copying any bytes from the original resume. The
movement of bytes does not happen until the final assignment; that is, the
INSERT statement — and then only at the server.
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
if (argc == 1) {
EXEC SQL CONNECT TO sample;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else if (argc == 3) {
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else {
printf ("\nUSAGE: lobeval [userid passwd]\n\n");
return 1;
} /* endif */
/* Append our new section to the end (assume it has been filled in)
Effectively, this just moves the Department Information to the bottom
of the resume. */
EXEC SQL VALUES (:hv_doc_locator2 || :hv_new_section_buffer) INTO
:hv_doc_locator3;
EMB_SQL_CHECK("VALUES5");
/* Store this resume section in the table. This is where the LOB value
bytes really move */
EXEC SQL INSERT INTO emp_resume VALUES ('A00130', 'ascii',
:hv_doc_locator3); 4
EMB_SQL_CHECK("INSERT");
Data Division.
Working-Storage Section.
copy "sqlenv.cbl".
copy "sql.cbl".
copy "sqlca.cbl".
Procedure Division.
Main Section.
display "Sample COBOL program: LOBEVAL".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
* Append the new section to the end (assume it has been filled)
* Effectively, this just moves the Dept Info to the bottom of
* the resume.
EXEC SQL VALUES (:hv-doc-locator2 ||
:hv-new-section-buffer)
INTO :hv-doc-locator3 END-EXEC.
move "VALUES5" to errloc.
call "checkerr" using SQLCA errloc.
End-Prog.
stop run.
For very large objects, files are natural containers. In fact, it is likely that most
LOBs begin as data stored in files on the client before they are moved to the
database on the server. The use of file reference variables assists in moving
LOB data. Programs use file reference variables to transfer LOB data from the
client file directly to the database engine. The client application does not have
to write utility routines to read and write files using host variables (which
have size restrictions) to carry out the movement of LOB data.
Note: The file referenced by the file reference variable must be accessible from
(but not necessarily resident on) the system on which the program
runs. For a stored procedure, this would be the server.
When using file reference variables there are different options on both input
and output. You must choose an action for the file by setting the file_option
field in the file reference variable structure. Choices for assignment to the field
covering both input and output values are shown below.
Values (shown for C) and options when using input file reference variables
are as follows:
Values and options when using output file reference variables are as follows:
- SQL_FILE_CREATE (New file) — This option creates a new file. Should the
file already exist, an error message is returned. (The value for COBOL is
SQL-FILE-CREATE, and for FORTRAN is sql_file_create.)
- SQL_FILE_OVERWRITE (Overwrite file) — This option creates a new file
if none already exists. If the file already exists, the new data overwrites the
data in the file. (The value for COBOL is SQL-FILE-OVERWRITE, and for
FORTRAN is sql_file_overwrite.)
- SQL_FILE_APPEND (Append file) — This option has the output appended
to the file, if it exists. Otherwise, it creates a new file. (The value for
COBOL is SQL-FILE-APPEND, and for FORTRAN is sql_file_append.)
Notes:
1. In an Extended UNIX Code (EUC) environment, the files to which
DBCLOB file reference variables point are assumed to contain valid EUC
characters appropriate for storage in a graphic column, and to never
contain UCS-2 characters. For more information on DBCLOB files in an
EUC environment, see “Considerations for DBCLOB Files” on page 515.
2. If a LOB file reference variable is used in an OPEN statement, the file
associated with the LOB file reference variable must not be deleted until
the cursor is closed.
For more information on file reference variables, refer to the SQL Reference.
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
if (argc == 1) {
EXEC SQL CONNECT TO sample;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else if (argc == 3) {
strcpy (userid, argv[1]);
strcpy (passwd, argv[2]);
EXEC SQL CONNECT TO sample USER :userid USING :passwd;
EMB_SQL_CHECK("CONNECT TO SAMPLE");
}
else {
printf ("\nUSAGE: lobfile [userid passwd]\n\n");
return 1;
} /* endif */
EXEC SQL SELECT resume INTO :resume :lobind FROM emp_resume 3
WHERE resume_format='ascii' AND empno='000130';
if (lobind < 0) {
printf ("NULL LOB indicated \n");
} else {
printf ("Resume for EMPNO 000130 is in file : RESUME.TXT\n");
} /* endif */
Data Division.
Working-Storage Section.
copy "sqlenv.cbl".
copy "sql.cbl".
copy "sqlca.cbl".
Procedure Division.
Main Section.
display "Sample COBOL program: LOBFILE".
if userid = spaces
EXEC SQL CONNECT TO sample END-EXEC
else
display "Enter your password : " with no advancing
accept passwd-name.
NULL-LOB-indicated.
display "NULL LOB indicated".
End-Main.
EXEC SQL CONNECT RESET END-EXEC.
move "CONNECT RESET" to errloc.
call "checkerr" using SQLCA errloc.
End-Prog.
stop run.
The following example shows how to insert data from a regular file
referenced by :hv_text_file into a CLOB column (note that the path names
used in the example are for UNIX-based systems):
strcpy(hv_text_file.name, "/u/userid/dirname/filnam.1");
hv_text_file.name_length = strlen("/u/userid/dirname/filnam.1");
hv_text_file.file_options = SQL_FILE_READ; /* this is a 'regular' file */
Methods, like UDFs, enable you to write your own extensions to SQL by
defining the behavior of SQL objects. However, unlike UDFs, you can only
associate a method with a structured type stored as a column in a table.
In this case, only the rows of interest are passed across the interface
between the application and the database. For large tables, or for cases
where SELECTION_CRITERIA filters out many rows, the performance
improvement can be substantial.
Another case where a UDF can offer a performance benefit is when dealing
with Large Objects (LOBs). If you have a function whose purpose is to
extract some information from a value of one of the LOB types, you can
perform this extraction right on the database server and pass only the
extracted value back to the application. This is more efficient than passing
the entire LOB value back to the application and then performing the
extraction. The performance value of packaging this function as a UDF
could be enormous, depending on the particular situation. (Note that you
can also extract a portion of a LOB by using a LOB locator. See “Example:
Deferring the Evaluation of a LOB Expression” on page 351 for an example
of a similar scenario.)
In addition, you can use the RETURNS TABLE clause of the CREATE
FUNCTION statement to define UDFs called table functions. Table functions
enable you to use relational operations and the power of SQL efficiently on
data that resides outside a DB2 database (including non-relational data
stores). A table function takes individual scalar values of different
types and meanings as its arguments, and returns a table to the SQL
statement that invokes it. You can write table functions that generate only
the data in which you are interested, eliminating any unwanted rows or
columns. For more information on table functions, including rules on where
you can use them, refer to the SQL Reference.
This simple example returns all the boats from BOATS_INVENTORY that
are bigger than a particular boat in MY_BOATS. Note that the example only
However, you may also omit the <schema-name>., in which case DB2 must
identify the function to which you are referring. For example:
BOAT_COMPARE FOO SUBSTR FLOOR
v Function Path
The concept of function path is central to DB2’s resolution of unqualified
references that occur when you do not use the schema-name. For the use of
function path in DDL statements that refer to functions, refer to the SQL
Reference. The function path is an ordered list of schema names. It provides
a set of schemas for resolving unqualified function references to UDFs and
methods as well as UDTs. In cases where a function reference matches
functions in more than one schema in the path, the order of the schemas in
the path is used to resolve this match. The function path is established by
means of the FUNCPATH option on the precompile and bind commands
for static SQL. The function path is set by the SET CURRENT FUNCTION
PATH statement for dynamic SQL. The function path has the following
default value:
"SYSIBM","SYSFUN","<ID>"
This applies to both static and dynamic SQL, where <ID> represents the
current statement authorization ID.
v Overloaded function names
Function names can be overloaded, which means that multiple functions,
even in the same schema, can have the same name. Two functions cannot,
however, have the same signature, which can be defined to be the qualified
function name concatenated with the defined data types of all the function
parameters in the order in which they are defined. For an example of an
overloaded function, see “Example: BLOB String Search” on page 373.
v Function selection algorithm
For information about the concept of mapping UDFs and methods and
built-in functions to data source functions in a federated system, refer to the
SQL Reference. For guidelines on creating such mappings, refer to “Invoking
Data Source Functions” on page 574.
After these steps are successfully completed, your UDF or method is ready for
use in DML or DDL statements such as CREATE VIEW. The steps of writing
and defining UDFs and methods are discussed in the following sections,
followed by a discussion on using UDFs and methods. For information on
compiling and linking UDFs and methods, refer to the Application Building
Guide. For information on debugging your UDF or method, see “Debugging
your UDF” on page 470.
Use the CREATE FUNCTION statement to define (or register) your UDF to
DB2. To register a method with DB2, use the CREATE TYPE or ALTER TYPE
statement to define a method for a structured type, then use the CREATE
METHOD statement to associate the method body with the method
specification. You can find detailed explanations for these statements and their
syntax in the SQL Reference.
In this example, the system uses the NOT NULL CALL default value. This is
desirable since you want the result to be NULL if either argument is NULL.
Since you do not require a scratchpad and no final call is necessary, the NO
SCRATCHPAD and NO FINAL CALL default values are used. As there is no
reason why EXPON cannot be parallel, the ALLOW PARALLELISM default
value is used.
Example: String Search
Your associate, Willie, has written a UDF to look for the existence of a given
short string, passed as an argument, within a given CLOB value, which is also
passed as an argument. The UDF returns the position of the string within the
CLOB if it finds the string, or zero if it does not. Because you are concerned
with database integrity for this function as you suspect the UDF is not fully
tested, you define the function as FENCED.
Note that a CAST FROM clause is used to specify that the UDF body really
returns a FLOAT value but you want to cast this to INTEGER before returning
the value to the statement which used the UDF. As discussed in the SQL
Reference, the INTEGER built-in function can perform this cast for you. Also,
you wish to provide your own specific name for the function and later
reference it in DDL (see “Example: String Search over UDT”). Because the
UDF was not written to handle NULL values, you use the NOT NULL CALL
default value. And because there is no scratchpad, you use the NO
SCRATCHPAD and NO FINAL CALL default values. As there is no reason
why FINDSTRING cannot be parallel, the ALLOW PARALLELISM default
value is used.
Example: BLOB String Search
Because you want this function to work on BLOBs as well as on CLOBs, you
define another FINDSTRING taking BLOB as the first parameter:
CREATE FUNCTION FINDSTRING (BLOB(500K), VARCHAR(200))
RETURNS INTEGER
CAST FROM FLOAT
SPECIFIC "willie_fblob_feb95"
EXTERNAL NAME '/u/willie/testfunc/testmod!findstr'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
DETERMINISTIC
NO EXTERNAL ACTION
FENCED
This example illustrates overloading of the UDF name, and shows that
multiple UDFs and methods can share the same body. Note that although a
BLOB cannot be assigned to a CLOB, the same source code can be used. There
is no programming problem in the above example as the programming
interface for BLOB and CLOB between DB2 and UDF is the same; length
followed by data. DB2 does not check if the UDF using a particular function
body is in any way consistent with any other UDF using the same body.
Example: String Search over UDT
This example is a continuation of the previous example. Say you are satisfied
with the FINDSTRING functions from “Example: BLOB String Search”, but
Note that this FINDSTRING function has a different signature from the
FINDSTRING functions in “Example: BLOB String Search” on page 373, so
there is no problem overloading the name. You wish to provide your own
specific name for possible later reference in DDL. Because you are using the
SOURCE clause, you cannot use the EXTERNAL NAME clause or any of the
related keywords specifying function attributes. These attributes are taken
from the source function. Finally, observe that in identifying the source
function you are using the specific function name explicitly provided in
“Example: BLOB String Search” on page 373. Because this is an unqualified
reference, the schema in which this source function resides must be in the
function path, or the reference will not be resolved.
Example: External Function with UDT Parameter
You have written another UDF to take a BOAT and examine its design
attributes and generate a cost for the boat in Canadian dollars. Even though
internally, the labor cost may be priced in euros, or Japanese yen, or US
dollars, this function needs to generate the cost to build the boat in the
required currency, Canadian dollars. This means it has to get current exchange
rate information from an exchange rate web page, and the answer depends on
the contents of the web page. This makes the function NOT DETERMINISTIC
(or VARIANT).
CREATE FUNCTION BOAT_COST (BOAT)
RETURNS INTEGER
EXTERNAL NAME '/u/marine/funcdir/costs!boatcost'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
NOT DETERMINISTIC
NO EXTERNAL ACTION
FENCED
Observe that CAST FROM and SPECIFIC are not specified, but that NOT
DETERMINISTIC is specified. Here again, FENCED is chosen for safety
reasons.
Note that in the SOURCE clause you have qualified the function name, just in
case there might be some other AVG function lurking in your function path.
Example: Counting
Your simple counting function returns a 1 the first time and increments the
result by one each time it is called. This function takes no SQL arguments,
and by definition it is a NOT DETERMINISTIC function since its answer
varies from call to call. It uses the scratchpad to save the last value returned,
and each time it is invoked it increments this value and returns it. You have
rigorously tested this function, and possess DBADM authority on the
database, so you will define it as NOT FENCED. (DBADM implies
CREATE_NOT_FENCED.)
CREATE FUNCTION COUNTER ()
RETURNS INT
EXTERNAL NAME '/u/roberto/myfuncs/util!ctr'
LANGUAGE C
PARAMETER STYLE DB2SQL
NO SQL
NOT DETERMINISTIC
NOT FENCED
SCRATCHPAD
DISALLOW PARALLEL
Note that no parameter definitions are provided, just empty parentheses. The
above function specifies SCRATCHPAD, and uses the default specification of
NO FINAL CALL. In this case, as the default size of the scratchpad (100
bytes) is sufficient, no storage has to be freed by means of a final call, and so
NO FINAL CALL is specified. Since the COUNTER function requires that a
single scratchpad be used to operate properly, DISALLOW PARALLEL is
added to prevent DB2 from operating it in parallel. To see an implementation
of this COUNTER function, refer to “Example: Counter” on page 451.
Within the context of a single session it will always return the same table, and
therefore it is defined as DETERMINISTIC. Note the RETURNS clause which
defines the output from DOCMATCH, including the column name DOC_ID.
Typically this table function would be used in a join with the table containing
the document text, as follows:
SELECT T.AUTHOR, T.DOCTEXT
FROM DOCS as T, TABLE(DOCMATCH('MATHEMATICS', 'ZORN''S LEMMA')) as F
WHERE T.DOCID = F.DOC_ID
Note the special syntax (TABLE keyword) for specifying a table function in a
FROM clause. In this invocation, the docmatch() table function returns a row
containing the single column DOC_ID for each mathematics document
referencing Zorn's Lemma. These DOC_ID values are joined to the master
document table, retrieving the author’s name and document text.
See “UDF And Method Concepts” on page 369 for a summary of the use and
importance of the function path and the function selection algorithm. You can
find the details for both of these concepts in the SQL Reference. The resolution
of any Data Manipulation Language (DML) reference to a function uses the
function selection algorithm, so it is important to understand how it works.
Referring to Functions
Each reference to a function, whether it is a UDF, or a built-in function,
contains the following syntax:
function_name ( [expression [, expression] ... ] )
The position of the arguments is important and must conform to the function
definition for the semantics to be correct. Both the position of the arguments
and the function definition must conform to the function body itself. DB2
does not attempt to shuffle arguments to better match a function definition,
and DB2 does not understand the semantics of the individual function
parameters.
Use of column names in UDF argument expressions requires that the table
references which contain the column have proper scope. For table functions
referenced in a join, this means that for any argument which involves columns
from another table or table function, that other table or table function must
appear before the table function containing the reference, in the FROM clause.
For a complete discussion of the rules for using columns in the arguments of
table functions, refer to the SQL Reference.
Examples of Function Invocations
Some valid examples of function invocations are:
AVG(FLOAT_COLUMN)
BLOOP(COLUMN1)
BLOOP(FLOAT_COLUMN + CAST(? AS INTEGER))
BLOOP(:hostvar :indicvar)
BRIAN.PARSE(CHAR_COLUMN CONCAT USER, 1, 0, 0, 1)
CTR()
FLOOR(FLOAT_COLUMN)
PABLO.BLOOP(A+B)
PABLO.BLOOP(:hostvar)
"search_schema"(CURRENT FUNCTION PATH, 'GENE')
SUBSTR(COLUMN2,8,3)
SYSFUN.FLOOR(AVG(EMP.SALARY))
SYSFUN.AVG(SYSFUN.FLOOR(EMP.SALARY))
SYSIBM.SUBSTR(COLUMN2,11,LENGTH(COLUMN3))
SQRT(SELECT SUM(length*length)
FROM triangles
WHERE id= 'J522'
AND legtype <> 'HYP')
Note that if any of the above functions are table functions, the syntax to
reference them is slightly different from that presented above. For example, if
PABLO.BLOOP is a table function, to properly reference it, use:
TABLE(PABLO.BLOOP(A+B)) AS Q
As the function selection logic does not know what data type the argument
may turn out to be, it cannot resolve the reference. You can use the CAST
specification to provide a type for the parameter marker, for example
INTEGER, and then the function selection logic can proceed:
BLOOP(CAST(? AS INTEGER))
Only the BLOOP functions in schema PABLO are considered. It does not
matter that user SERGE has defined a BLOOP function, or whether or not
there is a built-in BLOOP function. Now suppose that user PABLO has
defined two BLOOP functions in his schema:
CREATE FUNCTION BLOOP (INTEGER) RETURNS ...
CREATE FUNCTION BLOOP (DOUBLE) RETURNS ...
BLOOP is thus overloaded within the PABLO schema, and the function
selection algorithm would choose the best BLOOP, depending on the data
type of the argument, column1. In this case, both of the PABLO.BLOOPs take
numeric arguments, and if column1 is not one of the numeric types, the
statement will fail. On the other hand if column1 is either SMALLINT or
INTEGER, function selection will resolve to the first BLOOP, while if column1
is DECIMAL, DOUBLE, REAL, or BIGINT, the second BLOOP will be chosen.
You should investigate these other functions in the SQL Reference. The
INTEGER function is a built-in function in the SYSIBM schema. The
FLOOR, CEILING, and ROUND functions are UDFs shipped with DB2,
which you can find in the SYSFUN schema along with many other useful
functions.
Using Unqualified Function Reference
If, instead of a qualified function reference, you use an unqualified function
reference, DB2’s search for a matching function normally uses the function
path to qualify the reference. In the case of the DROP FUNCTION or
COMMENT ON FUNCTION statements, an unqualified reference is qualified
using the current authorization ID. Thus, it is important that you
know what your function path is, and what, if any, conflicting functions exist in the
schemas of your current function path. For example, suppose you are PABLO and
your static SQL statement is as follows, where COLUMN1 is data type INTEGER:
SELECT BLOOP(COLUMN1) FROM T
You have created the two BLOOP functions cited in “Using Qualified Function
Reference” on page 379, and you want and expect one of them to be chosen. If
the following default function path is used, the first BLOOP is chosen (since
column1 is INTEGER), if there is no conflicting BLOOP in SYSIBM or
SYSFUN:
"SYSIBM","SYSFUN","PABLO"
However, suppose you have forgotten that you are using a script for
precompiling and binding which you previously wrote for another purpose.
In this script, you explicitly coded your FUNCPATH parameter to specify the
following function path for another reason that does not apply to your current
work:
"KATHY","SYSIBM","SYSFUN","PABLO"
If Kathy has written a BLOOP function for her own purposes, the function
selection could very well resolve to Kathy’s function, and your statement
would execute without error. You are not notified because DB2 assumes that
you know what you are doing. It becomes your responsibility to identify the
incorrect output from your statement and make the required correction.
Summary of Function References
For both qualified and unqualified function references, the function selection
algorithm looks at all the applicable functions, both built-in and user-defined,
that have:
v The given name
Note that you are not permitted to overload the built-in conditional
operators such as >, =, LIKE, IN, and so on, in this way. See “Example:
Integer Divide Operator” on page 443 for an example of a UDF which
overloads the divide (/) operator.
v The function selection algorithm does not consider the context of the
reference in resolving to a particular function. Look at these BLOOP
functions, modified a bit from before:
CREATE FUNCTION BLOOP (INTEGER) RETURNS INTEGER ...
CREATE FUNCTION BLOOP (DOUBLE) RETURNS CHAR(10)...
Because the best match, resolved using the SMALLINT argument, is the
first BLOOP defined above, the second operand of the CONCAT resolves to
data type INTEGER. The statement fails because CONCAT demands string
arguments. If the first BLOOP was not present, the other BLOOP would be
chosen and the statement execution would be successful.
If there are multiple BOAT distinct types in the database, or BOAT UDFs in
other schema, you must exercise care with your function path. Otherwise
your results may be ambiguous.
Description
This section describes how to write UDFs and methods. The coding
conventions for UDFs and methods are the same, with the following
differences:
v Since DB2 associates each method with a specific structured type, the first
argument passed from DB2 to your method is always the instance of the
structured type on which you invoked the method.
v Methods, unlike UDFs, cannot return tables. You cannot invoke a method
as the argument for a FROM clause.
For small UDFs such as UDFs that contain only a simple expression, consider
using a SQL-bodied UDF. To create a SQL-bodied UDF, issue a CREATE
FUNCTION or CREATE METHOD statement that includes a method body
written using SQL, rather than pointing to an external UDF. SQL-bodied
UDFs enable you to declare and define the UDF in a single step, without
using an external language or compiler. SQL-bodied UDFs also offer the
possibility of increased performance, because the function body is written
using SQL that is accessible to the DB2 optimizer.
After a preliminary discussion on the interface between DB2 and a UDF, the
remaining discussion concerns how you implement UDFs. The information on
writing the UDF emphasizes the presence or absence of a scratchpad as one of
the primary considerations.
Note that a sourced UDF, which is different from an external UDF, does not
require an implementation in the form of a separate piece of code. Such a
UDF uses the same implementation as its source function, along with many of
its other attributes.
The arguments are passed to the UDF in the following order:

SQL-argument ...  SQL-result ...  SQL-argument-ind ...  SQL-result-ind ...
SQL-state  function-name  specific-name  diagnostic-message
[scratchpad]  [call-type]  [dbinfo]

The bracketed arguments are present only when the corresponding option is
specified for the function.
If the function is defined with NOT NULL CALL, the UDF body does
not need to check for a null value. However, if it is defined with
NULL CALL, any argument can be NULL and the UDF should check
it.
The indicator takes the form of a SMALLINT value, and this can be
defined in your UDF as described in “How the SQL Data Types are
Passed to a UDF” on page 402. DB2 aligns the data for
SQL-argument-ind according to the data type and the server platform.
DB2 treats the function result as null (-2) if all of the following are true:
v The database configuration parameter DFT_SQLMATHWARN is
’YES’
v One of the input arguments is a null because of an arithmetic error
v The SQL-result-ind is negative
This is also true if you define the function with the NOT NULL CALL
option.
Even if the function is defined with NOT NULL CALL, the UDF body
must set the indicator of the result. For example, a divide function
could set the result to null when the denominator is zero.
The indicator takes the form of a SMALLINT value, and this can be
defined in your UDF as described in “How the SQL Data Types are
Passed to a UDF” on page 402.
DB2 aligns the data for SQL-result-ind according to the data type and
the server platform.
SQL-state
This argument is set by the UDF before returning to DB2. It takes the
form of a CHAR(5) value; ensure that the argument definition in the
UDF is appropriate for a CHAR(5), as described in “How the SQL
Data Types are Passed to a UDF” on page 402. The UDF can use this
argument to signal warning or error conditions. It contains the value
'00000' when the function is called. The UDF can set the value to the
following:
PABLO.BLOOP WILLIE.FINDSTRING
This form enables you to use the same UDF body for multiple
external functions, and still differentiate between the functions when it
is invoked.
willie_find_feb99 SQL9904281052440430
As with the function-name argument, the reason for passing this value
is to give the UDF the means of distinguishing exactly which specific
function is invoking it.
diagnostic-message
This argument is set by the UDF before returning to DB2. The UDF
can use this argument to insert a message text in a DB2 message. It
takes the form of a VARCHAR(70) value. Ensure that the argument
definition in the UDF is appropriate for a VARCHAR(70). See “How
the SQL Data Types are Passed to a UDF” on page 402 for more
information.
When the UDF returns either an error or a warning, using the
SQL-state argument described above, it can include descriptive
information here. DB2 includes this information as a token in its
message.
DB2 sets the first character to null before calling the UDF. Upon
return, it treats the string as a C null-terminated string. This string
will be included in the SQLCA as a token for the error condition. At
least the first part of this string will appear in the SQLCA or DB2 CLP
message. However, the actual number of characters which will appear
depends on the lengths of the other tokens, because DB2 may truncate
the tokens to conform to the restrictive limit on total token length
imposed by the SQLCA. Avoid using X'FF' in the text since this
character is used to delimit tokens in the SQLCA.
The UDF code should not return more text than will fit in the
VARCHAR(70) buffer which is passed to it. DB2 attempts to determine
whether the UDF body has written beyond the end of this buffer, and
returns an error if it has.
The scratchpad can be mapped in your UDF using the same type as
either a CLOB or a BLOB, since the argument passed has the same
structure. See “How the SQL Data Types are Passed to a UDF” on
page 402 for more information.
Ensure your UDF code does not make changes outside of the
scratchpad buffer. DB2 attempts to determine whether the UDF body
has written beyond the end of this buffer by a few characters, and
returns an error SQLCODE if it has.
DB2 initializes the scratchpad so that the data field is aligned for the
storage of any data type. This may result in the entire scratchpad
structure, including the length field, not being properly aligned. For
more information on declaring and accessing scratchpads, see
“Writing Scratchpads on 32-bit and 64-bit Platforms” on page 410.
call-type
This argument, if present, is set by DB2 before calling the UDF. For
scalar functions this argument is only present if FINAL CALL is
specified in the CREATE FUNCTION statement, but for table
functions it is ALWAYS present. It follows the scratchpad argument; or
the diagnostic-message argument if the scratchpad argument is not
present. This argument takes the form of an INTEGER value. Ensure
that this argument definition in the UDF is appropriate for INTEGER.
See “How the SQL Data Types are Passed to a UDF” on page 402 for
more information.
Note that even though all the current possible values are listed below,
your UDF should contain a switch or case statement which explicitly
tests for all the expected values, rather than containing "if A do AA,
else if B do BB, else it must be C so do CC" type logic. This is because
it is possible that additional call types may be added in the future,
and if you don’t explicitly test for condition C you will have trouble
when new possibilities are added.
Notes:
1. For all the call-types, it may be appropriate for the UDF to set a
SQL-state and diagnostic-message return value. This information will
not be repeated in the following descriptions of each call-type. For
all calls DB2 will take the indicated action as described previously
for these arguments.
2. The include file sqludf.h is intended for use with UDFs and is
described in “The UDF Include File: sqludf.h” on page 411. The file
contains symbolic defines for the following call-type values, which
are spelled out as constants.
Releasing resources.
Write UDFs to release any resources that they acquire. For table
functions, there are two natural places for this release: the CLOSE call
and the FINAL call. The CLOSE call balances each OPEN call and can
occur multiple times in the execution of a statement. The FINAL call
only occurs if FINAL CALL is specified for the UDF, and occurs only
once per statement.
For additional platforms that are not contained in the above list,
see the contents of the sqludf.h file.
14. Number of table function column list entries (numtfcol)
The number of non-zero entries in the table function column list
specified in the table function column list field below.
15. Reserved field (resd1)
This field is for future use. It is defined as 24 characters long.
16. Table function column list (tfcolumn)
If this is a table function, this field is a pointer to an array of
short integers which is dynamically allocated by DB2. If this is a
scalar function, this pointer is null.
This field is used only for table functions. Only the first n entries,
where n is specified in the number of table function column list
entries field, numtfcol, are of interest. n may be equal to 0, and is
less than or equal to the number of result columns defined for
the function in the RETURNS TABLE(...) clause of the CREATE
FUNCTION statement. The values correspond to the ordinal
numbers of the columns which this statement needs from the
table function. A value of ‘1’ means the first defined result
column, ‘2’ means the second defined result column, and so on,
and the values may be in any order. Note that n could be equal
to zero, that is, the variable numtfcol might be zero, for a
statement similar to
SELECT COUNT(*) FROM TABLE(TF(...)) AS QQ, where no actual
column values are needed by the query.
This array represents an opportunity for optimization. The UDF
need not return all values for all the result columns of the table
function, only those needed in the particular context, and these
are the columns identified (by number) in the array. Since this
optimization may complicate the UDF logic in order to gain the
performance benefit, the UDF can choose to return every defined
column.
17. Unique application identifier (appl_id)
This field is a pointer to a C null-terminated string which
uniquely identifies the application’s connection to DB2. It is
regenerated at each database connect.
The string has a maximum length of 32 characters, and its exact
format depends on the type of connection established between
the client and DB2. Generally it takes the form
<x>.<y>.<ts>
A table function logically returns a table to the SQL statement that references
it, but the physical interface between DB2 and the table function is row by
row. For table functions, the arguments are:
Observe that the normal value outputs of the UDF, as well as the SQL-result,
SQL-result-ind, and SQL-state, are returned to DB2 using arguments passed
from DB2 to the UDF. Indeed, the UDF is written not to return anything in
the functional sense (that is, the function’s return type is void). See the void
definition and the return statement in the following example:
#include ...
void SQL_API_FN divid(
... arguments ... )
It is the data type for each function parameter defined in the CREATE FUNCTION
statement that governs the format for argument values. Promotions from the
argument data type may be needed to get the value in that format. Such
promotions are performed automatically by DB2 on the argument values;
argument promotion is discussed in the SQL Reference.
For the function result, it is the data type specified in the CAST FROM clause
of the CREATE FUNCTION statement that defines the format. If no CAST
FROM clause is present, then the data type specified in the RETURNS clause
defines the format.
In the following example, the presence of the CAST FROM clause means that
the UDF body returns a SMALLINT and that DB2 casts the value to INTEGER
before passing it along to the statement where the function reference occurs:
Example:
For the above UDF, the first two parameters correspond to the wage and
number of hours. You invoke the UDF WEEKLY_PAY in your SQL select
statement as follows:
SELECT WEEKLY_PAY (WAGE, HOURS, ...) ...;
For a CHAR(n) parameter, DB2 always moves n bytes of data to the buffer
and sets the n+1 byte to null. For a RETURNS CHAR(n) value, DB2 always
takes the n bytes and ignores the n+1 byte. For this RETURNS CHAR(n)
case, you are warned against the inadvertent inclusion of a null-character in
the first n characters. DB2 will not recognize this as anything but a normal
part of the data, and it might later on cause seemingly anomalous results if
it was not intended.
If FOR BIT DATA is specified, exercise caution about using the normal C
string handling functions in the UDF. Many of these functions look for a
null to delimit the string, and the null-character (X'00') could be a legitimate
character in the middle of the data value.
struct sqludf_vc_fbd
{
unsigned short length; /* length of data */
char data[1]; /* first char of data */
};
Example:
struct sqludf_vc_fbd *arg1; /* example for VARCHAR(n) FOR BIT DATA */
struct sqludf_vc_fbd *result; /* also for LONG VARCHAR FOR BIT DATA */
v VARCHAR(n) without FOR BIT DATA.
Valid. Represent in C as char...[n+1]. (This is a C null-terminated string.)
For a VARCHAR(n) parameter, DB2 will put a null in the (k+1) position,
where k is the length of the particular occurrence. The C string-handling
functions are thus well suited for manipulation of these values. For a
RETURNS VARCHAR(n) value, the UDF body must delimit the actual
value with a null, because DB2 will determine the result length from this
null character.
Example:
char arg2[51]; /* example for VARCHAR(50) */
char *result; /* also perfectly acceptable */
v GRAPHIC(n)
Valid. Represent in C as sqldbchar[n+1]. (This is a null-terminated graphic
string). Note that you can use wchar_t[n+1] on platforms where wchar_t is
defined to be 2 bytes in length; however, sqldbchar is recommended. See
“Selecting the wchar_t or sqldbchar Data Type in C and C++” on page 610
for more information on these two data types.
For a GRAPHIC(n) parameter, DB2 moves n double-byte characters to the
buffer and sets the following two bytes to null. Data passed from DB2 to a
UDF is in DBCS format, and the result passed back is expected to be in
DBCS format. This behavior is the same as using the WCHARTYPE
NOCONVERT precompiler option described in “The WCHARTYPE
Precompiler Option in C and C++” on page 611. For a RETURNS
GRAPHIC(n) value, DB2 always takes the n double-byte characters and
ignores the following bytes.
When defining graphic UDF parameters, consider using VARGRAPHIC
rather than GRAPHIC as DB2 does not promote VARGRAPHIC arguments
to GRAPHIC. For example, suppose you define a UDF as follows:
CREATE FUNCTION SIMPLE(GRAPHIC)...
Invoking SIMPLE with a VARGRAPHIC argument then fails, because DB2 does
not promote the argument to GRAPHIC.
Example:
sqldbchar arg1[14]; /* example for GRAPHIC(13) */
sqldbchar *arg1; /* also perfectly acceptable */
v VARGRAPHIC(n)
Valid. Represent in C as sqldbchar[n+1]. (This is a null-terminated graphic
string). Note that you can use wchar_t[n+1] on platforms where wchar_t is
defined to be 2 bytes in length; however, sqldbchar is recommended. See
“Selecting the wchar_t or sqldbchar Data Type in C and C++” on page 610
for more information on these two data types.
For a VARGRAPHIC(n) parameter, DB2 will put a graphic null in the (k+1)
position, where k is the length of the particular occurrence. A graphic null
refers to the situation where all the bytes of the last character of the graphic
string contain binary zeros ('\0's). Data passed from DB2 to a UDF is in
DBCS format, and the result passed back is expected to be in DBCS format.
This behavior is the same as using the WCHARTYPE NOCONVERT
precompiler option described in “The WCHARTYPE Precompiler Option in
C and C++” on page 611. For a RETURNS VARGRAPHIC(n) value, the UDF
body must delimit the actual value with a graphic null, because DB2 will
determine the result length from this graphic null character.
Example:
sqldbchar args[51]; /* example for VARGRAPHIC(50) */
sqldbchar *result; /* also perfectly acceptable */
v LONG VARGRAPHIC
Valid. Represent in C as a structure:
struct sqludf_vg
{
unsigned short length; /* length of data */
sqldbchar data[1]; /* first char of data */
};
Note that in the above structure, you can use wchar_t in place of sqldbchar
on platforms where wchar_t is defined to be 2 bytes in length; however, the
use of sqldbchar is recommended. See “Selecting the wchar_t or sqldbchar
Data Type in C and C++” on page 610 for more information on these two
data types.
Example:
struct sqludf_vg *arg1; /* example for VARGRAPHIC(n) */
struct sqludf_vg *result; /* also for LONG VARGRAPHIC */
v DATE
Valid. Represent in C same as CHAR(10), that is as char...[11]. The date
value is always passed to the UDF in ISO format: yyyy-mm-dd.
Example:
char arg1[11]; /* example for DATE */
char *result; /* also perfectly acceptable */
v TIME
Valid. Represent in C same as CHAR(8), that is, as char...[9]. The time
value is always passed to the UDF in ISO format: hh.mm.ss.
Example:
char *arg; /* example for TIME */
char result[9]; /* also perfectly acceptable */
v TIMESTAMP
Valid. Represent in C same as CHAR(26), that is, as char...[27]. The
timestamp value is always passed with format: yyyy-mm-dd-
hh.mm.ss.nnnnnn.
Example:
char arg1[27]; /* example for TIMESTAMP */
char *result; /* also perfectly acceptable */
v BLOB(n) and CLOB(n)
Valid. Represent in C as a structure:
struct sqludf_lob
{
sqluint32 length; /* length in bytes */
char data[1]; /* first byte of lob */
};
The [1] merely indicates an array to the compiler. It does not mean that
only one character is passed; because the address of the structure is passed,
and not the actual structure, it just provides a way to use array logic.
Example:
struct sqludf_lob *arg1; /* example for BLOB(n), CLOB(n) */
struct sqludf_lob *result;
v DBCLOB(n)
Valid. Represent in C as a structure:
struct sqludf_lob
{
sqluint32 length; /* length in graphic characters */
sqldbchar data[1]; /* first byte of lob */
};
Note that in the above structure, you can use wchar_t in place of sqldbchar
on platforms where wchar_t is defined to be 2 bytes in length; however, the
use of sqldbchar is recommended. See “Selecting the wchar_t or sqldbchar
Data Type in C and C++” on page 610 for more information on these two
data types.
The [1] merely indicates an array to the compiler. It does not mean that
only one graphic character is passed; because the address of the structure is
passed, and not the actual structure, it just provides a way to use array
logic.
Example:
struct sqludf_lob *arg1; /* example for DBCLOB(n) */
struct sqludf_lob *result;
The type udf_locator is defined in the header file sqludf.h, which is discussed
in “The UDF Include File: sqludf.h” on page 411. The use of these locators is
discussed in “Using LOB Locators as UDF Parameters or Results” on
page 434.
Writing Scratchpads on 32-bit and 64-bit Platforms
To make your UDF code portable between 32-bit and 64-bit platforms, you
must change the way in which you create and use scratchpads that contain
64-bit values. Do not declare an explicit length variable for a scratchpad
structure that contains one or more 64-bit values, such as 64-bit pointers or
sqlint64 BIGINT variables. For example, the following example might result
in a data alignment exception on a 64-bit platform because the structure
declaration includes an explicit length variable:
struct scratch1
{
sqlint32 length;
char chars[4];
sqlint64 bigint_var;
};
Some of the UDF examples in the next section illustrate the inclusion and use
of sqludf.h.
Some sample Java UDF method bodies are provided in the UDFsrv.java
sample. You can find the associated CREATE FUNCTION statements and
examples of calling those UDFs in the UDFcli.java and UDFclie.sqlj samples.
See the sqllib/samples/java directory for the samples and README
instructions for compiling and running the samples.
Coding a Java UDF
In general, if you declare a UDF taking arguments of SQL types t1, t2, and t3,
returning type t4, it will be called as a Java method with the expected Java
signature:
public void name ( T1 a, T2 b, T3 c, T4 d) { .....}
Where:
v name is the method name
v T1 through T4 are the Java types that correspond to SQL types t1 through
t4.
v a, b, and c are arbitrary variable names for the input arguments.
v d is an arbitrary variable name that represents the UDF result being
computed.
Java UDFs that implement table functions require more arguments. Besides the
variables representing the input, an additional variable appears for each
column in the resulting row. For example, a table function may be declared as:
public void test4(String arg1, int result1,
Blob result2, String result3);
SQL NULL values are represented by Java variables that are not initialized.
These variables have a value of zero if they are primitive types, and Java null
if they are object types, in accordance with Java rules. To tell an SQL NULL
apart from an ordinary zero, you can call the function isNull for any input
argument:
{ ....
if (isNull(1)) { /* argument #1 was a SQL NULL */ }
else { /* not NULL */ }
}
In the above example, the argument numbers start at one. The isNull()
function, like the other functions that follow, is inherited from the
COM.ibm.db2.app.UDF class.
To return a result from a scalar or table UDF, use the set() method in the
UDF, as follows:
{ ....
set(2, value);
}
Where ’2’ is the index of an output argument, and value is a literal or variable
of a compatible type. The argument number is the index in the argument list
of the selected output. In the first example in this section, the int result
variable has an index of 4; in the second, result1 through result3 have
indices of 2 through 4. An output argument that is not set before the UDF
returns will have a NULL value.
As with C modules used in UDFs and stored procedures, you cannot use the
Java standard I/O streams (System.in, System.out, and System.err) in Java
UDFs. For an example of a Java UDF, see the file DB2Udf.java in the
sqllib/samples/java directory.
For Java table functions that use a scratchpad, control when you get a new
scratchpad instance by using the FINAL CALL or NO FINAL CALL option on
the CREATE FUNCTION statement, as indicated by the execution models in
“Table Function Execution Model for Java” on page 415.
For scalar functions, you use the same instance for the entire statement.
As with other UDFs, Java UDFs can be FENCED or NOT FENCED. NOT
FENCED UDFs run inside the address space of the database engine; FENCED
UDFs run in a separate process. Although Java UDFs cannot inadvertently
corrupt the address space of their embedding process, they can terminate or
slow down the process. Therefore, when you debug UDFs written in Java, you
should run them as FENCED UDFs.
Notes:
1. By ″UDF method″ we mean the Java class method which implements the
UDF. This is the method identified in the EXTERNAL NAME clause of the
CREATE FUNCTION statement.
2. For table functions with NO SCRATCHPAD specified, the calls to the UDF
method are as indicated in this table, but because the user is not asking for
any continuity via a scratchpad, DB2 will cause a new object to be
instantiated before each call, by calling the class constructor. It is not clear
that table functions with NO SCRATCHPAD (and thus no continuity) can
do very useful things, but they are supported.
3. These models are fully compatible with the behavior of the other UDF
languages: C/C++ and OLE.
Note that this section assumes that you are familiar with OLE automation
terms and concepts. This book does not present any introductory OLE
material. For an overview of OLE automation, refer to Microsoft Corporation:
The Component Object Model Specification, October 1995. For details on OLE
automation, refer to OLE Automation Programmer’s Reference, Microsoft Press,
1996, ISBN 1-55615-851-3.
After you code an OLE automation object, you need to register the methods
of the object as UDFs using the SQL CREATE FUNCTION statement.
Registering an OLE automation UDF is very similar to registering any
external C or C++ UDF, but you must use the following options:
v LANGUAGE OLE
v FENCED, since OLE automation UDFs must run in FENCED mode
The external name consists of the OLE progID identifying the OLE
automation object and the method name separated by ! (exclamation mark):
CREATE FUNCTION bcounter () RETURNS INTEGER
EXTERNAL NAME 'bert.bcounter!increment'
LANGUAGE OLE
FENCED
SCRATCHPAD
The calling conventions for OLE method implementations are identical to the
conventions for functions written in C or C++. An implementation of the
above method in the BASIC language looks like the following (notice that in
BASIC the parameters are by default defined as call by reference):
Public Sub increment(output As Long, _
indicator As Integer, _
sqlstate As String, _
fname As String, _
fspecname As String, _
sqlmsg As String, _
scratchpad() As Byte, _
calltype As Long)
Data passed between DB2 and OLE automation UDFs is passed as call by
reference. SQL types such as BIGINT, DECIMAL, or LOCATORS, or OLE
automation types such as Boolean or CURRENCY that are not listed in the
table are not supported. Character and graphic data mapped to BSTR is
converted from the database code page to the UCS-2 (also known as Unicode,
IBM code page 13488) scheme. Upon return, the data is converted back to the
database code page. These conversions occur regardless of the database code
page. If code page conversion tables to convert from the database code page
to UCS-2 and from UCS-2 to the database code page are not installed, you
receive an SQLCODE -332 (SQLSTATE 57017).
Table 17 shows the mapping of the various SQL data types to the intermediate
OLE automation data types, and the data types in the language of interest
(BASIC or C++). OLE data types are language independent (that is, Table 16
on page 419 holds true for all languages).
Table 17. Mapping of SQL and OLE Data Types to BASIC and C++ Data Types
SQL Type                                 OLE Automation Type        BASIC Type  C++ Type
SMALLINT                                 short                      Integer     short
INTEGER                                  long                       Long        long
REAL                                     float                      Single      float
FLOAT or DOUBLE                          double                     Double      double
DATE, TIME, TIMESTAMP                    DATE                       Date        DATE
CHAR(n), VARCHAR(n), LONG                BSTR                       String      BSTR
  VARCHAR, CLOB(n)
GRAPHIC(n), VARGRAPHIC(n), LONG          BSTR                       String      BSTR
  VARGRAPHIC, DBCLOB(n)
CHAR(n)1, VARCHAR(n)1, LONG              SAFEARRAY[unsigned char]   Byte()      SAFEARRAY
  VARCHAR1, BLOB(n)
Note:
1. With FOR BIT DATA specified
OLE supports type libraries that describe the properties and methods of OLE
automation objects. Exposed objects, properties, and methods are described in
the Object Description Language (ODL). The ODL description of the above
C++ method is as follows:
HRESULT increment ([out] long *output,
[out] short *indicator,
[out] BSTR *sqlstate,
[in] BSTR *fname,
[in] BSTR *fspecname,
[out] BSTR *sqlmsg,
[in,out] SAFEARRAY (unsigned char) *scratchpad,
[in] long *calltype);
Scalar functions contain one output parameter and output indicator, whereas
table functions contain multiple output parameters and output indicators
corresponding to the RETURN columns of the CREATE FUNCTION
statement.
OLE automation defines the BSTR data type to handle strings. BSTR is
defined as a pointer to OLECHAR: typedef OLECHAR *BSTR. For allocating
and freeing BSTRs, OLE imposes the rule that the callee frees a BSTR passed
in as a by-reference parameter before assigning the parameter a new value.
The following C++ UDF returns the first 5 characters of a CLOB input
parameter:
// UDF DDL: CREATE FUNCTION crunch (clob(5k)) RETURNS char(5)
To use OLE DB table functions with DB2 Universal Database, you must install
OLE DB 2.0 or later, available from Microsoft at http://www.microsoft.com. If
you attempt to invoke an OLE DB table function without first installing OLE
DB, DB2 issues SQLCODE 465, SQLSTATE 58032, reason code 35. For the
system requirements and OLE DB providers available for your data sources,
refer to your data source documentation. For a list of samples that define and
use OLE DB table functions, see “Appendix B. Sample Programs” on page 729.
For the OLE DB specification, see the Microsoft OLE DB 2.0 Programmer’s
Reference and Data Access SDK, Microsoft Press, 1998.
The EXTERNAL NAME clause can take either of the following forms:
'server!rowset'
or
'!rowset!connectstring'
where:
server identifies a server registered with the CREATE SERVER statement
rowset
identifies a rowset, or table, exposed by the OLE DB provider; this
value should be empty if the table function has an input parameter to
pass command text through to the OLE DB provider.
connectstring
contains initialization properties needed to connect to an OLE DB
provider. For the complete syntax and semantics of the connection
string, see the ″Data Link API of the OLE DB Core Components″ in
the Microsoft OLE DB 2.0 Programmer’s Reference and Data Access SDK,
Microsoft Press, 1998.
You can use a connection string in the EXTERNAL NAME clause of a CREATE
FUNCTION statement, or specify the CONNECTSTRING option in a CREATE
SERVER statement.
Instead of putting the connection string in the EXTERNAL NAME clause, you
can create and use a server name. For example, assuming you have defined
the server Nwind as described in “Defining a Server Name for an OLE DB
Provider” on page 427, you could use the following CREATE FUNCTION
statement:
CREATE FUNCTION orders ()
RETURNS TABLE (orderid INTEGER, ...)
LANGUAGE OLEDB
EXTERNAL NAME 'Nwind!orders';
OLE DB table functions also allow you to specify one input parameter of any
character string data type. Use the input parameter to pass command text
directly to the OLE DB provider. If you define an input parameter, do not
provide a rowset name in the EXTERNAL NAME clause. DB2 passes the
command text to the OLE DB provider for execution and the OLE DB
provider returns a rowset to DB2. Column names and data types of the
resulting rowset need to be compatible with the RETURNS TABLE definition
in the CREATE FUNCTION statement. Since binding to the column names of
the rowset is based on matching column names, you must ensure that you
name the columns properly.
For example:
SELECT *
FROM TABLE (favorites (' select top 3 sales.stor_id as store_id, ' ||
' stores.stor_name as name, ' ||
' sum(sales. qty) as sales ' ||
' from sales, stores ' ||
' where sales.stor_id = stores.stor_id ' ||
' group by sales.stor_id, stores.stor_name ' ||
' order by sum(sales.qty) desc')) as f;
If the names contain special characters or match keywords, enclose the names
in the quote characters specified for your OLE DB provider. The quote
characters are defined in the literal information of your OLE DB provider as
DBLITERAL_QUOTE_PREFIX and DBLITERAL_QUOTE_SUFFIX. For example, in the
following EXTERNAL NAME the specified rowset includes catalog name pubs
and schema name dbo for a rowset called authors, with the quote character "
used to enclose the names.
To provide a server name for an OLE DB data source that you can use for
many CREATE FUNCTION statements, use the CREATE SERVER statement
as follows:
v provide a name that identifies the OLE DB provider within DB2
v specify WRAPPER OLEDB
v provide connection information in the CONNECTSTRING option
For example, you can define the server name Nwind for the Microsoft Access
OLE DB provider with the following CREATE SERVER statement:
CREATE SERVER Nwind
WRAPPER OLEDB
OPTIONS (CONNECTSTRING 'Provider=Microsoft.Jet.OLEDB.3.51;
Data Source=c:\msdasdk\bin\oledb\nwind.mdb');
You can then use the server name Nwind to identify the OLE DB provider in a
CREATE FUNCTION statement, for example:
CREATE FUNCTION orders ()
RETURNS TABLE (orderid INTEGER, ...)
LANGUAGE OLEDB
EXTERNAL NAME 'Nwind!orders';
For the complete syntax of the CREATE SERVER statement, refer to the SQL
Reference. For information on user mappings for OLE DB providers, see
“Defining a User Mapping”.
Defining a User Mapping
You can provide user mappings for your DB2 users to provide access to OLE
DB data sources with an alternate username and password. To map
usernames for specific users, you can define user mappings with the CREATE
USER MAPPING statement. To provide a user mapping shared by all users,
add the username and password to the connection string of your CREATE
FUNCTION or CREATE SERVER statement. For example, to create a specific
user mapping for the DB2 user JOHN on the OLE DB server Nwind, use the
following CREATE USER MAPPING statement:
CREATE USER MAPPING FOR JOHN SERVER Nwind
OPTIONS (REMOTE_AUTHID 'dave', REMOTE_PASSWORD 'mypwd');
To provide the equivalent access to all of the DB2 users that call the OLE DB
table function orders, use the following CONNECTSTRING either in a
CREATE FUNCTION or CREATE SERVER statement:
CREATE FUNCTION orders ()
RETURNS TABLE (orderid INTEGER, ...)
LANGUAGE OLEDB
EXTERNAL NAME '!orders!Provider=Microsoft.Jet.OLEDB.3.51;User ID=dave;
Password=mypwd;Data Source=c:\msdasdk\bin\oledb\nwind.mdb';
For the complete syntax of the CREATE USER MAPPING statement, refer to
the SQL Reference.
Supported OLE DB Data Types
The following table shows how DB2 data types map to the OLE DB data
types described in Microsoft OLE DB 2.0 Programmer’s Reference and Data Access
SDK, Microsoft Press, 1998. Use the mapping table to define the appropriate
RETURNS TABLE columns in your OLE DB table functions. For example, if
you define an OLE DB table function with a column of data type INTEGER,
DB2 requests the data from the OLE DB provider as DBTYPE_I4.
For mappings of OLE DB provider source data types to OLE DB data types,
refer to the OLE DB provider documentation. For examples of how the ANSI
SQL, Microsoft Access, and Microsoft SQL Server providers might map their
respective data types to OLE DB data types, refer to the Microsoft OLE DB 2.0
Programmer’s Reference and Data Access SDK, Microsoft Press, 1998.
Note: OLE DB data type conversion rules are defined in the Microsoft OLE DB
2.0 Programmer’s Reference and Data Access SDK, Microsoft Press, 1998.
For example:
v To retrieve the OLE DB data type DBTYPE_CY, the data may get
converted to OLE DB data type DBTYPE_NUMERIC(19,4) which
maps to DB2 data type DEC(19,4).
v To retrieve the OLE DB data type DBTYPE_I1, the data may get
converted to OLE DB data type DBTYPE_I2 which maps to DB2 data
type SMALLINT.
v To retrieve the OLE DB data type DBTYPE_GUID, the data may get
converted to OLE DB data type DBTYPE_BYTES which maps to DB2
data type CHAR(12) FOR BIT DATA.
This statement returns all the documents containing the particular text
string value represented by the first argument. What match would like to
do is:
v First time only.
Retrieve a list of all the document IDs which contain the string
myocardial infarction from the document application which is
maintained outside of DB2. This retrieval is a costly process, so the
function would like to do it only one time, and save the list somewhere
handy for subsequent calls.
v On each call.
Return the next document ID from the list saved on the first call.
Both of these needs are met by the ability to specify a SCRATCHPAD in the
CREATE FUNCTION statement.
So for the counter example, the last value returned could be kept in the
scratchpad. And the match example could keep the list of documents in the
scratchpad if the scratchpad is big enough, or otherwise could allocate
memory for the list and keep the address of the acquired memory in the
scratchpad.
Because it is recognized that a UDF may want to acquire system resources, the
UDF can be defined with the FINAL CALL keyword. This keyword tells DB2
to call the UDF at end-of-statement processing so that the UDF can release its
system resources. In particular, since the scratchpad is of fixed size, the UDF
may want to allocate memory for itself and thus uses the final call to free the
memory. For example the match function above cannot predict how many
documents will match the given text string. So a better definition for match
includes the FINAL CALL keyword as well as SCRATCHPAD.
Note that for UDFs that use a scratchpad and are referenced in a subquery,
DB2 may decide to make a final call (if the UDF is so specified) and refresh
the scratchpad between invocations of the subquery. You can protect yourself
by coding the UDF so that it can cope with a scratchpad that has been
reinitialized.
If you do specify FINAL CALL, please note that your UDF receives a call of
type FIRST. This could be used to acquire and initialize some persistent
resource.
Note: This model describes the ordinary error processing for scalar UDFs. In
the event of a system failure or communication problem, a call
indicated by the error processing model may not be made. For
example, for a FENCED UDF, if the db2udf fenced process is somehow
prematurely terminated, DB2 cannot make the indicated calls.
The error processing model for table functions is defined in “Table Function
Considerations” on page 432 section of this chapter.
Do not modify the locator values as this makes them unusable, and the APIs
will return errors.
These special APIs can be used only in UDFs that are defined as NOT
FENCED. This implies that UDFs still in the test phase should not be used
on a production database, because a UDF with bugs could harm the system.
While the UDF operates on a test database, no lasting harm can result if it
has bugs. When the UDF is known to be free of errors, it can then be
applied to the production database.
The APIs which follow are defined using the function prototypes contained in
the sqludf.h UDF include file.
Return codes. Interpret the return code passed back to the UDF by DB2 for
each API as follows:
0 Success.
-1 Locator passed to the API was freed by sqludf_free_locator() prior
to making the call.
-2 Call was attempted in FENCED mode UDF.
-3 Bad input value was provided to the API. For examples of bad input
values specific to each API, see its description below.
other Invalid locator or other error (for example, memory error). The value
that is returned for these cases is the SQLCODE corresponding to the
error condition. For example, -423 means invalid locator. Please note
that before returning to the UDF with one of these ″other″ codes, DB2
makes a judgment as to the severity of the error. For severe errors,
DB2 remembers that the error occurred, and when the UDF returns to
DB2, regardless of whether the UDF returns an error SQLSTATE to
DB2, DB2 takes action appropriate for the error condition. For
non-severe errors, DB2 forgets that the error has occurred, and leaves
it up to the UDF to decide whether it can take corrective action, or
return an error SQLSTATE to DB2.
v sqludf_length().
Given a LOB locator, it returns the length of the LOB value represented by
the locator. The locator in question is generally a locator passed to the UDF
by DB2, but could be a locator representing a result value being built (using
sqludf_append()) by the UDF.
Typically, a UDF uses this API when it wants to find out the length of a
LOB value when it receives a locator.
A return code of -3 may indicate:
– udfloc_p (address of locator) is zero
– return_len_p (address of where to put length) is zero
v sqludf_substr()
Given a LOB locator, a beginning position within the LOB, a desired length,
and a pointer to a buffer, this API places the bytes into the buffer and
returns the number of bytes it was able to move. (Obviously the UDF must
provide a buffer large enough for the desired length.) The number of bytes
moved could be shorter than the desired length, for example if you request
50 bytes beginning at position 101 and the LOB value is only 120 bytes
long, the API will move only 20 bytes.
Typically, this is the API that a UDF uses when it wants to see the bytes of
the LOB value, when it receives a locator.
Obviously, with both FENCED and NOT FENCED UDFs, you should:
– ensure the UDF is robustly written
– subject the UDF to a rigorous design and code review
– test the UDF in an environment where no harm can be done if it is not
correctly written; for example, a test database.
Most abends caused by a UDF are caught by DB2, which returns a -430
SQLCODE and prevents the database from being corrupted. However,
certain types of UDF misbehavior, including a massive overwrite of a return
value buffer, can cause DB2 to fail as well as the UDF. Pay attention
particularly to any UDF which returns variable-length data, or which
calculates how many bytes it must move to the return value buffer.
v For considerations on using UDFs with EUC code sets, see “Considerations
for UDFs” on page 515.
The extern "C" prevents type decoration (or ‘mangling’) of the function
name by the C++ compiler. Without this declaration, you have to include
all the type decoration for the function name when you issue the CREATE
FUNCTION statement.
For information on where to find all the examples supplied, and how to
invoke them, see “Appendix B. Sample Programs” on page 729.
After populating the table, issue the following statement using CLP to display
its contents:
SELECT INT1, INT2, PART, SUBSTR(DESCR,1,50) FROM TEST
Note the use of the SUBSTR function on the CLOB column to make the
output more readable. You receive the following CLP output:
INT1 INT2 PART 4
----------- ----------- ----- --------------------------------------------------
16 1 brain The only part of the body capable of forgetting.
8 2 heart The seat of the emotions?
4 4 elbow That bendy place in mid-arm.
2 0 - -
97 16 xxxxx Unknown.
5 record(s) selected.
Refer to the previous information on table TEST as you read the examples and
scenarios which follow.
Example: Integer Divide Operator
Suppose you are unhappy with the way integer divide works in DB2 because
it returns an error, SQLCODE -802 (SQLSTATE 22003), and terminates the
statement when the divisor is zero. (Note that if you enable friendly arithmetic
with the DFT_SQLMATHWARN configuration parameter, DB2 returns a
NULL instead of an error in this situation.) Instead, you want the integer
divide to return a NULL, so you code this UDF:
/*************************************************************************
* function divid: performs integer divide, but unlike the / operator
* shipped with the product, gives NULL when the
* denominator is zero.
*
* This function does not use the constructs defined in the
* "sqludf.h" header file.
*
* inputs: INTEGER num numerator
* INTEGER denom denominator
* output: INTEGER out answer
**************************************************************************/
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN divid (
sqlint32 *num, /* numerator */
sqlint32 *denom, /* denominator */
sqlint32 *out, /* output result */
short *in1null, /* input 1 NULL indicator */
short *in2null, /* input 2 NULL indicator */
short *outnull, /* output NULL indicator */
char *sqlstate, /* SQL STATE */
char *funcname, /* function name */
char *specname, /* specific function name */
char *mesgtext) { /* message text insert */

  if (*denom == 0) {
    /* denominator is zero, so return a NULL result instead of an error */
    *outnull = -1;
  } else {
    *out = *num / *denom;
    *outnull = 0;
  }
}
(This statement is for an AIX version of this UDF. For other platforms, you
may need to modify the value specified in the EXTERNAL NAME clause.)
Now if you run the following pair of statements (CLP input is shown):
You get this output from CLP (if you do not enable friendly arithmetic with the
database configuration parameter DFT_SQLMATHWARN):
INT1 INT2 3 4
----------- ----------- ----------- -----------
16 1 16 16
The SQL0802N error message occurs because you have set your CURRENT
FUNCTION PATH special register to a concatenation of schemas which does
not include MATH, the schema in which the "/" UDF is defined. And
therefore you are executing DB2’s built-in divide operator, whose defined
behavior is to give the error when a ″divide by zero″ condition occurs. The
fourth row in the TEST table provides this condition.
However, if you change the function path, putting MATH in front of SYSIBM
in the path, and rerun the SELECT statement:
You then get the desired behavior, as shown by the following CLP output:
INT1 INT2 3 4
----------- ----------- ----------- -----------
16 1 16 16
8 2 4 4
4 4 1 1
2 0 - -
97 16 6 6
5 record(s) selected.
Even though three more UDFs are added, no additional code has to be
written, because they are sourced on MATH."/".
And now, with the definition of these four "/" functions, any users who
want to take advantage of the new behavior on integer divide need only
place MATH ahead of SYSIBM in their function path, and can write their
SQL as usual.
While the preceding example does not consider the BIGINT data type, you
can easily extend the example to include BIGINT.
Example: Fold the CLOB, Find the Vowel
Suppose you have coded up two UDFs to help you with your text handling
application. The first UDF folds your text string after the nth byte. In this
example, fold means to put the part that was originally after the n byte before
the part that was originally in front of the n+1 byte. In other words, the UDF
moves the first n bytes from the beginning of the string to the end of the
string. The second function returns the position of the first vowel in the text
string. Both of these functions are coded in the udf.c example file:
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <sqludf.h>
#include <sqlca.h>
#include <sqlda.h>
#include "util.h"
/*************************************************************************
* function fold: input string is folded at the point indicated by the
* second argument.
*
SQLUDF_INTEGER len1;
if (SQLUDF_NULL(in1null) || SQLUDF_NULL(in2null)) {
/* one of the arguments is NULL. The result is then "INVALID INPUT" */
strcpy( ( char * ) out->data, "INVALID INPUT" ) ;
out->length = strlen("INVALID INPUT");
} else {
len1 = in1->length; /* length of the CLOB */
/*************************************************************************
* function findvwl: returns the position of the first vowel.
* returns an error if no vowel is found
* when the function is created, must be defined as
* NOT NULL CALL.
* inputs: VARCHAR(500) in
* output: INTEGER out
**************************************************************************/
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN findvwl (
SQLUDF_VARCHAR *in, /* input character string */
SQLUDF_SMALLINT *out, /* output location of vowel */
SQLUDF_NULLIND *innull, /* input NULL indicator */
SQLUDF_NULLIND *outnull, /* output NULL indicator */
SQLUDF_TRAIL_ARGS) { /* trailing arguments */
Note the use of the SUBSTR built-in function to make the selected CLOB
values display more nicely. It shows how the output is folded (best seen in
the second, third and fifth rows, which have a shorter CLOB value than the
first row, and thus the folding is more evident even with the use of SUBSTR).
And it shows (fourth row) how the INVALID INPUT string is returned by the
FOLD UDF when its input text string (column DESCR) is null. This SELECT
also shows simple nesting of function references; the reference to FOLD is
within an argument of the SUBSTR function reference.
This example shows how the 38999 SQLSTATE value and error message token
returned by findvwl() are handled: message SQL0443N returns this
information to the user. The PART column in the fifth row contains no vowel,
and this is the condition which triggers the error in the UDF.

And finally note how DB2 has generated a null output from FINDV for the
fourth row, as a result of the NOT NULL CALL specification in the CREATE
statement for FINDV.
Example: Counter
Suppose you want to simply number the rows in your SELECT statement. So
you write a UDF which increments and returns a counter. This UDF uses a
scratchpad:
/* structure scr defines the passed scratchpad for the function "ctr" */
struct scr {
sqlint32 len;
sqlint32 countr;
char not_used[96];
} ;
/*************************************************************************
* function ctr: increments and reports the value from the scratchpad.
*
* This function does not use the constructs defined in the
* "sqludf.h" header file.
*
* input: NONE
* output: INTEGER out the value from the scratchpad
**************************************************************************/
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN ctr (
sqlint32 *out, /* output answer (counter) */
short *outnull, /* output NULL indicator */
char *sqlstate, /* SQL STATE */
char *funcname, /* function name */
char *specname, /* specific function name */
char *mesgtext, /* message text insert */
struct scr *scratchptr) { /* scratch pad */
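The body of ctr simply increments the counter kept in the scratchpad. Because DB2 zeroes the scratchpad before the first call, no explicit first-call test is needed. A minimal sketch of that logic, written as a plain C helper:

```c
#include <stdint.h>

struct scr {
    int32_t len;
    int32_t countr;
    char    not_used[96];
};

/* The essential body of ctr: DB2 presents the same (initially zeroed)
 * scratchpad on every call, so incrementing countr numbers the rows. */
int32_t ctr_body(struct scr *scratchptr)
{
    scratchptr->countr += 1;    /* increment the persistent counter */
    return scratchptr->countr;  /* this value becomes the UDF result */
}
```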
(This statement is for an AIX version of this UDF. For other platforms, you
may need to modify the value specified in the EXTERNAL NAME clause.)
INT1 2 3
----------- ----------- -----------
16 1 16
8 2 4
4 3 1
2 4 0
97 5 19
5 record(s) selected.
Observe that the second column shows the straight COUNTER() output. The
third column shows that the two separate references to COUNTER() in the
SELECT statement each get their own scratchpad; had they not each gotten
their own, the output in the second column would have been 1 3 5 7 9,
instead of the nice orderly 1 2 3 4 5.
Example: Weather Table Function
The following is an example table function, tfweather_u, (supplied by DB2 in
the programming example tblsrv.c), that returns weather information for
/* Scratchpad data */
/* Preserve information from one function call to the next call */
typedef struct {
/* FILE * file_ptr; if you use weather data text file */
int file_pos ; /* if you use a weather data buffer */
} scratch_area ;
#ifdef __cplusplus
extern "C"
#endif
/* This is a subroutine. */
/* Find a full city name using a short name */
int get_name( char * short_name, char * long_name ) {
int name_pos = 0 ;
#ifdef __cplusplus
extern "C"
#endif
/* This is a subroutine. */
/* Clean all field data and field null indicator data */
int clean_fields( int field_pos ) {
while ( fields[field_pos].fld_length != 0 ) {
memset( fields[field_pos].fld_field, '\0', 31 ) ;
fields[field_pos].fld_ind = SQL_ISNULL ;
field_pos++ ;
}
return( 0 ) ;
}
#ifdef __cplusplus
extern "C"
#endif
/* This is a subroutine. */
/* Fills all field data and field null indicator data ... */
/* ... from text weather data */
int get_value( char * value, int field_pos ) {
fld_desc * field ;
char field_buf[31] ;
double * double_ptr ;
int * int_ptr, buf_pos ;
while ( fields[field_pos].fld_length != 0 ) {
field = &fields[field_pos] ;
memset( field_buf, '\0', 31 ) ;
memcpy( field_buf,
( value + field->fld_offset ),
field->fld_length ) ;
buf_pos = field->fld_length - 1 ;
while ( ( buf_pos > 0 ) &&
( field_buf[buf_pos] == ' ' ) )
field_buf[buf_pos--] = '\0' ;
buf_pos = 0 ;
while ( ( buf_pos < field->fld_length ) &&
( field_buf[buf_pos] == ' ' ) )
buf_pos++ ;
if ( strlen( ( char * ) ( field_buf + buf_pos ) ) > 0 &&
strcmp( ( char * ) ( field_buf + buf_pos ), "n/a") != 0 ) {
field->fld_ind = SQL_NOTNULL ;
}
field_pos++ ;
}
return( 0 ) ;
}
#ifdef __cplusplus
extern "C"
#endif
void SQL_API_FN weather( /* Return row fields */
SQLUDF_VARCHAR * city,
SQLUDF_INTEGER * temp_in_f,
SQLUDF_INTEGER * humidity,
SQLUDF_VARCHAR * wind,
SQLUDF_INTEGER * wind_velocity,
SQLUDF_DOUBLE * barometer,
SQLUDF_VARCHAR * forecast,
/* You may want to add more fields here */
scratch_area * save_area ;
char line_buf[81] ;
int line_buf_pos ;
break ;
}
memset( line_buf, '\0', 81 ) ;
strcpy( line_buf, weather_data[save_area->file_pos] ) ;
line_buf[3] = '\0' ;
break ;
/* Special last call UDF for cleanup (no real args!): Close table */
case SQL_TF_CLOSE:
/* If you use a weather data text file */
/* fclose(save_area->file_ptr); */
/* save_area->file_ptr = NULL; */
The above CREATE FUNCTION statement is for a UNIX version of this UDF.
For other platforms, you may need to modify the value specified in the
EXTERNAL NAME clause.
/* local vars */
short j; /* local indexing var */
int rc; /* return code variable for API calls */
sqlint32 input_len; /* receiver for input LOB length */
sqlint32 input_pos; /* current position for scanning input LOB */
char lob_buf[100]; /* data buffer */
sqlint32 input_rec; /* number of bytes read by sqludf_substr */
sqlint32 output_rec; /* number of bytes written by sqludf_append */
/*---------------------------------------------
rc = sqludf_create_locator(SQL_TYP_CLOB, &lob_output);
/* Error and exit if unable to create locator */
if (rc) {
memcpy (sqlstate, "38901", 5);
/* special sqlstate for this condition */
goto exit;
}
/* Find out the size of the input LOB value */
rc = sqludf_length(lob_input, &input_len) ;
/* Error and exit if unable to find out length */
if (rc) {
memcpy (sqlstate, "38902", 5);
/* special sqlstate for this condition */
goto exit;
}
/* Loop to read next 100 bytes, and append to result if it meets
* the criteria.
*/
for (input_pos = 0; (input_pos < input_len); input_pos += 100) {
/* Read the next 100 (or less) bytes of the input LOB value */
rc = sqludf_substr(lob_input, input_pos, 100,
(unsigned char *) lob_buf, &input_rec) ;
/* Error and exit if unable to read the segment */
if (rc) {
memcpy (sqlstate, "38903", 5);
/* special sqlstate for this condition */
goto exit;
}
/* apply the criteria for appending this segment to result
* if (...predicate involving buffer and criteria...) {
* The condition for retaining the segment is TRUE...
* Write that buffer segment which was last read in
*/
rc = sqludf_append(lob_output,
(unsigned char *) lob_buf, input_rec, &output_rec) ;
/* Error and exit if unable to read the 100 byte segment */
if (rc) {
memcpy (sqlstate, "38904", 5);
/* special sqlstate for this condition */
goto exit;
}
/* } end if criteria for inclusion met */
} /* end of for loop, processing 100-byte chunks of input LOB
* if we fall out of for loop, we are successful, and done.
*/
(This statement is for an AIX version of this UDF. For other platforms, you
may need to modify the value specified in the EXTERNAL NAME clause.)
UPDATE tablex
SET col_a = 99,
col_b = carve (:hv_clob, '...criteria...')
WHERE tablex_key = :hv_key;
The UDF is used to subset the CLOB value represented by the host variable
:hv_clob and update the row represented by key value in host variable
:hv_key.
Incidentally, in this update example :hv_clob may be defined in the
application as a CLOB_LOCATOR. It is not this same locator that is passed
to the "carve" UDF. When :hv_clob is "bound in" to the DB2 engine
agent running the statement, it is known only as a CLOB. When it is then
passed to the UDF, DB2 generates a new locator for the value. This conversion
back and forth between CLOB and locator is not expensive; it
does not involve any extra memory copies or I/O.
Example: Counter OLE Automation UDF in BASIC
The following example implements a counter class using Microsoft Visual
BASIC. The class has an instance variable, nbrOfInvoke, that tracks the
number of invocations. The constructor of the class initializes the number to 0.
The increment method increments nbrOfInvoke by 1 and returns the current
state.
nbrOfInvoke = nbrOfInvoke + 1
End Sub
End Sub
5 record(s) selected.
The COM CCounter class definition in C++ includes the declaration of the
increment method as well as nbrOfInvoke:
class FAR CCounter : public ICounter
{
...
STDMETHODIMP CCounter::increment(long *out,
short *outnull,
BSTR *sqlstate,
BSTR *fname,
BSTR *fspecname,
BSTR *msgtext,
SAFEARRAY **spad,
long *calltype );
long nbrOfInvoke;
...
};
return NOERROR;
};
In the above example, sqlstate and msgtext are [out] parameters of type
BSTR*, that is, DB2 passes a pointer to NULL to the UDF. To return values for
these parameters, the UDF allocates a string and returns it to DB2 (for
example, *sqlstate = SysAllocString (L"01H00")), and DB2 frees the
memory. The parameters fname and fspecname are [in] parameters. DB2
allocates the memory and passes in values which are read by the UDF, and
then DB2 frees the memory.
The class factory of the CCounter class creates counter objects. You can
register the class factory as a single-use or multi-use object (not shown in this
example).
STDMETHODIMP CCounterCF::CreateInstance(IUnknown FAR* punkOuter,
REFIID riid,
void FAR* FAR* ppv)
{
CCounter *pObj;
...
// create a new counter object
pObj = new CCounter;
...
};
While processing the following query, DB2 creates two different instances of
class CCounter. An instance is created for each UDF reference in the query.
5 record(s) selected.
MySession.Logon ProfileName:="Profile1"
Set MyMsgColl = MySession.Inbox.Messages
MySession.Logoff
Set MySession = Nothing
Else
sqlstate = "02000"
Else
timereceived = MyMsg.timereceived
subject = Left(MyMsg.subject, 15)
size = MyMsg.size
text = Left(MyMsg.text, 30)
End If
End If
End Sub
On the table function OPEN call, the CreateObject statement creates a mail
session, and the logon method logs on to the mail system (user name and
password issues are neglected). The message collection of the mail inbox is
used to retrieve the first message. On the FETCH calls, the message header
information and the first 30 characters of the current message are assigned to
the table function output parameters. If no messages are left, SQLSTATE 02000
is returned. On the CLOSE call, the example logs off and sets the session
object to nothing, which releases all the system and memory resources
associated with the previously referenced object when no other variable refers
to it.
3 record(s) selected.
DB2 does check for certain limited kinds of actions that erroneously modify
storage (for example, a UDF that moves a few too many characters to a
scratchpad or to the result buffer), and returns an error, SQLCODE -450
(SQLSTATE 39501), if it detects such a malfunction. DB2 is also designed to
fail gracefully in the event of an abnormal termination of a UDF, with
SQLCODE -430 (SQLSTATE 38503), or a user interrupt of the UDF, with
SQLCODE -431 (SQLSTATE 38504).
Note that valuable debugging tools such as printf() do not normally work as
debugging aids for your UDF, because the UDF normally runs in a
background process where stdout has no meaning. As an alternative to using
printf(), it may be possible for you to instrument your UDF with file output
logic, and for debugging purposes write indicative data and control
information to a file.
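For example, a UDF might be instrumented with a small append-only trace helper such as the following sketch (the log path is an assumption; choose a location that the UDF process can write to on your platform):

```c
#include <stdio.h>
#include <stdarg.h>

/* Append one formatted line to a trace file.  A debugging aid only;
 * the path "/tmp/udf_trace.log" is an assumption for illustration. */
void udf_trace(const char *fmt, ...)
{
    FILE *fp = fopen("/tmp/udf_trace.log", "a");
    if (fp == NULL)
        return;           /* never fail the UDF because tracing failed */
    va_list ap;
    va_start(ap, fmt);
    vfprintf(fp, fmt, ap);
    va_end(ap);
    fputc('\n', fp);
    fclose(fp);
}
```

Opening and closing the file on every call is slow, but it ensures the trace survives an abnormal termination of the UDF process.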
You can use triggers to support general forms of integrity such as business
rules. For example, your business may wish to refuse orders that exceed its
customers’ credit limit. A trigger can be used to enforce this constraint. In
general, triggers are powerful mechanisms to capture transitional business
rules. Transitional business rules are rules that involve different states of the
data.
For example, suppose a salary cannot be increased by more than 10 per cent.
To check this rule, the value of the salary before and after the increase must
be compared. For rules that do not involve more than one state of the data,
check and referential integrity constraints may be more appropriate (refer to
the SQL Reference for more information). Because of the declarative semantics
of check and referential constraints, their use is recommended for constraints
that are not transitional.
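The salary rule above might be enforced with a trigger along these lines (a sketch; the trigger name, SQLSTATE value, and message text are illustrative):

```sql
CREATE TRIGGER SALARY_RAISE_LIMIT
  AFTER UPDATE OF SALARY ON EMPLOYEE
  REFERENCING OLD AS O NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN (N.SALARY > O.SALARY * 1.10)
  SIGNAL SQLSTATE '75001' ('Salary increase exceeds 10%')
```

Note how the rule compares the OLD and NEW values of the same row, which is exactly what declarative check constraints cannot do.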
You can also use triggers for tasks such as automatically updating summary
data. By keeping these actions as a part of the database and ensuring that
they occur automatically, triggers enhance database integrity. For example,
suppose you want to automatically track the number of employees managed
by a company.
You must also define the action, called the triggered action, that the trigger
performs when its trigger event occurs. The triggered action consists of one or
more SQL statements which can execute either before or after the database
manager performs the trigger event. Once a trigger event occurs, the database
manager determines the set of rows in the subject table that the update
operation affects and executes the trigger.
When you create a trigger, you declare the following attributes and behavior:
v The name of the trigger.
v The name of the subject table.
v The trigger activation time (BEFORE or AFTER the update operation
executes).
v The trigger event (INSERT, DELETE, or UPDATE).
v The old values transition variable, if any.
v The new values transition variable, if any.
v The old values transition table, if any.
v The new values transition table, if any.
v The granularity (FOR EACH STATEMENT or FOR EACH ROW).
v The triggered action of the trigger (including a triggered action condition
and triggered SQL statement(s)).
v If the trigger event is UPDATE, then the trigger column list for the trigger
event of the trigger, as well as an indication of whether the trigger column
list was explicit or implicit.
v The trigger creation timestamp.
v The current function path.
For more information on the CREATE TRIGGER statement, refer to the SQL
Reference.
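A trigger that counts newly hired employees, for instance, can be defined as follows (a sketch; the COMPANY_STATS table and its NBEMP column are assumed):

```sql
CREATE TRIGGER NEW_HIRE
  AFTER INSERT ON EMPLOYEE
  FOR EACH ROW MODE DB2SQL
  UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1
```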
The above statement defines the trigger new_hire, which activates when you
perform an insert operation on table employee.
You associate every trigger event, and consequently every trigger, with exactly
one subject table and exactly one update operation. The update operations
are:
Insert operation
An insert operation can only be caused by an INSERT statement.
Therefore, triggers are not activated when data is loaded using
utilities that do not use INSERT, such as the LOAD command.
Update operation
An update operation can be caused by an UPDATE statement or as a
result of a referential constraint rule of ON DELETE SET NULL.
Delete operation
A delete operation can be caused by a DELETE statement or as a
result of a referential constraint rule of ON DELETE CASCADE.
If the trigger event is an update operation, the event can be associated with
specific columns of the subject table. In this case, the trigger is only activated
if the update operation attempts to update any of the specified columns. This
provides a further refinement of the event that activates the trigger. For
example, the following trigger, REORDER, activates only if you perform an
update operation on the columns ON_HAND or MAX_STOCKED of the table PARTS:
CREATE TRIGGER REORDER
AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
REFERENCING NEW AS N_ROW
FOR EACH ROW MODE DB2SQL
WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED)
BEGIN ATOMIC
VALUES(ISSUE_SHIP_REQUEST(N_ROW.MAX_STOCKED -
N_ROW.ON_HAND,
N_ROW.PARTNO));
END
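For example, an update statement such as the following (hypothetical values) would activate REORDER:

```sql
UPDATE PARTS
   SET ON_HAND = ON_HAND + 35
 WHERE PART_NO > 15000
```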
The set of affected rows for the associated trigger contains all the rows in the
parts table whose part_no is greater than 15 000.
Trigger Granularity
When a trigger is activated, it runs according to its granularity as follows:
FOR EACH ROW
It runs as many times as the number of rows in the set of affected
rows.
FOR EACH STATEMENT
It runs once for the entire trigger event.
If the set of affected rows is empty (that is, in the case of a searched UPDATE
or DELETE in which the WHERE clause did not qualify any rows), a FOR
EACH ROW trigger does not run. But a FOR EACH STATEMENT trigger still
runs once.
For example, keeping a count of the number of employees can be done using
FOR EACH ROW.
CREATE TRIGGER NEW_HIRED
AFTER INSERT ON EMPLOYEE
FOR EACH ROW MODE DB2SQL
UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1
You can achieve the same effect with one update by using a granularity of
FOR EACH STATEMENT.
CREATE TRIGGER NEW_HIRED
AFTER INSERT ON EMPLOYEE
REFERENCING NEW_TABLE AS NEWEMPS
FOR EACH STATEMENT MODE DB2SQL
UPDATE COMPANY_STATS
SET NBEMP = NBEMP + (SELECT COUNT(*) FROM NEWEMPS)
If the activation time is BEFORE, the triggered actions are activated for each
row in the set of affected rows before the trigger event executes. Note that
BEFORE triggers must have a granularity of FOR EACH ROW.
If the activation time is AFTER, the triggered actions are activated for each
row in the set of affected rows or for the statement, depending on the trigger
granularity. This occurs after the trigger event executes, and after the database
manager checks all constraints that the trigger event may affect, including
actions of referential constraints. Note that AFTER triggers can have a
granularity of either FOR EACH ROW or FOR EACH STATEMENT.
BEFORE triggers are not used to further modify the database because they
are activated before the trigger event is applied to the database. Consequently,
they are activated before integrity constraints are checked, and those
constraints may subsequently be violated by the trigger event.
Transition Variables
In a FOR EACH ROW trigger, it may be necessary to refer to the values of
columns of the row in the set of affected rows for which the trigger is
currently executing. Note that to refer to columns in tables in the
database (including the subject table), you can use regular SELECT statements.
A FOR EACH ROW trigger may refer to the columns of the row for which it
is currently executing by using two transition variables that you can specify in
the REFERENCING clause of a CREATE TRIGGER statement. There are two
kinds of transition variables, which are specified as OLD and NEW, together
with a correlation-name. They have the following semantics:
OLD correlation-name
Specifies a correlation name which captures the original state of the
row, that is, before the triggered action is applied to the database.
NEW correlation-name
Specifies a correlation name which captures the value that is, or was,
used to update the row in the database when the triggered action is
applied to the database.
Note: Transition variables can only be specified for FOR EACH ROW triggers.
In a FOR EACH STATEMENT trigger, a reference to a transition
variable is not sufficient to specify to which of the several rows in the
set of affected rows the transition variable is referring.
Transition Tables
In both FOR EACH ROW and FOR EACH STATEMENT triggers, it may be
necessary to refer to the whole set of affected rows. This is necessary, for
example, if the trigger body needs to apply aggregations over the set of
affected rows (for example, MAX, MIN, or AVG of some column values). A
trigger may refer to the set of affected rows by using two transition tables that
can be specified in the REFERENCING clause of a CREATE TRIGGER
statement. Just like the transition variables, there are two kinds of transition
tables, which are specified as OLD_TABLE and NEW_TABLE together with a
table-name, with the following semantics:
OLD_TABLE table-name
Specifies the name of the table which captures the original state of the
set of affected rows (that is, before the triggering SQL operation is
applied to the database).
NEW_TABLE table-name
Specifies the name of the table which captures the value that is used
to update the rows in the database when the triggered action is
applied to the database.
For example:
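A statement trigger that aggregates over the new values might look like this (a sketch; the threshold and the SQLSTATE value are illustrative):

```sql
CREATE TRIGGER AVG_SALARY_CHECK
  AFTER UPDATE OF SALARY ON EMPLOYEE
  REFERENCING NEW_TABLE AS NEWEMPS
  FOR EACH STATEMENT MODE DB2SQL
  WHEN ((SELECT AVG(SALARY) FROM NEWEMPS) > 50000)
  SIGNAL SQLSTATE '75002' ('Average updated salary exceeds 50000')
```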
Note that NEW_TABLE always has the full set of updated rows, even on a
FOR EACH ROW trigger. When a trigger acts on the table on which the
trigger is defined, NEW_TABLE contains the changed rows from the
statement that activated the trigger. However, NEW_TABLE does not contain
the changed rows that were caused by statements within the trigger, as that
would cause a separate activation of the trigger.
The transition tables are read-only. The same rules that determine which
kinds of transition variables can be defined for each trigger event also
apply to transition tables:
UPDATE
An UPDATE trigger can refer to both OLD_TABLE and NEW_TABLE
transition tables.
INSERT
An INSERT trigger can only refer to a NEW_TABLE transition table
because before the activation of the INSERT operation the affected
rows do not exist in the database. That is, there is no original state of
the rows that defines old values before the triggered action is applied
to the database.
DELETE
A DELETE trigger can only refer to an OLD_TABLE transition table because
there are no new values specified in the delete operation.
Note: It is important to observe that transition tables can be specified for both
granularities of AFTER triggers: FOR EACH ROW and FOR EACH
STATEMENT.
The scope of the OLD_TABLE and NEW_TABLE table-name is the trigger body. In
this scope, this name takes precedence over the name of any other table with
the same unqualified table-name that may exist in the schema. Therefore, if the
OLD_TABLE or NEW_TABLE table-name is, for example, X, a reference to X (that is,
an unqualified X) in the FROM clause of a SELECT statement will always
refer to the transition table, even if there is a table named X in the schema.
Triggered Action
The activation of a trigger results in the running of its associated triggered
action. Every trigger has exactly one triggered action which, in turn, has two
components:
v An optional triggered action condition or WHEN clause
v A set of triggered SQL statement(s).
The triggered action condition defines whether or not the set of triggered
statements are performed for the row or for the statement for which the
triggered action is executing. The set of triggered statements define the set of
actions performed by the trigger in the database as a consequence of its event
having occurred.
For example, the following trigger action specifies that the set of triggered
SQL statements should only be activated for rows in which the value of the
on_hand column is less than ten per cent of the value of the max_stocked
column. In this case, the set of triggered SQL statements is the invocation of
the issue_ship_request function.
CREATE TRIGGER REORDER
AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
REFERENCING NEW AS N_ROW
FOR EACH ROW MODE DB2SQL
The triggered action condition is evaluated once for each row if the trigger is
a FOR EACH ROW trigger, and once for the statement if the trigger is a FOR
EACH STATEMENT trigger.
This clause provides further control that you can use to fine tune the actions
activated on behalf of a trigger. An example of the usefulness of the WHEN
In most cases, if any triggered SQL statement returns a negative return code,
the triggering SQL statement together with all trigger and referential
constraint actions are rolled back, and an error is returned: SQLCODE -723
(SQLSTATE 09000). The trigger name, SQLCODE, SQLSTATE and many of the
tokens from the failing triggered SQL statement are returned. Error conditions
occurring when triggers are running that are critical or roll back the entire
unit of work are not returned using SQLCODE -723 (SQLSTATE 09000).
Functions Within SQL Triggered Statement
Functions, including user-defined functions (UDFs), may be invoked within a
triggered SQL statement. Consider the following example:
CREATE TRIGGER REORDER
AFTER UPDATE OF ON_HAND, MAX_STOCKED ON PARTS
REFERENCING NEW AS N_ROW
FOR EACH ROW MODE DB2SQL
WHEN (N_ROW.ON_HAND < 0.10 * N_ROW.MAX_STOCKED)
BEGIN ATOMIC
VALUES (ISSUE_SHIP_REQUEST (N_ROW.MAX_STOCKED - N_ROW.ON_HAND,
N_ROW.PARTNO));
END
UDFs are written in either the C or C++ programming language. This enables
control of logic flows, error handling and recovery, and access to system and
library functions. (See “Chapter 15. Writing User-Defined Functions (UDFs)
and Methods” on page 385 for a description of UDFs.) This capability allows a
triggered action to perform non-SQL types of operations when a trigger is
activated. For example, such a UDF could send an electronic mail message
and thereby act as an alert mechanism. External actions, such as messages, are
not under commit control and will be run regardless of success or failure of
the rest of the triggered actions.
For example, consider some rules related to the HIREDATE column of the
EMPLOYEE table, where HIREDATE is the date that the employee starts
working.
v HIREDATE must be the date of insert or a future date.
v HIREDATE cannot be more than one year from the date of insert.
v If HIREDATE is between 6 and 12 months from the date of insert, notify the
personnel manager using a UDF called send_note.
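The two date-range rules might be enforced by a trigger along these lines (a sketch; the trigger name, SQLSTATE value, and message text are illustrative):

```sql
CREATE TRIGGER CHECK_HIREDATE
  NO CASCADE BEFORE INSERT ON EMPLOYEE
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  WHEN (N.HIREDATE < CURRENT DATE OR
        N.HIREDATE > CURRENT DATE + 1 YEAR)
  SIGNAL SQLSTATE '75003' ('HIREDATE must fall within one year of today')
```

The notification rule could be a second BEFORE trigger whose WHEN clause selects the 6-to-12-month window and whose triggered statement invokes send_note.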
Trigger Cascading
When you run a triggered SQL statement, it may cause the event of another,
or even the same, trigger to occur, which in turn, causes the other, (or a
second instance of the same) trigger to be activated. Therefore, activating a
trigger can cascade the activation of one or more other triggers.
The above triggers are activated when you run an INSERT operation on the
employee table. In this case, the timestamp of their creation defines which of
the above two triggers is activated first.
Notice that the queries do not extract information once and store it explicitly
as columns of tables. Doing so would increase the performance of
the queries, not only because the UDFs would not be invoked repeatedly, but also
because you could then define indexes on the extracted information.
Using triggers, you can extract this information whenever new electronic mail
is stored in the database. To achieve this, add new columns to the
ELECTRONIC_MAIL table and define a BEFORE trigger to extract the
corresponding information as follows:
ALTER TABLE ELECTRONIC_MAIL
ADD COLUMN SENDER VARCHAR (200)
ADD COLUMN RECEIVER VARCHAR (200)
ADD COLUMN SENT_ON DATE
ADD COLUMN SUBJECT VARCHAR (200)
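The BEFORE trigger might then be sketched as follows (SENDER, RECEIVER, SENDING_DATE, and SUBJECT are assumed to be the UDFs that parse the message, and MESSAGE is assumed to be the column holding the e-mail text; all of these names are illustrative):

```sql
CREATE TRIGGER EXTRACT_INFO
  NO CASCADE BEFORE INSERT ON ELECTRONIC_MAIL
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  BEGIN ATOMIC
    SET N.SENDER   = SENDER(N.MESSAGE);
    SET N.RECEIVER = RECEIVER(N.MESSAGE);
    SET N.SENT_ON  = SENDING_DATE(N.MESSAGE);
    SET N.SUBJECT  = SUBJECT(N.MESSAGE);
  END
```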
Now, whenever new electronic mail is inserted into the message column, its
sender, its receiver, the date on which it was sent, and its subject are extracted
from the message and stored in separate columns.
Preventing Operations on Tables
Suppose you want to prevent mail you sent, which was undelivered and
returned to you (perhaps because the e-mail address was incorrect), from
being stored in the ELECTRONIC_MAIL table.
Defining Actions
Now assume that your general manager wants to keep the names of
customers who have sent three or more complaints in the last 72 hours in a
separate table. The general manager also wants to be informed whenever a
customer name is inserted in this table more than once.
Collating Sequences
The database manager compares character data using a collating sequence. This
is an ordering for a set of characters that determines whether a particular
character sorts higher, lower, or the same as another.
Note: Character string data defined with the FOR BIT DATA attribute, and
BLOB data, is sorted using the binary sort sequence.
For example, a collating sequence can be used to indicate that lowercase and
uppercase versions of a particular character are to be sorted equally.
For example, consider the characters B (X'42') and b (X'62'). If (according to the
collating sequence table) they both have a sort weight of X'42' (B), they collate
the same. If the sort weight for B is X'9E', and the sort weight for b is X'9D', b
will be sorted before B. Actual weights depend on the collating sequence table
used, which in turn depends on the code set and locale. Note that a collating
sequence table is not the same as a code page table, which defines code
points.
In all cases, DB2 uses the collation table that was specified at database
creation time. If you want the multi-byte characters to be sorted the way that
they appear in their code point table, you must specify IDENTITY as the
collation sequence when you create the database.
Note: For DBCS characters in GRAPHIC fields, the sort sequence is always
IDENTITY.
If weights that are not unique are used, characters that are not identical may
compare equally. Because of this, string comparison must be a two-phase
process:
1. Compare the characters in each string based on their weights.
2. If step 1 yields equality, compare the characters of each string based on
their code point values.
If the collating sequence contains 256 unique weights, only the first step is
performed. If the collating sequence is the identity sequence, only the second
step is performed. In either case, there is a performance benefit.
For more information about character comparisons, refer to the SQL Reference.
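The two-phase comparison can be sketched in C as follows (the 256-entry weight table is an illustrative stand-in for a real collating sequence table):

```c
#include <string.h>

/* Two-phase comparison under a 256-entry weight table:
 * compare by weight first; on a tie, fall back to code points.
 * Returns <0, 0, or >0 like strcmp. */
int collate_cmp(const unsigned char *a, const unsigned char *b,
                const unsigned char weight[256])
{
    /* Phase 1: compare the characters by their weights */
    for (size_t i = 0; a[i] != '\0' || b[i] != '\0'; i++) {
        int d = (int)weight[a[i]] - (int)weight[b[i]];
        if (d != 0)
            return d;
    }
    /* Phase 2: equal weights throughout, so code points decide */
    return strcmp((const char *)a, (const char *)b);
}
```

With a weight table that gives 'B' and 'b' the same weight, the two strings tie in phase 1, and phase 2 orders "B" before "b" by code point.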
returns
ab
Ab
abel
Abel
ABEL
abels
You could also specify the following SELECT statement when creating view
v1, make all comparisons against the view in uppercase, and request table
INSERTs in mixed case:
CREATE VIEW v1 AS SELECT TRANSLATE(c1) FROM T1
At the database level, you can set the collating sequence as part of the
sqlecrea - Create Database API. This allows you to decide whether "a" is
processed before "A", whether "A" is processed after "a", or whether they are
processed with equal weighting. If you give them equal weighting, they are
equal when collating or sorting using the ORDER BY clause; even so, "A" will
always come before "a", because when the weights are equal the only basis
upon which to sort is the hexadecimal (code point) value, and X'41' precedes
X'61'.
Thus
SELECT c1 FROM T1 WHERE c1 LIKE 'ab%'
returns
ab
abel
abels
and
SELECT c1 FROM T1 WHERE c1 LIKE 'A%'
returns
Abel
Ab
ABEL
Thus, you may want to consider using the scalar function TRANSLATE(), as
well as sqlecrea. Note that you can only specify a collating sequence using
sqlecrea. You cannot specify a collating sequence from the command line
processor (CLP). For information about the TRANSLATE() function, refer to
the SQL Reference. For information about sqlecrea, refer to the Administrative
API Reference.
You can also use the UCASE function as follows, but note that DB2 performs
a table scan instead of using an index for the select:
SELECT * FROM EMP WHERE UCASE(JOB) = 'NURSE'
SELECT.....
ORDER BY COL2

COL2        COL2
----        ----
V1G         7AB
Y2W         V1G
7AB         Y2W

Figure 19. Example of How a Sort Order in an EBCDIC-Based Sequence Differs from a Sort Order
in an ASCII-Based Sequence

COL2        COL2
----        ----
TW4         TW4
X72         X72
39G
The CREATE DATABASE API accepts a data structure called the Database
Descriptor Block (SQLEDBDESC). You can define your own collating sequence
within this structure.
For information on the include files that contain collating sequences, see the
following sections:
v For C/C++, “Include Files for C and C++” on page 583
v For COBOL, “Include Files for COBOL” on page 665
v For FORTRAN, “Include Files for FORTRAN” on page 688.
Deriving Code Page Values
The application code page is derived from the active environment when the
database connection is made. If the DB2CODEPAGE registry variable is set, its
value is taken as the application code page. However, it is not necessary to set
the DB2CODEPAGE registry variable because DB2 will determine the
appropriate code page value from the operating system. Setting the
DB2CODEPAGE registry variable to incorrect values may cause unpredictable
results.
The database code page is derived from the value specified (explicitly or by
default) at the time the database is created. For example, the following defines
how the active environment is determined in different operating environments:
UNIX On UNIX based operating systems, the active
environment is determined from the locale
setting, which includes information about
language, territory and code set.
OS/2 On OS/2, primary and secondary code pages
are specified in the CONFIG.SYS file. You can
For a complete list of environment mappings for code page values, refer to
the Administration Guide.
Deriving Locales in Application Programs
Locales are implemented one way on Windows and another way on UNIX
based systems. There are two locales on UNIX based systems:
v The environment locale allows you to specify the language, currency
symbol, and so on, that you want to use.
v The program locale contains the current language, currency symbol, and so
on, of a program that is running.
When your program is started, it gets a default C locale. It does not get a copy
of the environment locale. If you set the program locale to any locale other
than "C", DB2 Universal Database uses your current program locale to
determine the code page and territory settings for your application
environment. Otherwise, these values are obtained from the operating system
environment. Note that setlocale() is not thread-safe, and if you issue
setlocale() from within your application, the new locale is set for the entire
process.
If you use host variables that use graphic data in your C or C++ applications,
there are special precompiler, application performance, and application design
issues you need to consider. For a detailed discussion of these considerations,
see “Handling Graphic Host Variables in C and C++” on page 609. If you deal
with EUC code sets in your applications, refer to “Japanese and Traditional
Chinese EUC and UCS-2 Code Set Considerations” on page 511 for guidelines
that you should consider.
The server does not convert file names. To code a file name, either use the
ASCII invariant set, or provide the path in the hexadecimal values that are
physically stored in the file system.
The code points for each of these characters, by code page, are as follows:

Table 19. Code Points for Special Double-byte Characters

Code Page   Double-Byte   Double-Byte   Double-Byte   Double-Byte
            Percentage    Underscore    Space         Substitution
                                                      Character
932         X'8193'       X'8151'       X'8140'       X'FCFC'
938         X'8193'       X'8151'       X'8140'       X'FCFC'
942         X'8193'       X'8151'       X'8140'       X'FCFC'
943         X'8193'       X'8151'       X'8140'       X'FCFC'
948         X'8193'       X'8151'       X'8140'       X'FCFC'
949         X'A3A5'       X'A3DF'       X'A1A1'       X'AFFE'
950         X'A248'       X'A1C4'       X'A140'       X'C8FE'
954         X'A1F3'       X'A1B2'       X'A1A1'       X'F4FE'
964         X'A2E8'       X'A2A5'       X'A1A1'       X'FDFE'
970         X'A3A5'       X'A3DF'       X'A1A1'       X'AFFE'
1381        X'A3A5'       X'A3DF'       X'A1A1'       X'FEFE'
1383        X'A3A5'       X'A3DF'       X'A1A1'       X'A1A1'
13488       X'FF05'       X'FF3F'       X'3000'       X'FFFD'
By default, when you invoke DB2 DARI stored procedures and UDFs, they
run under a default national language environment which may not match the
database’s national language environment. Consequently, using country or
code page specific operations, such as the C wchar_t graphic host variables
and functions, may not work as you expect. You need to ensure that, if
applicable, the correct environment is initialized when you invoke the stored
procedure or UDF.
Executing an Application
At execution time, the active code page of the user application when a
database connection is made is in effect for the duration of the connection. All
data is interpreted based on this code page; this includes dynamic SQL
statements, user input data, user output data, and character fields in the
SQLCA.
A Note of Caution
Failure to follow these guidelines may produce unpredictable results. These
conditions cannot be detected by the database manager, so no error or
warning message will result. For example, a C application contains the
following SQL statements operating against a table T1 with one column
defined as C1 CHAR(20):
(0) EXEC SQL CONNECT TO GLOBALDB;
(1) EXEC SQL INSERT INTO T1 VALUES ('a-constant');
strcpy(sqlstmt, "SELECT C1 FROM T1 WHERE C1='a-constant'");
(2) EXEC SQL PREPARE S1 FROM :sqlstmt;
Where:
application code page at bind time = x
application code page at execution time = y
database code page = z
At execution time, 'a-constant' (x→z) is inserted into the table when statement
(1) is executed. However, the WHERE clause of statement (2) will be executed
with 'a-constant' (y→z). If the code points in the constant are such that the two
conversions (x→z and y→z) yield different results, the SELECT in statement (2)
will fail to retrieve the data inserted by statement (1).
Note: The DB2 for OS/2 Version 1.0 or Version 1.2 database server does not
support character conversion between different code pages. Ensure
that the code pages on server and client are compatible. For a list of
supported code page conversions, refer to the Administration Guide.
v When a client or application importing a PC/IXF file runs in a code page
that is different from the file being imported.
This data conversion will occur on the database client machine before the
client accesses the database server. Additional data conversion may take
place if the application is running in a code page that is different from the
code page of the database (as stated in the previous point).
Data conversion, if any, also depends on how the import utility was called.
See the Administration Guide for more information.
v When DB2 Connect is used to access data on a host or AS/400 server. In
this case the data receiver converts the character data. For example, data
that is sent to DB2 for MVS/ESA is converted to the appropriate MVS
coded character set identifier (CCSID) by DB2 for MVS/ESA. The data sent
back to the DB2 Connect machine from DB2 for MVS/ESA is converted by
DB2 Connect. For more information, see the DB2 Connect User’s Guide.
The source code page is determined from the source of the data; data from the
application has a source code page equal to the application code page, and
data from the database has a source code page equal to the database code
page.
The determination of target code page is more involved; where the data is to
be placed, including rules for intermediate operations, is considered:
v If the data is moved directly from an application into a database, with no
intervening operations, the target code page is the database code page.
v If the data is being imported into a database from a PC/IXF file, there are
two character conversion steps:

Note: A literal inserted into a column defined as FOR BIT DATA could be
converted if that literal was part of an SQL statement which was converted.
For a list of the code pages supported by DB2 Universal Database, refer to the
Administration Guide. The values under the heading “Group” can be used to
determine where conversions are supported. Any code page can be converted
to any other code page that is listed in the same IBM-defined language group.
For example, code page 437 can be converted to 37, 819, 850, 1051, 1252, or
1275.
The considerations for graphic string data should not be a factor in unequal
code page situations. Each string always has the same number of characters,
regardless of whether the data is in the application or the database code page.
See “Unequal Code Page Situations” on page 516 for information on dealing
with unequal code page situations.
DBCS Character Sets
Each combined single-byte character set (SBCS) or double-byte character set
(DBCS) code page allows for both single- and double-byte character code
points. This is usually accomplished by reserving a subset of the 256 available
code points of a mixed code table for single-byte characters, with the
remainder of the code points either undefined, or allocated to the first byte of
double-byte code points. These code points are shown in the following table.
Table 20. Mixed Character Set Code Points

Country   Supported Mixed   Code Points for          Code Points for First
          Code Page         Single-byte Characters   Byte of Double-byte
                                                     Characters
Japan     932, 943          x00-7F, xA1-DF           x81-9F, xE0-FC
Japan     942               x00-80, xA0-DF, xFD-FF   x81-9F, xE0-FC
Taiwan    938 (*)           x00-7E                   x81-FC
Taiwan    948 (*)           x00-80, FD, FE           x81-FC
Korea     949               x00-7F                   x8F-FE
Taiwan    950               x00-7E                   x81-FE
China     1381              x00-7F                   x8C-FE
Korea     1363              x00-7F                   x81-FE
China     1386              x00                      x81-FE

Note: (*) This is an old code page that is no longer recommended.
Within each implied DBCS code table, there are 256 code points available as
the second byte for each valid first byte. Second byte values can have any
value from 0x40 to 0x7E, and from 0x80 to 0xFE. Note that in DBCS
environments, DB2 does not perform validity checking on individual
double-byte characters.
Extended UNIX Code (EUC) Character Sets
Each EUC code page allows for both single-byte character code points, and up
to three different sets of multi-byte character code points. This is
accomplished by reserving a subset of the 256 available code points of each
implied SBCS code page identifier for single-byte characters. The remainder of
the code points is undefined, allocated as an element of a multi-byte character,
or allocated as a single-shift introducer of a multi-byte character. These code
points are shown in the following tables.
Table 21. Japanese EUC Code Points

Group   1st Byte   2nd Byte   3rd Byte   4th Byte
G0      x20-7E     n/a        n/a        n/a
G1      xA1-FE     xA1-FE     n/a        n/a
G2      x8E        xA1-FE     n/a        n/a
G3      x8F        xA1-FE     xA1-FE     n/a
Code points not assigned to any of these categories are not defined, and are
processed as single-byte undefined code points.
Running CLI/ODBC/JDBC/SQLJ Programs in a DBCS Environment
For details on running Java programs that access DB2 Universal Database in a
double-byte character set (DBCS) environment, refer to DB2 Java - DBCS
Support online (http://www.ibm.com/software/data/db2/java/dbcsjava.html).
This web page currently contains the following information:
JDBC and SQLJ programs access DB2 using the DB2 CLI/ODBC driver and
therefore use the same configuration file (db2cli.ini). The following entries
must be added to this configuration file if you run Java programs that access
DB2 Universal Database in a DBCS environment:
PATCH1 = 65536
This forces the driver to manually insert a "G" in front of character
literals which are in fact graphic literals. This PATCH1 value should
always be set when working in a double-byte environment.
PATCH1 = 64
This forces the driver to NULL-terminate graphic output strings. This
is needed by Microsoft Access in a double-byte environment. If you
need to use this PATCH1 value as well, add the two values together
(64 + 65536 = 65600) and set PATCH1=65600. See Note #2 below for more
information about specifying multiple PATCH1 values.
PATCH2 = 7
This forces the driver to map all graphic column data types to the char
column data type. This is needed in a double-byte environment.
PATCH2 = 10
This setting should only be used in an EUC (Extended UNIX Code)
environment. It ensures that the CLI driver provides data for character
variables (CHAR, VARCHAR, and so on) in the proper format for the JDBC
driver. The data in these character types will not be usable in JDBC
without this setting.
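Taken together, the entries above might be combined in db2cli.ini as follows. This is a sketch only: the data source section name SAMPLE is illustrative, PATCH1=65600 is the sum 65536 + 64 described above, and PATCH2 would be 10 instead of 7 in an EUC environment:

```ini
[SAMPLE]
PATCH1=65600
PATCH2=7
```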
DB2 Universal Database supports the entire set of UCS-2 characters, including
all the combining characters, but does not perform any composition or
decomposition of characters. For more information on the Unicode standard,
refer to the Unicode Standard Version 2.0 from Addison-Wesley. For more
information about UCS-2, refer to ISO/IEC 10646-1 from the International
Organization for Standardization.
If you are working with applications or databases using these character sets
you may need to consider dealing with UCS-2 encoded data. When
converting UCS-2 graphic data to the application’s EUC code page, there is
the possibility of an increase in the length of data. For details of data
expansion, see “Character Conversion Expansion Factor” on page 507. When
large amounts of data are being displayed, it may be necessary to allocate
buffers, convert, and display the data in a series of fragments.
The following sections discuss how to handle data in this environment. For
these sections, the term EUC is used to refer only to Japanese and Traditional
Chinese EUC character sets. Note that the discussions do not apply to DB2
Korean or Simplified-Chinese EUC support since graphic data in these
character sets is represented using the EUC encoding.
A7A1 (EUC) → UCS-2 → C4A1 (EUC)
C4A1 (EUC) → UCS-2 → C4A1 (EUC)
Thus, the original code points A7A1 and C4A1 end up as code point C4A1 after
conversion.
If you require the code page conversion tables for EUC code pages 946
(Traditional Chinese EUC) or 950 (Traditional Chinese Big-5) and UCS-2, see
the online Product and Service Technical Library
(http://www.ibm.com/software/data/db2/library/).
Considerations for UDFs: UDFs are invoked at the database server and are
meant to deal with data encoded in the same code set as the database. In the
case of databases running under the Japanese or Traditional Chinese code set,
mixed character data is encoded using the EUC code set under which the
database is created. Graphic data is encoded using UCS-2. This means that
UDFs need to recognize and handle graphic data which will be encoded with
UCS-2.
For example, you create a UDF called VARCHAR which converts a graphic
string to a mixed character string. The VARCHAR function has to convert a
graphic string encoded as UCS-2 to an EUC representation if the database is
created under the EUC code sets.
To ensure that you always have sufficient storage allocated to cover the
maximum possible expansion after character conversion, you should allocate
storage equal to the value max_target_length obtained from the following
calculation:
1. Determine the expansion factor for the data.
For data transfer from the application to the database:
expansion_factor = ABS[SQLERRD(1)]
if expansion_factor = 0
    expansion_factor = 1
All the above checks are required to allow for overflow which may occur
during the length calculation. The specific checks are:
1 Numeric overflow occurs during the calculation of
temp_target_length in step 2.
If the result of multiplying two positive values together is greater
than the maximum value for the data type, the result wraps around
and is returned as a value less than the larger of the two values.
For example, the maximum value of a 2-byte signed integer
(which is used for the length of non-CLOB data types) is 32 767. If
the actual_source_length is 25 000 and the expansion factor is 2, the
true result (50 000) exceeds 32 767, so the calculated length wraps
around instead of being returned correctly.
Any end-user application or API library has the potential of not being able to
handle all possibilities in an unequal code page situation. In addition, while
some parameter validation such as string length is performed at the client for
When you perform a DESCRIBE against a select list item which is resolved in
the application context (for example VALUES SUBSTR(?,1,2)); then for any
character or graphic data involved, you should evaluate the returned SQLLEN
DBCS Application with EUC Database: If your application code page is a DBCS
code page and issues a DESCRIBE against an EUC database, a situation
similar to that in “EUC Application with DBCS Database” occurs. However, in
this case, your application may require less storage than indicated by the
value of the SQLLEN field. The worst case in this situation is that all of the
data is single-byte or double-byte under EUC, meaning that exactly SQLLEN
Using Fixed or Variable Length Data Types: Due to the possible change in
length of strings when conversions occur between DBCS and EUC code pages,
you should consider not using fixed length data types. Depending on whether
you require blank padding, you should consider changing the SQLTYPE from
a fixed length character string, to a varying length character string after
performing the DESCRIBE. For example, if an EUC to DBCS connection is
informed of a maximum expansion factor of two, the application should
allocate ten bytes (based on the CHAR(5) example in “EUC Application with
DBCS Database” on page 521).
If the SQLTYPE is fixed-length, the EUC application will receive the column
as an EUC data stream converted from the DBCS data (which itself may have
up to five bytes of trailing blank pads) with further blank padding if the code
page conversion does not cause the data element to grow to its maximum
size. If the SQLTYPE is varying-length, the original meaning of the content of
the CHAR(5) column is preserved, however, the source five bytes may have a
target of between five and ten bytes. Similarly, in the case of possible data
shrinkage (DBCS application and EUC database), you should consider
working with varying-length data types.
Rules for String Conversions: If you are designing applications for mixed code
page environments, refer to the SQL Reference for any of the following
situations:
v Corresponding string columns in full selects with set operations (UNION,
INTERSECT and EXCEPT)
v Operands of concatenation
v Operands of predicates (with the exception of LIKE)
v Result expressions of a CASE statement
v Arguments of the scalar function COALESCE (and VALUE)
v Expression values of the IN list of an IN predicate
v Corresponding expressions of a multiple row VALUES clause.
In these situations, conversions may take place to the application code page
instead of the database code page.
Character Conversions Past Data Type Limits: In EUC and DBCS unequal code
page environments, situations may occur after conversion takes place, when
the length of the mixed character or graphic string exceeds the maximum
length allowed for that data type. If the length of the string, after expansion,
exceeds the limit of the data type, then type promotion does not occur.
Instead, an error message is returned indicating that the maximum allowed
expansion length has been exceeded. This situation is more likely to occur
while evaluating predicates than with inserts. With inserts, the column width
is more readily known by the application, and the maximum expansion factor
can be readily taken into account. In many cases, this side effect of character
conversion can be avoided by casting the value to an associated data type
with a longer maximum length. For example, the maximum length of a
CHAR value is 254 bytes while the maximum length of a VARCHAR is 32672
bytes. In cases where expansion does exceed the maximum length of the data
type, an SQLCODE -334 (SQLSTATE 22524) is returned.
When DB2 converts characters from a code page to UTF-8, the total number
of bytes that represent the characters may expand or shrink, depending on the
code page and the code points of the characters. 7-bit ASCII remains invariant
in UTF-8, and each ASCII character requires one byte. Non-ASCII UCS-2
characters become two or three bytes each. For more information about UTF-8
conversions, refer to the Administration Guide, or refer to the Unicode standard
documents.
With DB2, you can run remote server functions such as BACKUP, RESTORE,
DROP DATABASE, CREATE DATABASE and so on as if they were local
applications. For more information on using these functions remotely, refer to
the Administration Guide.
Remote Unit of Work
A unit of work is a single logical transaction. It consists of a sequence of SQL
statements in which either all of the operations are successfully performed or
the sequence as a whole is considered unsuccessful.
A remote unit of work lets a user or application program read or update data
at one location per unit of work. It supports access to one database within a
unit of work. While an application program can access several remote
databases, it can only access one database within a unit of work.
You can use multisite update to read and update multiple DB2 Universal
Database databases within a unit of work. If you have installed DB2 Connect
or use the DB2 Connect capability provided with DB2 Universal Database
Enterprise Edition you can also use multisite update with host or AS/400
database servers, such as DB2 Universal Database for OS/390 and DB2
By doing this within one unit of work, you ensure that either both databases
are updated or neither database is updated.
CONNECT TO D1             CONNECT TO D1
SELECT                    SELECT
UPDATE                    UPDATE
COMMIT                    CONNECT TO D2
CONNECT TO D2             INSERT
INSERT                    RELEASE CURRENT
COMMIT                    SET CONNECTION D1
CONNECT TO D1             SELECT
SELECT                    RELEASE D1
COMMIT                    COMMIT
CONNECT RESET
The SQL statements in the left column access only one database for each unit
of work. This is a remote unit of work (RUOW) application.
The SQL statements in the right column access more than one database within
a unit of work. This is a multisite update application.
If you are writing tools or utilities, you may want to issue a message to
your users if the connection is read-only.
Multisite update precompiler options become effective when the first database
connection is made. You can use the SET CLIENT API to supersede
connection settings when there are no existing connections (before any
connection is established or after all connections are disconnected). You can
use the QUERY CLIENT API to query the current connection settings of the
application process.
The binder fails if an object referenced in your application program does not
exist. There are three possible ways to deal with multisite update applications:
This section assumes that you are familiar with the terms relating to the
development of multithreaded applications (such as critical section and
semaphore). If you are not familiar with these terms, consult the
programming documentation for your operating system.
A DB2 application can execute SQL statements from multiple threads using
contexts. A context is the environment from which an application runs all SQL
statements and API calls. All connections, units of work, and other database
resources are associated with a specific context. Each context is associated
with one or more threads within an application.
For each executable SQL statement in a context, the first run-time services call
always tries to obtain a latch. If it is successful, it continues processing. If not
(because an SQL statement in another thread of the same context already has
the latch), the call is blocked on a signaling semaphore until that semaphore is
posted, at which point the call gets the latch and continues processing. The
latch is held until the SQL statement has completed processing, at which time
it is released by the last run-time services call that was generated for that
particular SQL statement.
The net result is that each SQL statement within a context is executed as an
atomic unit, even though other threads may also be trying to execute SQL
statements at the same time. This action ensures that internal data structures
are not altered by different threads at the same time. APIs also use the latch
used by run-time services; therefore, APIs have the same restrictions as
run-time services routines within each context.
By default, all applications have a single context that is used for all database
access. While this is perfect for a single threaded application, the serialization
of SQL statements makes a single context inadequate for a multithreaded
application. By using the following DB2 APIs, your application can attach a
separate context to each thread and allow contexts to be passed between
threads:
v sqleSetTypeCtx()
v sqleBeginCtx()
v sqleEndCtx()
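The shape of a per-thread context setup might look as follows. This is a pseudocode sketch only: sqleAttachToCtx() and sqleDetachFromCtx() are assumed companions from the same API family, and the actual parameter lists (which include reserved arguments and an sqlca pointer) are omitted; refer to the API reference for the exact signatures and constants.

```
main thread:
    sqleSetTypeCtx(MULTI_MANUAL)        -- application manages contexts
    for each worker thread:
        sqleBeginCtx(&ctx, ...)         -- create a context
        start worker(ctx)

worker(ctx):
    sqleAttachToCtx(ctx, ...)           -- SQL in this thread now runs
                                        -- under its own context
    CONNECT TO database
    ... execute SQL statements ...
    CONNECT RESET
    sqleDetachFromCtx(ctx, ...)
    sqleEndCtx(&ctx, ...)               -- free the context
```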
Suppose the first context successfully executes the SELECT and the
UPDATE statements while the second context gets the semaphore and
accesses the data structure. The first context now tries to get the semaphore,
but it cannot because the second context is holding the semaphore. The
second context now attempts to read a row from table TAB1, but it stops on
a database lock held by the first context. The application is now in a state
where context 1 cannot finish before context 2 is done and context 2 is
waiting for context 1 to finish. The application is deadlocked, but because
the database manager does not know about the semaphore dependency
neither context will be rolled back. This leaves the application suspended.
The techniques for avoiding deadlocks are shown in terms of the above
example, but you can apply them to all multithreaded applications. In general,
treat the database manager as you would treat any protected resource and
you should not run into problems with multithreaded applications.
The context APIs described in “Multiple Thread Database Access” on page 533
allow an application to use concurrent transactions. Each context created in an
application is independent from the other contexts. This means you create a
context, connect to a database using the context, and run SQL statements
against the database without being affected by the activities such as running
COMMIT or ROLLBACK statements of other contexts.
For example, suppose you are creating an application that allows a user to
run SQL statements against one database, and keeps a log of the activities
performed in a second database. Since the log must be kept up to date, it is
necessary to issue a COMMIT statement after each update of the log, but you
do not want the user’s SQL statements affected by commits for the log. This is
a perfect situation for concurrent transactions. In your application, create two
contexts: one connects to the user’s database and is used for all the user’s
SQL; the other connects to the log database and is used for updating the log.
With this design, when you commit a change to the log database, you do not
affect the user’s current unit of work.
context 1
    UPDATE TAB1 ...
context 2
    SELECT * FROM TAB1
    COMMIT
context 1
    COMMIT
Suppose the first context successfully executes the UPDATE statement. The
update establishes locks on all the rows of TAB1. Now context 2 tries to
select all the rows from TAB1. Since the two contexts are independent,
context 2 waits on the locks held by context 1. Context 1, however, cannot
release its locks until context 2 finishes executing. The application is now
deadlocked, but the database manager does not know that context 1 is
waiting on context 2 so it will not force one of the contexts to be rolled
back. This leaves the application suspended.
The techniques for avoiding deadlocks are shown in terms of the above
example, but you can apply them to all applications which use concurrent
transactions.
Due to the unique nature of this environment, DB2 has special behavior and
requirements for applications coded to run in it:
v Multiple databases can be connected to and updated within a unit of work
without consideration of distributed unit of work precompiler options or
client settings.
v The DISCONNECT statement is disallowed, and will be rejected with
SQLCODE -30090 (SQLSTATE 25000) if attempted.
Either a client application or a server procedure can pass the data across the
network. It can be passed using one of the following data types:
v VARCHAR
v LONG VARCHAR
v CLOB
v BLOB
See “Data Types” on page 77 for more information about this topic.
See “Conversion Between Different Code Pages” on page 504 for more
information about how and when data conversion occurs.
Improving Performance
To take advantage of the performance benefits that partitioned environments
offer, you should consider using special programming techniques. For
example, if your application accesses DB2 data from more than one database
manager partition, you need to consider the information contained herein. For
an overview of partitioned environments, refer to the Administration Guide and
the SQL Reference.
Using FOR READ ONLY Cursors
If you declare a cursor from which you intend only to read, include FOR
READ ONLY or FOR FETCH ONLY in the cursor declaration. (FOR READ
ONLY and FOR FETCH ONLY are equivalent clauses.) FOR READ
ONLY cursors allow the coordinator partition to retrieve multiple rows at a
time, dramatically improving the performance of subsequent FETCH
statements. When you do not explicitly declare cursors FOR READ ONLY, the
coordinator partition treats them as updatable cursors. Updatable cursors
incur considerable expense because they require the coordinator partition to
retrieve only a single row per FETCH.
Using Directed DSS and Local Bypass
To optimize Online Transaction Processing (OLTP) applications, you may want
to avoid simple SQL statements that require processing on all data partitions.
You should design the application so that SQL statements can retrieve data
from single partitions. These techniques avoid the expense the coordinator
partition incurs communicating with one or all of the associated partitions.
Directed DSS
A distributed subsection (DSS) is the action of sending subsections to the
database partition that needs to do some work for a parallel query. It also
To optimize your application using directed DSS, divide complex queries into
multiple simple queries. For example, in the following query the coordinator
partition matches the partition key with multiple values. Because the data that
satisfies the query lies on multiple partitions, the coordinator partition
broadcasts the query to all partitions:
SELECT ... FROM t1
WHERE PARTKEY IN (:hostvar1, :hostvar2)
Instead, break the query into multiple SELECT statements (each with a single
host variable) or use a single SELECT statement with a UNION to achieve the
same result. The coordinator partition can take advantage of simpler SELECT
statements to use directed DSS to communicate only to the necessary
partitions. The optimized query looks like:
SELECT ... AS res1 FROM t1
WHERE PARTKEY=:hostvar1
UNION
SELECT ... AS res2 FROM t1
WHERE PARTKEY=:hostvar2
Note that the above technique will only improve performance if the number
of selects in the UNION is significantly smaller than the number of partitions.
Local bypass is enabled automatically whenever possible, but you can increase
its use by routing transactions to the partition containing the data for that
transaction. One technique for doing this is to have a remote client maintain
connections to each partition. A transaction can then use the correct
For a given INSERT statement with the VALUES clause, the DB2 SQL
compiler may not buffer the insert based on semantic, performance, or
implementation considerations. If you prepare or bind your application with
the INSERT BUF option, ensure that it is not dependent on a buffered insert.
This means:
v Errors may be reported asynchronously for buffered inserts, or
synchronously for regular inserts. If reported asynchronously, an insert
error may be reported on a subsequent insert within the buffer, or on the
other statement which closes the buffer. The statement that reports the error
is not executed. For example, consider using a COMMIT statement to close
a buffered insert loop. The commit reports an SQLCODE -803 (SQLSTATE
23505) due to a duplicate key from an earlier insert. In this scenario, the
commit is not executed. If you want your application to really commit, for
example, some updates that are performed before it enters the buffered
insert loop, you must reissue the COMMIT statement.
v Rows inserted may be immediately visible through a SELECT statement
using a cursor without a buffered insert. With a buffered insert, the rows
will not be immediately visible. Do not write your application to depend on
these cursor-selected rows if you precompile or bind it with the INSERT
BUF option.
An application that is bound with INSERT BUF should be written so that the
same INSERT statement with VALUES clause is iterated repeatedly before any
statement or API that closes a buffered insert is issued.
Note: You should do periodic commits to prevent the buffered inserts from
filling the transaction log.
If errors are detected during the closing of the INSERT statement, the SQLCA
for the new request will be filled in describing the error, and the new request
is not done. Also, the entire group of rows that were inserted through the
buffered INSERT statement since it was opened are removed from the database.
The state of the application will be as defined for the particular error detected.
For example:
v If the error is a deadlock, the transaction is rolled back (including any
changes made before the buffered insert section was opened).
v If the error is a unique key violation, the state of the database is the same
as before the statement was opened. The transaction remains active, and
any changes made before the statement was opened are not affected.
For example, consider the following application that is bound with the
buffered insert option:
EXEC SQL UPDATE t1 SET COMMENT='about to start inserts';
DO UNTIL EOF OR SQLCODE < 0;
READ VALUE OF hv1 FROM A FILE;
EXEC SQL INSERT INTO t2 VALUES (:hv1);
IF 1000 INSERTS DONE, THEN DO
EXEC SQL INSERT INTO t3 VALUES ('another 1000 done');
RESET COUNTER;
END;
END;
EXEC SQL COMMIT;
Suppose the file contains 8 000 values, but value 3 258 is not legal (for
example, a unique key violation). Each 1 000 inserts results in the execution of
another SQL statement, which then closes the INSERT INTO t2 statement.
During the fourth group of 1 000 inserts, the error for value 3 258 will be
detected. It may be detected after the insertion of more values (not necessarily
the next one). In this situation, an error code is returned for the
INSERT INTO t2 statement.
Suppose, instead, that you have 3 900 rows to insert. Before being told of the
error on row number 3 258, the application may exit the loop and attempt to
issue a COMMIT. The unique-key-violation return code will be issued for the
COMMIT statement, and the COMMIT will not be performed. If the
application wants to COMMIT the 3 000 rows which are in the database thus
far (the last execution of EXEC SQL INSERT INTO t3 ... ends the savepoint for
those 3 000 rows), the COMMIT must be reissued. Similar
considerations apply to ROLLBACK as well.
Note: When using buffered inserts, you should carefully monitor the
SQLCODES returned to avoid leaving the table in an indeterminate
state. For example, if you remove the SQLCODE < 0 clause from the
DO UNTIL statement in the above example, the table could end up
containing an indeterminate number of rows.
The application can then be run from any supported client platform.
Example: Extracting Large Volume of Data (largevol.c)
Although DB2 Universal Database provides excellent features for parallel
query processing, the single point of connection of an application or an
EXPORT command can become a bottleneck if you are extracting large
volumes of data. This occurs because the passing of data from the database
Assume that you have a table called EMPLOYEE which is stored on 20 nodes,
and you generate a mailing list (FIRSTNME, LASTNAME, JOB) of all
employees who are in a legitimate department (that is, WORKDEPT is not
NULL).
The following query is run on each node in parallel, and then generates the
entire answer set at a single node (the coordinator node):
SELECT FIRSTNME, LASTNAME, JOB FROM EMPLOYEE WHERE WORKDEPT IS NOT NULL
But, the following query could be run on each partition in the database (that
is, if there are five partitions, five separate queries are required, one at each
partition). Each query generates the set of all the employee names whose
record is on the particular partition where the query runs. Each local result set
can be redirected to a file. The result sets then need to be merged into a single
result set.
On AIX, you can use a property of Network File System (NFS) files to
automate the merge. If all the partitions direct their answer sets to the same
file on an NFS mount, the results are merged. Note that using NFS without
blocking the answer into large buffers results in very poor performance.
SELECT FIRSTNME, LASTNAME, JOB FROM EMPLOYEE WHERE WORKDEPT IS NOT NULL
AND NODENUMBER(FIRSTNME) = CURRENT NODE
The result can either be stored in a local file (meaning that the final result
would be 20 files, each containing a portion of the complete answer set), or in
a single NFS-mounted file.
The following example uses the second method, so that the result is in a
single file that is NFS mounted across the 20 nodes. The NFS locking
mechanism ensures serialization of writes into the result file from the different
partitions. Note that this example, as presented, runs on the AIX platform
with an NFS file system installed.
#define _POSIX_SOURCE
#define INCL_32
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
/* Initialization */
if (argc == 3) {
strcpy( dbname, argv[2] ); /* get database name from the argument */
EXEC SQL CONNECT TO :dbname IN SHARE MODE ;
if ( SQLCODE != 0 ) {
printf( "Error: CONNECT TO the database failed. SQLCODE = %ld\n",
SQLCODE );
cls:
/* Write the last piece of data out to the file */
if (buffer_len > 0) {
lock_rc = fcntl(iFileHandle, lock_command, &lock);
if (lock_rc != 0) goto file_lock_err;
lock_rc = lseek(iFileHandle, 0, SEEK_END);
if (lock_rc < 0) goto file_seek_err;
lock_rc = write(iFileHandle, (void *)file_buf, buffer_len);
if (lock_rc < 0) goto file_write_err;
lock_rc = fcntl(iFileHandle, lock_command, &unlock);
if (lock_rc != 0) goto file_unlock_err;
}
free(file_buf);
close(iFileHandle);
EXEC SQL CLOSE c1;
exit (0);
ext:
if ( SQLCODE != 0 )
printf( "Error: SQLCODE = %ld.\n", SQLCODE );
EXEC SQL WHENEVER SQLERROR CONTINUE;
EXEC SQL CONNECT RESET;
if ( SQLCODE != 0 ) {
printf( "CONNECT RESET Error: SQLCODE = %ld\n", SQLCODE );
exit(4);
This method is applicable not only to a select from a single table, but also
to more complex queries. If, however, the query requires noncollocated
operations (that is, the Explain shows more than one subsection besides the
Coordinator subsection), this can result in too many processes on some
partitions if the query is run in parallel on all partitions. In this situation, you
can store the query result in a temporary table TEMP on as many partitions as
required, then do the final extract in parallel from TEMP.
If you want to extract all employees, but only for selected job classifications,
you can define the TEMP table with the column names, FIRSTNME,
LASTNAME, and JOB, as follows:
INSERT INTO TEMP
SELECT FIRSTNME, LASTNAME, JOB FROM EMPLOYEE WHERE WORKDEPT IS NOT NULL
AND EMPNO NOT IN (SELECT EMPNO FROM EMP_ACT WHERE EMPNO < 200)
Error-Handling Considerations
In a partitioned environment, DB2 breaks up SQL statements into subsections,
each of which is processed on the partition that contains the relevant data.
As a result, an error may occur on a partition that does not have access to the
application. This does not occur in a single-partition environment.
The severe error SQLCODE -1224 (SQLSTATE 55032) can occur for a variety of
reasons. If you receive this message, check the SQLCA, which will indicate
which node failed. Then check the db2diag.log file shared between the nodes
for details. See “Identifying the Partition that Returned the Error” on page 560
for additional information.
Merged Multiple SQLCA Structures
One SQL statement may be executed by a number of agents on different
nodes, and each agent may return a different SQLCA for different errors or
warnings. The coordinating agent also has its own SQLCA. The SQLCA also
has fields that carry global numbers (such as the sqlerrd fields that
indicate row counts). To provide a consistent view for applications, all the
SQLCA values are merged into one structure. This structure is described in
SQL Reference.
If an SQL statement or API call is successful, the partition number in this field
is not significant.
Debugging
You can use the tools described in the following sections to debug your
applications. For more information, refer to the Troubleshooting Guide.
Diagnosing a Looping or Suspended Application
After you start a query or application, you may suspect that it is
suspended (it does not show any activity) or that it is looping (it shows
activity, but no results are returned to the application). Ensure that you have
One of the functions of the database system monitor that is useful for
debugging applications is to display the status of all active agents. To obtain
the greatest use from a snapshot, ensure that statement collection is being
done before you run the application (preferably immediately after you run
DB2START) as follows:
db2_all "db2 UPDATE MONITOR SWITCHES USING STATEMENT ON"
When you suspect that your application or query is either stalled or looping,
issue the following command:
db2_all "db2 GET SNAPSHOT FOR AGENTS ON database"
Refer to the System Monitor Guide and Reference for information on how to
read the information collected from the snapshot, and for details on using
the database system monitor.
Applications can use DB2 SQL to request values of any data types that DB2
can recognize, except for LOB data types. To write to a data source—for
The federated database’s system catalog contains information not only about
the objects in the database, but also about the data sources and certain tables,
views, and functions in them. The catalog, then, contains information about
the entire federated system; accordingly, it is called a global catalog.
You set column options in the ALTER NICKNAME statement. For information
about this statement, see the SQL Reference.
The federated server maps the isolation level you request to a corresponding
one at the data source. To illustrate this, Table 28 lists:
v The isolation levels that you can request. They are:
CS Cursor stability
RR Repeatable read
RS Read stability
UR Uncommitted read
v The Oracle isolation levels that the requested levels map to.
Table 28. Comparable Isolation Levels between the Federated Server and Oracle Data
Sources.
Federated Server (DB2)  CS       RR                     RS                     UR
Oracle                  Default  Transaction read-only  Transaction read-only  Same as cursor stability
Suppose that you create a nickname for an Oracle table that has a column C2
with a type of NUMBER(9,0). If you do not change the default mapping, the
type for C2 will be locally defined as INTEGER. And because the 4 bytes of
INTEGER support a maximum precision of 10, you can be sure that all values
of C2 will be returned when C2 is queried from the federated server.
For listings of the default data type mappings, see the SQL Reference.
In the CREATE TYPE MAPPING statement, you can indicate whether the new
mapping that you want is to apply to a specific data source (for example, a
data source that a department in your organization uses) or to all data sources
of a specific type (for example, all Oracle data sources), or to all data sources
of a specific version of a type (for example, all Oracle 8.0.3 data sources).
where +002 signifies that the decimal point should be moved two places to
the right, and +003 signifies that the decimal point should be moved three
places to the right.
So that queries of BONUS can return values that look like dollar amounts,
you could, for this particular table, remap NUMBER(32,3) to a DB2 DECIMAL
type with a precision and scale that reflect the format of actual bonuses. For
example, if you knew that the dollar portion of the bonuses would not exceed
To change the type mapping for a column of a specific table, use the ALTER
NICKNAME statement. With this statement, you can change the type defined
locally for a column of a table for which a nickname has been defined.
This section:
v Illustrates ways to code distributed requests
v Introduces you to a way to aid optimization of certain distributed requests
Coding Distributed Requests
In general, a distributed request uses one or more of three SQL conventions to
specify where data is to be retrieved from: subqueries, set operators, and join
subselects. This section provides examples within the context of the following
scenario: A federated server is configured to access a DB2 Universal Database
for OS/390 data source, a DB2 Universal Database for AS/400 data source,
and an Oracle data source. Stored in each data source is a table that contains
employee information. The federated server references these tables by
nicknames that point to where the tables reside: UDB390_EMPLOYEES,
AS400_EMPLOYEES, and ORA_EMPLOYEES. (Nicknames do not have to
indicate where the tables they reference reside; the ones in this scenario do
so only to underline the point that the tables reside in different RDBMSs.) In addition to
ORA_EMPLOYEES, the Oracle data source has a table, nicknamed
ORA_COUNTRIES, that contains information about the countries that the
employees live in.
The following query retrieves all employee names and country codes that are
present in both the AS400_EMPLOYEES and UDB390_EMPLOYEES tables,
even though each table resides in a different data source.
SELECT name, country_code
FROM as400_employees
INTERSECT
SELECT name, country_code
FROM udb390_employees
The query below combines employee names and their corresponding country
names by comparing the country codes listed in two tables. Each table resides
in a different data source.
SELECT t1.name, t2.country_name
FROM djadmin.as400_employees t1, djadmin.ora_countries t2
WHERE t1.country_code = t2.country_code
Several server options address a major area of interaction between DB2 and
data sources: optimization of queries. For example, just as you can use the
column option “varchar_no_trailing_blanks” to inform the DB2 optimizer of
specific data source VARCHAR columns that have no trailing blanks, so can
you use a server option—also called “varchar_no_trailing_blanks”—to inform
the optimizer of data sources whose VARCHAR columns are all free of
trailing blanks. For a summary of how such information helps the optimizer
to create an access strategy, see Table 27 on page 567.
In addition, you can set the server option “plan_hints” to a value that enables
DB2 to provide Oracle data sources with statement fragments, called plan
hints, that help Oracle optimizers do their job. Specifically, plan hints can help
an optimizer to decide matters such as which index to use in accessing a
table, and which table join sequence to use in retrieving data for a result set.
For documentation of the SET SERVER OPTION statement, see the SQL
Reference. For descriptions of all server options and their settings, see the
Administration Guide: Implementation.
Before DB2 can access a data source function that it does not recognize, you
must create a mapping between this function and a counterpart that is stored
in the federated database. To create the mapping, select the counterpart and
submit the DDL statement for creating the mapping. This statement is called
CREATE FUNCTION MAPPING.
The data source function and its federated database counterpart should
correspond in the following ways:
v Both should have the same number of input parameters.
v The data types of the input parameters of the data source function should
be compatible with the data types of the input parameters of the federated
database counterpart.
For documentation on the CREATE FUNCTION MAPPING and CREATE
FUNCTION statements, see the SQL Reference.
Reducing the Overhead of Invoking a Function
The DDL for mapping a federated server function to a data source
function—the CREATE FUNCTION MAPPING statement—can include
estimated statistics on the overhead that would be consumed when the data
source function is invoked. For example, the statement can specify the
estimated number of instructions that would be required to invoke the data
source function, and the estimated number of I/Os that would be expended
for each byte of the argument set that is passed to this function. These
estimates are stored in the global catalog; you can see them in the
SYSCAT.FUNCMAPOPTIONS view. In addition, if a DB2 function (rather than
a function template) participates in the mapping, the catalog contains
After the mapping is created, you can submit distributed requests that
reference the DB2 function. For example, if you mapped a DB2 user-defined
function called DOLLAR to an Oracle user-defined function called
US_DOLLAR, your request would specify DOLLAR rather than US_DOLLAR.
When the request is processed, the optimizer evaluates multiple access
strategies. Some of them reflect the estimated overhead of invoking the DB2
function; others reflect the estimated overhead of invoking the data source
function. The strategy that is expected to cost the least amount of overhead is
the one that is used.
If any estimates of consumed overhead change, you can record the change in
the global catalog. To record new estimates for the data source function, first
drop or disable the function mapping (for information about how to do this,
see “Discontinuing Function Mappings” on page 576). Then recreate the
mapping with the CREATE FUNCTION MAPPING statement, specifying the
new estimates in the statement. When you run the statement, the new
estimates will be added to the SYSCAT.FUNCTIONS catalog view. To record
changed estimates for the DB2 function, update the SYSSTAT.FUNCTIONS
catalog view directly.
The extern "C" declaration prevents type decoration of the function name by the C++
compiler. Without this declaration, you have to include all the type decoration
for the function name when you call the stored procedure, or issue the
CREATE FUNCTION statement.
You can use the OUTPUT precompile option to override the name and path of
the output modified source file. If you use the TARGET C or TARGET
CPLUSPLUS precompile option, the input file does not need a particular
extension.
To locate files included using EXEC SQL INCLUDE, the DB2 C precompiler
searches the current directory first, then the directories specified by the
DB2INCLUDE environment variable. Consider the following examples:
However, if you specify the PREPROCESSOR option, all the #line macros
generated by the precompiler reference the preprocessed file from the external
C preprocessor. For more information about the PREPROCESSOR option, see
“C Macro Expansion” on page 600.
Some debuggers and other tools that relate source code to object code do not
always work well with the #line macro. If the tool you wish to use behaves
unexpectedly, use the NOLINEMACRO option (used with DB2 PREP) when
precompiling. This will prevent the #line macros from being generated.
For example:
EXEC SQL SELECT col INTO :hostvar FROM table;
Note that the actual characters used for end-of-line and TAB vary from
platform to platform. For example, OS/2 uses Carriage Return/Line Feed
for end-of-line, whereas UNIX-based systems use just a Line Feed.
It is also possible to have several local host variables with the same name
as long as they all have the same type and size. To do this, declare the first
occurrence of the host variable to the precompiler between BEGIN
DECLARE SECTION and END DECLARE SECTION statements, and leave
subsequent declarations of the variable out of declare sections. The
following code shows an example of this:
void f3(int i)
{
EXEC SQL BEGIN DECLARE SECTION;
char host_var_3[25];
EXEC SQL END DECLARE SECTION;
EXEC SQL SELECT COL2 INTO :host_var_3 FROM TBL2;
}
void f4(int i)
{
char host_var_3[25];
EXEC SQL INSERT INTO TBL2 VALUES (:host_var_3);
}
Since f3 and f4 are in the same module, and since host_var_3 has the same
type and length in both functions, a single declaration to the precompiler is
sufficient to use it in both places.
Declaring Host Variables in C and C++
An SQL declare section must be used to identify host variable declarations.
This alerts the precompiler to any host variables that can be referenced in
subsequent SQL statements.
A numeric host variable can be used as an input or output variable for any
numeric SQL input or output value. A character host variable can be used as
an input or output variable for any character, date, time or timestamp SQL
input or output value. The application must ensure that output variables are
long enough to contain the values that they receive.
“Syntax for Numeric Host Variables in C or C++” shows the syntax for
declaring numeric host variables in C or C++.
(Syntax diagram fragment not reproduced: INTEGER host variables are
declared with the C type sqlint32 or long [int]; BIGINT host variables are
declared with sqlint64, __int64, long long [int], or long [int]. Each
declaration names one or more variables, optionally preceded by *
(pointer) or & (reference) with const/volatile qualifiers, and optionally
followed by an initializer = value.)
Notes:
1 REAL (SQLTYPE 480), length 4
2 DOUBLE (SQLTYPE 480), length 8
3 SMALLINT (SQLTYPE 500)
4 For maximum application portability, use sqlint32 and sqlint64 for
INTEGER and BIGINT host variables, respectively. By default, the use of
long host variables results in precompile error SQL0402 on platforms
where long is a 64-bit quantity, such as 64-bit UNIX. Use the PREP
Form 1: Syntax for Fixed and Null-terminated Character Host Variables in C/C++
(Syntax diagram not reproduced: the declaration consists of an optional
storage class (auto, extern, static, register), optional const/volatile
qualifiers, optional unsigned, and the type char, followed by one or more
variables. Each variable is either a single character (CHAR form: varname)
or a null-terminated C string (C String form: varname[length]), optionally
preceded by * or & with const/volatile qualifiers, and optionally
initialized with = value.)
Notes:
1 CHAR (SQLTYPE 452), length 1
2 Null-terminated C string (SQLTYPE 460); length can be any valid
constant expression
(Syntax diagram not reproduced: the variable-length (VARCHAR structured)
form declares a structure { short [int] var1; [unsigned] char var2[length]; },
followed by one or more variable names, each optionally preceded by * or &
with const/volatile qualifiers, and optionally initialized with
= { value-1 , value-2 }.)
For details on using graphic host variables, see “Handling Graphic Host
Variables in C and C++” on page 609.
(Syntax diagram not reproduced: graphic host variables are declared like
character host variables, with the graphic type of Note 1 in place of char.
Each variable is either a single graphic character (CHAR form: varname) or
a null-terminated graphic string (C String form: varname[length], with
optional parentheses around varname), optionally preceded by * or & with
const/volatile qualifiers, and optionally initialized with = value.)
Notes:
1 To determine which of the two graphic types should be used, see
"Selecting the wchar_t or sqldbchar Data Type in C and C++" on
page 610.
2 GRAPHIC (SQLTYPE 468), length 1
3 Null-terminated graphic string (SQLTYPE 400)
Graphic Host Variable Considerations:
1. The single-graphic form declares a fixed-length graphic string host
variable of length 1 with SQLTYPE of 468 or 469.
(Syntax diagram not reproduced: the VARGRAPHIC structured form declares a
structure, optionally tagged and optionally preceded by a storage class
(auto, extern, static, register) and const/volatile qualifiers:
{ short [int] var-1; sqldbchar|wchar_t var-2[length]; }, followed by one or
more variables, each optionally preceded by * or & with const/volatile
qualifiers, and optionally initialized with = { value-1 , value-2 }.)
Notes:
1 To determine which of the two graphic types should be used, see
"Selecting the wchar_t or sqldbchar Data Type in C and C++" on
page 610.
2 length can be any valid constant expression. Its value after evaluation
(Syntax diagram not reproduced: LOB host variables are declared as
SQL TYPE IS BLOB (length), CLOB (length), or DBCLOB (length), optionally
preceded by a storage class (auto, extern, static, register),
const/volatile qualifiers, and * or & modifiers, and optionally initialized
with ={init-len,"init-data"} or with SQL_BLOB_INIT("init-data"),
SQL_CLOB_INIT("init-data"), or SQL_DBCLOB_INIT("init-data").)
Notes:
1 length can be any valid constant expression, in which the constant K, M,
or G can be used. The value of length after evaluation for BLOB and
Note: Wide character literals, for example, L"Hello", should only be used
in a precompiled program if the WCHARTYPE CONVERT
precompile option is selected.
8. The precompiler generates a structure tag which can be used to cast to the
host variable’s type.
BLOB Example:
Declaration:
static Sql Type is Blob(2M) my_blob=SQL_BLOB_INIT("mydata");
CLOB Example:
Declaration:
volatile sql type is clob(125m) *var1, var2 = {10, "data5data5"};
DBCLOB Example:
Declaration:
SQL TYPE IS DBCLOB(30000) my_dbclob1;
Declaration:
SQL TYPE IS DBCLOB(30000) my_dbclob2 = SQL_DBCLOB_INIT(L"mydbdata");
(Syntax diagram not reproduced: LOB locator host variables are declared as
SQL TYPE IS BLOB_LOCATOR, CLOB_LOCATOR, or DBCLOB_LOCATOR, followed by one
or more variables; each variable name may be preceded by * with
const/volatile qualifiers or by &, and may be initialized with = init-value.)
CLOB Locator Example (other LOB locator type declarations are similar):
Declaration:
SQL TYPE IS CLOB_LOCATOR my_locator;
CLOB File Reference Example (other LOB file reference type declarations are
similar):
Declaration:
static volatile SQL TYPE IS BLOB_FILE my_file;
C Macro Expansion
The C/C++ precompiler cannot directly process any C macro used in a
declaration within a declare section. Instead, you must first preprocess the
source file with an external C preprocessor. To do this, specify the exact
command for invoking a C preprocessor to the precompiler through the
PREPROCESSOR option.
When you specify the PREPROCESSOR option, the precompiler first processes
all the SQL INCLUDE statements by incorporating the contents of all the files
referred to in the SQL INCLUDE statement into the source file. The
precompiler then invokes the external C preprocessor using the command you
specify with the modified source file as input. The preprocessed file, which
the precompiler always expects to have an extension of ".i", is used as the
new source file for the rest of the precompiling process.
The previous declarations resolve to the following after you use the
PREPROCESSOR option:
EXEC SQL BEGIN DECLARE SECTION;
char a[4];
char b[12];
struct
{
short length;
char data[18];
} m;
SQL TYPE IS BLOB(4) x;
SQL TYPE IS CLOB(15) y;
SQL TYPE IS DBCLOB(6144) z;
EXEC SQL END DECLARE SECTION;
The fields of a host structure can be any of the valid host variable types.
These include all numeric, character, and large object types. Nested host
structures are also supported up to 25 levels. In the example above, the field
info is a sub-structure, whereas the field name is not, as it represents a
VARCHAR field. The same principle applies to LONG VARCHAR,
VARGRAPHIC and LONG VARGRAPHIC. Pointers to host structures are also
supported.
There are two ways to reference the host variables grouped in a host structure
in an SQL statement:
1. The host structure name can be referenced in an SQL statement.
EXEC SQL SELECT id, name, years, salary
INTO :staff_record
FROM staff
WHERE id = 10;
Other uses of host structures, which may cause an SQL0087N error to occur,
include PREPARE, EXECUTE IMMEDIATE, CALL, indicator variables and
SQLDA references. Host structures with exactly one field are permitted in
such situations, as are references to individual fields (second example).
Indicator Tables in C and C++
An indicator table is a collection of indicator variables to be used with a host
structure. It must be declared as an array of short integers. For example:
short ind_tab[10];
The following lists each host structure field with its corresponding indicator
variable in the table:
staff_record.id ind_tab[0]
staff_record.name ind_tab[1]
staff_record.info.years ind_tab[2]
staff_record.info.salary ind_tab[3]
A scalar indicator variable can also be used in the place of an indicator table
to provide an indicator for the first field of the host structure. This is
equivalent to having an indicator table with only one element. For example:
short scalar_ind;
The array will be expanded into its elements when test_record is referenced
in an SQL statement making :test_record equivalent to
:test_record.i[0], :test_record.i[1].
Null-terminated Strings in C and C++
C/C++ null-terminated strings have their own SQLTYPE (460/461 for
character and 468/469 for graphic).
When specified in any other SQL context, a host variable of SQLTYPE 460
with length n is treated as a VARCHAR data type with length n as defined
above. When specified in any other SQL context, a host variable of SQLTYPE
468 with length n is treated as a VARGRAPHIC data type with length n as
defined above.
Pointer Data Types in C and C++
Host variables may be declared as pointers to specific data types with the
following restrictions:
v If a host variable is declared as a pointer, then no other host variable may
be declared with that same name within the same source file. The following
example is not allowed:
char mystring[20];
char (*mystring)[20];
v Use parentheses when declaring a pointer to a null-terminated character
array. In all other cases, parentheses are not allowed. For example:
EXEC SQL BEGIN DECLARE SECTION;
char (*arr)[10]; /* correct */
char *(arr); /* incorrect */
char *arr[10]; /* incorrect */
EXEC SQL END DECLARE SECTION;
Data members are only directly accessible in SQL statements through the
implicit this pointer provided by the C++ compiler in class member functions.
You cannot explicitly qualify an object instance (such as
SELECT name INTO :my_obj.staff_name ...) in an SQL statement.
If you directly refer to class data members in SQL statements, the database
manager resolves the reference using the this pointer. For this reason, you
should leave the optimization level precompile option (OPTLEVEL) at the
default setting of 0 (no optimization). This means that no SQLDA
optimization will be done by the database manager. (This is true whenever
pointer host variables are involved in SQL statements.)
The following example shows how you might directly use class data members
which you have declared as host variables in an SQL statement.
public:
..
.
The following example shows a new method, asWellPaidAs, that takes a second
object, otherGuy. This method references its members indirectly through a local
pointer or reference host variable, as you cannot reference its members
directly within the SQL statement.
short int STAFF::asWellPaidAs( STAFF otherGuy )
{
EXEC SQL BEGIN DECLARE SECTION;
short &otherID = otherGuy.staff_id;
double otherSalary;
EXEC SQL END DECLARE SECTION;
EXEC SQL SELECT SALARY INTO :otherSalary
FROM STAFF WHERE id = :otherID;
if( sqlca.sqlcode == 0 )
return staff_salary >= otherSalary;
else
return 0;
}
You can define all DB2 C graphic host variable types using either wchar_t or
sqldbchar. You must use wchar_t if you build your application using the
WCHARTYPE CONVERT precompile option (as described in “The
WCHARTYPE Precompiler Option in C and C++” on page 611).
To minimize conversions you can either use the NOCONVERT option and
handle the conversions in your application, or not use GRAPHIC columns.
For the client environments where wchar_t encoding is in two-byte Unicode,
for example Windows NT or AIX version 4.3 and higher, you can use the
NOCONVERT option and work directly with UCS-2. In such cases, your
application should handle the difference between big-endian and little-endian
architectures. With the NOCONVERT option, DB2 Universal Database uses
sqldbchar, which is always two-byte big-endian.
Table 30 shows the C/C++ equivalent of each column type. When the
precompiler finds a host variable declaration, it determines the appropriate
SQL type value. The database manager uses this value to convert the data
exchanged between the application and itself.
Note: There is no host variable support for the DATALINK data type in any
of the DB2 host languages.
Table 30. SQL Data Types Mapped to C/C++ Declarations
SQL Column Type1 C/C++ Data Type SQL Column Type Description
SMALLINT short 16-bit signed integer
(500 or 501) short int
sqlint16
INTEGER long 32-bit signed integer
(496 or 497) long int
sqlint322
BIGINT long long 64-bit signed integer
(492 or 493) long
__int64
sqlint643
REAL4 float Single-precision floating point
(480 or 481)
DOUBLE5 double Double-precision floating point
(480 or 481)
DECIMAL(p,s) No exact equivalent; use double Packed decimal
(484 or 485)
(Consider using the CHAR and DECIMAL
functions to manipulate packed decimal
fields as character data.)
1<=n<=32 672
Alternately, use char[n+1] where n is large enough to hold the data:
null-terminated variable-length character string, 1<=n<=32 672.
Note: Assigned an SQL type of 460/461.
LONG VARCHAR struct tag { Non null-terminated varying character string
(456 or 457) short int; with 2-byte string length indicator
char[n]
}
32 673<=n<=32 700
CLOB(n) sql type is Non null-terminated varying character string
(408 or 409) clob(n) with 4-byte string length indicator
1<=n<=16 336
Alternately, use sqldbchar[n+1] where n is large enough to hold the data:
null-terminated variable-length double-byte character string, 1<=n<=16 336.
Note: Assigned an SQL type of 400/401.
LONG VARGRAPHIC struct tag { Non null-terminated varying double-byte
(472 or 473) short int; character string with 2-byte string length
sqldbchar[n] indicator
}
16 337<=n<=16 350
Note: The following data types are only available in the DBCS or EUC environment when precompiled with the
WCHARTYPE CONVERT option.
GRAPHIC(1) wchar_t v Single wide character (for C-type)
(468 or 469) v Single double-byte character (for column
type)
GRAPHIC(n) No exact equivalent; use wchar_t Fixed-length double-byte character string
(468 or 469) [n+1] where n is large enough to
hold the data
1<=n<=127
VARGRAPHIC(n) struct tag { Non null-terminated varying double-byte
(464 or 465) short int; character string with 2-byte string length
wchar_t [n] indicator
}
1<=n<=16 336
Alternately, use wchar_t[n+1] where n is large enough to hold the data:
null-terminated variable-length double-byte character string, 1<=n<=16 336.
Note: Assigned an SQL type of 400/401.
16 337<=n<=16 350
Note: The following data types are only available in the DBCS or EUC environment.
DBCLOB(n) sql type is Non null-terminated varying double-byte
(412 or 413) dbclob(n) character string with 4-byte string length
indicator
1<=n<=1 073 741 823
DBCLOB locator variable6 sql type is Identifies DBCLOB entities residing on the
(968 or 969) dbclob_locator server
DBCLOB file reference sql type is Descriptor for file containing DBCLOB data
variable6 dbclob_file
(924 or 925)
Notes:
1. The first number under SQL Column Type indicates that an indicator
variable is not provided, and the second number indicates that an
indicator variable is provided. An indicator variable is needed to
indicate NULL values, or to hold the length of a truncated string. These
are the values that would appear in the SQLTYPE field of the SQLDA for
these data types.
2. For platform compatibility, use sqlint32. On 64-bit UNIX platforms,
"long" is a 64-bit integer. On 64-bit Windows operating systems and
32-bit UNIX platforms, "long" is a 32-bit integer.
3. For platform compatibility, use sqlint64. The DB2 Universal Database
sqlsystm.h header file will type define sqlint64 as "__int64" on the
Windows NT platform when using the Microsoft compiler, "long long" on
32-bit UNIX platforms, and "long" on 64-bit UNIX platforms.
4. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference
between REAL and DOUBLE in the SQLDA is the length value (4 or 8).
5. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54
v DOUBLE PRECISION
6. This is not a column type but a host variable type.
The following is a sample SQL declare section with host variables declared for
supported SQL data types.
EXEC SQL BEGIN DECLARE SECTION;
   ...
EXEC SQL END DECLARE SECTION;
The following are additional rules for supported C/C++ data types:
v The data type char can be declared as char or unsigned char.
v The database manager processes the null-terminated variable-length
character string data type char[n] (data type 460) as VARCHAR(m).
– If LANGLEVEL is SAA1, the host variable length m equals the character
string length n in char[n] or the number of bytes preceding the first
null-terminator (\0), whichever is smaller.
– If LANGLEVEL is MIA, the host variable length m equals the number of
bytes preceding the first null-terminator (\0).
v The database manager processes the null-terminated variable-length
graphic string data types wchar_t[n] and sqldbchar[n] (data type 400)
as VARGRAPHIC(m).
– If LANGLEVEL is SAA1, the host variable length m equals the character
string length n in wchar_t[n] or sqldbchar[n], or the number of
characters preceding the first graphic null-terminator, whichever is
smaller.
– If LANGLEVEL is MIA, the host variable length m equals the number of
characters preceding the first graphic null-terminator.
v Unsigned numeric data types are not supported.
v The C/C++ data type int is not allowed since its internal representation is
machine dependent.
FOR BIT DATA in C and C++
The standard C or C++ string type 460 should not be used for columns
designated FOR BIT DATA. The database manager truncates this data type
when a null character is encountered. Use either the VARCHAR (SQL type
448) or CLOB (SQL type 408) structures.
...
For an example of how to compile and run an SQLJ program, see “Compiling
and Running SQLJ Programs” on page 646.
Java Class Libraries
DB2 Universal Database provides class libraries for JDBC and SQLJ support,
which you must provide in your CLASSPATH or include with your applets as
follows:
db2java.zip
Provides the JDBC driver and JDBC and SQLJ support classes,
including stored procedure and UDF support.
sqlj.zip
Provides the SQLJ translator class files.
runtime.zip
Provides Java run-time support for SQLJ applications and applets.
Java Packages
To use the class libraries included with DB2 in your own applications, you
must include the appropriate import package statements at the top of your
source files. You can use the following packages in your Java applications:
java.sql.*
The JDBC API included in your JDK. You must import this package in
every JDBC and SQLJ program.
sqlj.runtime.*
SQLJ support included with every DB2 client. You must import this
package in every SQLJ program.
sqlj.runtime.ref.*
SQLJ support included with every DB2 client. You must import this
package in every SQLJ program.
Supported SQL Data Types in Java
Table 31 on page 626 shows the Java equivalent of each SQL data type, based
on the JDBC specification for data type mappings. Note that some mappings
depend on whether you use the JDBC version 1.22 or 2.0 driver. The JDBC
driver converts the data exchanged between the application and the database
using the following mapping schema. Use these mappings in your Java
applications and your PARAMETER STYLE JAVA stored procedures and
UDFs. For information on data type mappings for PARAMETER STYLE
DB2GENERAL stored procedures and UDFs, see “Supported SQL Data Types”
on page 756.
For example:
int sqlCode = 0;           // Variable to hold SQLCODE
String sqlState = "00000"; // Variable to hold SQLSTATE
try
{
   // JDBC statements may throw SQLExceptions
   stmt.executeQuery("Your JDBC statement here");
}
catch (SQLException e)
{
   sqlCode = e.getErrorCode();   // Get SQLCODE
   sqlState = e.getSQLState();   // Get SQLSTATE
}
The profdb utility uses the Java Virtual Machine to run the main() method of
class sqlj.runtime.profile.util.AuditorInstaller. For more details on
usage and options for the AuditorInstaller class, visit the DB2 Java Web site
at http://www.ibm.com/software/data/db2/java.
Creating Java Applications and Applets
Whether your application or applet uses JDBC or SQLJ, you need to
familiarize yourself with the JDBC specification, which is available from Sun
Microsystems. See the DB2 Java Web site at
http://www.ibm.com/software/data/db2/java/ for links to JDBC and SQLJ
resources. This specification describes how to call JDBC APIs to access a
database and manipulate data in that database.
You should also read through this section to learn about DB2’s extensions to
JDBC and its few limitations (see “JDBC 2.0” on page 634). If you plan to
create UDFs or stored procedures in Java, see “Creating and Using Java
User-Defined Functions” on page 412 and “Java Stored Procedures and UDFs”
on page 654, as there are considerations that are different for Java than for
other languages.
To build and run JDBC and SQLJ applications and applets, you must set up
your operating system environment according to the instructions in the
Application Building Guide.
[Figure: JDBC and SQLJ application support. A Java application uses the
JDBC driver, or SQLJ with its run-time classes, through the DB2 client
to access a remote database.]
Applet Support in Java: Figure 22 on page 630 illustrates how the JDBC
applet driver, also known as the net driver, works. The driver consists of a
JDBC client and a JDBC server, db2jd. The JDBC client driver is loaded on the
Web browser along with the applet. When the applet requests a connection to
a DB2 database, the client opens a TCP/IP socket to the JDBC server on the
machine where the Web server is running. After a connection is set up, the
client sends each of the subsequent database access requests from the applet
to the JDBC server though the TCP/IP connection. The JDBC server then
makes corresponding CLI (ODBC) calls to perform the task. Upon completion,
the JDBC server sends the results back to the client through the connection.
SQLJ applets add the SQLJ client driver on top of the JDBC client driver, but
otherwise work the same as JDBC applets.
For information on starting the DB2 JDBC server, refer to the db2jstrt
command in the Command Reference.
[Figure 22. DB2 applet support. A Java/JDBC or SQLJ applet, served by an
HTTP daemon (HTTPd), connects through a TCP/IP socket from the JDBC
client to the JDBC server on the Web server machine; the JDBC server
uses CLI to access local and remote DB2 databases. SQLJ applets add the
SQLJ run-time classes on the client side.]
JDBC Programming
Both applications and applets typically perform the following tasks:
1. Import the appropriate Java packages and classes (java.sql.*)
2. Load the appropriate JDBC driver (COM.ibm.db2.jdbc.app.DB2Driver for
applications; COM.ibm.db2.jdbc.net.DB2Driver for applets)
3. Connect to the database, specifying the location with a URL as defined in
the JDBC specification and using the db2 subprotocol. Applets require you
to provide the user ID, password, host name, and the port number for the
applet server. Applications implicitly use the default value for user ID and
password from the DB2 client catalog, unless you explicitly specify
alternate values.
4. Pass SQL statements to the database
5. Receive the results
6. Close the connection
After coding your program, compile it as you would any other Java program.
You don’t need to perform any special precompile or bind steps.
How the DB2Appl Program Works
The following sample program, DB2Appl.java, demonstrates how to code a
JDBC program for DB2.
class DB2Appl {
static {
try {
Class.forName("COM.ibm.db2.jdbc.app.DB2Driver").newInstance();
} catch (Exception e) {
System.out.println(e);
}
}
// URL is jdbc:db2:dbname
String url = "jdbc:db2:sample";
try {
if (argv.length == 0) {
// connect with default id/password
con = DriverManager.getConnection(url);
}
else if (argv.length == 2) {
String userid = argv[0];
String passwd = argv[1];
System.out.println("Received results:");
rs.close();
System.out.print("Changed "+rowsUpdated);
if (1 == rowsUpdated)
System.out.println(" row.");
else
System.out.println(" rows.");
stmt.close();
con.close();
} catch( Exception e ) {
System.out.println(e);
}
}
}
To build your application, you must also install the JDK for your operating
system. For information on setting up your Java environment, building DB2
Java applications, and running DB2 Java applications, refer to the Application
Building Guide.
Distributing and Running a JDBC Applet
Like other Java applets, you distribute your JDBC applet over the network
(intranet or Internet). Typically you would embed the applet in a hypertext
markup language (HTML) page. For example, to call the sample applet
DB2Applt.java, (provided in sqllib/samples/java) you might use the
following <APPLET> tag:
<applet code="DB2Applt.class" width=325 height=275 archive="db2java.zip">
<param name="server" value="webhost">
<param name="port" value="6789">
</applet>
Note: To ensure that the Web browser downloads db2java.zip from the server,
ensure that the CLASSPATH environment variable on the client does
not include db2java.zip. Your applet may not function correctly if the
client uses a local version of db2java.zip.
For information on installing the JDBC 2.0 drivers for your operating system,
refer to the Application Building Guide.
The DB2 JDBC 2.0 driver does not support the following features:
v Updatable Scrollable ResultSet
v New SQL types (Array, Ref, Distinct, Java Object)
v Customized SQL type mapping
Java Naming and Directory Interface (JNDI) for Naming Databases: DB2
provides the following support for the Java Naming and Directory
(JNDI) for naming databases:
javax.naming.Context
This interface is implemented by COM.ibm.db2.jndi.DB2Context, which
handles the storage and retrieval of DataSource objects. In order to
support persistent associations of logical data source names to
physical database information, such as database names, these
associations are saved in a file named .db2.jndi. For an application,
the file resides (or is created if none exists) in the directory specified
by the USER.HOME environment variable. For an applet, you must
create this file in the root directory of the web server to facilitate the
lookup() operation. Applets do not support the bind(), rebind(),
unbind() and rename() methods of this class. Only applications can
bind DataSource objects to JNDI.
javax.sql.DataSource
This interface is implemented by COM.ibm.db2.jdbc.DB2DataSource.
You can save an object of this class in any implementation of
javax.naming.Context. This class also makes use of connection
pooling support.
javax.naming.InitialContextFactory
This interface is implemented by
COM.ibm.db2.jndi.DB2InitialContextFactory, which creates an
instance of DB2Context. Applications automatically set the value of the
java.naming.factory.initial property to
COM.ibm.db2.jndi.DB2InitialContextFactory. To use this class in an
applet, call InitialContext() using the following syntax:
Hashtable env = new Hashtable( 5 );
env.put( "java.naming.factory.initial",
"COM.ibm.db2.jndi.DB2InitialContextFactory" );
Context ctx = new InitialContext( env );
Java Transaction APIs (JTA): DB2 supports the Java Transaction APIs (JTA)
through the DB2 JDBC application driver. DB2 does not provide JTA support
with the DB2 JDBC net driver.
javax.sql.XAConnection
This interface is implemented by COM.ibm.db2.jdbc.DB2XAConnection.
javax.sql.XADataSource
This interface is implemented by COM.ibm.db2.jdbc.DB2XADataSource,
and is a factory of COM.ibm.db2.jdbc.DB2PooledConnection objects.
javax.transaction.xa.XAResource
This interface is implemented by COM.ibm.db2.jdbc.app.DBXAResource.
javax.transaction.xa.Xid
This interface is implemented by COM.ibm.db2.jdbc.DB2Xid.
Note: You cannot use the DB2 JDBC 2.0 driver support for LOB and graphic
types in stored procedures or UDFs. To use LOB or graphic types in
stored procedures or UDFs, you must use the JDBC 1.22 driver support.
Note: If you use the JDBC 1.22 driver, the JDBCVERSION keyword does not
affect LOB support for JDBC.
For an SQLJ applet, you need both the db2java.zip and runtime.zip files.
If you choose not to package all your applet classes and the classes in
db2java.zip and runtime.zip into a single JAR file, put both db2java.zip
and runtime.zip (separated by a comma) into the archive parameter of the
"applet" tag. For browsers that do not support multiple zip files in the
archive tag, specify db2java.zip in the archive tag, and unzip
runtime.zip with your applet classes in a working directory that is
accessible to your web browser.
Embedding SQL Statements in Java
Static SQL statements in SQLJ appear in SQLJ clauses. SQLJ clauses are the
mechanism by which SQL statements in Java programs are communicated to
the database.
The SQLJ translator recognizes SQLJ clauses and SQL statements because of
their structure, as follows:
v SQLJ clauses begin with the token #sql
v SQLJ clauses end with a semicolon
The simplest SQLJ clauses are executable clauses and consist of the token #sql
followed by an SQL statement enclosed in braces. For example, the following
SQLJ clause may appear wherever a Java statement may legally appear. Its
purpose is to delete all rows in the table named TAB:
#sql { DELETE FROM TAB };
In an SQLJ executable clause, the tokens that appear inside the braces
are SQL tokens, except for the host variables. All host variables are
distinguished by the colon character so that the translator can identify
them.
In general, SQL tokens are case insensitive (except for identifiers delimited by
double quotation marks), and can be written in upper, lower, or mixed case.
Java tokens, however, are case sensitive. For clarity in examples, case
insensitive SQL tokens are uppercase, and Java tokens are lowercase or mixed
case. Throughout this chapter, the lowercase null is used to represent
the Java "null" value, and the uppercase NULL to represent the SQL null
value.
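The distinction has a practical consequence when you fetch nullable columns: a Java primitive such as int has no way to represent the SQL NULL value, while an object reference does. A minimal plain-Java sketch (no database involved; the class and method names are purely illustrative):

```java
public class NullDemo {
    // A primitive int host variable can never hold SQL NULL; the
    // wrapper type Integer can, because the Java null reference is
    // used to stand in for the SQL null value.
    static String describe(Integer salary) {
        if (salary == null) {
            return "SQL NULL";      // Java null represents SQL NULL
        }
        return "value " + salary;
    }

    public static void main(String[] args) {
        System.out.println(describe(null));                   // prints "SQL NULL"
        System.out.println(describe(Integer.valueOf(30000))); // prints "value 30000"
    }
}
```

This mirrors how JDBC itself signals nulls: ResultSet.getInt() returns a primitive, so you must call wasNull() afterward to learn whether the column was SQL NULL.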
You can then use the translated and compiled iterator in a different source
file. To use the iterator:
1. Declare an instance of the generated iterator class
2. Assign the SELECT statement for the positioned UPDATE or DELETE to
the iterator instance
3. Execute positioned UPDATE or DELETE statements using the iterator
/**********************
** Register Driver **
**********************/
static
{
try
{
Class.forName("COM.ibm.db2.jdbc.app.DB2Driver").newInstance();
}
catch (Exception e)
{
e.printStackTrace();
}
}
/********************
** Main **
********************/
// URL is jdbc:db2:dbname
String url = "jdbc:db2:sample";
The following query contains the host variable, :x, which is the Java variable,
field, or parameter x visible in the scope containing the query:
SELECT COL1, COL2 FROM TABLE1 WHERE :x > COL3
All host variables specified in compound SQL are input host variables by
default. You have to specify the parameter mode identifier OUT or INOUT
before the host variable in order to mark it as an output host variable. For
example:
#sql {begin compound atomic static
select count(*) into :OUT count1 from employee;
end compound}
Stored procedures may have IN, OUT, or INOUT parameters. In the above
case, the value of host variable myarg is changed by the execution of that
clause. An SQLJ executable clause may call a function by means of the SQL
VALUES construct. For example, assume a function F that returns an integer.
The following example illustrates a call to that function that then assigns its
result to Java local variable x:
{
int x;
#sql x = { VALUES( F(34) ) };
}
sqlj.url=jdbc:db2:sample
sqlj.driver=COM.ibm.db2.jdbc.app.DB2Driver
sqlj.online=sqlj.semantics.JdbcChecker
sqlj.offline=sqlj.semantics.OfflineChecker
where dbname is the name of the database. You can also specify these
options on the command line. For example, to specify the database mydata
when translating MyClass, you can issue the following command:
sqlj -url=jdbc:db2:mydata MyClass.sqlj
Note that the SQLJ translator automatically compiles the translated source
code into class files, unless you explicitly turn off the compile option with
the -compile=false clause.
2. Install DB2 SQLJ Customizers on generated profiles and create the DB2
packages in the DB2 database dbname:
db2profc -user=user-name -password=user-password -url=jdbc:db2:dbname
-prepoptions="bindfile using MyClass0.bnd package using MyClass0"
MyClass_SJProfile0.ser
db2profc -user=user-name -password=user-password -url=jdbc:db2:dbname
-prepoptions="bindfile using MyClass1.bnd package using MyClass1"
MyClass_SJProfile1.ser
...
3. Execute the SQLJ program:
java MyClass
The translator generates the SQL syntax for the database for which the
SQLJ profile is customized. For example, the SQLJ clause

#sql i = { VALUES( F(:x) ) };

is customized by DB2 Universal Database into:

VALUES(F(?)) INTO ?

but when connecting to a DB2 Universal Database for OS/390 database, DB2
customizes the VALUES statement into:

SELECT F(?) INTO ? FROM SYSIBM.SYSDUMMY1
If you run the DB2 SQLJ profile customizer, db2profc, against a DB2 Universal
Database database and generate a bind file, you cannot use that bind
file to bind to a DB2 for OS/390 database when there is a VALUES clause
in the
bind file. This also applies to generating a bind file against a DB2 for OS/390
database and trying to bind with it to a DB2 Universal Database database.
For detailed information on building and running DB2 SQLJ programs, refer
to the Application Building Guide.
SQLJ Translator Options
The SQLJ translator supports the same precompile options as the DB2
PRECOMPILE command, with the following exceptions:
CONNECT
DISCONNECT
DYNAMICRULES
NOLINEMACRO
OPTLEVEL
OUTPUT
SQLCA
SQLFLAG
SQLRULES
SYNCPOINT
TARGET
WCHARTYPE
To print the content of the profiles generated by the SQLJ translator in plain
text, use the profp utility as follows:
profp MyClass_SJProfile0.ser
profp MyClass_SJProfile1.ser
...
To print the content of the DB2 customized version of the profile in plain text,
use the db2profp utility as follows, where dbname is the name of the database:
db2profp -user=user-name -password=user-password -url=jdbc:db2:dbname
MyClass_SJProfile0.ser
db2profp -user=user-name -password=user-password -url=jdbc:db2:dbname
MyClass_SJProfile1.ser
...
To run your UDFs and stored procedures on the server, DB2 calls the JVM.
Ensure that the appropriate Java Development Kit (JDK) or Java Runtime
Environment is installed and configured on your DB2 server before starting
up the database.
The runtime libraries for the JVM must be available in the system search
paths (PATH or LIBPATH or LD_LIBRARY_PATH, and CLASSPATH). For
more information on setting up the Java environment, refer to the Application
Building Guide.
DB2 loads or starts the JVM on the first call to a Java UDF or stored
procedure. For NOT FENCED UDFs and stored procedures, DB2 loads one
JVM per database instance, and runs it inside the address space of the
database engine to improve performance. For FENCED UDFs, DB2 uses a
distinct JVM inside the db2udf process; similarly, FENCED stored procedures
use a distinct JVM inside the db2dari process. In all cases, the JVM stays
loaded until the embedding process ends.
Note: If you are running a database server with local clients node type, you
must set the maxdari database manager configuration parameter to a
non-zero value before you invoke a Java stored procedure.
You can study the Java stored procedure samples that are provided in the
sqllib/samples/java directory. For a list of the sample programs included
with DB2, see “Appendix B. Sample Programs” on page 729.
Remember that all Java class files that you use to implement a stored
procedure or UDF must reside in either a JAR file you have installed in the
database, or in the correct stored procedure or UDF path for your operating
system as discussed in “Where to Put Java Classes” on page 650.
Note: On a mixed code page database server, Java user-defined functions and
stored procedures cannot use CLOB type arguments, because random
access on character boundaries on large mixed code page strings has
not yet been implemented. Full support for all LOB types is intended for
a future release.
Note: If you update or replace Java routine class files, you must issue a CALL
SQLJ.REFRESH_CLASSES() statement to enable DB2 to load the
updated classes. For more information on the CALL
SQLJ.REFRESH_CLASSES() statement, refer to “Updating Java Classes
for Routines” on page 651.
To enable DB2 to find and use your stored procedures and UDFs, you must
store the corresponding class files in the function directory, which is a directory
defined for your operating system as follows:
Unix operating systems
sqllib/function
OS/2 or Windows 32-bit operating systems
instance_name\function, where instance_name represents the value of
the DB2INSTPROF instance-specific registry setting.
For example, the function directory for a Windows NT server with
DB2 installed in the C:\sqllib directory, and with no specified
DB2INSTPROF registry setting, is:
C:\sqllib\function
If you choose to use individual class files, you must store the class files in the
appropriate directory for your operating system. If you declare a class to be
part of a Java package, create the corresponding subdirectories in the function
directory and place the files in the corresponding subdirectory. For example, if
you create a class ibm.tests.test1 for a Linux system, store the
corresponding Java bytecode file (named test1.class) in
sqllib/function/ibm/tests.
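The package-to-subdirectory mapping follows the standard Java rule: each package component becomes one directory level under the function directory. A small sketch of that mapping (the helper is illustrative, not a DB2 API; the class name is the example from the text):

```java
public class FunctionDirDemo {
    // Map a fully qualified Java class name to the class-file path
    // expected under the function directory (UNIX path separators).
    static String classFilePath(String functionDir, String className) {
        return functionDir + "/" + className.replace('.', '/') + ".class";
    }

    public static void main(String[] args) {
        // prints "sqllib/function/ibm/tests/test1.class"
        System.out.println(classFilePath("sqllib/function", "ibm.tests.test1"));
    }
}
```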
The JVM that DB2 invokes uses the CLASSPATH environment variable to
locate Java files. DB2 adds the function directory and sqllib/java/db2java.zip
to the front of your CLASSPATH setting.
To set your environment so that the JVM can find the Java class files, you may
need to set the jdk11_path configuration parameter, or else use the
default value.
Note: You cannot update NOT FENCED routines without stopping and
restarting the database manager.
Debugging Stored Procedures in Java
DB2 provides the capability to interactively debug a Java stored
procedure when the stored procedure executes on an AIX or Windows NT
server. The easiest way to invoke debugging is through the DB2 Stored
Procedure Builder. See the online help for the Stored Procedure Builder for
more information about how to do this.
Preparing to Debug
1. Compile the stored procedure in debug mode according to your JDK
documentation.
2. Prepare the server.
v If the source code is stored on the server, set the CLASSPATH
environment variable to include the Java source code directory or store
the source code in the function directory, as defined in “Where to Put
Java Classes” on page 650.
v Use the db2set command to enable debugging for your instance:
db2set DB2ROUTINE_DEBUG=ON
3. Set the client environment variables.
v If the source code is stored on the client, set the DB2_DBG_PATH
environment variable to the directory which contains the source code for
the stored procedure.
4. Create the debug table.
If you use the Stored Procedure Builder to invoke debugging, you can use the
debugger utility to populate and manage the debug table. Otherwise, to
enable debugging support for a given stored procedure, issue the following
command from the CLP:
DB2 INSERT INTO db2dbg.routine_debug_user (AUTHID, TYPE,
ROUTINE_SCHEMA, SPECIFICNAME, DEBUG_ON, CLIENT_IPADDR)
VALUES ('authid', 'S', 'schema', 'proc_name', 'Y', 'IP_num')
where:
authid The user name used for debugging the stored procedure, that is, the
user name used to connect to the database.
schema
The schema name for the stored procedure.
Whether you create the debug table manually or through the Stored
Procedure Builder, the debug table is named DB2DBG.ROUTINE_DEBUG and
has the following definition:
Table 32. DB2DBG.ROUTINE_DEBUG Table Definition

AUTHID (VARCHAR(128), NOT NULL, DEFAULT USER)
   The application authid under which the debugging for this stored
   procedure is to be performed. This is the user ID that was provided
   on connect to the database.

TYPE (CHAR(1), NOT NULL)
   Valid values: 'S' (Stored Procedure).

ROUTINE_SCHEMA (VARCHAR(128), NOT NULL)
   Schema name of the stored procedure to be debugged.

SPECIFICNAME (VARCHAR(18), NOT NULL)
   Specific name of the stored procedure to be debugged.

DEBUG_ON (CHAR(1), NOT NULL, DEFAULT 'N')
   Valid values:
   v Y - enables debugging for the stored procedure named in
     ROUTINE_SCHEMA.SPECIFICNAME
   v N - disables debugging for the stored procedure named in
     ROUTINE_SCHEMA.SPECIFICNAME. This is the default.
In the debugger, you can step through the source code, display variables, and
set breakpoints in the source code. For detailed information on using the
debugger, see the debugger documentation contained in the online help.
Java Stored Procedures and UDFs
Java stored procedures and UDFs, collectively known as Java routines, must be
registered in the DB2 catalog. DB2 Universal Database Version 7 supports the
SQLJ Routines core specification for registering and deploying Java routines.
Use PARAMETER STYLE JAVA in your CREATE PROCEDURE and CREATE
FUNCTION statements to specify compliance with SQLJ Routines.
When you install a JAR file, DB2 extracts the Java class files from the JAR file
and registers each class in the system catalog. DB2 copies the JAR file to a
jar/schema subdirectory of the function directory. DB2 gives the new copy of
the JAR file the name given in the jar-id clause. Do not directly modify a JAR
file which has been installed in the DB2 instance. Instead, you can use the
CALL SQLJ.REMOVE_JAR and CALL SQLJ.REPLACE_JAR commands to
remove or replace an installed JAR file.
Notes:
1 Specifies the URL containing the JAR file to be installed or replaced. The
only URL scheme supported is ’file:’.
2 Specifies the JAR identifier in the database to be associated with the
file specified by the jar-url.
Note: On OS/2 and Windows 32-bit operating systems, DB2 stores JAR files
in the path specified by the DB2INSTPROF instance-specific registry
setting. To make JAR files unique for an instance, you must specify a
unique value for DB2INSTPROF for that instance.
Subsequent SQL commands that use the Procedure.jar file refer to it with
the name myproc_jar. To remove a JAR file from the database, use the
CALL SQLJ.REMOVE_JAR command with the following syntax:
Notes:
1 Specifies the JAR identifier of the JAR file that is to be removed from the
database
To remove the JAR file myProc_jar from the database, enter the following
command at the Command Line Processor:
CALL SQLJ.REMOVE_JAR('myProc_jar')
Functions That Return A Single Value in Java: Declare Java methods that
return a single value with the Java return type that corresponds to the
respective SQL data type (see “Supported SQL Data Types in Java” on
page 625). You can write a scalar UDF that returns an SQL INTEGER value as
follows:
public class JavaExamples {
   public static int getDivision(String division) throws SQLException {
      if (division.equals("Corporate")) return 1;
      else if (division.equals("Eastern")) return 2;
      else if (division.equals("Midwest")) return 3;
      else throw new SQLException("Unknown division: " + division);
   }
}
Functions That Return Multiple Values in Java: Java methods which are
cataloged as stored procedures may return one or more values. You can also
write Java stored procedures that return multiple result sets; see “Returning
Result Sets from Stored Procedures” on page 225. To code a method which
will return a predetermined number of values, declare the return type void
and include the types of the expected output as arrays in the method
signature. You can write a stored procedure which returns the names, years of
service, and salaries of the two most senior employees with a salary under a
given threshold as follows:
public class JavaExamples {
public static void lowSenioritySalary
(String[] name1, int[] years1, BigDecimal[] salary1,
String[] name2, int[] years2, BigDecimal[] salary2,
Integer threshhold) throws SQLException {
#sql iterator ByNames (String name, int years, BigDecimal salary);
ByNames result;
#sql result = { SELECT name, years, salary
                FROM staff
                WHERE salary < :threshhold
                ORDER BY years DESC };
if (result.next()) {
name1[0] = result.name();
years1[0] = result.years();
salary1[0] = result.salary();
}
else {
name1[0] = "****";
return;
}
if (result.next()) {
name2[0] = result.name();
years2[0] = result.years();
salary2[0] = result.salary();
}
else {
name2[0] = "****";
return;
}
}
}
However, the JDBC 1.22 specification does not explicitly mention large objects
(LOBs) or graphic types. DB2 provides the following support for LOBs and
graphic types if you use the JDBC 1.22 driver.
If you use LOBs or graphic types in your applications, treat LOBs as the
corresponding LONGVAR type. Because LOB types are declared in SQL with
a maximum length, ensure that you do not return arrays or strings longer
than the declared limit. This consideration applies to SQL string types as well.
Treat GRAPHIC and DBCLOB data types as the corresponding CHAR types.
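One defensive way to honor the declared limit is to clip the value before your routine returns it. A hedged plain-Java sketch (the helper name and the limit are illustrative; substitute the length from your own CLOB(n) or VARCHAR(n) declaration):

```java
public class LobLimits {
    // Truncate a Java string to the SQL length the routine was
    // declared with, so the returned value never exceeds the limit.
    static String fitToDeclared(String s, int declaredLimit) {
        if (s == null || s.length() <= declaredLimit) {
            return s;               // NULL and short values pass through
        }
        return s.substring(0, declaredLimit);
    }

    public static void main(String[] args) {
        System.out.println(fitToDeclared("abcdef", 4)); // prints "abcd"
    }
}
```

Note that length() counts UTF-16 code units; a real routine must also account for the byte length after code page conversion, which this sketch does not attempt.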
To convert data from the server code page to Unicode, the DB2 client first
converts the data from the server code page to the client code page. The client
then converts the data from the client code page to Unicode. The following
JDBC APIs convert data to or from Unicode:
getString
Converts from server code page to Unicode.
setString
Converts from Unicode to server code page.
getUnicodeStream
Converts from server code page to Unicode.
setUnicodeStream
Converts from Unicode to server code page.
The following JDBC APIs involve conversion between the client code page
and the server code page:
setAsciiStream
Converts from client code page to server code page.
getAsciiStream
Converts from server code page to client code page.
JDBC defines the default values for session state of newly created connections.
In most cases, SQLJ adopts these default values. However, whereas a newly
created JDBC connection has auto commit mode on by default, an SQLJ
connection context requires the auto commit mode to be specified explicitly
upon construction.
Connection Resource Management in Java
Calling the close() method of a connection context instance causes the
associated JDBC connection instance and the underlying database connection
to be closed. Since connection contexts may share the underlying database
connection with other connection contexts and/or JDBC connections, it may
not be desirable to close the underlying database connection when a
connection context is closed. A programmer may wish to release the resources
maintained by the connection context (for example, statement handles)
without actually closing the underlying database connection. To this end,
connection context classes also support a close() method which takes a
Boolean argument indicating whether or not to close the underlying database
connection: the constant CLOSE_CONNECTION if the database connection should
be closed, and KEEP_CONNECTION if it should be retained. The variant of
close() that takes no arguments is a shorthand for calling
close(CLOSE_CONNECTION).
Both SQLJ connection context objects and JDBC connection objects respond to
the close() method. When writing an SQLJ program, it is sufficient to call the
close() method on only the connection context object. This is because closing
the connection context also closes the JDBC connection associated with it.
However, it is not sufficient to close only the JDBC connection returned by the
getConnection() method of a connection context. This is because the close()
method of a JDBC connection does not cause the containing connection
context to be closed, and therefore resources maintained by the connection
context are not released until it is garbage collected.
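A short sketch of these rules follows. The context class Ctx and the URL are illustrative, not from this guide; CLOSE_CONNECTION and KEEP_CONNECTION come from the sqlj.runtime.ConnectionContext interface.

```java
// Assumes a connection context class generated from: #sql context Ctx;
Ctx ctx = new Ctx("jdbc:db2:sample", false);    // auto commit must be given explicitly
java.sql.Connection con = ctx.getConnection();  // the associated JDBC connection

// Release resources held by the context (statement handles), but keep
// the underlying database connection open for other contexts that share it:
ctx.close(sqlj.runtime.ConnectionContext.KEEP_CONNECTION);

// ctx.close() with no argument is shorthand for:
// ctx.close(sqlj.runtime.ConnectionContext.CLOSE_CONNECTION);
```

Closing con directly would release neither the statement handles nor the containing context, which is the pitfall described above.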
Because Perl is an interpreted language and the Perl DBI Module uses
dynamic SQL, Perl is an ideal language for quickly creating and revising
prototypes of DB2 applications. The Perl DBI Module uses an interface that is
quite similar to the CLI and JDBC interfaces, which makes it easy for you to
port your Perl prototypes to CLI and JDBC.
Most database vendors provide a database driver for the Perl DBI Module,
which means that you can also use Perl to create applications that access data
from many different database servers. For example, you can write a Perl DB2
application that connects to an Oracle database using the DBD::Oracle
database driver, fetch data from the Oracle database, and insert the data into a
DB2 database using the DBD::DB2 database driver.
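As a sketch of that scenario (the connection strings, credentials, and table names below are placeholders, not from this guide):

```perl
use DBI;

# Fetch rows from an Oracle table and insert them into a DB2 table.
my $ora = DBI->connect('dbi:Oracle:orcl', 'scott', 'tiger')
    or die "Oracle connect failed: $DBI::errstr";
my $db2 = DBI->connect('dbi:DB2:sample', '', '')
    or die "DB2 connect failed: $DBI::errstr";

my $src = $ora->prepare('SELECT id, name FROM emp');
my $ins = $db2->prepare('INSERT INTO empcopy (id, name) VALUES (?, ?)');

$src->execute;
while (my @row = $src->fetchrow_array) {
    $ins->execute(@row);
}

$ora->disconnect;
$db2->disconnect;
```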
Perl Restrictions
The Perl DBI module supports only dynamic SQL. When you need to execute
a statement multiple times, you can improve the performance of your Perl
DB2 applications by issuing a prepare call to prepare the statement.
where:
$dbhandle
represents the database handle returned by the connect statement
dbalias
represents a DB2 alias cataloged in your DB2 database directory
$userID
represents the user ID used to connect to the database
$password
represents the password for the user ID used to connect to the
database
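Put together, a connect call using these parameters might look like the following sketch:

```perl
use DBI;

# dbalias is a DB2 alias cataloged in the DB2 database directory;
# $userID and $password are as described above.
my $dbhandle = DBI->connect("dbi:DB2:dbalias", $userID, $password)
    or die "Can't connect to database: $DBI::errstr";
```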
The following Perl code creates a statement handle that accepts a parameter
marker for the WHERE clause of a SELECT statement. The code then executes
the statement twice using the input values 25000 and 35000 to replace the
parameter marker.
my $sth = $dbhandle->prepare(
'SELECT firstnme, lastname
FROM employee
WHERE salary > ?'
);
my $rc = $sth->execute(25000);
...
$rc = $sth->execute(35000);
my $database='dbi:DB2:sample';
my $user='';
my $password='';
my $sth = $dbh->prepare(
q{ SELECT firstnme, lastname
FROM employee }
)
or die "Can't prepare statement: $DBI::errstr";
my $rc = $sth->execute
or die "Can't execute statement: $DBI::errstr";
# check for problems which may have terminated the fetch early
warn $DBI::errstr if $DBI::err;
$sth->finish;
By default, the output file has an extension of .cbl, but you can use the
OUTPUT precompile option to specify a new name and path for the output
modified source file.
If you build the DB2 sample programs with the supplied script files, you must
change the include file path specified in the script files to the cobol_i
directory and not the cobol_a directory.
If you do not use the "System/390 host data type support" feature of the IBM
COBOL compiler, or you use an earlier version of this compiler, then the DB2
include files for your applications are in the following directory:
$HOME/sqllib/include/cobol_a
The include files that are intended to be used in your applications are
described below.
SQL (sql.cbl) This file includes language-specific prototypes for the binder,
precompiler, and error message retrieval APIs. It also defines
system constants.
SQLAPREP (sqlaprep.cbl)
This file contains definitions required to write your own
precompiler.
SQLCA (sqlca.cbl)
This file defines the SQL Communication Area (SQLCA)
structure. The SQLCA contains variables that are used by the
database manager to provide an application with error
information about the execution of SQL statements and API
calls.
SQLCA_92 (sqlca_92.cbl)
This file contains a FIPS SQL92 Entry Level compliant version
of the SQL Communications Area (SQLCA) structure. This file
should be included in place of the sqlca.cbl file when
writing DB2 applications that conform to the FIPS SQL92
Entry Level standard. The sqlca_92.cbl file is automatically
included by the DB2 precompiler when the LANGLEVEL
precompiler option is set to SQL92E.
SQLCODES (sqlcodes.cbl)
This file defines constants for the SQLCODE field of the
SQLCA structure.
SQLDA (sqlda.cbl)
This file defines the SQL Descriptor Area (SQLDA) structure.
The SQLDA is used to pass data between an application and
the database manager.
SQLEAU (sqleau.cbl)
This file contains constant and structure definitions required
For example:
EXEC SQL SELECT col INTO :hostvar FROM table END-EXEC.
Note that the actual characters used for end-of-line and TAB vary from
platform to platform. For example, OS/2 uses Carriage Return/Line Feed
for end-of-line, whereas UNIX-based systems use just a Line Feed.
Syntax for Numeric Host Variables in COBOL shows the syntax for numeric
host variables.
{ 01 | 77 } variable-name { PICTURE | PIC } [IS] picture-string
    [USAGE [IS]] { COMP-3 (1) | COMPUTATIONAL-3 | COMP-5 | COMPUTATIONAL-5 } [VALUE [IS] value].
Notes:
1 An alternative for COMP-3 is PACKED-DECIMAL.
Floating Point
{ 01 | 77 } variable-name [USAGE [IS]]
    { COMPUTATIONAL-1 (1) | COMP-1 | COMPUTATIONAL-2 (2) | COMP-2 } [VALUE [IS] value].
Notes:
1 COMPUTATIONAL-1 and COMP-1 are REAL (SQLTYPE 480), length 4.
2 COMPUTATIONAL-2 and COMP-2 are DOUBLE (SQLTYPE 480), length 8.
Numeric Host Variable Considerations:
1. Picture-string must have one of the following forms:
v S9(m)V9(n)
v S9(m)V
v S9(m)
2. Nines may be expanded (for example, "S999" instead of "S9(3)").
3. m and n must be positive integers.
Syntax for Character Host Variables in COBOL: Fixed Length shows the
syntax for character host variables.
{ 01 | 77 } variable-name { PICTURE | PIC } [IS] picture-string [VALUE [IS] value].
Variable Length
01 variable-name.
    49 identifier-1 { PICTURE | PIC } [IS] S9(4)
        [USAGE [IS]] { COMP-5 | COMPUTATIONAL-5 } [VALUE [IS] value].
    49 identifier-2 { PICTURE | PIC } [IS] picture-string [VALUE [IS] value].
Syntax for Graphic Host Variables in COBOL: Fixed Length
{ 01 | 77 } variable-name { PICTURE | PIC } [IS] picture-string
    [USAGE [IS]] DISPLAY-1 [VALUE [IS] value].
Variable Length
01 variable-name.
    49 identifier-1 { PICTURE | PIC } [IS] S9(4)
        [USAGE [IS]] { COMP-5 | COMPUTATIONAL-5 } [VALUE [IS] value].
    49 identifier-2 { PICTURE | PIC } [IS] picture-string
        [USAGE [IS]] DISPLAY-1 [VALUE [IS] value].
Syntax for LOB Host Variables in COBOL
01 variable-name [USAGE [IS]] SQL TYPE IS { BLOB | CLOB | DBCLOB } ( length [ K | M | G ] ).
BLOB Example:
Declaring:
01 MY-BLOB USAGE IS SQL TYPE IS BLOB(2M).
CLOB Example:
Declaring:
01 MY-CLOB USAGE IS SQL TYPE IS CLOB(125M).
DBCLOB Example:
Declaring:
01 MY-DBCLOB USAGE IS SQL TYPE IS DBCLOB(30000).
BLOB Locator Example:
Declaring:
01 MY-LOCATOR USAGE SQL TYPE IS BLOB-LOCATOR.
BLOB File Reference Example:
Declaring:
01 MY-FILE USAGE IS SQL TYPE IS BLOB-FILE.
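The discussion of group data items below refers to a declaration along the following lines (a reconstructed sketch; the picture strings and lengths are illustrative):

```cobol
       EXEC SQL BEGIN DECLARE SECTION END-EXEC.
       01 staff-record.
          05 staff-id      PIC S9(4) COMP-5.
          05 staff-name.
             49 l          PIC S9(4) COMP-5.
             49 d          PIC X(20).
          05 staff-info.
             10 staff-dept PIC S9(4) COMP-5.
             10 staff-job  PIC X(5).
       EXEC SQL END DECLARE SECTION END-EXEC.
```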
Group data items in the declare section can have any of the valid host
variable types described above as subordinate data items. This includes all
numeric and character types, as well as all large object types. You can nest
group data items up to 10 levels. Note that you must declare VARCHAR
character types with the subordinate items at level 49, as in the above
example. If they are not at level 49, the VARCHAR is treated as a group data
item with two subordinates, and is subject to the rules of declaring and using
group data items. In the example above, staff-info is a group data item,
whereas staff-name is a VARCHAR. The same principle applies to LONG
VARCHAR, VARGRAPHIC and LONG VARGRAPHIC. You may declare
group data items at any level between 02 and 49.
You can use group data items and their subordinates in four ways:
Method 1.
Method 2.
Note: The reference to staff-id is qualified with its group name using the
prefix staff-record., and not staff-id of staff-record as in pure
COBOL.
Method 3.
Method 4.
To resolve the ambiguous reference, you can use partial qualification of the
subordinate item, for example:
EXEC SQL SELECT id, name, dept, job
INTO
:staff-id,
:staff-name,
:staff-info.staff-dept,
:staff-info.staff-job
FROM staff WHERE id = 10 END-EXEC.
For example:
01 staff-indicator-table.
05 staff-indicator pic s9(4) comp-5
occurs 7 times.
This indicator table can be used effectively with the first format of group item
reference above:
EXEC SQL SELECT id, name, dept, job
INTO :staff-record :staff-indicator
FROM staff WHERE id = 10 END-EXEC.
Note: If there are k more indicator entries in the indicator table than there are
subordinates in the data item (for example, if staff-indicator has 10
entries, making k=6), the k extra entries at the end of the indicator table
are ignored. Likewise, if there are k fewer indicator entries than
subordinates, the last k subordinates in the group item do not have
indicators associated with them. Note that you can refer to individual
elements in an indicator table in an SQL statement.
Using REDEFINES in COBOL Group Data Items
You can use the REDEFINES clause when declaring host variables. If you
declare a member of a group data item with the REDEFINES clause and that
group data item is referred to as a whole in an SQL statement, any
subordinate items containing the REDEFINES clause are not expanded. For
example:
01 foo.
10 a pic s9(4) comp-5.
10 a1 redefines a pic x(2).
10 b pic x(10).
That is, the subordinate item a1, declared with the REDEFINES clause is not
automatically expanded out in such situations. If a1 is unambiguous, you can
explicitly refer to a subordinate with a REDEFINES clause in an SQL
statement, as follows:
... INTO :foo.a1 ...
or
... INTO :a1 ...
Table 33 on page 682 shows the COBOL equivalent of each column type.
When the precompiler finds a host variable declaration, it determines the
appropriate SQL type value. The database manager uses this value to convert
the data exchanged between the application and itself.
Not every possible data description for host variables is recognized. COBOL
data items must be consistent with the ones described in the following table.
If you use other data items, an error can result.
Note: There is no host variable support for the DATALINK data type in any
of the DB2 host languages.
Table 33. SQL Data Types Mapped to COBOL Declarations
VARCHAR(n) (448 or 449), 1<=n<=32 672
    01 name.
      49 length PIC S9(4) COMP-5.
      49 name PIC X(n).
    Variable-length character string
LONG VARCHAR (456 or 457), 32 673<=n<=32 700
    01 name.
      49 length PIC S9(4) COMP-5.
      49 data PIC X(n).
    Long variable-length character string
CLOB(n) (408 or 409), 1<=n<=2 147 483 647
    01 MY-CLOB USAGE IS SQL TYPE IS CLOB(n).
    Large object variable-length character string
CLOB locator variable(4) (964 or 965)
    01 MY-CLOB-LOCATOR USAGE IS SQL TYPE IS CLOB-LOCATOR.
    Identifies CLOB entities residing on the server
CLOB file reference variable(4) (920 or 921)
    01 MY-CLOB-FILE USAGE IS SQL TYPE IS CLOB-FILE.
    Descriptor for file containing CLOB data
BLOB(n) (404 or 405), 1<=n<=2 147 483 647
    01 MY-BLOB USAGE IS SQL TYPE IS BLOB(n).
    Large object variable-length binary string
BLOB locator variable(4) (960 or 961)
    01 MY-BLOB-LOCATOR USAGE IS SQL TYPE IS BLOB-LOCATOR.
    Identifies BLOB entities residing on the server
BLOB file reference variable(4) (916 or 917)
    01 MY-BLOB-FILE USAGE IS SQL TYPE IS BLOB-FILE.
    Descriptor for file containing BLOB data
DATE (384 or 385)
    01 identifier PIC X(10).
    10-byte character string
TIME (388 or 389)
    01 identifier PIC X(8).
    8-byte character string
TIMESTAMP (392 or 393)
    01 identifier PIC X(26).
    26-byte character string
The following is a sample SQL declare section with a host variable declared
for each supported SQL data type.
EXEC SQL BEGIN DECLARE SECTION END-EXEC.
*
01 age PIC S9(4) COMP-5.
01 divis PIC S9(9) COMP-5.
01 salary PIC S9(6)V9(3) COMP-3.
01 bonus USAGE IS COMP-1.
01 wage USAGE IS COMP-2.
01 nm PIC X(5).
01 varchar.
The following are additional rules for supported COBOL data types:
v PIC S9 and COMP-3/COMP-5 are required where shown.
v You can use level number 77 instead of 01 for all column types except
VARCHAR, LONG VARCHAR, VARGRAPHIC, LONG VARGRAPHIC and
all LOB variable types.
v Use the following rules when declaring host variables for DECIMAL(p,s)
column types. Refer to the following sample:
01 identifier PIC S9(m)V9(n) COMP-3
– Use V to denote the decimal point.
– Values for n and m must be greater than or equal to 1.
– The value for n + m cannot exceed 31.
– The value for s equals the value for n.
– The value for p equals the value for n + m.
– The repetition factors (n) and (m) are optional. The following examples
are all valid:
01 identifier PIC S9(3)V COMP-3
01 identifier PIC SV9(3) COMP-3
01 identifier PIC S9V COMP-3
01 identifier PIC SV9 COMP-3
– PACKED-DECIMAL can be used instead of COMP-3.
v Arrays are not supported by the COBOL precompiler.
FOR BIT DATA in COBOL
Certain database columns can be declared FOR BIT DATA. These columns,
which generally contain characters, are used to hold binary information. The
CHAR(n), VARCHAR, LONG VARCHAR, and BLOB data types are the
Your application is responsible for converting to and from UCS-2 since this
conversion must be conducted before the data is copied to, and after it is
copied from, the SQLDA. DB2 Universal Database does not supply any
conversion routines that are accessible to your application. Instead, you must
use the system calls available from your operating system. In the case of a
UCS-2 database, you may also consider using the VARCHAR and
VARGRAPHIC scalar functions.
For further information on these functions, refer to the SQL Reference. For
general EUC application development guidelines, see “Japanese and
Traditional Chinese EUC and UCS-2 Code Set Considerations” on page 511.
To locate the INCLUDE file, the DB2 FORTRAN precompiler searches the
current directory first, then the directories specified by the DB2INCLUDE
environment variable. Consider the following examples:
v EXEC SQL INCLUDE payroll
If the file specified in the INCLUDE statement is not enclosed in quotation
marks, as above, the precompiler searches for payroll.sqf, then payroll.f
(payroll.for on OS/2) in each directory in which it looks.
v EXEC SQL INCLUDE 'pay/payroll.f'
If the file name is enclosed in quotation marks, as above, no extension is
added to the name. (For OS/2, the file would be specified as
'pay\payroll.for'.)
If the file name in quotation marks does not contain an absolute path, then
the contents of DB2INCLUDE are used to search for the file, prepended to
whatever path is specified in the INCLUDE file name. For example, with
DB2 for AIX, if DB2INCLUDE is set to ‘/disk2:myfiles/fortran’, the
precompiler searches for ‘./pay/payroll.f’, then ‘/disk2/pay/payroll.f’,
and finally ‘./myfiles/fortran/pay/payroll.f’. The path where the file is
actually found is displayed in the precompiler messages. On OS/2,
substitute back slashes (\) for the forward slashes, and substitute ‘for’ for
the ‘f’ extension in the above example.
The end of the source line serves as the statement terminator. If the line is
continued, the statement terminator is the end of the last continued line.
For example:
EXEC SQL SELECT COL INTO :hostvar FROM TABLE
Note that the actual characters used for end-of-line and TAB vary from
platform to platform. For example, OS/2 uses Carriage Return/Line Feed
for end-of-line, whereas UNIX-based systems use just a Line Feed.
{ INTEGER*2 | INTEGER*4 | REAL*4 | REAL*8 | DOUBLE PRECISION }
    varname [/initial-value/] [, varname [/initial-value/]] ...
CHARACTER[*n] varname [/initial-value/] [, varname [/initial-value/]] ...
Variable Length
SQL TYPE IS VARCHAR(n) varname [, varname] ...
Declaring:
sql type is varchar(1000) my_varchar
Declaring:
sql type is varchar(10000) my_lvarchar
Declaring:
sql type is blob(2m) my_blob
CLOB Example:
Declaring:
sql type is clob(125m) my_clob
Declaring:
SQL TYPE IS CLOB_LOCATOR my_locator
Table 34 shows the FORTRAN equivalent of each column type. When the
precompiler finds a host variable declaration, it determines the appropriate
SQL type value. The database manager uses this value to convert the data
exchanged between the application and itself.
Note: There is no host variable support for the DATALINK data type in any
of the DB2 host languages.
Table 34. SQL Data Types Mapped to FORTRAN Declarations
SQL Column Type1 FORTRAN Data Type SQL Column Type Description
SMALLINT INTEGER*2 16-bit, signed integer
(500 or 501)
INTEGER INTEGER*4 32-bit, signed integer
(496 or 497)
REAL2 REAL*4 Single precision floating point
(480 or 481)
DOUBLE3 REAL*8 Double precision floating point
(480 or 481)
DECIMAL(p,s) No exact equivalent; use REAL*8 Packed decimal
(484 or 485)
The following is a sample SQL declare section with a host variable declared
for each supported data type:
EXEC SQL BEGIN DECLARE SECTION
INTEGER*2 AGE /26/
INTEGER*4 DEPT
REAL*4 BONUS
REAL*8 SALARY
CHARACTER MI
CHARACTER*112 ADDRESS
SQL TYPE IS VARCHAR (512) DESCRIPTION
EXEC SQL END DECLARE SECTION
The following are additional rules for supported FORTRAN data types:
v You may define dynamic SQL statements longer than 254 characters by
using VARCHAR, LONG VARCHAR, or CLOB host variables.
Your application is responsible for converting to and from UCS-2 since this
conversion must be conducted before the data is copied to, and after it is
copied from, the SQLDA. DB2 Universal Database does not supply any
conversion routines that are accessible to your application. Instead, you must
use the system calls available from your operating system. In the case of a
UCS-2 database, you may also consider using the VARCHAR and
VARGRAPHIC scalar functions.
REXX/SQL stored procedures are supported on the OS/2 and Windows 32-bit
operating systems, but not on AIX.
Registering SQLEXEC, SQLDBS and SQLDB2 in REXX
Before using any of the DB2 APIs or issuing SQL statements in an application,
you must register the SQLDBS, SQLDB2 and SQLEXEC routines. This notifies
the REXX interpreter of the REXX/SQL entry points. The method you use for
registering varies slightly between the OS/2 and AIX platforms. The following
examples show the correct syntax for registering each routine:
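For instance, registration on OS/2 or Windows might look like the following sketch (the library name DB2AR is an assumption; confirm it for your platform):

```rexx
/* Register the REXX/SQL entry points once per session */
IF RxFuncQuery('SQLEXEC') THEN
   CALL RxFuncAdd 'SQLEXEC', 'DB2AR', 'SQLEXEC'
IF RxFuncQuery('SQLDBS') THEN
   CALL RxFuncAdd 'SQLDBS', 'DB2AR', 'SQLDBS'
IF RxFuncQuery('SQLDB2') THEN
   CALL RxFuncAdd 'SQLDB2', 'DB2AR', 'SQLDB2'
```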
On OS/2, the RxFuncAdd commands need to be executed only once for all
sessions.
Make each request by passing a valid SQL statement to the SQLEXEC routine.
Use the following syntax:
CALL SQLEXEC 'statement'
SQL statements can be continued onto more than one line. Each part of the
statement should be enclosed in single quotation marks, and a comma must
delimit additional statement text as follows:
CALL SQLEXEC 'SQL text',
'additional text',
...
'final text'
REXX sets the variable VAR to the 3-byte character string 100. If single
quotation marks are to be included as part of the string, follow this example:
VAR = "'100'"
When inserting numeric data into a CHARACTER field, the REXX interpreter
treats numeric data as integer data, thus you must concatenate numeric
strings explicitly and surround them with single quotation marks.
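For example (the table and column names here are hypothetical):

```rexx
/* Concatenate the number into a singly quoted string first */
quantity = 100
qty_char = "'" || quantity || "'"     /* yields '100', quotes included */
CALL SQLEXEC "INSERT INTO ordertab (qtycol) VALUES (" qty_char ")"
```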
Note: The values –8 through –18 are returned only by the GET
ERROR MESSAGE API.
SQLMSG
If SQLCA.SQLCODE is not 0, this variable contains the text message
associated with the error code.
SQLISL
The isolation level. Possible values are:
RR Repeatable read.
RS Read stability.
CS Cursor stability. This is the default.
UR Uncommitted read.
NC No commit (NC is only supported by some host or AS/400
servers.)
SQLCA
The SQLCA structure updated after SQL statements are processed and
DB2 APIs are called. The entries of this structure are described in the
Administrative API Reference.
SQLRODA
The input/output SQLDA structure for stored procedures invoked
using the CALL statement. It is also the output SQLDA structure for
stored procedures invoked using the Database Application Remote
Interface (DARI) API. The entries of this structure are described in the
Administrative API Reference.
SQLRIDA
The input SQLDA structure for stored procedures invoked using the
Database Application Remote Interface (DARI) API. The entries of this
structure are described in the Administrative API Reference.
SQLRDAT
An SQLCHAR structure for server procedures invoked using the
In REXX SQL, LOB types are determined from the string content of your host
variable as follows:
Host variable string content                                      Resulting LOB type
:hv1='ordinary quoted string longer than 32K ...'                 CLOB
:hv2="'string with embedded delimiting quotation marks ",
     "longer than 32K...'"                                        CLOB
:hv3="G'DBCS string with embedded delimiting single ",
     "quotation marks, beginning with G, longer than 32K...'"     DBCLOB
:hv4="BIN'string with embedded delimiting single ",
     "quotation marks, beginning with BIN, any length...'"        BLOB
You must declare LOB locator host variables in your application. When
REXX/SQL encounters these declarations, it treats the declared host variables
as locators for the remainder of the program. Locator values are stored in
REXX variables in an internal format.
Example:
CALL SQLEXEC 'DECLARE :hv1, :hv2 LANGUAGE TYPE CLOB LOCATOR'
Example:
CALL SQLEXEC 'FREE LOCATOR :hv1, :hv2'
“Syntax for LOB File Reference Variables in REXX” shows the syntax for
declaring LOB file reference host variables in REXX.
Example:
CALL SQLEXEC 'DECLARE :hv3, :hv4 LANGUAGE TYPE CLOB FILE'
File reference variables in REXX contain three fields. For the above example
they are:
hv3.FILE_OPTIONS
    Set by the application to indicate how the file will be used.
hv3.DATA_LENGTH
    Set by DB2 to indicate the size of the file.
hv3.NAME
    Set by the application to the name of the LOB file.
Note: A file reference host variable is a compound variable in REXX, thus you
must set values for the NAME, NAME_LENGTH and FILE_OPTIONS fields in
addition to declaring them.
Clearing LOB Host Variables in REXX
On OS/2 it may be necessary to explicitly clear REXX SQL LOB locator and
file reference host variable declarations as they remain in effect after your
application program ends. This is because the application process does not
exit until the session in which it is run is closed. If REXX SQL LOB
declarations are not cleared, they may interfere with other applications that
are running in the same session after a LOB application has been executed.
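The clearing statement takes the following general form (a sketch; verify the exact statement text against the REXX SQL documentation for your release):

```rexx
CALL SQLEXEC 'CLEAR SQL VARIABLE DECLARATIONS'
```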
You should code this statement at the end of LOB applications. Note that you
can code it anywhere as a precautionary measure to clear declarations which
might have been left by previous applications (for example, at the beginning
of a REXX SQL application).
Note: There is no host variable support for the DATALINK data type in any
of the DB2 host languages.
Notes:
1. The first number under Column Type indicates that an indicator variable is not provided, and the second number
indicates that an indicator variable is provided. An indicator variable is needed to indicate NULL values, or to
hold the length of a truncated string.
2. FLOAT(n) where 0 < n < 25 is a synonym for REAL. The difference between REAL and DOUBLE in the SQLDA is
the length value (4 or 8).
3. The following SQL types are synonyms for DOUBLE:
v FLOAT
v FLOAT(n) where 24 < n < 54
v DOUBLE PRECISION
4. This is not a column type but a host variable type.
On OS/2, your application file must have a .CMD extension. After creation,
you can run your application directly from the operating system command
prompt.
On Windows 32-bit operating systems, your application file can have any
name. After creation, you can run your application from the operating system
command prompt by invoking the REXX interpreter as follows:
REXX file_name
On AIX, your application file can have any extension. You can run your
application using either of the following two methods:
1. At the shell command prompt, type rexx name where name is the name of
your REXX program.
2. If the first line of your REXX program contains a "magic number" (#!) and
identifies the directory where the REXX/6000 interpreter resides, you can
run your REXX program by typing its name at the shell command prompt.
For example, if the REXX/6000 interpreter file is in the /usr/bin directory,
include the following as the very first line of your REXX program:
#! /usr/bin/rexx
Run your REXX program by typing its file name at the shell command
prompt.
Note: On AIX, you should set the LIBPATH environment variable to include
the directory where the REXX SQL library, db2rexx is located. For
example:
export LIBPATH=/lib:/usr/lib:/usr/lpp/db2_07_01/lib
Note: In some cases, it may be necessary to explicitly bind these files to the
database.
When you use the SQLEXEC routine, the package created with cursor stability
is used as a default. If you require one of the other isolation levels, you can
change isolation levels with the SQLDBS CHANGE SQL ISOLATION LEVEL
API, before connecting to the database. This will cause subsequent calls to the
SQLEXEC routine to be associated with the specified isolation level.
OS/2 REXX applications cannot assume that the default isolation level is in
effect unless they know that no other REXX programs in the session have
changed the setting. Before connecting to a database, a REXX application
should explicitly set the isolation level.
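A sketch of setting the isolation level before connecting (the database name is illustrative; check the Administrative API Reference for the exact command string):

```rexx
/* Must be issued before CONNECT; later SQLEXEC calls use this level */
CALL SQLDBS 'CHANGE SQL ISOLATION LEVEL TO RS'
CALL SQLEXEC "CONNECT TO sample"
```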
For information on how the DB2 APIs work, see the complete descriptions in
the DB2 API chapter of the Administrative API Reference.
If a DB2 API you want to use cannot be called using the SQLDBS routine (and
consequently, not listed in the Administrative API Reference), you may still call
the API by calling the DB2 command line processor (CLP) from within the
REXX application. However, since the DB2 CLP directs output either to the
standard output device or to a specified file, your REXX application cannot
directly access the output from the called DB2 API nor can it easily make a
determination as to whether the called API is successful or not. The SQLDB2
routine, described below, addresses this limitation.
You can use the SQLDB2 routine to call DB2 APIs using the following syntax:
CALL SQLDB2 'command string'
Calling a DB2 API using SQLDB2 is equivalent to calling the CLP directly,
except for the following:
v The call to the CLP executable is replaced by the call to SQLDB2 (all other
CLP options and parameters are specified the same way).
v The REXX compound variable SQLCA is set after calling the SQLDB2 but is
not set after calling the CLP executable.
v The default display output of the CLP is set to off when you call SQLDB2,
whereas the display output is set to on when you call the CLP executable.
Note that you can turn the display output of the CLP on by passing the
+o or the −o− option to SQLDB2.
Since the only REXX variable that is set after you call SQLDB2 is the SQLCA,
you should use this routine only to call DB2 APIs that do not return any data other
than the SQLCA and that are not currently implemented through the SQLDBS
interface. Thus, only the following DB2 APIs are supported by SQLDB2:
Activate Database
Add Node
Bind for DB2 Version 1(1) (2)
Bind for DB2 Version 2 or 5(1)
Create Database at Node
Drop Database at Node
Drop Node Verify
Deactivate Database
Deregister
Load(3)
Load Query
Precompile Program(1)
Rebind Package(1)
Redistribute Nodegroup
Register
Start Database Manager
Stop Database Manager
Note: Although the SQLDB2 routine is intended to be used only for the DB2
APIs listed above, it can also be used for other DB2 APIs that are not
supported through the SQLDBS routine. Alternatively, the DB2 APIs
can be accessed through the CLP from within the REXX application.
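For example, one of the supported APIs could be called through SQLDB2 as in this sketch:

```rexx
/* Equivalent to the CLP command "db2start"; only the SQLCA is set */
CALL SQLDB2 'START DATABASE MANAGER'
IF SQLCA.SQLCODE \= 0 THEN
   SAY 'START DATABASE MANAGER failed, SQLCODE =' SQLCA.SQLCODE
```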
Data can also be passed to stored procedures through SQLDA REXX variables,
using the USING DESCRIPTOR syntax of the CALL statement. Table 36 shows
how the SQLDA is set up. In the table, ':value' is the stem of a REXX host
variable that contains the values needed for the application. For the
DESCRIPTOR, 'n' is a numeric value indicating a specific sqlvar element of the
SQLDA. The numbers on the right refer to the notes following Table 36.
Table 36. Client-side REXX SQLDA for Stored Procedures using the CALL Statement
USING DESCRIPTOR :value.SQLD 1
:value.n.SQLTYPE 1
:value.n.SQLLEN 1
:value.n.SQLDATA 1 2
:value.n.SQLDIND 1 2
When using descriptors, SQLDATA must be initialized and contain data that
is type compatible with any data that is returned from the server procedure.
You should perform this initialization even if the SQLIND field contains a
negative value.
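A minimal sketch of such a descriptor for one CHAR(10) parameter follows (the procedure name and the SQLTYPE code are illustrative; see the Administrative API Reference for the exact type codes):

```rexx
parm.SQLD = 1                    /* one sqlvar element                  */
parm.1.SQLTYPE = 453             /* fixed-length CHAR, nullable         */
parm.1.SQLLEN  = 10
parm.1.SQLDATA = 'PELLOW    '    /* initialized, type-compatible value  */
parm.1.SQLDIND = 0               /* 0 means not NULL on input           */
CALL SQLEXEC "CALL myproc USING DESCRIPTOR :parm"
```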
You can use Table 37 as a quick reference aid. For a complete discussion of all
the statements, including their syntax, refer to the SQL Reference.
Table 37. SQL Statements (DB2 Universal Database)
SQL Statement | Dynamic(1) | Command Line Processor (CLP) | Call Level Interface(3) (CLI) | SQL Procedure
ALLOCATE CURSOR X
assignment statement X
ASSOCIATE LOCATORS X
ALTER { BUFFERPOOL, NICKNAME(10), NODEGROUP, SERVER(10), TABLE,
TABLESPACE, USER MAPPING(10), TYPE, VIEW }   X X X
BEGIN DECLARE SECTION(2)
CALL   X(9) X(4) X
CASE statement X
CLOSE   X   SQLCloseCursor(), SQLFreeStmt()   X
COMMENT ON X X X X
COMMIT X X SQLEndTran, SQLTransact() X
Compound SQL (Embedded)(4)   X
compound statement X
The sample programs used in this book show examples of embedded SQL
statements and API calls in the supported host languages. The sample
programs are written to be short and simple. Production applications should
check the return codes, and especially the SQLCODE or SQLSTATE from all
API calls and SQL statements. For information on handling error conditions,
SQLCODEs, and SQLSTATEs, see “Diagnostic Handling and the SQLCA
Structure” on page 115. See the Application Building Guide for details on how to
install, build, and execute these programs in your environment.
Notes:
1. This section describes sample programs for the programming languages
for all platforms supported by DB2. Not all sample programs have been
ported to all platforms or supported programming languages.
2. DB2 sample programs are provided "as is" without any warranty of any
kind. The user, and not IBM, assumes the entire risk of quality,
performance, and repair of any defects.
The sample programs come with the DB2 Application Development (DB2 AD)
Client. You can use the sample programs as templates to create your own
applications.
Sample program file extensions differ for each supported language, and for
embedded SQL and non-embedded SQL programs within each language. File
extensions may also differ for groups of programs within a language. These
different sample file extensions are categorized in the following tables:
Sample File Extensions by Language
Table 38 on page 731.
Sample File Extensions by Program Group
Table 39 on page 731.
Note:
Directory delimiters
    On UNIX platforms, the directory delimiter is /. On OS/2 and
    Windows platforms, it is \. In the tables, the UNIX delimiters
    are used unless the directory is only available on Windows
    and/or OS/2.
File extensions
    File extensions are provided for the samples in the tables where
    only one extension exists.
You can find the sample programs in the samples subdirectory of the directory
where DB2 has been installed. There is a subdirectory for each supported
language. The following examples show you how to locate the samples
written in C or C++ on each supported platform.
v On UNIX platforms.
Java Samples
Table 45. Java Database Connectivity (JDBC) Sample Programs
Sample Program
Name Program Description
DB2Appl.java A JDBC application that queries the sample database using the invoking user’s
privileges.
DB2Applt.java A JDBC applet that queries the database using the JDBC applet driver. It uses the
user name, password, server, and port number parameters specified in
DB2Applt.html.
DB2Applt.html An HTML file that embeds the applet sample program, DB2Applt. It needs to be
customized with server and user information.
DB2UdCli.java A Java client application that calls the Java user-defined function, DB2Udf.
Dynamic.java Demonstrates a cursor using dynamic SQL.
MRSPcli.java This is the client program that calls the server program MRSPsrv. The program
demonstrates multiple result sets being returned from a Java stored procedure.
MRSPsrv.java This is the server program that is called by the client program, MRSPcli. The
program demonstrates multiple result sets being returned from a Java stored
procedure.
Outcli.java A Java client application that calls the SQLJ stored procedure, Outsrv.
PluginEx.java A Java program that adds new menu items and toolbar buttons to the DB2 Web
Control Center.
Spclient.java A JDBC client application that calls PARAMETER STYLE JAVA stored procedures
in the Spserver stored procedure class.
Spcreate.db2 A CLP script that contains the CREATE PROCEDURE statements to register the
methods contained in the Spserver class as stored procedures.
Spdrop.db2 A CLP script that contains the DROP PROCEDURE statements necessary for
deregistering the stored procedures contained in the Spserver class.
Spserver.java A JDBC program demonstrating PARAMETER STYLE JAVA stored procedures. The
client program is Spclient.java.
UDFcli.java A JDBC client application that calls functions in the Java user-defined function
library, UDFsrv.
Table 50. Object Linking and Embedding Database (OLE DB) Table Functions
Sample Program
Name Program Description
jet.db2 Microsoft.Jet.OLEDB.3.51 Provider
This chapter describes how you can write DB2DARI and DB2GENERAL
parameter style stored procedures and DB2GENERAL UDFs.
If your application will be working with character strings defined as FOR BIT
DATA, you need to initialize the SQLDAID field to indicate that the SQLDA
includes FOR BIT DATA definitions and the SQLNAME field of each SQLVAR
that defines a FOR BIT DATA element.
If your application will be working with large objects, that is, data with types
of CLOB, BLOB, or DBCLOB, you will also need to initialize the secondary
SQLVAR elements. For information on the SQLDA structure, refer to the SQL
Reference.
Using Host Variables in a DB2DARI Client
Declare SQLVARs using the same approach discussed in “Allocating Host
Variables” on page 192. In addition, the client application should set the
indicator of output-only SQLVARs to -1 as discussed in “Data Structure
Manipulation” on page 753. This improves the performance of the parameter
passing mechanism, because only the indicator, not the contents of the
SQLDATA pointer, is sent. You should set the SQLTYPE
field to a nullable data type for these parameters. If the SQLTYPE indicates a
non-nullable data type, the indicator variable is not checked by the database
manager.
Using the SQLDA in a Stored Procedure
The stored procedure is invoked by the SQL CALL statement and executes
using data passed to it by the client application. Information is returned to the
client application using the stored procedure’s SQLDA structure.
The parameters of the SQL CALL statement are treated as both input and
output parameters and are converted into the following format for the stored
procedure:
SQL_API_RC SQL_API_FN proc_name( void *reserved1,
void *reserved2,
struct sqlda *inoutsqlda,
struct sqlca *sqlca )
SQL_API_FN is a macro that specifies the calling convention for a function;
the convention can vary across supported operating systems. This macro is
required when you write stored procedures or UDFs.
Note: The SQLDA structure is not passed to the stored procedure if the
number of elements, SQLD, is set to 0. In that case, the stored
procedure receives a NULL pointer instead.
Note that an indicator variable is not reset if the client or the server sets it to a
negative value (indicating that the SQLVAR should not be passed). If the host
variable to which the SQLVAR refers is given a value in the stored procedure
or the client code, its indicator variable should be set to zero or a positive
value so that the value is passed. For example, consider a stored procedure
which takes one output-only parameter, called as follows:
empind = -1;
EXEC SQL CALL storproc(:empno:empind);
When the stored procedure sets the value for the first SQLVAR, it should also
set the value of the indicator to a non-negative value so that the result is
passed back to empno.
Summary of Data Structure Usage
Table 53 summarizes the use of the various structure fields by the stored
procedures application. In the table, sqlda is an SQLDA structure passed to
the stored procedure and n is a numeric value indicating a specific SQLVAR
element of the SQLDA. The numbers on the right refer to the notes following
the table.
Table 53. Stored Procedures Parameter Variables
Input/Output SQLDA:
    sqlda.SQLDAID (see note 4)
    sqlda.SQLDABC (see note 4)
    sqlda.SQLN (see notes 2, 4)
    sqlda.SQLD (see notes 2, 3, 5)
Input/Output SQLVAR:
    sqlda.n.SQLTYPE (see notes 2, 3, 5)
    sqlda.n.SQLLEN (see notes 2, 3, 5)
    sqlda.n.SQLDATA (see notes 1, 2, 3, 6, 8)
Note: It is possible to use the same variable for both input and output.
If the client application issues multiple calls to invoke the same stored
procedure, the stored procedure should return SQLZ_HOLD_PROC; the stored
procedure is then not unloaded between calls.
Notes:
1. The difference between REAL and DOUBLE in the SQLDA is the length value (4 or 8).
2. Parenthesized types, such as the C null-terminated graphic string, occur in stored procedures when
the calling application uses embedded SQL with some host variable types.
3. The Blob and Clob classes are provided in the COM.ibm.db2.app package. Their interfaces include
routines to generate an InputStream and OutputStream for reading from and writing to a Blob, and a
Reader and Writer for a Clob. See “Classes for Java Stored Procedures and UDFs” for descriptions of
the classes.
4. SQL DATE, TIME, and TIMESTAMP values use the ISO string encoding in Java, as they do for UDFs
coded in C.
If such an object is returned as an output using the set() method, code page
conversions may be applied in order to represent the Java Unicode characters
in the database code page.
Classes for Java Stored Procedures and UDFs
Java stored procedures are very similar to Java UDFs. Like table functions,
they can have multiple outputs. They also use the same conventions for
NULL values, and the same set routine for output. The main difference is that
a Java class that contains stored procedures must inherit from the
COM.ibm.db2.app.StoredProc class instead of the COM.ibm.db2.app.UDF class.
Refer to “COM.ibm.db2.app.StoredProc” on page 758 for a description of the
COM.ibm.db2.app.StoredProc class.
This interface provides the following routine to fetch a JDBC connection to the
embedding application context:
public java.sql.Connection getConnection()
There are five classes and interfaces that you can use with Java stored
procedures or UDFs:
v COM.ibm.db2.app.StoredProc
v COM.ibm.db2.app.UDF
v COM.ibm.db2.app.Lob
v COM.ibm.db2.app.Blob
v COM.ibm.db2.app.Clob
The following sections describe the public aspects of these classes’ behavior:
COM.ibm.db2.app.StoredProc
A Java class that contains methods intended to be called as PARAMETER
STYLE DB2GENERAL stored procedures must be public and must implement
this Java interface. You must declare such a class as follows:
public class <user-STP-class> extends COM.ibm.db2.app.StoredProc{ ... }
Any exception returned from the stored procedure is caught by the database
and returned to the caller with SQLCODE -4302, SQLSTATE 38501. A JDBC
SQLException or SQLWarning is handled specially and passes its own
SQLCODE, SQLSTATE etc. to the calling application verbatim.
public StoredProc() throws Exception
This constructor is called by the database before the stored procedure call.
public boolean isNull(int) throws Exception
This function tests whether an input argument with the given index is an SQL
NULL.
The set() functions (with the same signatures as those listed below for
COM.ibm.db2.app.UDF) set the output argument with the given index to the
given value. The index has to refer to a valid output argument, the data type
must match, and the value must have an acceptable length and contents.
Strings with Unicode characters must be representable in the database code
page. Errors result in an exception being thrown.
public java.sql.Connection getConnection() throws Exception
This function returns a JDBC object that represents the calling application’s
connection to the database. It is analogous to the result of a null SQLConnect()
call in a C stored procedure.
COM.ibm.db2.app.UDF
A Java class that contains methods intended to be called as PARAMETER
STYLE DB2GENERAL UDFs must be public and must implement this Java
interface. You must declare such a class as follows:
public class <user-UDF-class> extends COM.ibm.db2.app.UDF{ ... }
You can only call methods of the COM.ibm.db2.app.UDF interface in the context
of the currently executing UDF. For example, you cannot use operations on
LOB arguments, result- or status-setting calls, etc., after a UDF returns. A Java
exception will be thrown if this rule is violated.
Argument-related calls use a column index to identify the column being set.
These start at 1 for the first argument. Output arguments are numbered
higher than the input arguments. For example, a scalar UDF with three inputs
uses index 4 for the output.
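The numbering rule can be stated directly. The following is only an
illustration in plain Java; the class and method names are ours, not part of
the COM.ibm.db2.app API:

```java
// Illustrative index arithmetic for DB2GENERAL argument-related calls:
// arguments are 1-based, and output argument n of a routine with k
// inputs has index k + n.
public class ArgIndexDemo {
    public static int outputIndex(int inputCount, int outputPosition) {
        return inputCount + outputPosition;
    }

    public static void main(String[] args) {
        // A scalar UDF with three inputs uses index 4 for its output.
        System.out.println(outputIndex(3, 1)); // prints 4
    }
}
```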
Any exception returned from the UDF is caught by the database and returned
to the caller with SQLCODE -4302, SQLSTATE 38501.
Table function UDF methods use getCallType() to find out the call type for a
particular call. It returns a value as follows (symbolic defines are provided for
these values in the COM.ibm.db2.app.UDF class definition):
v -2 FIRST call
v -1 OPEN call
v 0 FETCH call
v 1 CLOSE call
v 2 FINAL call
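A table UDF typically switches on this value. The following plain-Java
sketch mirrors the documented integer values; the constant names follow the
C defines in sqludf.h, but the class, method, and action strings are
illustrative, not the DB2 API:

```java
// Hypothetical simulation of table-UDF call-type dispatch. The integer
// values are those returned by getCallType(); everything else here is
// illustrative scaffolding with no DB2 dependency.
public class CallTypeDemo {
    public static final int SQLUDF_TF_FIRST = -2; // FIRST call
    public static final int SQLUDF_TF_OPEN  = -1; // OPEN call
    public static final int SQLUDF_TF_FETCH =  0; // FETCH call
    public static final int SQLUDF_TF_CLOSE =  1; // CLOSE call
    public static final int SQLUDF_TF_FINAL =  2; // FINAL call

    // Map a call type to the action a table UDF would take on that call.
    public static String dispatch(int callType) {
        switch (callType) {
            case SQLUDF_TF_FIRST: return "allocate one-time resources";
            case SQLUDF_TF_OPEN:  return "reset scan state";
            case SQLUDF_TF_FETCH: return "produce next row or SQLSTATE 02000";
            case SQLUDF_TF_CLOSE: return "release scan state";
            case SQLUDF_TF_FINAL: return "free one-time resources";
            default: throw new IllegalArgumentException("unknown call type");
        }
    }

    public static void main(String[] args) {
        for (int t = SQLUDF_TF_FIRST; t <= SQLUDF_TF_FINAL; t++)
            System.out.println(t + ": " + dispatch(t));
    }
}
```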
public boolean isNull(int) throws Exception
This function tests whether an input argument with the given index is an SQL
NULL.
public boolean needToSet(int) throws Exception
This function tests whether an output argument with the given index needs to
be set. This may be false for a table UDF declared with DBINFO, if that
column is not used by the UDF caller.
public void set(int, short) throws Exception
public void set(int, int) throws Exception
public void set(int, double) throws Exception
public void set(int, float) throws Exception
public void set(int, java.math.BigDecimal) throws Exception
public void set(int, String) throws Exception
public void set(int, COM.ibm.db2.app.Blob) throws Exception
public void set(int, COM.ibm.db2.app.Clob) throws Exception
This function sets the output argument with the given index to the given
value. The index has to refer to a valid output argument, the data type must
match, and the value must have an acceptable length and contents. Strings
with Unicode characters must be representable in the database code page.
Errors result in an exception being thrown.
public void setSQLstate(String) throws Exception
This function may be called from a UDF to set the SQLSTATE to be returned
from this call. A table UDF should call this function with "02000" to signal the
end-of-table condition. If the string is not acceptable as an SQLSTATE, an
exception will be thrown.
public void setSQLmessage(String) throws Exception
This function is similar to the setSQLstate function. It sets the SQL message
result. If the string is not acceptable (for example, longer than 70 characters),
an exception will be thrown.
public String getFunctionName() throws Exception
This function returns the name of the executing UDF.
public byte[] getDBinfo() throws Exception
This function returns a raw, unprocessed DBINFO structure for the executing
UDF, as a byte array. You must first declare the UDF with the DBINFO option.
public String getDBname() throws Exception
public String getDBauthid() throws Exception
public String getDBtbschema() throws Exception
public String getDBtbname() throws Exception
public String getDBcolname() throws Exception
public String getDBver_rel() throws Exception
public String getDBplatform() throws Exception
public String getDBapplid() throws Exception
These functions return the value of the appropriate field from the DBINFO
structure of the executing UDF.
public int[] getDBcodepg() throws Exception
This function returns the SBCS, DBCS, and composite code page numbers for
the database, from the DBINFO structure. The returned integer array has the
respective numbers as its first three elements.
public byte[] getScratchpad() throws Exception
This function returns a copy of the scratchpad of the currently executing UDF.
You must first declare the UDF with the SCRATCHPAD option.
public void setScratchpad(byte[]) throws Exception
This function overwrites the scratchpad of the currently executing UDF with
the contents of the given byte array. You must first declare the UDF with the
SCRATCHPAD option. The byte array must have the same size as
getScratchpad() returns.
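The scratchpad is just a byte array, so a UDF packs its state into it
explicitly. A minimal sketch of keeping a call counter in a scratchpad, in
pure Java with no DB2 classes; packing through ByteBuffer is one reasonable
choice, not something the interface mandates:

```java
import java.nio.ByteBuffer;

// Illustrative scratchpad handling: pack an int counter into the first
// four bytes, as a DB2GENERAL UDF might do between getScratchpad() and
// setScratchpad() calls. Plain Java, no DB2 dependency.
public class ScratchpadDemo {
    // Read the counter stored at the start of the scratchpad.
    public static int readCounter(byte[] scratchpad) {
        return ByteBuffer.wrap(scratchpad).getInt();
    }

    // Return a copy of the scratchpad with the counter incremented;
    // the size must not change, matching the setScratchpad() rule.
    public static byte[] increment(byte[] scratchpad) {
        byte[] copy = scratchpad.clone();
        ByteBuffer.wrap(copy).putInt(readCounter(scratchpad) + 1);
        return copy;
    }

    public static void main(String[] args) {
        byte[] pad = new byte[100]; // the database supplies a zeroed scratchpad
        for (int i = 0; i < 3; i++) pad = increment(pad);
        System.out.println(readCounter(pad)); // prints 3
    }
}
```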
COM.ibm.db2.app.Lob
This class provides utility routines that create temporary Blob or Clob objects
for computation inside user-defined functions or stored procedures.
COM.ibm.db2.app.Blob
An instance of this class is passed by the database to represent a BLOB as
UDF or stored procedure input, and may be passed back as output. The
application may create instances, but only in the context of an executing UDF
or stored procedure. Uses of these objects outside such a context will throw an
exception.
public java.io.InputStream getInputStream() throws Exception
This function returns a new InputStream to read the contents of the BLOB.
Efficient seek/mark operations are available on that object.
public java.io.OutputStream getOutputStream() throws Exception
This function returns a new OutputStream to which the contents of the BLOB
can be appended.
COM.ibm.db2.app.Clob
An instance of this class is passed by the database to represent a CLOB or
DBCLOB as UDF or stored procedure input, and may be passed back as
output. The application may create instances, but only in the context of an
executing UDF or stored procedure. Uses of these objects outside such a
context will throw an exception.
Clob instances store characters in the database code page. Some Unicode
characters may not be representable in this code page, and may cause an
exception to be thrown during conversion. This may happen during an
append operation, or during a UDF or StoredProc set() call. This conversion
is necessary to hide the distinction between a CLOB and a DBCLOB from the
Java programmer.
public java.io.Reader getReader() throws Exception
This function returns a new Reader to read the contents of the CLOB or
DBCLOB. Efficient seek/mark operations are available on that object.
public java.io.Writer getWriter() throws Exception
This function returns a new Writer to which the contents of the CLOB or
DBCLOB can be appended.
Without using stored procedures, the sample program would have been
designed to transmit data across the network in four separate requests in
order to process each SQL statement, as shown in Figure 23.
Instead, the sample program makes use of the stored procedures technique to
transmit all of the data across the network in one request, allowing the server
procedure to execute the SQL statements as a group. This technique is shown
in Figure 24.
See “Using GET ERROR MESSAGE in Example Programs” on page 118 for the
source code for this error checking utility.
if (argc != 4) {
printf ("\nUSAGE: inpcli remote_database userid passwd\n\n");
return 1;
}
/********************************************************\
* Call the Remote Procedure via CALL with Host Variables *
\********************************************************/
printf("Use CALL with Host Variable to invoke the Server Procedure"
" named inpsrv.\n");
tableind = dataind0 = dataind1 = dataind2 = 0;
inout_sqlda->sqlvar[0].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[0].sqldata = table_name;
inout_sqlda->sqlvar[0].sqllen = strlen( table_name ) + 1;
inout_sqlda->sqlvar[0].sqlind = &tableind;
inout_sqlda->sqlvar[1].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[1].sqldata = data_item0;
inout_sqlda->sqlvar[1].sqllen = strlen( data_item0 ) + 1;
inout_sqlda->sqlvar[1].sqlind = &dataind0;
inout_sqlda->sqlvar[2].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[2].sqldata = data_item1;
inout_sqlda->sqlvar[2].sqllen = strlen( data_item1 ) + 1;
inout_sqlda->sqlvar[2].sqlind = &dataind1;
inout_sqlda->sqlvar[3].sqltype = SQL_TYP_NCSTR;
inout_sqlda->sqlvar[3].sqldata = data_item2;
inout_sqlda->sqlvar[3].sqllen = strlen( data_item2 ) + 1;
inout_sqlda->sqlvar[3].sqlind = &dataind2;
/***********************************************\
* Call the Remote Procedure via CALL with SQLDA *
\***********************************************/
printf("Use CALL with SQLDA to invoke the Server Procedure named "
"inpsrv.\n");
#ifdef __cplusplus
extern "C"
#endif
SQL_API_RC SQL_API_FN inpsrv(void *reserved1,
void *reserved2,
struct sqlda *inout_sqlda,
struct sqlca *ca)
{
/* Declare a local SQLCA */
EXEC SQL INCLUDE SQLCA;
/*-----------------------------------------------------------------*/
/* Assign the data from the SQLDA to local variables so that we */
/* don't have to refer to the SQLDA structure further. This will */
/* provide better portability to other platforms such as DB2 MVS */
/* where they receive the parameter list differently. */
/*-----------------------------------------------------------------*/
table_name = inout_sqlda->sqlvar[0].sqldata;
num_of_data = inout_sqlda->sqld - 1;
/*-----------------------------------------------------------------*/
/* Create President Table */
/* - For simplicity, we'll ignore any errors from the */
/* CREATE TABLE so that you can run this program even when the */
/* table already exists due to a previous run. */
/*-----------------------------------------------------------------*/
/*-----------------------------------------------------------------*/
/* Generate and execute a PREPARE for an INSERT statement, and */
/* then insert the three presidents. */
/*-----------------------------------------------------------------*/
strcat(insert_stmt, table_name );
strcat(insert_stmt, " VALUES (?)");
/*-----------------------------------------------------------------*/
/* Return to caller */
/* - Copy the SQLCA */
/* - Update the output SQLDA. Since there's no output to */
/* return, we are setting the indicator values to -128 to */
/* return only a null value. */
/*-----------------------------------------------------------------*/
ext:
memcpy(ca, &sqlca, sizeof(struct sqlca));
if (inout_sqlda != NULL)
{
for (cntr = 0; cntr < inout_sqlda->sqld; cntr++)
{
*(inout_sqlda->sqlvar[cntr].sqlind) = -128;
}
}
return(SQLZ_DISCONNECT_PROC);
}
DB2 Connect enables you to use the following APIs with host database
products such as DB2 Universal Database for OS/390, as long as the item is
supported by the host database product:
v Embedded SQL, both static and dynamic
v The DB2 Call Level Interface
v The Microsoft ODBC API
v JDBC.
Some SQL statements differ among relational database products. You may
encounter SQL statements that are:
v The same for all the database products that you use regardless of standards
v Documented in the SQL Reference and are therefore available in all IBM
relational database products
v Unique to one database system that you access.
SQL statements in the first two categories are highly portable, but those in the
third category will first require changes. In general, SQL statements in Data
Definition Language (DDL) are not as portable as those in Data Manipulation
Language (DML).
DB2 Connect accepts some SQL statements that are not supported by DB2
Universal Database. DB2 Connect passes these statements on to the host or
AS/400 server. For information on limits on different platforms, such as the
maximum column length, refer to the SQL Reference.
If you move a CICS application from OS/390 or VSE to run under another
CICS product (for example, CICS for AIX), it can also access the OS/390 or
Note: You can use DB2 Connect with a DB2 Universal Database Version 7
database, although it would be more efficient to use the DB2 private
protocol without DB2 Connect. Most of the incompatibility issues listed
in the following sections will not apply if you are using DB2 Connect
against a DB2 Universal Database Version 7 database, except in cases
where a restriction is due to a limitation of DB2 Connect itself, for
example, the non-support of Abstract Data Types.
When you program in a host or AS/400 environment, you should consider
the following specific factors:
v Using Data Definition Language (DDL)
v Using Data Manipulation Language (DML)
v Using Data Control Language (DCL)
v Connecting and disconnecting
v Precompiling
v Defining a sort order
v Managing referential integrity
v Locking
v Differences in SQLCODEs and SQLSTATEs
v Using system catalogs
v Isolation levels
v Stored procedures
v NOT ATOMIC compound SQL
v Distributed unit of work
v SQL statements supported or rejected by DB2 Connect.
DDL statements differ among the IBM database products because storage is
handled differently on different systems. On host or AS/400 server systems,
there can be several steps between designing a database and issuing a
CREATE TABLE statement. For example, a series of statements may translate
the design of logical objects into the physical representation of those objects in
storage.
The precompiler passes many such DDL statements to the host or AS/400
server when you precompile to a host or AS/400 server database. The same
statements would not precompile against a database on the system where the
You can explicitly disconnect by using the CONNECT RESET statement (for
type 1 connect), the RELEASE and COMMIT statements (for type 2 connect),
or the DISCONNECT statement (either type of connect, but not in a TP
monitor environment).
Note: An application can receive SQLCODEs indicating errors and still end
normally; DB2 Connect commits the data in this case. If you do not
want the data to be committed, you must issue a ROLLBACK
command.
The FORCE command lets you disconnect selected users or all users from the
database. This is supported for host or AS/400 server databases; the user can
be forced off the DB2 Connect workstation.
Precompiling
There are some differences in the precompilers for different IBM relational
database systems. The precompiler for DB2 Universal Database differs from
the host or AS/400 server precompilers in the following ways:
v It makes only one pass through an application.
v When binding against DB2 Universal Database databases, objects must exist
for a successful bind. VALIDATE RUN is not supported.
Blocking
The DB2 Connect program supports the DB2 database manager blocking bind
options:
UNAMBIG
Only unambiguous cursors are blocked (the default).
ALL Ambiguous cursors are blocked.
NO Cursors are not blocked.
The DB2 Connect program uses the block size defined in the DB2 database
manager configuration file for the RQRIOBLK field. Current versions of DB2
Connect support block sizes up to 32 767. If larger values are specified in the
DB2 database manager configuration file, DB2 Connect uses a value of 32 767
but does not reset the DB2 database manager configuration file. Blocking is
handled the same way using the same block size for dynamic and static SQL.
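The capping rule described above amounts to a simple clamp; this sketch is
illustrative only (the class, constant, and method names are ours — DB2
Connect applies the rule internally):

```java
// Illustrative only: DB2 Connect caps the effective RQRIOBLK block size
// at 32767 without rewriting the database manager configuration file.
public class BlockSizeDemo {
    public static final int MAX_BLOCK_SIZE = 32767;

    public static int effectiveBlockSize(int configuredRqrioblk) {
        return Math.min(configuredRqrioblk, MAX_BLOCK_SIZE);
    }

    public static void main(String[] args) {
        System.out.println(effectiveBlockSize(65535)); // prints 32767
        System.out.println(effectiveBlockSize(4096));  // prints 4096
    }
}
```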
Specify the block size in the DB2 database manager configuration file by using
the CLP, the Control Center, or an API, as listed in the Administrative API
Reference and Command Reference.
Each host or AS/400 server system has limitations on the use of these
attributes:
DB2 Universal Database for OS/390
All four attributes can be different. The use of a different qualifier
requires special administrative privileges. For more information on the
conditions concerning the usage of these attributes, refer to the
Command Reference for DB2 Universal Database for OS/390.
DB2 for VSE & VM
All of the attributes must be identical. If USER1 creates a bind file
(with PREP), and USER2 performs the actual bind, USER2 needs DBA
authority to bind for USER1. Only USER1’s user name is used for the
attributes.
DB2 Universal Database for AS/400
The qualifier indicates the collection name. The relationship between
qualifiers and ownership affects the granting and revoking of
privileges on the object. The user name that is logged on is the creator
and owner unless it is qualified by a collection ID, in which case the
collection ID is the owner. The collection ID must already exist before
it is used as a qualifier.
DB2 Universal Database
All four attributes can be different. The use of a different owner
requires administrative authority and the binder must have
CREATEIN privilege on the schema (if it already exists).
Note: DB2 Connect provides support for the SET CURRENT PACKAGESET
command for DB2 Universal Database for OS/390 and DB2 Universal
Database.
Note: Database tables can now be stored on DB2 Universal Database for
OS/390 in ASCII format. This permits faster exchange of data between
DB2 Connect and DB2 Universal Database for OS/390, and removes the
need to provide field procedures which must otherwise be used to
convert data and resequence it.
Locking
The way in which the database server performs locking can affect some
applications. For example, applications designed around row-level locking
and the isolation level of cursor stability are not directly portable to systems
that perform page-level locking. Because of these underlying differences,
applications may need to be adjusted.
The DB2 Universal Database for OS/390 and DB2 Universal Database
products have the ability to time-out a lock and send an error return code to
waiting applications.
The catalog functions in CLI get around this problem by providing the same
API and result sets for catalog queries across the DB2 family.
Isolation Levels
DB2 Connect accepts the following isolation levels when you prep or bind an
application:
RR Repeatable Read
RS Read Stability
CS Cursor Stability
UR Uncommitted Read
NC No Commit
The isolation levels are listed in order from most protection to least protection.
If the host or AS/400 server does not support the isolation level that you
specify, the next higher supported level is used.
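The fallback rule can be modeled directly from the ordering above. A plain-
Java sketch (the class and method names are ours, not a DB2 API):

```java
import java.util.List;
import java.util.Set;

// Illustrative model of the isolation-level fallback rule: levels are
// ordered from most to least protection (RR, RS, CS, UR, NC). If the
// requested level is unsupported by the server, the next higher
// (more protective) supported level is used.
public class IsolationFallback {
    static final List<String> MOST_TO_LEAST =
        List.of("RR", "RS", "CS", "UR", "NC");

    public static String resolve(String requested, Set<String> supported) {
        int start = MOST_TO_LEAST.indexOf(requested);
        // Scan from the requested level toward higher protection.
        for (int i = start; i >= 0; i--)
            if (supported.contains(MOST_TO_LEAST.get(i)))
                return MOST_TO_LEAST.get(i);
        throw new IllegalArgumentException(
            "no supported level at or above " + requested);
    }

    public static void main(String[] args) {
        // A server without NC maps a request for NC to UR.
        System.out.println(resolve("NC", Set.of("RR", "RS", "CS", "UR")));
    }
}
```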
Table 55 on page 782 shows the result of each isolation level on each host or
AS/400 application server.
Notes:
1. There is no equivalent COMMIT option on DB2 Universal Database for AS/400 that matches
RR. DB2 Universal Database for AS/400 supports RR by locking the whole table.
2. Results in RR for Version 3.1, and results in RS for Version 4.1 with APAR PN75407 or Version
5.1.
3. Results in CS for Version 3.1, and results in UR for Version 4.1 or Version 5.1.
4. Results in CS for Version 3.1, and results in UR for Version 4.1 with APAR PN60988 or Version
5.1.
5. Isolation level NC is not supported with DB2 for VSE & VM.
With DB2 Universal Database for AS/400, you can access an unjournalled
table if an application is bound with an isolation level of UR and blocking set
to ALL, or if the isolation level is set to NC.
Stored Procedures
v Invocation
A client program can invoke a server program by issuing an SQL CALL
statement. Each server handles this invocation slightly differently.
OS/390
The schema name must be no more than 8 bytes long, the
procedure name must be no more than 18 bytes long, and the
stored procedure must be defined in the SYSIBM.SYSPROCEDURES
catalog on the server.
VSE or VM
The procedure name must not be more than 18 bytes long and must
be defined in the SYSTEM.SYSROUTINES catalog on the server.
OS/400
The procedure name must be an SQL identifier. You can also use
the DECLARE PROCEDURE or CREATE PROCEDURE statements
to specify the actual path name (the schema-name or
collection-name) to locate the stored procedure.
For the syntax of the SQL CALL statement, refer to the SQL Reference.
You can invoke the server program on DB2 Universal Database with the
same parameter convention that server programs use on DB2 Universal
Database for OS/390, DB2 Universal Database for AS/400, or DB2 for VSE
& VM. For more information on invoking DB2 Universal Database stored
procedures, see “Chapter 7. Stored Procedures” on page 187. For more
information on the parameter convention on other platforms, refer to the
DB2 product documentation for that platform.
All the SQL statements in a stored procedure are executed as part of the
SQL unit of work started by the client SQL program.
v Do not pass indicator values with special meaning to or from stored
procedures.
Between DB2 Universal Database systems, whatever you put into the
indicator variables is passed. However, when using DB2 Connect, you can
only pass 0, -1, and -128 in the indicator variables.
v You should define a parameter to return any errors or warnings encountered
by the server application.
A server program on DB2 Universal Database can update the SQLCA to
return any error or warning, but a stored procedure on DB2 Universal
Database for OS/390 or DB2 Universal Database for AS/400 has no such
support. If you want to return an error code from your stored procedure,
you must pass it as a parameter. The SQLCODE and SQLCA are only set by
the server for system-detected errors.
v DB2 for VSE & VM Version 7 or higher and DB2 Universal Database for
OS/390 Version 5.1 or higher are the only host or AS/400 Application
Servers that can return the result sets of stored procedures at this time.
Stored Procedure Builder
DB2 Stored Procedure Builder provides an easy-to-use development
environment for creating, installing, and testing stored procedures. It allows
you to focus on creating your stored procedure logic rather than the details of
registering, building, and installing stored procedures on a DB2 server.
Additionally, with Stored Procedure Builder, you can develop stored
procedures on one operating system and build them on other server operating
systems.
Stored Procedure Builder manages your work by using projects. Each Stored
Procedure Builder project saves your connections to specific databases, such as
DB2 for OS/390 servers. In addition, you can create filters to display subsets
of the stored procedures on each database. When opening a new or existing
Stored Procedure Builder project, you can filter stored procedures so that you
view stored procedures based on their name, schema, language, or collection
ID (for OS/390 only).
Additionally, you can obtain SQL costing information about the SQL stored
procedure, including information about CPU time and other DB2 costing
information for the thread on which the SQL stored procedure is running. In
particular, you can obtain costing information about latch/lock contention
wait time, the number of getpages, the number of read I/Os, and the number
of write I/Os.
DB2 Connect supports NOT ATOMIC compound SQL. This means that
processing of compound SQL continues following an error. (With ATOMIC
compound SQL, which is not supported by DB2 Connect, an error would roll
back the entire group of compound SQL.)
NOT ATOMIC compound SQL can be used with all of the supported host or
AS/400 application servers.
If multiple SQL errors occur, the SQLSTATEs of the first seven failing
statements are returned in the SQLERRMC field of the SQLCA with a
message that multiple errors occurred. For more information, refer to the SQL
Reference.
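The reporting rule above is easy to model: however many statements fail,
only the first seven SQLSTATEs are carried back. A plain-Java sketch
(illustrative names; not how DB2 Connect is implemented):

```java
import java.util.List;

// Illustrative: with NOT ATOMIC compound SQL, execution continues past
// errors, and the SQLSTATEs of at most the first seven failing
// statements are reported together in the SQLCA.
public class CompoundErrors {
    public static List<String> firstSeven(List<String> failingSqlstates) {
        return failingSqlstates.subList(
            0, Math.min(7, failingSqlstates.size()));
    }

    public static void main(String[] args) {
        List<String> errs = List.of("23505", "42704", "23505", "42601",
                                    "22001", "23503", "42703", "57011");
        // Eight failures, but only seven SQLSTATEs are reported.
        System.out.println(firstSeven(errs));
    }
}
```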
Note: For more information on BEA Tuxedo, refer to the DB2 Connect User’s
Guide.
v If you have TCP/IP network connections, then a DB2 for OS/390 V5.1 or
later server can participate in a distributed unit of work. If the application
is controlled by a Transaction Processing Monitor such as IBM TXSeries,
CICS for Open Systems, Encina Monitor, or Microsoft Transaction Server,
then you must use the sync point manager.
If a common DB2 Connect Enterprise Edition server is used by both native
DB2 applications and TP monitor applications to access host data over
TCP/IP connections then the sync point manager must be used.
If a single DB2 Connect Enterprise Edition server is used to access host data
using both SNA and TCP/IP network protocols and two phase commit is
required, then the sync point manager must be used. This is true for both
DB2 applications and TP monitor applications.
The following statements are supported for host or AS/400 server processing
but are not added to the bind file or the package and are not supported by
the command line processor:
v DESCRIBE statement_name INTO descriptor_name USING NAMES
v PREPARE statement_name INTO descriptor_name USING NAMES
FROM ...
DB2 for VSE & VM extended dynamic SQL statements are rejected with an
SQLCODE of -104 (syntax error).
Consider the relative collation of four characters in an EBCDIC code page
500 database, when they are collated in binary:
Character Code Page 500 Code Point
’a’ X'81'
’b’ X'82'
’A’ X'C1'
’B’ X'C2'
The code page 500 binary collation sequence (the desired sequence) is:
'a' < 'b' < 'A' < 'B'
If you create the database with ASCII code page 850, binary collation would
yield:
Character Code Page 850 Code Point
’a’ X'61'
’b’ X'62'
’A’ X'41'
’B’ X'42'
The code page 850 binary collation (which is not the desired sequence) is:
'A' < 'B' < 'a' < 'b'
To achieve the desired collation, you need to create your database with a
user-defined collating sequence. A sample collating sequence for just this
purpose is supplied with DB2 in the sqle850a.h include file. The content of
sqle850a.h is shown in Figure 25 on page 790.
[Figure 25 shows the content of sqle850a.h: a header guard and an
extern "C" wrapper (for C++ compilers) around the 256-entry sqle_850_500
collating sequence array.]
To see how to achieve code page 500 binary collation on code page 850
characters, examine the sample collating sequence in sqle_850_500. For each
code page 850 character, its weight in the collating sequence is simply its
corresponding code point in code page 500.
For example, consider the letter ‘a’. This letter is code point X'61' for code
page 850 as shown in Figure 27 on page 793. In the array sqle_850_500, letter
‘a’ is assigned a weight of X'81' (that is, the 98th element in the array
sqle_850_500).
Consider how the four characters collate when the database is created with
the above sample user-defined collating sequence:
Character Code Page 850 Code Point / Weight (from sqle_850_500)
’a’ X'61' / X'81'
’b’ X'62' / X'82'
’A’ X'41' / X'C1'
’B’ X'42' / X'C2'
In this example, you achieve the desired collation by specifying the correct
weights to simulate the desired behavior.
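The weight-table idea can be simulated directly: compare characters by
their assigned weights instead of their code points. A small sketch, modeling
only the four weights from the example rather than the full 256-entry
sqle_850_500 array:

```java
import java.util.Arrays;
import java.util.Map;

// Illustrative: collate code page 850 characters using code page 500
// code points as weights, reproducing the example's desired order
// 'a' < 'b' < 'A' < 'B'.
public class CollationDemo {
    static final Map<Character, Integer> WEIGHT = Map.of(
        'a', 0x81, 'b', 0x82, 'A', 0xC1, 'B', 0xC2);

    public static Character[] sortByWeight(Character[] chars) {
        Character[] sorted = chars.clone();
        Arrays.sort(sorted, (x, y) -> WEIGHT.get(x) - WEIGHT.get(y));
        return sorted;
    }

    public static void main(String[] args) {
        Character[] in = {'B', 'a', 'A', 'b'};
        // Binary code-point order would give A < B < a < b;
        // weighted order gives the EBCDIC sequence a < b < A < B.
        System.out.println(Arrays.toString(sortByWeight(in))); // [a, b, A, B]
    }
}
```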
Closely observing the actual collating sequence, notice that the sequence itself
is merely a conversion table, where the source code page is the code page of
the database (850) and the target code page is the desired binary collating
code page (500). Other sample collating sequences supplied by DB2 enable
different conversions. If a conversion table that you require is not supplied
with DB2, additional conversion tables can be obtained from the IBM
publication, Character Data Representation Architecture, Reference and Registry,
SC09-2190. You will find the additional conversion tables in a CD-ROM
enclosed with this publication.
To access product information online, you can use the Information Center. For
more information, see “Accessing Information with the Information Center”
on page 809. You can view task information, DB2 books, troubleshooting
information, sample programs, and DB2 information on the Web.
The installation manuals, release notes, and tutorials are viewable in HTML
directly from the product CD-ROM. Most books are available in HTML on the
product CD-ROM for viewing and in Adobe Acrobat (PDF) format on the DB2
publications CD-ROM for viewing and printing. You can also order a printed
copy from IBM; see “Ordering the Printed Books” on page 805. The following
table lists books that can be ordered.
On OS/2 and Windows platforms, you can install the HTML files under the
sqllib\doc\html directory. DB2 information is translated into different
languages.
On UNIX platforms, you can install multiple language versions of the HTML
files under the doc/%L/html directories, where %L represents the locale. For
more information, refer to the appropriate Quick Beginnings book.
You can obtain DB2 books and access information in a variety of ways:
v “Viewing Information Online” on page 808
v “Searching Information Online” on page 812
v “Ordering the Printed Books” on page 805
v “Printing the PDF Books” on page 804
Table 56. DB2 Information

DB2 Guide and Reference Information

Name: Administration Guide
Description: Administration Guide: Planning provides an overview of database
concepts, information about design issues (such as logical and physical
database design), and a discussion of high availability.
Form Number: SC09-2946
HTML Directory: db2d0
PDF File Name: db2d1x70
Notes:
1. The character x in the sixth position of the file name indicates the
language version of a book. For example, the file name db2d0e70 identifies
the English version of the Administration Guide and the file name db2d0f70
identifies the French version of the same book. The following letters are
used in the sixth position of the file name to indicate the language version:
Language Identifier
Brazilian Portuguese b
2. Late-breaking information that could not be included in the DB2 books is
available in the Release Notes in HTML format and as an ASCII file. The
HTML version is available from the Information Center and on the
product CD-ROMs. To view the ASCII file:
v On UNIX-based platforms, see the Release.Notes file. This file is located
in the DB2DIR/Readme/%L directory, where %L represents the locale
name and DB2DIR represents:
– /usr/lpp/db2_07_01 on AIX
– /opt/IBMdb2/V7.1 on HP-UX, PTX, Solaris, and Silicon Graphics
IRIX
– /usr/IBMdb2/V7.1 on Linux.
v On other platforms, see the RELEASE.TXT file. This file is located in the
directory where the product is installed. On OS/2 platforms, you can
also double-click the IBM DB2 folder and then double-click the Release
Notes icon.
Printing the PDF Books
If you prefer to have printed copies of the books, you can print the PDF files
found on the DB2 publications CD-ROM. Using the Adobe Acrobat Reader,
you can print either the entire book or a specific range of pages. For the file
name of each book in the library, see Table 56 on page 796.
The PDF files are included on the DB2 publications CD-ROM with a file
extension of PDF. To access the PDF files:
1. Insert the DB2 publications CD-ROM. On UNIX-based platforms, mount
the DB2 publications CD-ROM. Refer to your Quick Beginnings book for
the mounting procedures.
2. Start the Acrobat Reader.
3. Open the desired PDF file from one of the following locations:
v On OS/2 and Windows platforms:
x:\doc\language directory, where x represents the CD-ROM drive and
language represents the two-character country code for your language
(for example, EN for English).
v On UNIX-based platforms:
/cdrom/doc/%L directory on the CD-ROM, where /cdrom represents the
mount point of the CD-ROM and %L represents the name of the desired
locale.
You can also copy the PDF files from the CD-ROM to a local or network drive
and read them from there.
Ordering the Printed Books
You can order the printed DB2 books either individually or as a set (in North
America only) by using a sold bill of forms (SBOF) number. To order books,
contact your IBM authorized dealer or marketing representative, or phone
1-800-879-2755 in the United States or 1-800-IBM-4YOU in Canada. You can
also order the books from the Publications Web page at
http://www.elink.ibmlink.ibm.com/pbl/pbl.
Two sets of books are available. SBOF-8935 provides reference and usage
information for the DB2 Warehouse Manager. SBOF-8931 provides reference
and usage information for all other DB2 Universal Database products and
features. The contents of each SBOF are listed in the following table:
v Information Catalog Manager Help
v Satellite Administration Center Help
You can view the online books or sample programs with any browser that
conforms to HTML Version 3.2 specifications.
You can also access the Information Center by using the toolbar and the Help
menu on the DB2 Windows platform.
The Information Center provides a find feature, so you can look for a specific
topic without browsing the lists.
For a full text search, follow the hypertext link in the Information Center to
the Search DB2 Online Information search form.
Refer to the release notes if you experience any other problems when
searching the HTML information.
Note: The Search function is not available in the Linux, PTX, and Silicon
Graphics IRIX environments.
Using DB2 Wizards
Wizards help you complete specific administration tasks by taking you
through each task one step at a time. Wizards are available through the
Control Center and the Client Configuration Assistant. The following table
lists the wizards and describes their purpose.
Note: The Create Database, Create Index, Configure Multisite Update, and
Performance Configuration wizards are available for the partitioned
database environment.
For information about how you can serve the DB2 Universal Database online
documentation files from a central machine, refer to the NetQuestion
Appendix in the Installation and Configuration Supplement.
Searching Information Online
To find information in the HTML files, use one of the following methods:
v Click Search in the top frame. Use the search form to find a specific topic.
This function is not available in the Linux, PTX, or Silicon Graphics IRIX
environments.
v Click Index in the top frame. Use the index to find a specific topic in the
book.
v Display the table of contents or index of the help or the HTML book, and
then use the find function of the Web browser to find a specific topic in the
book.
v Use the bookmark function of the Web browser to quickly return to a
specific topic.
v Use the search function of the Information Center to find specific topics. See
“Accessing Information with the Information Center” on page 809 for
details.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give
you any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
The following paragraph does not apply to the United Kingdom or any
other country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY
OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow
disclaimer of express or implied warranties in certain transactions, therefore,
this statement may not apply to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those
Web sites. The materials at those Web sites are not part of the materials for
this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the
purpose of enabling: (i) the exchange of information between independently
created programs and other programs (including this one) and (ii) the mutual
use of the information which has been exchanged, should contact:
IBM Canada Limited
Office of the Lab Director
1150 Eglinton Ave. East
North York, Ontario
M3C 1H7
CANADA
The licensed program described in this information and all licensed material
available for it are provided by IBM under terms of the IBM Customer
Agreement, IBM International Program License Agreement, or any equivalent
agreement between us.
This information may contain examples of data and reports used in daily
business operations. To illustrate them as completely as possible, the examples
include the names of individuals, companies, brands, and products. All of
these names are fictitious and any similarity to the names and addresses used
by an actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work
must include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM
Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_. All
rights reserved.
The following terms are trademarks of International Business Machines
Corporation in the United States, other countries, or both:

ACF/VTAM, AISPO, AIX, AIX/6000, AIXwindows, AnyNet, APPN, AS/400,
BookManager, CICS, C Set++, C/370, DATABASE 2, DataHub, DataJoiner,
DataPropagator, DataRefresher, DB2, DB2 Connect, DB2 Extenders,
DB2 OLAP Server, DB2 Universal Database, Distributed Relational Database
Architecture, DRDA, eNetwork, Extended Services, FFST, First Failure
Support Technology, IBM, IMS, IMS/ESA, LAN Distance, MVS, MVS/ESA,
MVS/XA, Net.Data, OS/2, OS/390, OS/400, PowerPC, QBIC, QMF, RACF,
RISC System/6000, RS/6000, S/370, SP, SQL/DS, SQL/400, System/370,
System/390, SystemView, VisualAge, VM/ESA, VSE/ESA, VTAM, WebExplorer,
WIN-OS/2
Java and all Java-based trademarks and logos, and Solaris, are trademarks of
Sun Microsystems, Inc. in the United States, other countries, or both.
Tivoli and NetView are trademarks of Tivoli Systems Inc. in the United States,
other countries, or both.
problem resolution REAL SQL data type 77, 420 renaming, package 53
stored procedures 236 C/C++ 615 REORGANIZE TABLE
processing SQL statements in COBOL 682 command 513
REXX 705 FORTRAN 698 repeatable read, technique for 102
program variable type, data value Java 625 reporting errors 560
control consideration 27 Java stored procedures representation types 288
programmatic ID (progID) for OLE (DB2GENERAL) 756
restore wizard 811
automation UDFs 417 OLE DB table function 428 restoring data 19
programming considerations REXX 712 restrictions
accessing host or AS/400 for UDFs 441
rebinding
servers 532 in C/C++ 600
description 58
collating sequences 493 in COBOL 665
REBIND PACKAGE
conversion between different in FORTRAN 687
command 58
code pages 493 in REXX 704
REDEFINES in COBOL 680
in a host or AS/400 on using buffered inserts 552
REF USING clause 287
environment 773 restrictions for DB2DARI stored
in C/C++ 581 reference columns procedures 753
in COBOL 665 assigning scope in typed result code 15
in FORTRAN 687 views 305 RESULT REXX predefined
in REXX 703 defining the scope of 298 variable 708
national language support 493 reference types result sets
programming in complex casting 288 from stored procedures 225
environments 493 choosing representation type returning from SQL
X/Open XA interface 539 for 300 procedures 249
programming framework 20 comparing 288, 300 resume using CREATE DISTINCT
protecting UDFs 439 definition 287 TYPE example 275
prototyping SQL code 40 references RESYNC_INTERVAL multisite
PUT statement comparison with referential update configuration
not supported in DB2 constraints 293 parameter 530
Connect 787 defining relationships using 292 retrieving
dereference operator 293 multiple rows 81
Q REFERENCING clause in the one row 63
QSQ (DB2 Universal Database for CREATE TRIGGER statement 480 rows 92
AS/400) 776 referential integrity 779 retrieving data
qualification and member operators comparison to scoped a second time 102
in C/C++ 608 references 303 scroll backwards, technique
qualifier attributes data relationship for 102
different platforms 778 consideration 28 updating 105
package 778 referential integrity constraint return code 15
query products, access to data data value control SQLCA structure 115
consideration 25 consideration 26 returnability in SQLJ iterators 640
QUERYOPT bind option 55
registering RETURNS clause in the CREATE
R OLE automation UDFs 417 FUNCTION statement 402
RAISE_ERROR built-in UDFs 370 RETURNS TABLE clause 388
function 483 registering Java stored REVOKE statement
RDO specification procedures 649 DB2 Connect support 776
supported in DB2 25 release notes 804 issuing on table hierarchies 297
Index 837
SET PASSTHRU RESET source file SQL Data Types (continued)
statement 577 creating, overview of 47 FLOAT 77, 420
SET PASSTHRU statement 577 file name extensions 49 FOR BIT DATA 420
SET SERVER OPTION modified source file, definition FORTRAN 698
statement 573 of 49 GRAPHIC 420
setAsciiStream JDBC method 657 requirements 51 INTEGER 77, 420
setString JDBC method 657 SQL file extensions 47 Java 625
setting up a DB2 program 11 source-level debuggers and LONG GRAPHIC 420
setting up document server 811 UDFs 470 LONG VARCHAR 77, 420
setUnicodeStream JDBC sourced UDF 278 LONG VARGRAPHIC 77, 420
method 657 special registers NUMERIC 420
severe errors CURRENT EXPLAIN MODE 54 OLE DB table function 428
considerations in a partitioned CURRENT FUNCTION REAL 77, 420
environment 559 PATH 54 REXX 712
shared memory size for UDFs 441 CURRENT QUERY SMALLINT 77, 420
shift-out and shift-in characters 775 OPTIMIZATION 54 TIME 77, 420
short C/C++ type 615 specific-name, passing to UDF 392 TIMESTAMP 77, 420
short int C/C++ type 615 SPM_LOG_SIZE multisite update VARCHAR 77, 420
short Java type 625 configuration parameter 531 VARGRAPHIC 77, 420
short OLE automation type 420 SPM_NAME multisite update SQL data types, passed to a
signal handlers configuration parameter 530 UDF 402
installing, sample programs 105 SPM_RESYNC_AGENT_LIMIT SQL declare section 11
with SQL statements 117 multisite update configuration SQL/DS using DB2 Connect 773
SIGNAL SQLSTATE SQL statement parameter 530 SQL_FILE_READ, input value
to validate input data 474 SQL option 358
signature, two functions and the authorization considerations 33 SQL include file
same 369 authorization considerations for for C/C++ applications 583
SIGUSR1 interrupt 118 dynamic SQL 34 for COBOL applications 666
SIMPLE stored procedures 198 authorization considerations for for FORTRAN applications 688
SIMPLE WITH NULLS stored static SQL 35 SQL procedures
procedures 198 authorization considerations CALL statements in 248
SMALLINT 389 using APIs 35 condition handlers 243
SMALLINT parameter to UDF 403 dynamically prepared 171 debugging 252, 255
SMALLINT SQL data type 77, 420 SQL_API_FN macro 402, 752 dynamic SQL 246
C/C++ 615 SQL-argument 394 log file 255
COBOL 682 SQL-argument, passing to UDF 388 receiving result sets 251
FORTRAN 698 SQL-argument-ind 394 recursion 249
Java 625 SQL-argument-ind, passing to RESIGNAL 245
Java stored procedures UDF 389 restrictions 249
(DB2GENERAL) 756 SQL arguments, passing from DB2 to returning result sets 249
OLE DB table function 428 a UDF 387 SIGNAL 245
REXX 712 SQL Communications Area SQL-result 394, 432
SmartGuides (SQLCA) 14 SQL-result, passing to UDF 388
wizards 810 SQL data types 418, 420 SQL-result-ind 394, 432
snapshot monitor SQL Data Types SQL-result-ind, passing to UDF 389
diagnosing a suspended or BIGINT 77 SQL-state, passing to UDF 390
looping application 560 BLOB 77, 420 SQL statement execution
solving problems CHAR 77, 420 serialization 533
numeric conversion CLOB 77, 420 SQL statements
overflows 781 COBOL 682 C/C++ syntax 586
sort order conversion to C/C++ 615 categories 773
collating sequence 779 DATE 77, 420 COBOL syntax 668
defining 779 DBCLOB 77, 420 DB2 Connect support 786, 787
sorting, specifying collating DECIMAL 77 exception handlers 118
sequence 498 DOUBLE 420 FORTRAN syntax 691
Index 839
SQLE932A include file SQLJ (Embedded SQL for Java) sqludf_free_locator API 434
for C/C++ applications 584 (continued) sqludf.h include file 402
for COBOL applications 667 embedding SQL statements sqludf.h include file for UDFs 411
for FORTRAN applications 690 in 639 SQLUDF include file
SQLE932B include file example clauses 639 description 411
for C/C++ applications 584 example program using 642 for C/C++ applications 585
for COBOL applications 667 holdability 640 UDF interface 387
for FORTRAN applications 690 host variables 646 sqludf_length API 434
sqleAttachToCtx() API 533 Java Database Connectivity sqludf_substr API 434
SQLEAU include file (JDBC) interoperability 658 SQLUTBCQ include file
for C/C++ applications 584 overview 637 for COBOL applications 668
for COBOL applications 666 positioned DELETE SQLUTBSQ include file
for FORTRAN applications 689 statement 640 for COBOL applications 668
sqleBeginCtx() API 533 positioned UPDATE SQLUTIL include file
sqleDetachFromCtx() API 533 statement 640 for C/C++ applications 585
sqleEndCtx() API 533 profconv command 637 for COBOL applications 668
sqleGetCurrentCtx() API 533 restrictions 638 for FORTRAN applications 691
sqleInterruptCtx() API 533 returnability 640 SQLUV include file
SQLENV include file translator command 637 for C/C++ applications 585
for C/C++ applications 584 SQLJ (Embedded SQLJ for Java) SQLUVEND include file
for COBOL applications 667 comparison with Java Database for C/C++ applications 585
for FORTRAN applications 689 Connectivity (JDBC) 623 SQLVAR entities
SQLERRD(1) 507, 516, 518 SQLJACB include file declaring sufficient number
SQLERRD(2) 507, 516, 518 for C/C++ applications 585 of 145
SQLERRD(3) SQLLEN field 754 variable number of,
in an XA environment 541 SQLMON include file declaring 143
SQLERRMC field of SQLCA 507, for C/C++ applications 585 SQLWARN structure, overview
776, 785 for COBOL applications 668 of 115
SQLERRP field of SQLCA 776 for FORTRAN applications 690 SQLXA include file
sqleSetTypeCtx() API 533 SQLMONCT include file for C/C++ applications 585
SQLETSD include file for COBOL applications 668
SQLZ_DISCONNECT_PROC return
for COBOL applications 667 SQLMSG
value 755
SQLException in Java programs 627
SQLZ_HOLD_PROC return
handling 121 SQLMSG predefined variable 708
value 755
retrieving SQLCODE from 627 SQLRDAT predefined variable 708
statement handle 170
retrieving SQLMSG from 627 SQLRIDA predefined variable 708
statements
retrieving SQLSTATE from 627 SQLRODA predefined variable 708
ACQUIRE 786
SQLEXEC SQLSTATE
BEGIN DECLARE SECTION 11
processing SQL statements in differences 780 call 782
REXX 705 in CLI 170 COMMIT 18
SQLEXEC, registering for in Java programs 627
COMMIT WORK RELEASE 787
REXX 704 in SQLERRMC field of
connect 776
SQLEXEC REXX API 703 SQLCA 785 CONNECT 16
SQLEXT include file standalone 779
CREATE STORGROUP 774
for CLI applications 584 SQLSTATE field, in error CREATE TABLESPACE 774
SQLIND field 754 messages 115 DECLARE 786, 787
sqlint64 C/C++ type 615 SQLSTATE include file DELETE 775
SQLISL predefined variable 708 for C/C++ applications 585 DESCRIBE 786, 787
SQLJ (Embedded SQL for Java) for COBOL applications 668 END DECLARE SECTION 11
applets 638 for FORTRAN applications 690 GRANT 776
calling stored procedures 646 SQLSYSTM include file INCLUDE SQLCA 14
db2profc command 637 for C/C++ applications 585 INSERT 775
db2profp command 637 SQLTYPE field 754 LABEL ON 786
declaring cursors 640 sqludf_append API 434 PREPARE 787
declaring iterators 640 sqludf_create_locator API 434 ROLLBACK 19, 777
Index 841
structured types (continued) syntax (continued) table functions 370
invoking methods on 290 embedded SQL statement application design
MODE DB2SQL clause 284 (continued) considerations 432
mutator methods 289 comments in FORTRAN 692 contents of call-type
noninstantiable types 295 in COBOL 668 argument 395
observer methods 289 in FORTRAN 691 in Java 412
overview 284 embedded SQL statement OLE DB 423
passing instances to client avoiding line breaks 587 table names
applications 327 embedded SQL statement resolving unqualified 54
referring to row objects 287 comments in C/C++ 587 tables
representation types 288 embedded SQL statement temporary 177
retrieving attribute values 289 comments in REXX 707 tablespace-options-clause of the
retrieving instances as single embedded SQL statement in CREATE TABLE statement 343
values 316 C/C++ 586 target partition
retrieving internal ID of 308 embedded SQL statement behavior without buffered
retrieving schema name of 308 substitution of white space insert 549
retrieving subtype attributes characters 588 temporary tables 177
of 316 graphic host variable in territory
retrieving type name of 308 C/C++ 593 in SQLERRMC field of
returning information about 318 processing SQL statement in SQLCA 776
static types 295 REXX 705 test data
storing 285 syntax for referring to generating 38
storing as rows 291 functions 377 test database
storing in columns 313 SYSCAT.FUNCMAPOPTIONS CREATE DATABASE API 37
storing objects in columns 293 catalog view 574 creating 36
updating attributes of 289, 316, SYSCAT.FUNCTIONS catalog recommendations 37
317 view 575 testing and debugging utilities
subtables SYSIBM.SYSPROCEDURES catalog database system monitor 40
creating 291 (OS/390) 782 Explain facility 40
inheriting attributes from SYSSTAT.FUNCTIONS catalog flagger 40
subtables 297 view 575 system catalog views 40
subtitutability 292, 295 system catalog updating system catalog
subtypes 285 dropping view implications 306 statistics 40
binding in with tranform using 781 testing environment
functions 336 system catalog views for partitioned
returning attributes using prototyping utility 41 environments 558
OUTER 310 system configuration parameter for setting up 36
writing transform functions shared memory size 441 test databases, guidelines for
for 332 System.err Java I/O stream 412 creating 36
success code 15 System.in Java I/O stream 412 testing your UDF 470
supertypes 285 System.out Java I/O stream 412 tfweather_u table function C
surrogate functions 474 T program listing 453
suspended application table TIME parameter to UDF 408
diagnosing 560 committing changes 18 TIME SQL data type 77, 420
symbol substitutions, C/C++ data source tables 564 C/C++ 615
language restrictions 600 lob-options-clause of the CREATE COBOL 682
Sync Point Manager 531 TABLE statement 343 FORTRAN 698
syntax positioning cursor at end 104 Java 625
character host variables 592 tablespace-options-clause of the Java stored procedures
declare section CREATE TABLE statement 343 (DB2GENERAL) 756
in COBOL 671 table check constraint, data value OLE DB table function 428
in FORTRAN 693 control consideration 26 REXX 712
declare section in C/C++ 589 table function 388 timeout on a lock 780
embedded SQL statement SQL-result argument 388 TIMESTAMP parameter to
comments in COBOL 669 table function example 376 UDF 408
Index 843
UDFs (User-defined functions) UDFs (User-defined functions) Unicode (UCS-2) (continued)
(continued) (continued) UDF considerations 515
and DB2 object extensions 267 output and input to screen and unique key violation
C++ consideration 442 keyboard 441 buffered insert 551
calling convention 402 overloading function names 369 unit of work
casting 383 passing arguments from DB2 to a completing 82
caveats 441 UDF 387 cursor considerations 82
Chinese (Traditional) code process of implementation 370 distributed 525
sets 515 rationale 366 remote 525
code page differences 441 re-entrant UDFs 430 unqualified function reference
coding in Java 412 referring to functions 377 example 380
concepts 369 registering 371 unqualified reference 369
considerations when using restrictions and caveats 441 unqualified table names
protected resources 441 save state in function 430 resolving 54
creating and using in Java 412 schema-name and UDFs 369 UPDAT.CMD REXX program
db2udf executable 441 SCRATCHPAD 431 listing 113
debugging your UDF 470 scratchpad considerations 430 UPDAT.SQB COBOL program
definition 365 shared memory size 441 listing 111
DETERMINISTIC 431 sourced 278 UPDAT.SQC C program listing 107
example 447 SQL_API_FN 444 Updat.sqlj Java program listing 109
examples of UDF code 443 SQL data types, how they are update operation 475
EXTERNAL ACTION passed 402 UPDATE operation and
option 439 SQLUDF include file 387, 411 triggers 476
FENCED option 439 SUBSTR built-in function 450 UPDATE statement
function path 369 summary of function DB2 Connect support 775
function selection algorithm 369 references 380 USAGE clause in COBOL types 682
general considerations 381 system configuration parameter use of distinct types in UNION
hints and tips for coding 439 for shared memory size 441 example 282
implementing 366 table functions 432 user-defined collating
infix notation 381 type of functions 370 sequence 779, 789
input and output to screen and unqualified reference 369 user-defined function, application
keyboard 441 using LOB locators 434 logic consideration 29
interface between DB2 and a writing 371, 385 user-defined sourced functions on
UDF 387 UDFs (User-defined Functions) distinct types example 280
invoking 377 synergy with triggers, UDTs, and user-defined type (UDT)
parameter markers in LOBs 486 dropping restrictions 305
functions 379 UDFs and LOB types 382 user defined types
qualified function UDTs (User-defined types) supported by DB2 Connect 775
reference 379 and DB2 object extensions 267 USER MAPPING in OLE DB table
unqualified function UDTs (User-defined Types) functions 427
reference 380 synergy with triggers, UDFs, and user updatable catalog statistics
Japanese code sets 515 LOBs 486 prototyping utility 41
Java consideration 387 unambiguous cursors 777 using
list of types and their unequal code pages 516 Java stored procedures 649
representations in UDFs 402
allocating storage 516 Java UDFs 412
LOB locator usage scenarios 438
unfenced stored procedures 223 using a locator to work with a CLOB
LOB types 382
Unicode value example 345
NOT DETERMINISTIC 430
Java 657 using qualified function reference
NOT DETERMINISTIC
Unicode (UCS-2) example 379
option 439
character conversion 524 UTILAPI.C program listing 119
NOT FENCED 445
NOT NULL CALL 445 character conversion utility APIs
NOT NULL CALL option 439 overflow 522 include file for C/C++
OLE automation UDFs 416 Chinese (Traditional) code applications 585
sets 511 include file for COBOL
Japanese code sets 511 applications 667, 668
Index 845
846 Application Development Guide
Contacting IBM
If you have a technical problem, please review and carry out the actions
suggested in the Troubleshooting Guide before contacting DB2 Customer
Support. That guide describes the information you can gather to help DB2
Customer Support serve you better.
If you live in the U.S.A., you can call one of the following numbers:
v 1-800-237-5511 for customer support
v 1-888-426-4343 to learn about available service options
Product Information
If you live in the U.S.A., you can call one of the following numbers:
v 1-800-IBM-CALL (1-800-426-2255) or 1-800-3IBM-OS2 (1-800-342-6672) to
order products or get general information.
v 1-800-879-2755 to order publications.
http://www.ibm.com/software/data/
The DB2 World Wide Web pages provide current DB2 information
about news, product descriptions, education schedules, and more.
http://www.ibm.com/software/data/db2/library/
The DB2 Product and Service Technical Library provides access to
frequently asked questions, fixes, books, and up-to-date DB2 technical
information.
For information on how to contact IBM outside of the United States, refer to
Appendix A of the IBM Software Support Handbook. To access this document,
go to the following Web page: http://www.ibm.com/support/, and then
select the IBM Software Support Handbook link near the bottom of the page.