DB2 Version 9.5
for Linux, UNIX, and Windows
Version 9 Release 5
Troubleshooting Guide
Updated December, 2010
GI11-7857-03
Note
Before using this information and the product it supports, read the general information under Appendix B, “Notices,” on
page 135.
Edition Notice
This document contains proprietary information of IBM. It is provided under a license agreement and is protected
by copyright law. The information contained in this publication does not include any product warranties, and any
statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative.
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at www.ibm.com/planetwide
To order DB2 publications from DB2 Marketing and Sales in the United States or Canada, call 1-800-IBM-4YOU
(426-4968).
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright IBM Corporation 1993, 2010.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents

About this book

Chapter 1. Learning more about troubleshooting
   Introduction to troubleshooting
   About first occurrence data capture
      Collecting diagnosis information based on common outage problems
      Configuring for automatic collection of diagnostic information
      Data collected as part of FODC and its placement
      Automatic FODC data generation
      First occurrence data capture information
      DB2 Query Patroller and First Occurrence Data Capture (FODC)
      Monitor and audit facilities using First Occurrence Data Capture (FODC)
      Graphical tools using First Occurrence Data Capture (FODC)
   About administration notification log files
      Administration notification log
      Setting the error capture level for the administration notification log file
      Interpreting administration notification log file entries
   About DB2 diagnostic log (db2diag.log) files
      Diagnostic data directory path
      Setting the diagnostic log file error capture level
      Interpreting the db2diag.log file informational record
      Interpreting diagnostic log file entries
   db2cos (callout script) output files
   Dump Files
   Trap files
      Formatting trap files (Windows)
   Platform specific error log information
      System core files (Linux and UNIX)
      Accessing system core file information (Linux and UNIX)
      Accessing event logs (Windows)
      Exporting event logs (Windows)
      Accessing the Dr. Watson log file (Windows)
   Combining DB2 database and OS diagnostics

Chapter 2. DB2 Health Advisor Service
   How to use the DB2 Health Advisor Service

Chapter 3. Troubleshooting DB2
   Current release troubleshooting guidance
      Troubleshooting high availability
      Troubleshooting installation of DB2 database systems
      Troubleshooting partitioned database environments
      Troubleshooting optimization guidelines and profiles
      Troubleshooting the storage key support
      Data compression dictionary is not automatically created
      Troubleshooting global variable problems
      Troubleshooting scripts
      Troubleshooting data inconsistencies
      Troubleshooting index to data inconsistencies
   Collecting data for DB2
      Collecting data for installation problems
      Collecting data for data movement problems
      Collecting data for DAS and instance management problems
   Analyzing data for DB2
      Analyzing data for installation problems
      Analyzing DB2 license compliance reports
   Submitting data to IBM Software Support

Chapter 4. Troubleshooting DB2 Connect

Chapter 5. Troubleshooting
   Gathering relevant information
   Initial connection is not successful
   Problems encountered after an initial connection
      Unsupported DDM commands
   Diagnostic tools
   Common DB2 Connect problems

Chapter 6. Tools for troubleshooting
   Learning more about internal return codes
   Checking archive log files with the db2cklog tool
   Overview of the db2dart tool
      Comparison of INSPECT and db2dart
   Analyzing db2diag log files using db2diag tool
   Displaying and altering the Global Registry (UNIX) using db2greg
   Identifying the version and service level of your product
   Mimicking databases using db2look
   Listing DB2 products installed on your system (Linux and UNIX)
   Monitoring and troubleshooting using db2pd
   Collecting environment information using db2support
   Basic trace diagnostics
      DB2 Traces
      DRDA Traces
      Control Center traces
      JDBC traces
      CLI trace files
   Platform-specific tools
      Diagnostic tools (Windows)
      Diagnostic tools (Linux and UNIX)
About this book
This guide provides information to get you started solving problems with DB2®
database clients and servers. It helps you to:
v Identify problems and errors in a concise manner
v Solve problems based on their symptoms
v Learn about available diagnostic tools
v Develop a troubleshooting strategy for day-to-day operations.
Introduction to troubleshooting
The first step in good problem analysis is to describe the problem completely.
Without a problem description, you will not know where to start investigating the
cause of the problem. This step includes asking yourself such basic questions as:
v What are the symptoms?
v Where is the problem happening?
v When does the problem happen?
v Under which conditions does the problem happen?
v Is the problem reproducible?
Answering these and other questions will lead to a good description of most
problems, and is the best way to start down the path of problem resolution.
When starting to describe a problem, the most obvious question is "What is the
problem?" This might seem like a straightforward question; however, it can be
broken down into several other questions to create a more descriptive picture of
the problem. These questions can include:
v Who or what is reporting the problem?
v What are the error codes and error messages?
v How does it fail? For example: loop, hang, crash, performance degradation,
incorrect result.
v What is the business impact?
Determining where the problem originates is not always easy, but it is one of the
most important steps in resolving a problem. Many layers of technology can exist
between the reporting and failing components. Networks, disks, and drivers are
only a few components to be considered when you are investigating problems.
v Is the problem platform specific, or common to multiple platforms?
v Is the current environment and configuration supported?
v Is the application running locally on the database server or on a remote server?
v Is there a gateway involved?
These types of questions will help you isolate the problem layer, and are necessary
to determine the problem source. Remember that just because one layer is
reporting a problem, it does not always mean the root cause exists there.
Responding to questions like these will help you create a detailed timeline of
events, and will provide you with a frame of reference in which to investigate.
Knowing what else is running at the time of a problem is important for any
complete problem description. If a problem occurs in a certain environment or
under certain conditions, that can be a key indicator of the problem cause.
v Does the problem always occur when performing the same task?
v Does a certain sequence of events need to occur for the problem to surface?
v Do other applications fail at the same time?
Answering these types of questions will help you explain the environment in
which the problem occurs, and correlate any dependencies. Remember that just
because multiple problems might have occurred around the same time, it does not
necessarily mean that they are always related.
However, reproducible problems can have a disadvantage: if the problem is of
significant business impact, you don't want it recurring. If possible, recreating the
problem in a test or development environment is often preferable in this case.
v Can the problem be recreated on a test machine?
v Are multiple users or applications encountering the same type of problem?
v Can the problem be recreated by running a single command, a set of commands,
a particular application, or a standalone application?
v Can the problem be recreated by entering the equivalent command/query from
a DB2 command line?
When trouble occurs while you are working with DB2 instances and databases, you
should collect data at the time the problem happens. First occurrence data capture
(FODC) is the term used to describe the actions taken when trouble occurs in your
DB2 environment. You control what data is collected during outages by setting
options in the DB2FODC registry variable using the db2pdcfg tool; use
db2pdcfg -fodc to change these options. The options influence the database system
behavior regarding data capture in FODC situations.
To correlate the outage with the DB2 diagnostic logs and the other troubleshooting
files, a diagnostic message is written to both the administration notification log and
the db2diag.log. The diagnostic message includes the FODC directory name and the
timestamp when the FODC directory was created. The FODC package description
file is placed in the new FODC directory.
Table 1. Automatic FODC types and packages

Package                  Description                          Invocation type   Script executed
FODC_Trap_<timestamp>    An instance wide trap has occurred   Automatic         db2cos_trap(.bat)
Flags are set indicating actions to be taken by the database manager when an error
or a warning is encountered during database operations. The actions that are
carried out include:
v Producing a stack trace in the db2diag.log. (Default)
v Running the callout script, db2cos. (Default)
v Stopping the trace (db2trc) command.
Change the first occurrence data capture (FODC) options using the Configure DB2
database for problem determination behavior (db2pdcfg) command. The FODC
options are set in the DB2FODC registry variable using the db2pdcfg tool. The
options influence the database system behavior regarding data capture in FODC
situations.
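For example, a minimal sketch of changing and checking these settings is shown below. The option names (DUMPCORE, DUMPDIR) and the dump directory shown are illustrative assumptions; verify them against the DB2FODC registry variable and db2pdcfg command descriptions for your fix pack level:
db2pdcfg -fodc DUMPCORE=ON
db2pdcfg -fodc DUMPDIR=/home/db2inst1/fodc
db2pdcfg                     (with no options, displays the current settings, including FODC)
To make the settings persist across instance restarts, set the registry variable itself, for example: db2set DB2FODC="DUMPCORE=ON".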
Data collected as part of FODC and its placement
Depending on the type of outage within the instance, first occurrence data capture
(FODC) results in the creation of subdirectories and specific content that is
collected. A series of subdirectories is created along with the collection of files and
logs.
One or more of the following subdirectories is created under the FODC directory:
v DB2CONFIG containing DB2 configuration output and files
v DB2PD containing db2pd output or output files
v DB2SNAPS containing DB2 snapshots
v DB2TRACE containing DB2 traces
v OSCONFIG containing operating system configuration files
v OSSNAPS containing operating system monitor information
v OSTRACE containing operating system traces
These directories may not always exist depending on the configuration of
DB2FODC or the mode in which db2fodc is run.
Depending on the type of outage, the following content is found in the FODC
directory and subdirectories:
v Trap files
v All the different binary and plain text dump files generated during the data
capture as part of the outage and completed by different components
v db2evlog's event log file
v DB2 trace dump if trace has been on at the time of the outage
v The directory containing the core file
v DB2FODC log files:
– Only one "log" file is used for a manual FODC: db2fodc_hang.log (for hangs)
or db2fodc_badpage.log (for bad pages)
v Data corruption related information
– Process information: ps (on UNIX) and db2pd -edus output
– Additional information collected currently by db2support (optional):
- errpt -a output (on AIX)
- System logs on UNIX platforms, for example /var/adm/messages for
SunOS and /var/adm/syslog/syslog.log on HP-UX. These are collected
provided that the files can be accessed (on Linux, you must have root
access to copy the syslog file).
One or more messages, including those defined as "critical", are used to mark the
origin of an outage.
Dump files that are specific to components within the database manager are stored
in the appropriate FODC package directory.
DB2 event log files
v Operating system: All
v Default location: Located in the directory specified by the diagpath
database manager configuration parameter
v Created automatically when the instance is created.
v The DB2 event log file is a circular log of infrastructure-level events
occurring in the database manager. The file is fixed in size, and acts as
circular buffer for the specific events that are logged as the instance
runs. Every time you stop the instance, the previous event log will be
replaced, not appended. If the instance traps, a db2eventlog.XXX.crash
file is also generated. These files are intended for use by IBM software
support.
DB2 callout script (db2cos) output files
v Operating system: All
v Default location: Located in the directory specified by the diagpath
database manager configuration parameter
v Created automatically when a panic, trap or segmentation violation
occurs. Can also be created during specific problem scenarios, as
specified using the db2pdcfg command.
v The default db2cos script will invoke db2pd commands to collect
information in an unlatched manner. The contents of the db2cos output
files will vary depending on the commands contained in the db2cos
script.
v The db2cos script is shipped under the bin/ directory. On UNIX, this
directory is read-only. To create your own modifiable version of this
script, copy the db2cos script to the adm/ directory. You are free to
modify this version of the script. If the script is found in the adm/
directory, it is that version that is run. Otherwise, the default version in
the bin/ directory is run.
Dump files
v Operating system: All
v Default location: Located in the directory specified by the diagpath
database manager configuration parameter
v Created automatically when particular problem scenarios arise.
v For some error conditions, extra information is logged in binary files
named after the failing process ID. These files are intended for use by
IBM software support.
Trap files
v Operating system: All
v Default location: Located in the directory specified by the diagpath
database manager configuration parameter
v Created automatically when the instance ends abnormally. Can also be
created at will using the db2pd command.
v The database manager generates a trap file if it cannot continue
processing due to a trap, segmentation violation, or exception.
Core files
v Operating system: Linux and UNIX
v Default location: Located in the directory specified by the diagpath
database manager configuration parameter
qpuser.log
v Operating system: All
v Default location: Located in the directory identified by the diagpath
database manager configuration parameter.
v Created automatically when the Query Patroller system becomes active.
v Contains informational messages about Query Patroller; for example,
indicating when Query Patroller stops and starts. It is intended for use
by Query Patroller administrators.
Information Catalog Center tag file EXPORT log
v Default location: The exported tag file path and log file name are
specified in the Options tab of the Export tool in the Information
Catalog Center
v Generated by the Export tool in the Information Catalog Center
v Contains tag file export information, such as the times and dates when
the export process started and stopped. It also includes any error
messages that are encountered during the export operation.
Information Catalog Center tag file IMPORT log
v Operating system: All
v Default location: The imported tag file path and log file name are
specified in the Import tool in the Information Catalog Center
v Generated by the Import tool in the Information Catalog Center
v Contains tag file import history information, such as the times and dates
when the import process started and stopped. It also includes any error
messages that are encountered during the import operation.
Administration notification log messages are also logged to the db2diag.log using a
standardized message format.
The administration notification log, as with most diagnostic logs, grows continuously. Some
logs grow more quickly than others depending on what is logged in each file.
When a log gets too large, you should back it up and then erase it. A new log is
generated automatically the next time it is required by the system.
The following example shows the header information for a sample log entry, with
all the parts of the log identified.
Note: Not every log entry will contain all of these parts.
2006-02-15-19.33.37.630000 [1] Instance:DB2 [2] Node:000 [3]
PID:940(db2syscs.exe) TID: 660 [4] Appid:*LOCAL.DB2.020205091435 [5]
recovery manager [6] sqlpresr [7] Probe:1 [8] Database:SAMPLE [9]
ADM1530E [10] Crash recovery has been initiated. [11]
Legend:
1. A timestamp for the message.
2. The name of the instance generating the message.
3. For multi-partition systems, the database partition generating the message. (In
a non-partitioned database, the value is "000".)
4. The process identifier (PID), followed by the name of the process, followed by
the thread identifier (TID) that are responsible for the generation of the
message.
5. Identification of the application for which the process is working. In this
example, the process generating the message is working on behalf of an
application with the ID *LOCAL.DB2.020205091435.
This value is the same as the appl_id monitor element data. For detailed
information about how to interpret this value, see the documentation for the
appl_id monitor element.
To identify more about a particular application ID, either:
v Use the LIST APPLICATIONS command on a DB2 server or LIST DCS
APPLICATIONS on a DB2 Connect™ gateway to view a list of application
IDs. From this list, you can determine information about the client
experiencing the error, such as its node name and its TCP/IP address.
v Use the GET SNAPSHOT FOR APPLICATION command to view a list of
application IDs.
6. The DB2 component that is writing the message. For messages written by user
applications using the db2AdminMsgWrite API, the component will read “User
Application”.
7. The name of the function that is providing the message. This function
operates within the DB2 component that is writing the message. For messages
written by user applications using the db2AdminMsgWrite API, the function will
read “User Function”.
8. Unique internal identifier. This number allows DB2 customer support and
development to locate the point in the DB2 source code that reported the
message.
9. The database on which the error occurred.
10. When available, a message indicating the error type and number as a
hexadecimal code.
11. When available, message text explaining the logged event.
Overview
The specification of the diagnostic data directory path, using the diagpath database
manager configuration parameter, can determine which one of the following
directory path methods is used for diagnostic data storage:
Single diagnostic data directory path
All diagnostic data for the DB2 instance is stored within a single directory,
no matter whether the database is partitioned or not. In a partitioned
database environment, diagnostic data from different partitions within the
host will be all dumped to this single diagnostic data directory path. This
is the default condition when the diagpath value is set to null or any valid
path name without the $h or $n pattern identifiers.
Split diagnostic data directory path
For partitioned database environments, diagnostic data can be stored
separately within a directory named according to the host, database
partition, or both. Therefore, each type of diagnostic file, within a given
diagnostic directory, contains diagnostic information from only one host, or
from only one database partition, or from both one host and one database
partition.
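For example, to request a split per host and per database partition (the $h and $n pattern identifiers are described in the procedure that follows), you could issue:
db2 update dbm cfg using diagpath '"$h$n"'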
Benefits
The benefits of specifying the diagnostic data directory path are as follows:
Merging and sorting records of multiple diagnostic files of the same type, based on
timestamps, can be done with the db2diag -merge command in the case of a split
diagnostic data directory path. For more information, see: “db2diag - db2diag logs
analysis tool command” in the Command Reference.
You will be able to split a diagnostic data directory path to separately store
diagnostic information according to the database partition server or database
partition from which the diagnostic data dump originated.
Restrictions
Procedure
v Splitting diagnostic data directory path per database partition server
– To split the default diagnostic data directory path, execute the following step:
- Set the diagpath database manager configuration parameter to split the
default diagnostic data directory path per database partition server by
issuing the following command:
db2 update dbm cfg using diagpath '"$h"'
This command creates a subdirectory under the default diagnostic data
directory with the computer name, as in the following:
Default_diagpath/HOST_dp-partition-server-name
– To split a user specified diagnostic data directory path (for example,
/home/usr1/db2dump/), execute the following step:
- Set the diagpath database manager configuration parameter to split the
/home/usr1/db2dump/ diagnostic data directory path per database partition
server by issuing the following command:
db2 update dbm cfg using diagpath '"/home/usr1/db2dump/ $h"'
A diagnostic data directory path that has been split per host and per database
partition produces a structure similar to the following:
HOST_boson:
NODE0000 NODE0001 NODE0002
HOST_boson/NODE0000:
db2diag.log db2eventlog.000 db2resync.log db2sampl_Import.msg events usr1.nfy
HOST_boson/NODE0000/events:
db2optstats.0.log
HOST_boson/NODE0001:
db2diag.log db2eventlog.001 db2resync.log usr1.nfy stmmlog
HOST_boson/NODE0001/stmmlog:
stmm.0.log
HOST_boson/NODE0002:
db2diag.log db2eventlog.002 db2resync.log usr1.nfy
What to do next
Note:
v If a diagnostic data directory path split per database partition is specified ($n or
$h$n), the NODE0000 directory will always be created for each database partition
server. The NODE0000 directory can be ignored if database partition 0 does not
exist on the database partition server where the NODE0000 directory was created.
v To check that the setting of the diagnostic data directory path was successfully
split, execute the following command:
db2 get dbm cfg | grep DIAGPATH
A successfully split diagnostic data directory path returns the values $h, $n, or
$h$n with a preceding blank space. For example, the output returned is similar
to the following:
Diagnostic data directory path (DIAGPATH) = /home/usr1/db2dump/ $h$n
To merge separate db2diag log files to make analysis and troubleshooting easier,
use the db2diag -merge command. For additional information, see: “db2diag -
db2diag logs analysis tool command” in the Command Reference and “Analyzing
db2diag log files using db2diag tool” on page 71.
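As a minimal hedged illustration, running the tool as the instance owner with no additional filtering options (see the Command Reference for the full syntax) might look like the following:
db2diag -merge
This merges the records from the db2diag log files found under the split diagnostic data directory path and sorts them by timestamp.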
v To check the current setting, issue the command GET DBM CFG.
Look for the following variable:
Diagnostic error capture level (DIAGLEVEL) = 3
v To change the value dynamically, use the UPDATE DBM CFG command.
To change a database manager configuration parameter online:
db2 attach to <instance-name>
db2 update dbm cfg using <parameter-name> <value>
db2 detach
For example:
DB2 UPDATE DBM CFG USING DIAGLEVEL X
where X is the desired notification level. If you are diagnosing a problem that
can be reproduced, support personnel may suggest that you use DIAGLEVEL 4
while performing troubleshooting.
The Informational record is output for "db2start" on every logical partition. This
results in multiple informational records: one per logical partition. Since the
informational record contains memory values which are different on every
partition, this information might be useful.
As an alternative to using db2diag, you can use a text editor to view the diagnostic
log file on the machine where you suspect a problem to have occurred. The most
recent events recorded are the furthest down the file.
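For example, on Linux and UNIX the diagnostic directory is by default the sqllib/db2dump directory of the instance owner, so a quick look at the most recent entries might be (the instance owner home directory shown is an assumption):
tail -100 /home/db2inst1/sqllib/db2dump/db2diag.log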
Note: The administration and diagnostic logs grow continuously. When they get
too large, back them up and then erase the files. A new set of files is generated
automatically the next time they are required by the system.
Note: Not every log entry will contain all of these parts. Only the first several
fields (timestamp to TID) and FUNCTION will be present in all the db2diag.log
records.
2007-05-18-14.20.46.973000-240 [1] I27204F655 [2] LEVEL: Info [3]
PID : 3228 [4] TID : 8796 [5] PROC : db2syscs.exe [6]
INSTANCE: DB2MPP [7] NODE : 002 [8] DB : WIN3DB1 [9]
APPHDL : 0-51 [10] APPID: 9.26.54.62.45837.070518182042 [11]
AUTHID : UDBADM [12]
EDUID : 8796 [13] EDUNAME: db2agntp [14] (WIN3DB1) 2
FUNCTION: [15] DB2 UDB, data management, sqldInitDBCB, probe:4820
DATA #1 : [16] String, 26 bytes
Setting ADC Threshold to:
DATA #2 : unsigned integer, 8 bytes
1048576
Legend:
1. A timestamp and timezone for the message.
Note: When the hexadecimal versions of the IP address or port number
begin with 0-9, they are changed to G-P respectively. For example, "0" is
mapped to "G", "1" is mapped to "H", and so on. The IP address,
AC10150C.NA04.006D07064947 is interpreted as follows: The IP address
remains AC10150C, which translates to 172.16.21.12. The port number is
NA04. The first character is "N", which maps to "7". Therefore, the
hexadecimal form of the port number is 7A04, which translates to 31236 in
decimal form.
This value is the same as the appl_id monitor element data. For detailed
information about how to interpret this value, see the documentation for
the appl_id monitor element.
To identify more about a particular application ID, either:
v Use the LIST APPLICATIONS command on a DB2 server or LIST DCS
APPLICATIONS on a DB2 Connect gateway to view a list of application
IDs. From this list, you can determine information about the client
experiencing the error, such as its database partition name and its
TCP/IP address.
v Use the GET SNAPSHOT FOR APPLICATION command to view a list
of application IDs.
v Use the db2pd -applications -db <sample> command.
12. The authorization identifier.
13. The engine dispatchable unit identifier.
14. The name of the engine dispatchable unit.
15. The product name ("DB2"), component name (“data management”), and
function name (“sqldInitDBCB”) that is writing the message (as well as the
probe point (“4820”) within the function).
16. The information returned by a called function. There may be multiple data
fields returned.
Now that you have seen a sample db2diag.log entry, here is a list of all of the
possible fields:
<timestamp><timezone> <recordID> LEVEL: <level> (<source>)
PID : <pid> TID : <tid> PROC : <procName>
INSTANCE: <instance> NODE : <node> DB : <database>
APPHDL : <appHandle> APPID: <appID>
AUTHID : <authID>
EDUID : <eduID> EDUNAME: <engine dispatchable unit name>
FUNCTION: <prodName>, <compName>, <funcName>, probe:<probeNum>
MESSAGE : <messageID> <msgText>
CALLED : <prodName>, <compName>, <funcName> OSERR: <errorName> (<errno>)
RETCODE : <type>=<retCode> <errorDesc>
ARG #N : <typeTitle>, <typeName>, <size> bytes
... argument ...
DATA #N : <typeTitle>, <typeName>, <size> bytes
... data ...
The fields which were not already explained in the example, are:
v <source> Indicates the origin of the logged error. (You can find it at the end of
the first line in the sample.) The possible values are:
the first line in the sample.) The possible values are:
– origin - message is logged by the function where error originated (inception
point)
– OS - error has been produced by the operating system
The default db2cos scripts are found under the bin directory. On UNIX operating
systems, this directory is read-only. You can copy the db2cos script file to the adm
directory and modify the file at that location if you need to. If a db2cos script is
found in the adm directory, it is run; otherwise, the script in the bin directory is
run.
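A minimal sketch of creating a modifiable copy, assuming the instance owner's sqllib directory is in the home directory:
cp ~/sqllib/bin/db2cos ~/sqllib/adm/db2cos
chmod 755 ~/sqllib/adm/db2cos
You can then add further db2pd or operating system commands to the copy in the adm directory; that copy is the one that will be run.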
In a multiple partition configuration, the script will only be invoked for the
trapping agent on the partition encountering the trap. If there is a need to collect
information from other partitions, you can update the db2cos script to use the
db2_all command or, if all of the partitions are on the same machine, specify the
-alldbpartitionnums option on the db2pd command.
The types of signals that trigger the invocation of db2cos are also configurable via
the db2pdcfg -cos command. The default configuration is for the db2cos script to
run when either a panic or trap occurs. However, generated signals will not launch
the db2cos script by default.
The order of events when a panic, trap, segmentation violation or exception occurs
is as follows:
1. Trap file is created
2. Signal handler is called
3. db2cos script is called (depending on the db2cos settings enabled)
4. An entry is logged in the Administration Notification Log
5. An entry is logged in the db2diag.log
The default information collected by the db2pd command in the db2cos script
includes details about the operating system, the Version and Service Level of the
installed DB2 product, the database manager and database configuration, as well
as information about the state of the agents, memory pools, memory sets, memory
blocks, applications, utilities, transactions, buffer pools, locks, transaction logs,
table spaces and containers. In addition, it provides information about the state of
the dynamic, static, and catalog caches, table and index statistics, the recovery
status, as well as the reoptimized SQL statements and an active statement list. If
you need to collect further information, you simply update the db2cos script with
the additional commands.
When the default db2cos script is called, it produces output files in the directory
specified by the DIAGPATH database manager configuration parameter. The files
are named XXX.YYY.ZZZ.cos.txt, where XXX is the process ID (PID), YYY is the
thread ID (TID) and ZZZ is the database partition number (or 000 for single
partition databases). If multiple threads trap, there will be a separate invocation of
the db2cos script for each thread. In the event that a PID and TID combination
occurs more than once, the data will be appended to the file. There will be a
timestamp present so you can distinguish the iterations of output.
The db2cos output files will contain different information depending on the
commands specified in the db2cos script. If the default script is not altered, entries
similar to the following will appear (followed by detailed db2pd output):
2005-10-14-10.56.21.523659
PID : 782348 TID : 1 PROC : db2cos
INSTANCE: db2inst1 NODE : 0 DB : SAMPLE
APPHDL : APPID: *LOCAL.db2inst1.051014155507
FUNCTION: oper system services, sqloEDUCodeTrapHandler, probe:999
EVENT : Invoking /home/db2inst1/sqllib/bin/db2cos from
oper system services sqloEDUCodeTrapHandler
Trap Caught
OSName: AIX
NodeName: n1
Version: 5
...
The db2diag.log will contain entries related to the occurrence as well. For example:
2005-10-14-10.42.17.149512-300 I19441A349 LEVEL: Event
PID : 782348 TID : 1 PROC : db2sysc
INSTANCE: db2inst1 NODE : 000
FUNCTION: DB2 UDB, trace services, pdInvokeCalloutScript, probe:10
START : Invoking /home/db2inst1/sqllib/bin/db2cos from oper system
services sqloEDUCodeTrapHandler
Dump Files
Dump files are created when an error occurs for which there is additional
information that would be useful in diagnosing a problem (such as internal control
blocks). Every data item written to the dump files has a timestamp associated with
it to help with problem determination. Dump files are in binary format and are
intended for DB2 customer support representatives.
Note: For partitioned database environments, the file extension identifies the
partition number. For example, the following entry indicates that the dump file
was created by a DB2 process running on partition 10:
Dump File: /home/db2/sqllib/db2dump/6881492.2.010.dump.bin
Trap files
DB2 generates a trap file if it cannot continue processing because of a trap,
segmentation violation, or exception.
All signals or exceptions received by DB2 are recorded in the trap file. The trap file
also contains the function sequence that was running when the error occurred. This
sequence is sometimes referred to as the "function call stack" or "stack trace." The
trap file also contains additional information about the state of the process when
the signal or exception was caught.
The files are located in the directory specified by the DIAGPATH database
manager configuration parameter.
On all platforms, the trap file name begins with a process identifier (PID), followed
by a thread identifier (TID), followed by the partition number (000 on single
partition databases), and concluded with “.trap.txt”.
There are also diagnostic traps, generated by the code when certain conditions
occur which don't warrant crashing the instance, but where it is useful to see the
stack. Those traps are named with the PID in decimal format, followed by the
partition number (0 in a single partition database).
Examples:
v 6881492.2.000.trap.txt is a trap file with a process identifier (PID) of 6881492, and
a thread identifier (TID) of 2.
v 6881492.2.010.trap.txt is a trap file whose process and thread is running on
partition 10.
You can generate trap files on demand using the db2pd command with the -stack
all or -dump option. In general, though, this should only be done as requested by
DB2 Support.
You can also generate stack trace files with the db2pd -stack all or db2pd -dump
command. These files have the same contents as a trap file but are generated for
diagnostic purposes only. Their names will be similar to 6881492.2.000.stack.txt.
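For example, to dump stack trace files for all engine dispatchable units of the instance into the diagnostic data directory, you could issue the following command (as noted above, only when requested by DB2 Support):
db2pd -stack all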
The db2xprt tool uses DB2 symbol (.PDB) files to format the trap files. A subset of
these .PDB files is included with the DB2 database products.
If a trap file called "DB30882416.TRP" had been produced in your DIAGPATH, you
could format it as follows:
db2xprt DB30882416.TRP DB30882416.FMT
Based on your operating environment, there might be more places outside of what
has been described here, so be aware of all of the potential areas you might need
to investigate when debugging problems in your system.
Operating systems
Every operating system has its own set of diagnostic files to keep track of activity
and failures. The most common (and usually most useful) is an error report or
event log. Here is a list of how this information can be collected:
v AIX: the /usr/bin/errpt -a command
v Solaris: /var/adm/messages* files or the /usr/bin/dmesg command
v Linux: the /var/log/messages* files or the /bin/dmesg command
v HP-UX: the /var/adm/syslog/syslog.log file or the /usr/bin/dmesg command
v Windows : the system, security, and application event log files and the
windir\drwtsn32.log file (where windir is the Windows install directory)
There are always more tracing and debug utilities for each operating system. Refer
to your operating system documentation and support material to determine what
further information is available.
Applications
Each application should have its own logging and diagnostic files. These files will
complement the DB2 set of information to provide you with a more accurate
picture of potential problem areas.
Hardware
Hardware devices usually log information into operating system error logs.
However, sometimes additional information is required. In those cases, you need to
identify what hardware diagnostic files and utilities might be available for each piece of
hardware in your environment. An example of such a case is when a bad page, or
a corruption of some type is reported by DB2. Usually this is reported due to a
disk problem, in which case the hardware diagnostics would need to be
investigated. Please refer to your hardware documentation and support material to
determine what further information is available.
The db2support tool automates the collection of most DB2 and operating system
information that you will need, but you should still be aware of any information
outside of this that might help the investigation.
The core file is named "core", and is placed in the diagpath by default unless
otherwise configured using the values in the DB2FODC registry variable. Note that
system core files are distinct from DB2 trap files.
The following steps can be used to determine the function that caused the core file
dump to occur.
1. Enter the following command from a UNIX command prompt:
dbx program_name core_filename
The following example shows how to use the dbx command to read the core file
for a program called "main".
1. At a command prompt, enter:
dbx main
2. Output similar to the following appears on your display:
dbx version 3.1 for AIX.
Type 'help' for help.
reading symbolic information ...
[using memory image in core]
segmentation.violation in freeSegments at line 136
136 (void) shmdt((void *) pcAdress[i]);
3. The name of the function that caused the core dump is "freeSegments". Enter
where at the dbx prompt to display the program path to the point of failure.
In this example, the error occurred at line 136 of freeSegments, which was
called from line 96 in main.c.
4. To end the dbx command, type quit at the dbx prompt.
View the event logs using the Windows Event Viewer. The method used to launch
the viewer will differ, depending on the Windows operating system you are using.
For example, to open the Event Viewer on Windows XP, click Start —> Control
Panel. Select Administrative Tools, and then double-click Event Viewer.
Locate the Dr. Watson log file. The default path is <install_drive>:\Documents
and Settings\All Users\Documents\DrWatson
Combining DB2 database and OS diagnostics
Diagnosing some problems related to memory, swap files, CPU, disk storage, and
other resources requires a thorough understanding of how a given operating
system manages these resources. At a minimum, defining resource-related
problems requires knowing how much of that resource exists, and what resource
limits might exist per user. (The relevant limits are typically for the user ID of the
DB2 instance owner.)
Here is some of the important configuration information that you need to obtain:
v Operating system patch level, installed software, and upgrade history
v Number of CPUs
v Amount of RAM
v Swap and file cache settings
v User data and file resource limits and per user process limit
v IPC resource limits (message queues, shared memory segments, semaphores)
v Type of disk storage
v What else is the machine used for? Does DB2 compete for resources?
v Where does authentication occur?
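Much of the information in the preceding list can be gathered with standard operating system commands. As a rough sketch for Linux and UNIX (exact command names and options vary by platform, so check your operating system documentation):
ulimit -a    (per-user data, file, and process resource limits for the instance owner)
ipcs -l      (IPC limits: message queues, shared memory segments, semaphores)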
The following exercises are intended to help you discover system configuration
and user environment information in various DB2 diagnostic files. The first
exercise familiarizes you with the steps involved in running the db2support utility.
Subsequent exercises cover trap files, which provide more DB2-generated data that
can be useful in understanding the user environment and resource limits.
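As a sketch of that first exercise, a typical db2support invocation looks like the following; the output directory and database name shown here are placeholders:
db2support /tmp/db2support_out -d sample -c
This produces a compressed db2support.zip file in the output directory; the db2support.html file inside the archive contains system and environment sections similar to the fragment shown below.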
Processor Type: x86 Family 15 Model 2 Stepping 4
OS Version: Microsoft Windows XP, Service Pack 2 (5.1)
Current Build: 2600
</SystemInformation>
<MemoryInformation>
<Usage>
Physical Memory: 1023 total, 568 free.
Virtual Memory : 2047 total, 1882 free.
Paging File : 2461 total, 2011 free.
Ext. Virtual : 0 free.
</Usage>
</MemoryInformation>
<EnvironmentVariables>
<![CDATA[
[e] DB2PATH=C:\Program Files\IBM\SQLLIB
[g] DB2_EXTSECURITY=YES
[g] DB2SYSTEM=MYSRVR
[g] DB2PATH=C:\Program Files\IBM\SQLLIB
[g] DB2INSTDEF=DB2
[g] DB2ADMINSERVER=DB2DAS00
]]></EnvironmentVariables>
System messages and error logs are too often ignored. You can save hours, days,
and even weeks on the time it takes to solve a problem if you take the time to
perform one simple task at the initial stage of problem definition and investigation.
That task is to compare entries in different logs and take note of any that appear to
be related both in time and in terms of what resource the entries are referring to.
While not always relevant to problem diagnosis, in many cases the best clue is
readily available in the system logs. If you can correlate a reported system problem
with DB2 errors, you will have often identified what is directly causing the DB2
symptom. Obvious examples are disk errors, network errors, and hardware errors.
Not so obvious are problems reported on different machines, for example domain
controllers which can affect connection time or authentication.
System logs can help to rule out crash entries in the db2diag.log as causes for
concern. If you see crash recovery in DB2 Administration Notification or DB2
diagnostic logs with no preceding errors, the DB2 crash recovery is likely a result
of a system shutdown.
This principle of correlating information extends to logs from any source and to
any identifiable user symptoms. For example, it can be very useful to identify and
document correlating entries from another application's log even if you can't fully
interpret them.
Chapter 2. DB2 Health Advisor Service
How to use the DB2 Health Advisor Service
Starting in DB2 Version 9.5 Fix Pack 6, the DB2 Health Advisor Service data
collector (db2has) gathers information about a DB2 instance, its databases, and its
operating environment. The compressed output file can be sent to the DB2 Health
Advisor Service at IBM for analysis and generation of a PDF-based report
containing the findings and recommendations concerning the health of your DB2
environment.
This task shows you how to use the db2has tool and provides examples to
illustrate the usage of the options.
Procedure
Run the db2has command. A variety of options are available for you to configure
the data collection the way you want (for more details about the command
options, see: “db2has - DB2 Health Advisor Service data collector command”). You
must include the following three required parameters and their values:
1. -E | -email "companyEmail" must be specified in order to have the PDF-based
report sent to your e-mail address from the DB2 Health Advisor Service.
2. -I | -icn IBM_Customer_Number must be specified to obtain the PDF-based
report from the DB2 Health Advisor Service. It also serves to facilitate finding
previous Problem Management Records (PMRs) to gain access to historical data
to compare with the current system and instance state.
3. -t | -systype sType must be specified to differentiate the different types of
systems or instances in the analysis.
To immediately send the collected data to the DB2 Health Advisor Service, include
the -send parameter as part of the command string. The specification of an
argument is optional with the -send parameter. However, if the default argument
of the -send parameter fails, the db2has command will send the collected data to
db2has@ca.ibm.com. If this fails, an error message is returned. You can also send
the data by specifying the argument smtp://username:password@host:port/path after
the -send parameter.
Results
Example
Example 1
The following is an example of the options to specify for a typical run of the
db2has command:
db2has -icn FC123456 -name "Fake 1 Company, Inc." -address "123 Main St., Suite 123,
Anywhere, CA 99999" -phone "555-555-5555" -email "john.smith@fake1company.com"
-desc "Insurance services provider" -systype test -workload OLTP -send
The data will be collected for all databases that are activated on a test system. The
priority of the run can be set to the lowest setting to minimize the performance
impact of the data collector on the system, although in most cases that impact is negligible. The
resulting compressed file, db2has_hostname_timestamp.zip, will be placed into the
default working directory, ~/sqllib/db2hasdir and is sent, by way of the
Enhanced Customer Data Repository (ECuRep), to the DB2 Health Advisor Service.
A report with findings and recommendations will be sent to DBA John Smith using
the provided e-mail address in this example.
For the first-time run, add the -firsttime option. This option adds several
additional checks and data collections that can be useful for a detailed initial
analysis of a system and DB2 health. These checks can be skipped for subsequent
runs. Also, use this option after each DB2 upgrade to a new fix pack or release, or
after any other significant change to the DB2 database manager and its operating
environment.
Example 2
To collect data for a list of specified database names using default values for most
options, run the following command:
db2has -icn FC123456 -name "Fake 1 Company, Inc." -address "123 Main St., Suite 123,
Anywhere, CA 99999" -phone "555-555-5555" -email "john.smith@fake1company.com"
-desc "Insurance services provider" -systype QA -W Hybrid
-dblist "mydb1,mydb2,mydb3" -runid "QA test run #26"
The data will be collected for databases mydb1, mydb2, and mydb3 on the QA
(quality assurance) system. The run ID is specified for this specific QA test run to
distinguish it from other similar runs.
Example 3
To include a check for tables with type 1 indexes and also an analysis of database
and database manager snapshots to the previous example, run the following
command:
db2has -icn FC123456 -name "Fake 1 Company, Inc." -address "123 Main St., Suite 123,
Anywhere, CA 99999" -phone "555-555-5555" -email "john.smith@fake1company.com"
-desc "Insurance services provider" -systype QA -dblist "mydb1,mydb2,mydb3"
-runid "QA test run #26" -W DSS -include "t1index,dbsnap,dbmsnap"
Example 4
The analysis engine applies more than 20 rule-based scenarios to data collected
from snapshots for a database manager and active databases. To include snapshot
data for all active databases (-extended), excluding potentially sensitive
information about IP addresses and active ports (-exclude IP), on a production
system using the quiet mode to suppress output to a terminal (-quiet), run the
following command:
db2has -icn FC123456 -name "Fake 1 Company, Inc." -address "123 Main St., Suite 123,
Anywhere, CA 99999" -phone "555-555-5555" -email "john.smith@fake1company.com"
-desc "Insurance services provider" -systype production -W OLTP -exclude IP
-quiet -extended
Note: The db2has tool does not enable any monitor switches. Enabling such
switches and activating databases is a user responsibility.
The amount of data sent for analysis depends on what is enabled on a system
during a data collection run. Although using the -extended option is optional, it is
very useful since it adds more scenarios for the analysis engine to consider and,
thereby, increases the chances of finding potential problems with a system or, on
the contrary, confirms the DB2 instance and its operating environment are in good
health.
Example 5
To write the collected data into a specified working directory, run the following
command:
db2has -icn FC123456 -name "Fake 1 Company, Inc." -address "123 Main St., Suite 123,
Anywhere, CA 99999" -phone "555-555-5555" -email "john.smith@fake1company.com"
-desc "Insurance services provider" -systype integration -workload OLTP
-workdir /home/inst1/IntegrationTests/DB2HAS
Note: A parent directory of the working directory must already exist, otherwise an
error will be returned.
Example 6
To send the data collected to a remote server, run the following command:
db2has -icn FC123456 -name "Fake 1 Company, Inc." -address "123 Main St., Suite 123,
Anywhere, CA 99999" -phone "555-555-5555" -email "john.smith@fake1company.com"
-desc "Insurance services provider" -systype test -workload DSS
-send "ftp://anonymous@ftp.ecurep.ibm.com:21/toibm/im"
The data is sent using the FTP protocol to the URL for the host server
ftp.ecurep.ibm.com using port number 21. The example specifies anonymous as the
user name. The argument for the -send option is optional. It should be only
specified when a resource file is used or as instructed by DB2 Software Support in
the event that the ECuRep FTP service is unavailable due to an outage. In all other
cases, the recommended usage is to specify the -send option without the argument.
However, if the default argument of the -send parameter fails, the db2has
command will send the collected data to db2has@ca.ibm.com. If this fails, an error
message is returned. An alternative is to send the data by specifying the argument
smtp://username:password@host:port/path after the -send parameter.
Example 7
To collect data using a prepared resource file db2has.res, run the following
command:
db2has -resource /home/inst1/db2has.res
You can copy and paste the contents of a sample resource file into a file that you
created. Options that are required for the data collection can be uncommented and
edited as needed. For the sample resource file, see: “Sample db2has resource file
(db2has.res)” in the Command Reference.
Example 8
The data collector will prompt you for a symptom description and then start the
input mode. The maximum allowed text input is 2 KB. Ctrl-D will terminate the
input mode. Any text is allowed, such as full text or fragments of error messages,
db2diag log records, DB2 trace fragments, call stacks from trap files, to name just a
few examples. This information will be used by the analysis engine to find matches
with other records in our symptom knowledge database. If a match is found, a
known solution, fix, or recommendation will be added to the final PDF-based
report from the DB2 Health Advisor Service.
Example 9
To send feedback about a previous DB2 Health Advisor Service report, run the
following command:
db2has -icn FC123456 -name "Fake 1 Company, Inc." -address "123 Main St., Suite 123,
Anywhere, CA 99999" -phone "555-555-5555" -email "john.smith@fake1company.com"
-desc "Insurance services provider" -W OLTP -systype DR -feedback
The data collector will prompt you for your feedback and then start the input
mode. The maximum allowed text input is 2 KB. Ctrl-D will terminate the input
mode. This option can be used for comments, suggestions, desired improvements,
new requirements, and any criticism you would like to provide on the data
collector and the PDF-based report.
What to do next
If you want to receive a PDF-based report from the DB2 Health Advisor Service
and you did not specify the -send option, the archived file must be sent to
ftp://anonymous@ftp.ecurep.ibm.com:21/toibm/im using your own FTP client. The
ECuRep FTP service prompts you for a password, to which you respond by
entering your e-mail address.
Read and assess the DB2 Health Advisor Service PDF-based report concerning the
health of your DB2 environment. Evaluate the effect of the report's
recommendations by first implementing them in your test environment.
Chapter 3. Troubleshooting DB2
In general, troubleshooting requires that you isolate and identify a problem, then
seek a resolution. This section will provide troubleshooting information related to
specific features of DB2 products.
As common problems are identified, the findings will be added to this section in
the form of checklists. If the checklist does not lead you to a resolution, you can
collect additional diagnostic data and analyze it yourself, or submit the data to
IBM Software Support for analysis.
If your problem does not fall into one of these categories, basic diagnostic data
might still be required if you are contacting IBM Software Support. See the topic
“Collecting data for DB2” elsewhere in this book.
Symptoms
If you install a DB2 Version 9.5 general availability (GA) database product on AIX
6.1, the installer will detect that you are using AIX 6.1 and will not install the SA
MP Base Component.
Causes
The SA MP Base Component that is bundled with DB2 Version 9.5 GA does not
support AIX 6.1.
When you install DB2 Version 9.5 Fix Pack 1 or later fix packs on AIX 6.1, the SA
MP Base Component will be installed successfully.
Procedure
What to do next
If you complete these steps but cannot yet identify the source of the problem,
begin collecting diagnostic data to obtain more information.
Symptoms
If you install DB2 database products in the /usr or /opt directories on a system
WPAR, various errors can occur depending on how you configured the directories.
System WPARs can be configured to either share the /usr and /opt directories
with the global environment (in which case the /usr and /opt directories will be
readable but not write accessible from the WPAR) or to have a local copy of the
/usr and /opt directories.
In the first scenario, if a DB2 database product is installed to the default path on
the global environment, that installation will be visible in the system WPAR. This
will give the appearance that DB2 is installed on the WPAR; however, attempts to
create a DB2 instance will result in this error: DBI1288E The execution of the
program db2icrt failed. This program failed because you do not have write
permission on the directory or file /opt/IBM/db2/V9.5/profiles.reg,
/opt/IBM/db2/V9.5/default.env.
In the second scenario, if a DB2 database product is installed to the default path on
the global environment, then when the WPAR creates the local copy of the /usr
and /opt directories, the DB2 database product installation is also copied. This
can cause unexpected problems if a system administrator attempts to use the
database system. Since the DB2 database product was intended for another system,
inaccurate information might be copied over. For example, any DB2 instances
originally created on the global environment will appear to be present in the
WPAR. This can cause confusion for the system administrator with respect to
which instances are actually installed on the system.
Causes
These problems are caused by installing DB2 database products in /usr or /opt
directories on a system WPAR.
Do not install DB2 database products in the default path on the global
environment. Instead, mount a file system that is accessible only to the WPAR and
install the DB2 database product on that file system.
This restriction applies to both client and server components of DB2 database
products.
Uninstall the beta version of DB2 Version 9.5 before installing the non-beta version
or else choose a different installation path.
Symptoms
When you attempt to install a DB2 database product or the DB2 Information Center,
the DB2 Setup wizard reports an error that states "The service name specified is in
use".
The DB2 Setup wizard will prompt you to choose port numbers and service names
when you install:
v The DB2 Information Center
v A DB2 database product that will accept TCP/IP communications from clients
v A DB2 database product that will act as a database partition server
This error can occur if you choose a service name and port number rather than
accepting the default values: if the service name you choose already exists in the
services file on the system and you change only the port number, this error will
occur.
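As a rough illustration, you can check whether a service name is already defined before choosing it; the service name shown here is hypothetical:
grep -i db2c_db2inst1 /etc/services                                        (Linux and UNIX)
findstr /i db2c_db2inst1 %SystemRoot%\system32\drivers\etc\services        (Windows)
If the name is already present with a different port number, either choose a different service name or accept the defaults suggested by the DB2 Setup wizard.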
To install a copy of DB2 UDB Version 8 when DB2 Version 9.5 is already installed:
1. Uninstall DB2 Version 9.5.
2. Install DB2 UDB Version 8.
3. Reinstall DB2 Version 9.5.
For DB2 UDB Version 8 Fix Pack 11 and earlier, the DB2 UDB Version 8 DB2 Setup
Launchpad does not prevent you from installing Version 8 when Version 9.5 is
already installed. However, doing so will cause problems.
Symptoms
Various error messages might occur, depending on the circumstances. For example,
the following error can occur when you create a database: SQL1229N The current
transaction has been rolled back because of a system error. SQLSTATE=40504
Causes
The problem is caused by the presence of an entry for the IP address 127.0.0.2 in
the /etc/hosts file, where 127.0.0.2 maps to the fully qualified hostname of the
machine. For example:
127.0.0.2 ServerA.ibm.com ServerA
Environment
The problem is limited to DB2 Enterprise Server Edition with the DB2 Database
Partitioning Feature.
Remove the entry from the /etc/hosts file, or convert it into a comment. For
example:
# 127.0.0.2 ServerA.ibm.com ServerA
Symptoms
Causes
The DB2 diagnostic log file (db2diag.log) will contain the error message and the
following text: OSERR : ENOATTR (112) "No attribute found".
The following steps can help you troubleshoot problems that occur when you are
using optimization guidelines:
1. “Verify that optimization guidelines have been used” in Tuning Database
Performance.
2. Examine the full error message using the built-in “EXPLAIN_GET_MSGS table
function” in Administrative Routines and Views.
If you complete these steps but cannot yet identify the source of the problem,
begin collecting diagnostic data and consider contacting IBM Software Support.
CREATE TABLE EXPLAIN_DIAGNOSTIC
( EXPLAIN_REQUESTER VARCHAR(128) NOT NULL,
EXPLAIN_TIME TIMESTAMP NOT NULL,
SOURCE_NAME VARCHAR(128) NOT NULL,
SOURCE_SCHEMA VARCHAR(128) NOT NULL,
SOURCE_VERSION VARCHAR(64) NOT NULL,
EXPLAIN_LEVEL CHAR(1) NOT NULL,
STMTNO INTEGER NOT NULL,
SECTNO INTEGER NOT NULL,
DIAGNOSTIC_ID INTEGER NOT NULL,
CODE INTEGER NOT NULL,
PRIMARY KEY (EXPLAIN_REQUESTER,
EXPLAIN_TIME,
SOURCE_NAME,
SOURCE_SCHEMA,
SOURCE_VERSION,
EXPLAIN_LEVEL,
STMTNO,
SECTNO,
DIAGNOSTIC_ID),
FOREIGN KEY (EXPLAIN_REQUESTER,
EXPLAIN_TIME,
SOURCE_NAME,
SOURCE_SCHEMA,
SOURCE_VERSION,
EXPLAIN_LEVEL,
STMTNO,
SECTNO)
REFERENCES EXPLAIN_STATEMENT ON DELETE CASCADE);
This DDL is included in the EXPLAIN.DDL file located in the misc subdirectory of
the sqllib directory.
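If the explain tables do not yet exist in your database, you can create them by
running this DDL file from the command line processor. The following is a
minimal sketch; the SAMPLE database name and the Linux or UNIX instance home
path are placeholders:
db2 connect to sample
db2 -tvf ~/sqllib/misc/EXPLAIN.DDL
db2 terminate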
Diagnosing traps
The DB2 instance prepares the first occurrence data capture (FODC) package for
the trap that you have encountered. If the DB2 instance has been configured for
greater database resiliency, that is, the DB2_MEMORY_PROTECT variable is set to
YES and the DB2_THREAD_SUSPENSION variable is set to ON, then the DB2 instance
has also determined whether or not the trap is sustainable. The term 'sustainable'
means that the trapped DB2 engine thread has been suspended or terminated and
the DB2 instance continues to run. Perform these steps:
1. Use a text editor to view the administration notification log file. You will see
error message ADM14010C if the trap was sustainable and the DB2 instance is
still running. Otherwise, you will see error message ADM14011C and the DB2
instance has shut down.
2. Note the directory name of the FODC information as specified in the
appropriate error message above.
3. If the trap was sustained, stop the DB2 instance at your earliest convenience.
Since a DB2 engine thread is suspended when the trap is sustained, the
db2stop and STOP DATABASE MANAGER commands will hang if they are
used to stop the DB2 instance. Instead, you must use the db2_kill command to
stop the DB2 instance and remove the suspended DB2 engine thread.
4. Restart the DB2 instance using the db2start or START DATABASE MANAGER
command.
5. Contact IBM Customer Support and refer to the FODC diagnostic information
to resolve the cause of the trap.
Data compression dictionary is not automatically created
You have a large table but no data compression dictionary is created. You would
like to understand why the creation of the data compression dictionary did not
occur as you were expecting.
Although the table size is larger than the threshold size to allow automatic creation
of the data compression dictionary, there is another condition that is checked. The
condition is that there must be sufficient data present in the table to allow creation
of the dictionary. Past activity against the data in the table may also have included
the deletion or removal of data. There may be large sections within the table where
there is no data. This is how you can have a large table which meets or exceeds
the table size threshold, but there may not be enough data in the table to allow the
creation of the dictionary.
If you experience a lot of activity against the table, you need to reorganize the
table on a regular basis. If you do not, the table size may be large, but it may be
sparsely populated with data. Reorganizing the table will eliminate fragmented
data and compact the data in the table. Following the reorganization, the table will
be smaller and be more densely populated. The reorganized table will more
accurately represent the amount of data in the table and may be smaller than the
threshold size to allow automatic creation of the data compression dictionary.
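For example, a classic (offline) table reorganization compacts the data so that the
table more accurately reflects the amount of data it contains. The following is a
minimal sketch; the SAMPLE database and the DB2INST1.SALES table are
placeholders for a compression-enabled table in your environment:
db2 connect to sample
db2 "REORG TABLE db2inst1.sales"
db2 terminate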
The first scenario illustrates a possible problem when referencing global variables
that has a simple solution. The second scenario presents a more likely situation
where the permission to READ the global variables needs to be granted to the
appropriate users.
Scenario 2
developerUser has to determine whether finalUser sees the same global variable
values as he does. developerUser runs SET SESSION USER to see the global
variable values that the finalUser sees. Here is a proposed method to determine
this problem and solve it.
What follows is sample SQL showing the actions taken in the database by
developerUser.
########################################################################
# developerUser connects to database and creates needed objects
########################################################################
WHERE userid = security.gv_user"
########################################################################
# secadmUser grants setsessionuser
########################################################################
db2 "connect to sample user secadmUser using xxxxxxxx"
db2 "grant setsessionuser on user finalUser to user developerUser"
db2 "terminate"
########################################################################
# developerUser will debug the problem now
########################################################################
echo "------------------------------------------------------------"
echo " Connect as developerUser "
echo "------------------------------------------------------------"
db2 "connect to sample user developerUser using xxxxxxxx"
echo "------------------------------------------------------------"
echo " SET SESSION AUTHORIZATION = finalUser "
echo "------------------------------------------------------------"
db2 "set session authorization = finalUser"
echo "--- TRY to get the value of gv_user as finalUser (we should not be able to)"
db2 "values(security.gv_user)"
echo "------------------------------------------------------------"
echo " SET SESSION AUTHORIZATION = developerUser "
echo "------------------------------------------------------------"
db2 "terminate"
Troubleshooting scripts
You may have internal tools or scripts that are based on the processes running in
the database engine. These tools or scripts may no longer work because all agents,
prefetchers, and page cleaners are now considered threads in a single,
multi-threaded process.
Your internal tools and scripts will have to be modified to account for a threaded
process. For example, you may have scripts that invoke the ps command to list the
process names; and then perform tasks against certain agent processes. Your scripts
will need to be rewritten.
The problem determination database command db2pd has an option, -edu (short
for “engine dispatchable unit”), to list all agent names along with their
thread IDs. The db2pd -stack command continues to work with the threaded
engine to dump individual EDU stacks or to dump all EDU stacks for the current
node.
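For example, a script that previously matched agent process names in ps output
might call db2pd instead. The following is a minimal sketch; the exact output
columns depend on your fix pack level:
db2pd -edu
db2pd -stack all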
Once you have determined that there is a data consistency problem, you have two
options:
v Contact DB2 Service and ask for their assistance in recovering from the data
inconsistency
v Drop and rebuild the database object that has the data consistency problem.
You will use the INSPECT CHECK variation from the INSPECT command to check
the database, table space, or table that has evidence of a data inconsistency. Once
the results of the INSPECT CHECK command are produced, you should format
the inspection results using the db2inspf command.
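For example, to check a single table and then format the results, you might use
commands similar to the following sketch. The SAMPLE database, the DB2INST1
schema, and the STAFF table are placeholders, and the unformatted results file is
written to the diagnostic data directory:
db2 connect to sample
db2 "INSPECT CHECK TABLE NAME staff SCHEMA db2inst1 RESULTS KEEP staff_chk.out"
db2inspf staff_chk.out staff_chk.txt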
If the INSPECT command does not complete then contact DB2 Service.
You can use the INSPECT command to carry out an online check for index to data
inconsistency by using the INDEXDATA option in the cross object checking clause.
Index data checking is not performed by default when using the INSPECT
command; it must be explicitly requested.
Locking implications
While checking for index to data inconsistencies by using the INSPECT command
with the INDEXDATA option, the inspected tables are only locked in IS mode.
When the INDEXDATA option is specified, by default only the values of explicitly
specified level clause options are used. For any level clause options which are not
explicitly specified, the default levels (INDEX NORMAL and DATA NORMAL) are
overwritten from NORMAL to NONE.
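For example, to request index to data checking for a single table, you might issue
commands similar to the following sketch (the database, schema, and table names
are placeholders):
db2 connect to sample
db2 "INSPECT CHECK TABLE NAME staff SCHEMA db2inst1 INDEXDATA RESULTS KEEP staff_idxchk.out"
db2inspf staff_idxchk.out staff_idxchk.txt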
To obtain the most complete output, the db2support utility should be invoked by
the instance owner. In addition, you must activate the database before running
db2support; otherwise, the data collected will not contain enough information.
To collect the base set of diagnostic information in a compressed file archive, enter
the db2support command:
db2support output_directory -s -d database_name -c
Using -s will give system details about the hardware used and the operating
system. Using -d will give details about the specified database. Using -c allows for
an attempt to connect to the specified database.
The output is conveniently collected and stored in a compressed ZIP archive,
db2support.zip, so that it can be transferred and extracted easily on any system.
For specific symptoms, or for problems in a specific part of the product, you might
need to collect additional data. Refer to the problem-specific "Collecting data"
documents.
These steps are only for situations where you can recreate the problem and you are
using DB2 on Linux or UNIX.
If the problem is that the db2start or START DATABASE MANAGER command is
failing, look for a file named db2start.timestamp.log in the insthome/sqllib/db2dump
directory, where insthome is the home directory for the instance owner. Likewise if
the problem is that the db2stop or STOP DATABASE MANAGER command is
failing, look for a file named db2stop.timestamp.log. These files will only appear if
the database manager did not respond to the command within the amount of time
specified in the start_stop_time database manager configuration parameter.
These steps assume that you have obtained the files described in Collecting data
for installation problems.
1. Ensure that you are looking at the appropriate installation log file. Check the
file's creation date, or the timestamp included in the file name (on Windows
operating systems).
2. Determine whether the installation completed successfully.
v On Windows operating systems, success is indicated by a message similar to
the following at the bottom of the installation log file:
Property(C): INSTALL_RESULT = Setup Complete Successfully
=== Logging stopped: 6/21/2006 16:03:09 ===
MSI (c) (34:38) [16:03:09:109]:
Product: DB2 Enterprise Server Edition - DB2COPY1 -- Installation operation
completed successfully.
If analyzing this data does not help you to resolve your problem, and if you have
a maintenance contract with IBM, you can open a problem report. IBM Software
Support will ask you to submit any data that you have collected, and they might
also ask you about any analysis that you performed.
If your investigation has not solved the problem, submit the data to IBM Software
Support.
The following steps assume that you have used the License Center or the db2licm
command to generate a DB2 license compliance report.
Procedure
1. Open the file that contains the DB2 license compliance report.
2. Examine the status of each DB2 feature in the compliance report. The report
displays one of the following values for each feature:
In compliance
Indicates that no violations were detected. The feature has been used
and is properly licensed.
Not used
Indicates that you have not performed any activities that require this
particular feature.
Violation
Indicates that the feature is not licensed and has been used.
3. If there are any violations, use the License Center or the db2licm -l command to
view your license information.
If the DB2 feature is listed with a status of "Not licensed", you must obtain a
license for that feature. The license key and instructions for registering it are
available on the Activation CD that you receive when you purchase a DB2
feature.
Some DB2 features have a soft-stop policy; that is, the feature will continue to
work even in violation, giving you time to obtain and apply the license key.
Other features have hard-stop policies where the feature will cease to function
in violation.
Note: If you are using DB2 Version 9.5 Fix Pack 3 or earlier, it is recommended
that you move up to Fix Pack 4. As of DB2 Version 9.5 Fix Pack 3b, several DB2
features are integrated into the DB2 database products and separate licenses are
no longer required for those features. Earlier license terms that required you to
purchase utilities or functionality that is included as of Fix Pack 3b are not
enforced.
4. If you choose to drop or delete the problematic objects instead of purchasing a
license, use the following commands to determine which objects or settings in
your DB2 database product are causing the license violations:
v For the DB2 Advanced Access Control Feature:
Check for tables that use label based access control (LBAC). Run the
following command against every database in every instance in the DB2
copy:
SELECT TABSCHEMA, TABNAME
FROM SYSCAT.TABLES
WHERE SECPOLICYID>0
v For the DB2 High Availability Feature:
Check whether high availability disaster recovery (HADR) is turned on in
any of the instances. Run the following command once in every instance in
the DB2 copy:
SELECT NAME, VALUE
FROM SYSIBMADM.DBCFG
WHERE NAME='hadr_db_role'
A return value of STANDARD means that HADR is not being used. A return
value of PRIMARY or STANDBY indicates that HADR is being used.
v For the DB2 Performance Optimization Feature:
– Check whether there are any materialized query tables. Run the following
command against every database in every instance in the DB2 copy:
SELECT OWNER, TABNAME
FROM SYSCAT.TABLES WHERE TYPE='S'
– Check whether there are any multidimensional cluster tables. Run the
following command against every database in every instance in the DB2
copy:
SELECT A.TABSCHEMA, A.TABNAME, A.INDNAME, A.INDSCHEMA
FROM SYSCAT.INDEXES A, SYSCAT.TABLES B
WHERE (A.TABNAME=B.TABNAME AND A.TABSCHEMA=B.TABSCHEMA)
AND A.INDEXTYPE='BLOK'
– Check whether any of your instances use query parallelism (also known
as interquery parallelism). Run the following command once in each
instance in the DB2 copy:
What to do next
Once you have addressed the violations (either by obtaining a license for the
feature or by removing the sources of the violation), you can reset the license
compliance report from the License Center or by issuing the following command:
db2licm -x
You can send diagnostic data, such as log files and configuration files, to IBM
Software Support using one of the following methods:
v FTP
v Electronic Service Request (ESR) tool
v To submit files (via FTP) to the Enhanced Centralized Client Data Repository
(EcuRep):
1. Package the data files that you collected into ZIP or TAR format, and name
the package according to your Problem Management Record (PMR)
identifier.
Your file must use the following naming convention in order to be correctly
associated with the PMR: xxxxx.bbb.ccc.yyy.yyy, where xxxxx is the PMR
number, bbb is the PMR's branch number, ccc is the PMR's territory code,
and yyy.yyy is the file name.
2. Using an FTP utility, connect to the server ftp.emea.ibm.com.
3. Log in as the userid "anonymous" and enter your e-mail address as your
password.
4. Go to the toibm directory. For example, cd toibm.
5. Go to one of the operating system-specific subdirectories. For example, the
subdirectories include: aix, linux, unix, or windows.
6. Change to binary mode. For example, enter bin at the command prompt.
7. Put your file on the server by using the put command. Use the following file
naming convention to name your file and put it on the server. Your PMR will
be updated to list where the files are stored using the format:
xxxxx.bbb.ccc.yyy.yyy. (xxxxx is the PMR number, bbb is the branch number, ccc is
the territory code, and yyy.yyy is the description of the file type, such as tar.Z or
xyz.zip.) You can send files to the FTP server, but you cannot update them.
Any time that you need to subsequently change the file, you need to create a
new file name.
8. Enter the quit command.
v To submit files using the ESR tool:
1. Sign onto ESR.
2. On the Welcome page, enter your PMR number in the Enter a report
number field, and click Go.
3. Scroll down to the Attach Relevant File field.
4. Click Browse to locate the log, trace, or other diagnostic file that you want to
submit to IBM Software Support.
5. Click Submit. Your file is transferred to IBM Software Support through FTP,
and it is associated with your PMR.
For more information about the EcuRep service, see IBM EMEA Centralized
Customer Data Store Service.
For more information about ESR, see Electronic Service Request (ESR) help.
After gathering the relevant information and based on your selection of the
applicable topic, proceed to the referenced section.
3. Have you explored using communications software commands that return information
about the network?
v TCP/IP might have valuable information retrieved from using TCP/IP
commands and daemons.
4. Is there information returned in the SQLCA (SQL communication area) that can be
helpful?
v Problem handling procedures should include steps to examine the contents
of the SQLCODE and SQLSTATE fields.
v SQLSTATEs allow application programmers to test for classes of errors that
are common to the DB2 family of database products. In a distributed
relational database network this field might provide a common base.
5. Was DB2START executed at the Server? Additionally, ensure that the DB2COMM
environment variable is set correctly for clients accessing the server remotely.
6. Are other machines performing the same task able to connect to the server successfully?
The maximum number of clients attempting to connect to the server might
have been reached. If another client disconnects from the server, is the client
who was previously unable to connect, now able to connect?
7. Does the machine have the proper addressing? Verify that the machine is unique in
the network.
8. When connecting remotely, has the proper authority been granted to the client?
Connection to the instance might be successful, but the authorization might not
have been granted at the database or table level.
9. Is this the first machine to connect to a remote database? In distributed
environments routers or bridges between networks might block communication
between the client and the server. For example, when using TCP/IP, ensure that
you can PING the remote host.
Symptoms
If a DRDA application requester (DRDA AR) connects to DB2 Version 9.5 for
Linux, UNIX, and Windows and issues any of the following commands, the
command will fail:
Table 3. Unsupported DDM commands
DDM command    DDM codepoint    Description
BNDCPY         X'2011'          Copy an existing relational database (RDB) package
BNDDPLY        X'2016'          Deploy an existing RDB package
DRPPKG         X'2007'          Drop a package
DSCRDBTBL      X'2012'          Describe an RDB table
In addition, the FDOEXT and FDOOFF code points, which are used in the SQLDTA
descriptor for parameter-wise (or column-wise) array input, are also not supported.
The most common error message in this situation is SQL30020N ("Execution failed
because of a Distributed Protocol Error that will affect the successful execution of
subsequent commands and SQL statements").
Causes
A DB2 Version 9.5 for Linux, UNIX, and Windows DRDA application server does
not support the DDM commands listed in Table 3. Likewise, it does not support the
FDOEXT and FDOOFF codepoints, which are used in the SQLDTA descriptor that is
sent to the server when you submit a column-wise array input request.
If you obtain a DB2 trace on the DRDA application server, you will see a message
similar to the following in response to these commands: ERROR MSG = Parser:
Command Not Supported.
There are currently no supported alternatives for the BNDCPY and BNDDPLY
DDM commands.
To drop a package, use the SQL statement DROP PACKAGE. For example, connect
to the DB2 Version 9.5 for Linux, UNIX, and Windows DRDA application server
and send a DROP PACKAGE statement in an EXECUTE IMMEDIATE request. DB2
Version 9.5 for Linux, UNIX, and Windows will process that request successfully.
To describe an RDB table, use one of the following DDM commands: DSCSQLSTT
(Describe SQL Statement) or PRPSQLSTT (Prepare SQL Statement). For example, if
you want a description of the table TAB1, describe or prepare the following
statement: SELECT * FROM TAB1.
To avoid problems with the FDOEXT and FDOOFF code points, use row-wise
array input requests instead of parameter-wise (or column-wise) array input
requests.
Diagnostic tools
When you encounter a problem, you can use the following:
v All diagnostic data including dump files, trap files, error logs, notification files,
and alert logs are found in the path specified by the diagnostic data directory
path (diagpath) database manager configuration parameter:
If the value for this configuration parameter is null, the diagnostic data is
written to one of the following directories or folders:
– For Linux and UNIX environments: INSTHOME/sqllib/db2dump, where
INSTHOME is the home directory of the instance.
– For supported Windows environments:
- If the DB2INSTPROF environment variable is not set, then
x:\SQLLIB\DB2INSTANCE is used, where x:\SQLLIB is the drive and directory
specified in the DB2PATH registry variable, and DB2INSTANCE is the name of
the instance.
SQL0965 or SQL0969
Symptom
Messages SQL0965 and SQL0969 can be issued with a number of different
return codes from DB2 for i5/OS, DB2 for z/OS®, and DB2 for VM & VSE.
When you encounter either message, you should look up the original SQL
code in the documentation for the database server product issuing the
message.
SQL5043N
Symptom
Support for one or more communications protocols failed to start
successfully. However, core database manager functionality started
successfully.
Perhaps the TCP/IP protocol is not started on the DB2 Connect server.
There might have been a successful client connection previously.
If diaglevel = 4, then db2diag.log might contain a similar entry, for
example:
2001-05-30-14.09.55.321092 Instance:svtdbm5 Node:000
PID:10296(db2tcpcm) Appid:none
common_communication sqlcctcpconnmgr_child Probe:46
DIA3205E Socket address "30090" configured in the TCP/IP services file and
required by the TCP/IP server support is being used by another process.
Solution
This warning is a symptom which signals that DB2 Connect, acting as a
server for remote clients, is having trouble handling one or more client
communication protocols. These protocols can be TCP/IP and others, and
usually the message indicates that one of the communications protocols
defined to DB2 Connect is not configured properly.
Often the cause might be that the DB2COMM profile variable is not
defined, or is defined incorrectly. Generally, the problem is the result of a
mismatch between the DB2COMM variable and names defined in the
database manager configuration (for example, svcename or nname).
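For example, on the DB2 Connect server you might verify and correct these
settings with commands similar to the following sketch; the service name
db2c_db2inst1 is a placeholder and must match an entry in the services file:
db2set DB2COMM=tcpip
db2 update dbm cfg using svcename db2c_db2inst1
db2stop
db2start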
One possible scenario is having a previously successful connection, then
getting the SQL5043 error message, while none of the configuration has
changed. This could occur using the TCP/IP protocol, when the remote
system abnormally terminates the connection for some reason. When this
happens, a connection might still appear to exist on the client, and it might
become possible to restore the connection without further intervention by
issuing the commands shown below.
Most likely, one of the clients connecting to the DB2 Connect server still
has a handle on the TCP/IP port. On each client machine that is connected
to the DB2 Connect server, enter the following commands:
db2 terminate
db2stop
SQL30020
Symptom
SQL30020N Execution failed because of a Distributed Protocol Error that
will affect the successful execution of subsequent commands and SQL
statements.
Solutions
Service should be contacted with this error. Run the db2support command
before contacting service.
SQL30060
Symptom
SQL30060N "<authorization-ID>" does not have the privilege to perform
operation "<operation>".
Solution
When connecting to DB2 for OS/390® and z/OS, the Communications
Database (CDB) tables have not been updated properly.
SQL30061
Symptom
Connecting to the wrong host or System i database server location - no
target database can be found.
Solution
The wrong server database name might be specified in the DCS directory
entry. When this occurs, SQLCODE -30061 is returned to the application.
Check the DB2 node, database, and DCS directory entries. The target
database name field in the DCS directory entry must correspond to the
name of the database based on the platform. For example, for a DB2
Universal Database for z/OS and OS/390 database, the name to be used
should be the same as that used in the Boot Strap Data Set (BSDS)
"LOCATION=locname" field, which is also provided in the DSNL004I
message (LOCATION=location) when the Distributed Data Facility (DDF) is
started.
The correct commands for a TCP/IP node are:
db2 catalog tcpip node <node_name> remote <host_name_or_address>
server <port_no_or_service_name>
db2 catalog dcs database <local_name> as <real_db_name>
db2 catalog database <local_name> as <alias> at <node node_name>
authentication server
To connect to the database you then issue:
db2 connect to <alias> user <user_name> using <password>
After stopping and restarting DB2, look in the db2diag.log file to check
that DB2 TCP/IP communications have been started. You should see
output similar to the following:
2001-02-03-12.41.04.861119 Instance:svtdbm2 Node:00
PID:86496(db2sysc) Appid:none
common_communication sqlcctcp_start_listen Probe:80
DIA3000I "TCPIP" protocol support was successfully started.
Chapter 6. Tools for troubleshooting
Return codes internal to the database manager, tools that are part of the DB2
product, the different types of traces, and the tools that are part of the operating
system are all used when troubleshooting your problems. Each tool provides data
and information to assist you, and DB2 Support, while investigating a database
problem.
ZRC and ECF values basically serve the same purpose, but have slightly different
formats. Each ZRC value has the following characteristics:
v Class name
v Component
v Reason code
v Associated SQLCODE
v SQLCA message tokens
v Description
However, ECF values consist of:
v Set name
v Product ID
v Component
v Description
ZRC and ECF values are typically negative numbers and are used to represent
error conditions. ZRC values are grouped according to the type of error that they
represent. These groupings are called "classes". For example, ZRC values that have
names starting with “SQLZ_RC_MEMHEP” are generally errors related to
insufficient memory. ECF values are similarly grouped into "sets".
Full details about this ZRC value can be obtained using the db2diag command, for
example:
c:\>db2diag -rc 0x860F000A
Identifier:
SQLO_FNEX
SQLO_MOD_NOT_FOUND
Identifier (without component):
SQLZ_RC_FNEX
Description:
File not found.
Associated information:
Sqlcode -980
SQL0980C A disk error occurred. Subsequent SQL statements cannot be
processed.
The same information is returned if you issue the commands db2diag -rc
-2045837302 or db2diag -rc SQLO_FNEX.
ECF Set :
setecf (Set index : 1)
Product :
DB2 Common
Component:
OSSe
Code:
118 (0x0076)
Identifier:
ECF_LIB_CANNOT_LOAD
Description:
Cannot load the specified library
For a full listing of the ZRC or ECF values, use the commands db2diag -rc zrc and
db2diag -rc ecf, respectively.
You need to have read permission on the archive log files, so that the db2cklog
tool can read the log files and perform its checks. Only log files that are closed,
such as archive log files, can be validated successfully. If you run the tool on a log
file that is still active, the tool cannot check that file accurately and you will receive
a warning to let you know that the file is still active.
The db2cklog tool reads either single log files or a range of numbered log files and
performs checks on the internal validity of the files. Log files that pass validation
without any error messages or warnings are known good files and you can use
them during a rollforward recovery operation. If an archive log file fails validation
with an error message or if a warning is returned, then you must not use that log
file during rollforward recovery. An archive log file that fails validation cannot be
repaired and you should follow the response outlined in this task for what to do
next.
To check your archive log files, you issue the db2cklog command from the
command line and include the log file or files you want checked. Note that you do
not specify full log file names with the db2cklog command but only the numerical
identifiers that are part of the log file names. The numerical identifier of the
S0000001.LOG log file is 1, for example; to check that log file, you specify db2cklog
1. If the archive log files are not in the current directory, include the relative or
absolute path to the log files with the optional ARCHLOGPATH parameter.
1. If you want to check the validity of a single archive log file, you specify the
numerical identifier of that log file as log-file-number1 with the command. For
example, to check the validity of the S0000000.LOG log file in the
/home/amytang/tests directory, you issue the command db2cklog 0
ARCHLOGPATH /home/amytang/tests.
2. If you want to check the validity of a range of archive log files, you include the
first and last numerical identifier of that range with the command (from
log-file-number1 to log-file-number2). All log files in the range are checked, unless
the upper end of the range specified with log-file-number2 is numerically lower
than the beginning of the range (specified with log-file-number1). In that case,
only log-file-number1 is checked. For example, to check the validity of the log
files ranging from S0000000.LOG to S0000005.LOG in the /home/nrichers/tests
directory, you issue the command db2cklog 0 TO 5 ARCHLOGPATH
/home/nrichers/tests
The db2cklog tool will return a return code of zero for any file that passes
validation. If a range of numbered archive log files is specified, the db2cklog tool
returns a return code of zero only if all of the log files in that range pass validation.
Example
The following example shows the typical output of the db2cklog command as it
parses a log file, in this case S0000002.LOG. This file passes validation with a return
code of zero.
$ db2cklog 2
____________________________________________________________________
_____ D B 2 C K L O G _____
____________________________________________________________________
________________________________________________________________________________
========================================================
"db2cklog": Processing log file header of "S0000002.LOG"
If an archive log file fails validation, your response depends on whether or not you
have a copy of the log file that can pass validation by the db2cklog tool. If you are
not sure whether you have a copy of the log file, check the setting for the
logarchmeth2 configuration parameter, which determines whether your database
server archives a secondary copy of each log file. If you are validating logs as they
are being archived and log mirroring is also configured on your data server, you
might still be able to locate a copy of the log file in the log mirror path, as your
data server does not recycle log files immediately after archiving.
v If you have a copy of the archive log file, use the db2cklog command against
that copy. If the copy of the log file passes validation, replace the log file that
cannot be read with the valid copy of the log file.
v If you have only one copy of the archive log file and that copy cannot be
validated, the log file is beyond repair and cannot be used for rollforward
recovery purposes. In this case, you must make a full database backup as soon
as possible to establish a new, more recent recovery point that does not depend
on the unusable log file for rollforward recovery.
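For example, a full database backup similar to the following sketch establishes a
new recovery starting point (the SAMPLE database name and the target path are
placeholders):
db2 backup database sample to /backup/path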
To display all of the possible options, simply issue the db2dart command without
any parameters. Some options that require parameters, such as the table space ID,
are prompted for if they are not explicitly specified on the command line.
By default, the db2dart utility will create a report file with the name
databaseName.RPT. For single-partition database partition environments, the file is
created in the current directory. For multiple-partition database partition
environments, the file is created under a subdirectory in the diagnostic directory.
The subdirectory is called DART####, where #### is the database partition number.
The db2dart utility accesses the data and metadata in a database by reading them
directly from disk. Because of that, you should never run the tool against a
database that still has active connections. If there are connections, the tool will not
know about pages in the buffer pool or control structures in memory, for example,
and might report false errors as a result. Similarly, if you run db2dart against a
database that requires crash recovery or that has not completed rollforward
recovery, similar inconsistencies might result due to the inconsistent nature of the
data on disk.
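For example, to inspect an entire database after making sure that there are no
active connections, you might use commands similar to the following sketch
(SAMPLE is a placeholder database name); in a single-partition environment the
results are written to SAMPLE.RPT in the current directory:
db2 deactivate database sample
db2dart sample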
The INSPECT command is similar to the db2dart command in that it allows you to
check databases, table spaces, and tables. A significant difference between the two
commands is that the database needs to be deactivated before you run db2dart,
whereas INSPECT requires a database connection and can be run while there are
simultaneous active connections to the database.
If you do not deactivate the database, db2dart will yield unreliable results.
The following tables list the differences between the tests that are performed by the
db2dart and INSPECT commands.
Table 7. Feature comparison of db2dart and INSPECT for index objects (continued)
Tests performed                                                    db2dart  INSPECT
Check the location and length of the index key and whether
there is overlapping                                               YES      YES
Check the ordering of keys in the index                            YES      NO
Determine the summary total pages and used pages                   NO       YES
Validate contents of internal page header fields                   YES      YES
Verify the uniqueness of unique keys                               YES      NO
Check for the existence of the data row for a given index entry    NO       YES
Verify each key to a data value                                    NO       YES
Table 8. Feature comparison of db2dart and INSPECT for block map objects
Tests performed                                                    db2dart  INSPECT
Check for consistency bit errors                                   YES      YES
Determine the summary total pages and used pages                   NO       YES
Validate contents of internal page header fields                   YES      YES
Table 9. Feature comparison of db2dart and INSPECT for long field and LOB objects
Tests performed                                                    db2dart  INSPECT
Check the allocation structures                                    YES      YES
Determine the summary total pages and used pages
(for LOB objects only)                                             NO       YES
In addition, the following actions can be performed using the db2dart command:
v Format and dump data pages
v Format and dump index pages
v Format data rows to delimited ASCII
v Mark an index invalid
The INSPECT command cannot be used to perform those actions.
The db2diag tool serves to filter and format the volume of information available in
the db2diag log files. Filtering db2diag log file records can reduce the time
required to locate the records needed when troubleshooting problems.
If there are several databases in the instance, and you want to only see those
messages which pertain to the database "SAMPLE", you can filter the db2diag log
files as follows:
db2diag -g db=SAMPLE
Thus you would only see db2diag log file records that contained "DB: SAMPLE",
such as:
2006-02-15-19.31.36.114000-300 E21432H406 LEVEL: Error
PID : 940 TID : 660 PROC : db2syscs.exe
INSTANCE: DB2 NODE : 000 DB : SAMPLE
APPHDL : 0-1056 APPID: *LOCAL.DB2.060216003103
FUNCTION: DB2 UDB, base sys utilities, sqleDatabaseQuiesce, probe:2
MESSAGE : ADM7507W Database quiesce request has completed successfully.
The following command can be used to display all severe error messages produced
by processes running on partitions 0,1,2, or 3 with the process ID (PID) 2200:
db2diag -g level=Severe,pid=2200 -n 0,1,2,3
Note that this command could have been written a couple of different ways,
including db2diag -l severe -pid 2200 -n 0,1,2,3. It should also be noted that the -g
option specifies case-sensitive search, so here "Severe" will work but will fail if
"severe" is used. These commands would successfully retrieve db2diag log file
records which meet these requirements, such as:
2006-02-13-14.34.36.027000-300 I18366H421 LEVEL: Severe
PID : 2200 TID : 660 PROC : db2syscs.exe
INSTANCE: DB2 NODE : 000 DB : SAMPLE
APPHDL : 0-1433 APPID: *LOCAL.DB2.060213193043
FUNCTION: DB2 UDB, data management, sqldPoolCreate, probe:273
RETCODE : ZRC=0x8002003C=-2147352516=SQLB_BAD_CONTAINER_PATH
"Bad container path"
The following command filters all records occurring after January 1, 2006
containing non-severe and severe errors logged on partitions 0,1 or 2. It outputs
the matched records such that the time stamp, partition number and level appear
on the first line, pid, tid and instance name on the second line, and the error
message follows thereafter:
db2diag -time 2006-01-01 -node "0,1,2" -level "Severe, Error" | db2diag -fmt
"Time: %{ts} Partition: %node Message Level: %{level}\nPid: %{pid} Tid: %{tid}
Instance: %{instance}\nMessage: @{msg}\n"
For more information, issue the following commands:
v db2diag -help provides a short description of all available options
v db2diag -h brief provides descriptions for all options without examples
v db2diag -h notes provides usage notes and restrictions
v db2diag -h examples provides a small set of examples to get started
v db2diag -h tutorial provides examples for all available options
v db2diag -h all provides the most complete list of options
The following examples show how to only see messages from a specific facility (or
from all of them) from within the database manager. The supported facilities are:
v ALL which returns records from all facilities
v MAIN which returns records from DB2 general diagnostic logs such as the
db2diag log files and the administration notification log
v OPTSTATS which returns records related to optimizer statistics
To display messages from the OPTSTATS facility, showing only records that have a
level of Severe:
db2diag -fac OPTSTATS -level Severe
To display messages from all available facilities, showing only records that have
instance=harmistr and level=Error:
db2diag -fac all -g instance=harmistr,level=Error
To display all messages from the OPTSTATS facility having a level of Error and
then outputting the Timestamp and PID field in a specific format:
db2diag -fac optstats -level Error -fmt " Time :%{ts} Pid :%{pid}"
This example shows how to merge two or more db2diag log files and sort the
records according to timestamps.
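For example, a command similar to the following sketch merges two db2diag log
files copied from different database partitions (the file names are placeholders):
db2diag -merge db2diag.0.log db2diag.1.log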
The result of the merge and sort of the records is the following:
v 2009-02-26-05.28.11.480542
v 2009-02-26-05.28.49.764762
v 2009-02-26-05.28.49.822637
v 2009-02-26-05.28.49.835733
v 2009-02-26-05.28.50.258887
v 2009-02-26-05.28.50.259685
v 2009-02-26-05.29.11.872184
v 2009-02-26-05.29.11.872968
where the timestamps are merged and sorted chronologically.
This example shows how to merge files from three database partitions on the
current host. To obtain the split diagnostic directory paths, the diagpath database
manager configuration parameter was set in the following way:
db2 update dbm cfg using diagpath '"$n"'
To merge the three diagnostic log files and sort the records according to
timestamps, execute the following command:
db2diag -merge
In this example, the default diagnostic data directory path was split according to
physical host and database partition by setting the diagpath database manager
configuration parameter using the following command:
db2 update dbm cfg using diagpath '"$h$n"'
This example shows how to obtain an output of all the records from all the
diagnostic logs and merge the diagnostic log files from three database partitions on
each of two hosts, bower and horton. The following is a list of the six db2diag log
files:
v ~/sqllib/db2dump/HOST_bower/NODE0000/db2diag.log
v ~/sqllib/db2dump/HOST_bower/NODE0001/db2diag.log
v ~/sqllib/db2dump/HOST_bower/NODE0002/db2diag.log
v ~/sqllib/db2dump/HOST_horton/NODE0003/db2diag.log
v ~/sqllib/db2dump/HOST_horton/NODE0004/db2diag.log
v ~/sqllib/db2dump/HOST_horton/NODE0005/db2diag.log
To output the records from all six db2diag log files, run the following command:
db2diag -global
To merge all six db2diag log files in the diagnostic data directory path from all
three database partitions on each of the hosts bower and horton and format the
output based on the timestamp, execute the following command:
db2diag -global -merge -sdir /temp/keon -fmt %{ts}
where /temp/keon is a shared directory, shared by the hosts bower and horton, to
store temporary merged files from each host during processing.
You can view the Global Registry with the db2greg tool. This tool is located in
sqllib/bin, and in the install directory under bin as well (for use when logged
in as root).
You can edit the Global Registry with the db2greg tool. Editing the Global Registry
in root installations requires root authority.
You should only use the db2greg tool if requested to do so by DB2 Customer
Support.
A typical result of running the db2level command on a Windows system would be:
DB21085I Instance "DB2" uses "32" bits and DB2 code release "SQL09010" with
level identifier "01010107".
Informational tokens are "DB2 v9.1.0.189", "n060119", "", and Fix Pack "0".
Product is installed at "c:\SQLLIB" with DB2 Copy Name "db2build".
The combination of the four informational tokens uniquely identifies the precise
service level of your DB2 instance. This information is essential when contacting
IBM support for assistance.
You can use the db2look tool to extract the required DDL statements needed to
reproduce the database objects of one database in another database. The tool can
also generate the required SQL statements needed to replicate the statistics from
the one database to the other, as well as the statements needed to replicate the
database configuration, database manager configuration, and registry variables.
This is important because the new database might not contain the exact same set of
data as the original database but you might still want the same access plans chosen
for the two systems. The db2look command should only be issued on databases
running on DB2 Servers of Version 9.5 and higher levels.
The db2look tool is described in detail in the DB2 Command Reference but you can
view the list of options by executing the tool without any parameters. A more
detailed usage can be displayed using the -h option.
To extract the DDL for the tables in the database, use the -e option. For example,
create a copy of the SAMPLE database called SAMPLE2 such that all of the objects
in the first database are created in the new database:
C:\>db2 create database sample2
DB20000I The CREATE DATABASE command completed successfully.
C:\>db2look -d sample -e > sample.ddl
-- USER is:
-- Creating DDL for table(s)
-- Binding package automatically ...
-- Bind is successful
-- Binding package automatically ...
-- Bind is successful
Note: If you want the DDL for the user-defined spaces, database partition groups
and buffer pools to be produced as well, add the -l flag after -e in the command
above. The default database partition groups, buffer pools, and table spaces will
not be extracted. This is because they already exist in every database by default. If
you want to mimic these, you must alter them yourself manually.
Bring up the file sample.ddl in a text editor. Since you want to execute the DDL in
this file against the new database, you must change the CONNECT TO SAMPLE
statement to CONNECT TO SAMPLE2. If you used the -l option, you might need
to alter the path associated with the table space commands, such that they point to
appropriate paths as well. While you are at it, take a look at the rest of the
contents of the file. You should see CREATE TABLE, ALTER TABLE, and CREATE
INDEX statements for all of the user tables in the sample database:
...
------------------------------------------------
-- DDL Statements for table "DB2"."ORG"
------------------------------------------------
Once you have changed the connect statement, execute the statements, as follows:
C:\>db2 -tvf sample.ddl > sample2.out
Take a look at the sample2.out output file -- everything should have been executed
successfully. If errors have occurred, the error messages should state what the
problem is. Fix those problems and execute the statements again.
As you can see in the output, DDL for all of the user tables are exported. This is
the default behavior but there are other options available to be more specific about
the tables included. For example, to only include the STAFF and ORG tables, use
the -t option:
C:\>db2look -d sample -e -t staff org > staff_org.ddl
To only include tables with the schema DB2, use the -z option:
C:\>db2look -d sample -e -z db2 > db2.ddl
If both databases have the exact same data loaded into them and RUNSTATS is
run with the same options on both, the statistics should be identical. However, if
the databases contain different data or if only a subset of data is being used in the
test database then the statistics will likely be very different. In such a case, you can
use db2look to gather the statistics from the production database and place them
into the test database. This is done by creating UPDATE statements against the
SYSSTAT set of updatable catalog tables as well as RUNSTATS commands against
all of the tables.
The option for creating the statistic statements is -m. Going back to the
SAMPLE/SAMPLE2 example, gather the statistics from SAMPLE and add them
into SAMPLE2:
C:\>db2look -d sample -m > stats.dml
-- USER is:
-- Running db2look in mimic mode
UPDATE SYSSTAT.INDEXES
SET NLEAF=-1,
NLEVELS=-1,
FIRSTKEYCARD=-1,
FIRST2KEYCARD=-1,
FIRST3KEYCARD=-1,
FIRST4KEYCARD=-1,
FULLKEYCARD=-1,
CLUSTERFACTOR=-1,
CLUSTERRATIO=-1,
SEQUENTIAL_PAGES=-1,
PAGE_FETCH_PAIRS='',
DENSITY=-1,
AVERAGE_SEQUENCE_GAP=-1,
AVERAGE_SEQUENCE_FETCH_GAP=-1,
AVERAGE_SEQUENCE_PAGES=-1,
AVERAGE_SEQUENCE_FETCH_PAGES=-1,
AVERAGE_RANDOM_PAGES=-1,
AVERAGE_RANDOM_FETCH_PAGES=-1,
NUMRIDS=-1,
NUMRIDS_DELETED=-1,
NUM_EMPTY_LEAFS=-1
WHERE TABNAME = 'ORG' AND TABSCHEMA = 'DB2 ';
...
As with the -e option that extracts the DDL, the -t and -z options can be used to
specify a set of tables.
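The database configuration, database manager configuration, and registry variables
that affect the compiler can also be mimicked by using the -f option. The following
is a sketch; config.dml is a placeholder file name:
C:\>db2look -d sample -f > config.dml
The output file contains statements such as the following: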
CONNECT TO SAMPLE;
--------------------------------------------------------
-- Database and Database Manager configuration parameters
--------------------------------------------------------
UPDATE DBM CFG USING cpuspeed 2.991513e-007;
UPDATE DBM CFG USING intra_parallel NO;
UPDATE DBM CFG USING comm_bandwidth 100.000000;
UPDATE DBM CFG USING federated NO;
...
---------------------------------
-- Environment Variables settings
---------------------------------
COMMIT WORK;
CONNECT RESET;
Note: Only those parameters and variables that affect the DB2 compiler will be
included. If a registry variable that affects the compiler is set to its default value, it
will not show up under "Environment Variables settings".
At least one DB2 Version 9 database product must already be installed by a root
user for a symbolic link to the db2ls command to be available in the
/usr/local/bin directory.
With the ability to install multiple copies of DB2 database products on your system
and the flexibility to install DB2 database products and features in the path of your
choice, you need a tool to help you keep track of what is installed and where it is
installed.
Restrictions
The output that the db2ls command lists is different depending on the ID used:
v When the db2ls command is run with root authority, only root DB2 installations
are queried.
v When the db2ls command is run with a non-root ID, root DB2 installations and
the non-root installation owned by matching non-root ID are queried. DB2
installations owned by other non-root IDs are not queried.
The db2ls command is the only method to query a DB2 database product. You
cannot query DB2 database products using Linux or UNIX operating system native
utilities.
Procedure
v To list the path where DB2 database products are installed on your system and
list the DB2 database product level, enter:
db2ls
The command lists the following information for each DB2 database product
installed on your system:
– Installation path
– Level
– Fix pack
– Special Install Number. This column is used by IBM DB2 Support.
– Installation date. This column shows when the DB2 database product was last
modified.
– Installer UID. This column shows the UID with which the DB2 database
product was installed.
v To list information about DB2 database products or features in a particular
installation path, the q parameter must be specified:
db2ls -q -p -b baseInstallDirectory
where:
– q specifies that you are querying a product or feature. This parameter is
mandatory. If a DB2 Version 8 product is queried, a blank value is returned.
– p specifies that the listing displays products rather than listing the features.
– b specifies the installation directory of the product or feature. This parameter
is mandatory if you are not running the command from the installation
directory.
Results
Monitoring and troubleshooting using db2pd
The db2pd tool is used for troubleshooting because it can return quick and
immediate information from the DB2 memory sets.
The tool collects information without acquiring any latches or using any engine
resources. It is therefore possible (and expected) to retrieve information that is
changing while db2pd is collecting information; hence the data might not be
completely accurate. If changing memory pointers are encountered, a signal
handler is used to prevent db2pd from aborting abnormally. This can result in
messages such as "Changing data structure forced command termination" to
appear in the output. Nonetheless, the tool can be helpful for troubleshooting. Two
benefits to collecting information without latching include faster retrieval and no
competition for engine resources.
If you want to capture information about the database management system when a
specific SQLCODE, ZRC code or ECF code occurs, this can be accomplished using
the db2pdcfg -catch command. When the errors are caught, the db2cos (callout
script) is launched. The db2cos file can be dynamically altered to run any db2pd
command, operating system command, or any other command needed to solve the
problem. The template db2cos file is located in sqllib/bin on UNIX and Linux. On
the Windows operating system, db2cos is located in the $DB2PATH\bin directory.
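For example, to launch the db2cos script when a lock timeout occurs, you might
use a command similar to the following sketch; check the db2pdcfg command
documentation for the exact syntax at your fix pack level:
db2pdcfg -catch locktimeout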
Use the db2pd -db <database name> -locks -transactions -applications -dynamic
command to get the following results:
Locks:
Address TranHdl Lockname Type Mode Sts Owner Dur HldCnt Att ReleaseFlg
0x07800000202E5238 3 00020002000000040000000052 Row ..X G 3 1 0 0x0000 0x40000000
0x07800000202E4668 2 00020002000000040000000052 Row ..X W* 2 1 0 0x0000 0x40000000
For the database that you specified using the -db database name option, the first
results show the locks for that database. We can see that TranHdl 2 is waiting on a
lock held by TranHdl 3.
Transactions:
Address AppHandl [nod-index] TranHdl Locks State Tflag Tflag2 Firstlsn Lastlsn LogSpace ...
0x0780000020251B80 11 [000-00011] 2 4 READ 0x00000000 0x00000000 0x000000000000 0x000000000000 0 ...
0x0780000020252900 12 [000-00012] 3 4 WRITE 0x00000000 0x00000000 0x000000FA000C 0x000000FA000C 113 ...
... SpaceReserved TID AxRegCnt GXID ClientUserID ClientWrkstnName ClientApplName ClientAccntng
... 0 0x0000000000B7 1 0 n/a n/a n/a n/a
... 154 0x0000000000B8 1 0 n/a n/a n/a n/a
We can see that AppHandl 12 last ran dynamic statement 17, 1. ApplHandl 11 is
currently running dynamic statement 17, 1 and last ran statement 94, 1.
Dynamic SQL Statements:
Address AnchID StmtUID NumEnv NumVar NumRef NumExe Text
0x07800000209FD800 17 1 1 1 2 2 update pdtest set c1 = 5
0x07800000209FCCC0 94 1 1 1 2 2 set lock mode to wait 1
Scenario 2: Using the -wlocks option to capture all the locks being waited on
Scenario 3: Using the -apinfo option to capture detailed run time information
about the lock owner and the lock waiter
The sample output below is captured under the same conditions as Scenario 2
above.
venus@boson:/home/venus =>db2pd -apinfo 47 -db pdtest
Application :
Address : 0x0780000001676480
AppHandl [nod-index] : 47 [000-00047]
Application PID : 876558
Application Node Name : boson
IP Address: n/a
Connection Start Time : (1197063450)Fri Dec 7 16:37:30 2007
Client User ID : venus
System Auth ID : VENUS
Coordinator EDU ID : 5160
Coordinator Partition : 0
Number of Agents : 1
Locks timeout value : 4294967294 seconds
Locks Escalation : No
Workload ID : 1
Workload Occurrence ID : 2
Trusted Context : n/a
Connection Trust Type : non trusted
Role Inherited : n/a
Application Status : UOW-Waiting
Application Name : db2bp
Application ID : *LOCAL.venus.071207213730
ClientUserID : n/a
ClientWrkstnName : n/a
ClientApplName : n/a
ClientAccntng : n/a
Database Partition 0 -- Database PDTEST -- Active -- Up 0 days 00:01:39
Application :
Address : 0x0780000000D77A60
AppHandl [nod-index] : 46 [000-00046]
Application PID : 881102
Application Node Name : boson
IP Address: n/a
Connection Start Time : (1197063418)Fri Dec 7 16:36:58 2007
Client User ID : venus
System Auth ID : VENUS
Coordinator EDU ID : 5913
Coordinator Partition : 0
Number of Agents : 1
Locks timeout value : 4294967294 seconds
Locks Escalation : No
Workload ID : 1
Workload Occurrence ID : 1
Trusted Context : n/a
Connection Trust Type : non trusted
Role Inherited : n/a
Application Status : Lock-wait
Application Name : db2bp
Application ID : *LOCAL.venus.071207213658
ClientUserID : n/a
ClientWrkstnName : n/a
ClientApplName : n/a
ClientAccntng : n/a
Find the db2cos output files. The location of the files is controlled by the database
manager configuration parameter DIAGPATH. The contents of the output files will
differ depending on what commands you enter in the db2cos file. An example of
the output provided when the db2cos file contains a db2pd -db sample -locks
command is as follows:
Lock Timeout Caught
Thu Feb 17 01:40:04 EST 2006
Instance DB2
Datbase: SAMPLE
Partition Number: 0
PID: 940
TID: 2136
Function: sqlplnfd
Component: lock manager
Probe: 999
Timestamp: 2006-02-17-01.40.04.106000
AppID: *LOCAL.DB2...
AppHdl:
...
Database Partition 0 -- Database SAMPLE -- Active -- Up 0 days 00:06:53
Just look for the "W*" as this is the lock that experienced the timeout. A lock
timeout can also occur when a lock is being converted to a higher mode. In those
cases, you will not see a “W*” in the output, but rather a “C*”. In this particular
case, however, a lockwait has occurred. You can map the results to a transaction,
application, agent, and even an SQL statement with the output provided by other
db2pd commands in the db2cos file. You can narrow down the output or use other
commands to collect the information you need. For example, you could change the
db2pd command options to use the -locks wait option that only prints locks with a
wait status. You could also put in -app, and -agent options if that is what you
need.
The command db2pd -applications reports the current and last anchor ID and
statement unique ID for dynamic SQL statements. This allows direct mapping from
an application to a dynamic SQL statement.
db2pd -app -dyn
Applications:
Address AppHandl [nod-index] NumAgents CoorPid Status
0x00000002006D2120 780 [000-00780] 1 10615 UOW-Executing
Memory blocks sorted by size for apmh pool:
PoolID PoolName TotalSize(Bytes) TotalCount LOC File
70 apmh 40200 2 121 2986298236
70 apmh 10016 1 308 1586829889
70 apmh 6096 2 4014 1312473490
70 apmh 2516 1 294 1586829889
70 apmh 496 1 2192 1953793439
70 apmh 360 1 1024 3878879032
70 apmh 176 1 1608 1953793439
70 apmh 152 1 2623 1583816485
70 apmh 48 1 914 1937674139
70 apmh 32 1 1000 1937674139
Total size for apmh pool: 60092 bytes
...
The final section of output sorts the consumers of memory for the entire set:
All memory consumers in DBMS memory set:
PoolID PoolName TotalSize(Bytes) %Bytes TotalCount %Count LOC File
57 ostrack 5160048 71.90 1 0.07 3047 698130716
50 sqlch 778496 10.85 1 0.07 202 2576467555
50 sqlch 271784 3.79 1 0.07 260 2576467555
57 ostrack 240048 3.34 1 0.07 3034 698130716
50 sqlch 144464 2.01 1 0.07 217 2576467555
62 resynch 108864 1.52 1 0.07 127 1599127346
72 eduah 108048 1.51 1 0.07 174 4210081592
69 krcbh 73640 1.03 5 0.36 547 4210081592
50 sqlch 43752 0.61 1 0.07 274 2576467555
70 apmh 40200 0.56 2 0.14 121 2986298236
69 krcbh 32992 0.46 1 0.07 838 698130716
50 sqlch 31000 0.43 31 2.20 633 3966224537
50 sqlch 25456 0.35 31 2.20 930 3966224537
52 kerh 15376 0.21 1 0.07 157 1193352763
50 sqlch 14697 0.20 1 0.07 345 2576467555
...
You can also report memory blocks for private memory on UNIX and Linux. For
example:
db2pd -memb pid=159770
Using db2pd -tcbstats, you can identify the number of inserts for a table. The
example that follows is based on a user-defined global temporary table called TEMP1.
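You might run, for example, the following command against the database used in this
scenario (SAMPLE is an assumption, matching the later examples in this section):
db2pd -db sample -tcbstats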
You can then obtain the information for table space 3 via the db2pd -tablespaces
command. Sample output is as follows:
Tablespace 3 Configuration:
Address Type Content PageSz ExtentSz Auto Prefetch BufID BufIDDisk FSC NumCntrs MaxStripe LastConsecPg Name
0x0780000020B1B5A0 DMS UsrTmp 4096 32 Yes 32 1 1 On 1 0 31 TEMPSPACE2
Tablespace 3 Statistics:
Address TotalPgs UsablePgs UsedPgs PndFreePgs FreePgs HWM State MinRecTime NQuiescers
0x0780000020B1B5A0 5000 4960 1088 0 3872 1088 0x00000000 0 0
Containers:
Address ContainNum Type TotalPgs UseablePgs StripeSet Container
0x0780000020B1DCC0 0 File 5000 4960 0 /home/db2inst1/tempspace2a
You can watch the space filling up by monitoring the FreePgs column: as the free
pages value decreases, less space is available. Note also that the value of FreePgs
plus the value of UsedPgs equals the value of UsablePgs.
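For instance, in the sample statistics shown above, 3872 FreePgs plus 1088 UsedPgs
equals the 4960 UsablePgs.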
Once this is known, you can identify the dynamic SQL statement that is using the
table TEMP1:
db2pd -db sample -dyn
Dynamic Cache:
Current Memory Used 1022197
Total Heap Size 1271398
Cache Overflow Flag 0
Number of References 237
Number of Statement Inserts 32
Number of Statement Deletes 13
Number of Variation Inserts 21
Number of Statements 19
Finally, you can map this to db2pd -app output to identify the application.
Applications:
Address AppHandl [nod-index] NumAgents CoorPid Status
0x0000000200661840 501 [000-00501] 1 11246 UOW-Waiting
The anchor ID (AnchID) value resulting from the request for dynamic SQL
statements in the previous use of db2pd is used with the request for the associated
applications. The applications result shows that the last anchor ID (L-AnchID)
value is the same as the anchor ID (AnchID) value. The results from one run of
db2pd are used in the next run of db2pd.
The output from db2pd -agent will show the number of rows read (Rowsread
column) and rows written (Rowswrtn column) by the application. These values
will give you an idea of what the application has completed and what the
application still has to complete.
Address AppHandl [nod-index] AgentPid Priority Type DBName
0x0000000200698080 501 [000-00501] 11246 0 Coord SAMPLE
The values for AppHandl and AgentPid from the db2pd -agent request can be mapped
back to the corresponding values for AppHandl and CoorPid from the db2pd -app
request.
The steps are slightly different if you suspect that an internal temporary table is
filling up the table space. You would still use db2pd -tcbstats to identify tables
with large numbers of inserts. Here is sample information for an implicit
temporary table:
TCB Table Information:
Address TbspaceID TableID PartID MasterTbs MasterTab TableName SchemaNm ObjClass DataSize ...
0x0780000020CC0D30 1 2 n/a 1 2 TEMP (00001,00002) <30> <JMC Temp 2470 ...
0x0780000020CC14B0 1 3 n/a 1 3 TEMP (00001,00003) <31> <JMC Temp 2367 ...
0x0780000020CC21B0 1 4 n/a 1 4 TEMP (00001,00004) <30> <JMC Temp 1872 ...
In this example, there are a large number of inserts for tables with the naming
convention "TEMP (TbspaceID, TableID)". These are implicit temporary tables. The
values in the SchemaNm column have a naming convention of the value for
AppHandl concatenated with the value for SchemaNm, which makes it possible to
identify the application doing the work.
You can then map that information to the output from db2pd -tablespaces to see
the used space for table space 1. Take note of the UsedPgs in relationship to the
UsablePgs in the table space statistics.
Tablespace Configuration:
Address Id Type Content PageSz ExtentSz Auto Prefetch BufID BufIDDisk FSC NumCntrs MaxStripe LastConsecPg Name
0x07800000203FB5A0 1 SMS SysTmp 4096 32 Yes 320 1 1 On 10 0 31 TEMPSPACE1
Tablespace Statistics:
Address Id TotalPgs UsablePgs UsedPgs PndFreePgs FreePgs HWM State MinRecTime NQuiescers
0x07800000203FB5A0 1 6516 6516 6516 0 0 0 0x00000000 0 0
Containers:
...
You can then identify the application handles 30 and 31 (since these were seen in
the -tcbstats output), using the command db2pd -app:
Applications:
Address AppHandl [nod-index] NumAgents CoorPid Status C-AnchID C-StmtUID L-AnchID L-StmtUID Appid
0x07800000006FB880 31 [000-00031] 1 4784182 UOW-Waiting 0 0 107 1 *LOCAL.db2inst1.051215214142
0x07800000006F9CE0 30 [000-00030] 1 8966270 UOW-Executing 107 1 107 1 *LOCAL.db2inst1.051215214013
Finally, map this to the Dynamic SQL using the db2pd -dyn command:
Dynamic SQL Statements:
Address AnchID StmtUID NumEnv NumVar NumRef NumExe Text
0x0780000020B296C0 107 1 1 1 43 43 select c1, c2 from test group by c1,c2
The command db2pd -recovery shows several counters that you can use to verify
that recovery is progressing. Current Log and Current LSN provide the log
position. CompletedWork counts the number of bytes completed thus far.
Recovery:
Recovery Status 0x00000401
Current Log S0000005.LOG
Current LSN 000002551BEA
Job Type ROLLFORWARD RECOVERY
Job ID 7
Job Start Time (1107380474) Wed Feb 2 16:41:14 2005
Job Description Database Rollforward Recovery
Invoker Type User
Progress:
Address PhaseNum Description StartTime CompletedWork TotalWork
0x0000000200667160 1 Forward Wed Feb 2 16:41:14 2005 2268098 bytes Unknown
0x0000000200667258 2 Backward NotStarted 0 bytes Unknown
The command db2pd -transactions provides the number of locks, the first log
sequence number (LSN), the last LSN, the log space used, and the space reserved.
This can be useful for understanding the behavior of a transaction.
Transactions:
Address AppHandl [nod-index] TranHdl Locks State Tflag
0x000000022026D980 797 [000-00797] 2 108 WRITE 0x00000000
0x000000022026E600 806 [000-00806] 3 157 WRITE 0x00000000
0x000000022026F280 807 [000-00807] 4 90 WRITE 0x00000000
The command db2pd -logs is useful for monitoring log usage for a database. By
watching the Pages Written output, you can determine whether the log usage is
progressing.
Logs:
Current Log Number 0
Pages Written 0
Method 1 Archive Status n/a
Method 1 Next Log to Archive n/a
Method 1 First Failure n/a
Method 2 Archive Status n/a
Method 2 Next Log to Archive n/a
Method 2 First Failure n/a
Log Chain ID 0
Current LSN 0x00000177000C
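To watch the Pages Written value progress over time on Linux or UNIX, you could, for
example, sample the output periodically with a small shell loop (a sketch only; the
database name SAMPLE and the 60-second interval are assumptions):
while true; do db2pd -db sample -logs | grep "Pages Written"; sleep 60; done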
Without the db2pd -sysplex command, the only way to report the sysplex list is via
a DB2 trace.
Sysplex List:
Alias: HOST
Location Name: HOST1
Count: 1
The db2pd -stack all command for Windows operating systems (-stack for UNIX
operating systems) can be used to produce stack traces for all processes in the
current database partition. You might want to use this command iteratively when
you suspect that a process or thread is looping or hanging.
You can obtain the current call stack for a particular engine dispatchable unit
(EDU) by issuing the command db2pd -stack <eduid>. For example:
db2pd -stack 137
If the call stacks for all of the DB2 processes are desired, use the command db2pd
-stack all (on Windows operating systems).
If you are using a partitioned database environment with multiple physical nodes,
you can obtain the information from all of the partitions by using the command
db2_all "; db2pd -stack all". If the partitions are all logical partitions on the same
machine, however, a faster method is to use db2pd -alldbp -stacks.
The db2pd -dbptnmem command shows how much memory the DB2 server is
currently consuming and, at a high level, which areas of the server are using that
memory.
Controller Automatic: Y
Memory Limit: 122931408 KB
Current usage: 651008 KB
HWM usage: 651008 KB
Cached memory: 231296 KB
The continuation of the sample output from the db2pd -dbptnmem command on AIX is
shown below.
Individual Memory Consumers:
Name Mem Used (KB) HWM Used (KB) Cached (KB)
===========================================================
APPL-DBONE 160000 160000 159616
DBMS-name 38528 38528 3776
FMP_RESOURCES 22528 22528 0
PRIVATE 13120 13120 740
FCM_RESOURCES 10048 10048 0
LCL-p606416 128 128 0
DB-DBONE 406656 406656 67200
All registered “consumers” of memory within the DB2 server are listed with the
amount of the total memory they are consuming. The column descriptions are:
v Name: A brief, distinguishing name of a “consumer” of memory. Examples
include:
– APPL-<dbname> for application memory consumed for database <dbname>
– DBMS-xxx for global database manager memory requirements
– FMP_RESOURCES for memory required to communicate with db2fmps
– PRIVATE for miscellaneous private memory requirements
– FCM_RESOURCES for Fast Communication Manager resources
– LCL-<pid> for memory segment used to communicate with local applications
– DB-<dbname> for database memory consumed for database <dbname>
v Mem Used (KB): How much memory is currently allotted to that consumer.
v HWM Used (KB): High-Water Mark, or Peak, memory that the consumer has
used.
v Cached (KB): Of the Mem Used (KB), the amount of memory that is not
currently being used but is immediately available for future memory allocations.
Using the db2support utility avoids possible user errors, because you do not need to
manually type commands such as GET DATABASE CONFIGURATION FOR
<database name> or LIST TABLESPACES SHOW DETAIL. Also, you do not need
instructions on which commands to run or which files to collect, so it takes
less time to collect the data.
v Execute the command db2support -h to display the complete list of command
options.
v Collect data using the appropriate db2support command.
You must activate the database before running db2support; otherwise, the
collected information will not be complete.
The db2support utility should be run by a user with SYSADM authority, such as
an instance owner, so that the utility can collect all of the necessary information
without an error. If a user without SYSADM authority runs db2support, SQL
errors (for example, SQL1092N) might result when the utility runs commands
such as QUERY CLIENT or LIST ACTIVE DATABASES.
If you are using the db2support utility to help convey information to IBM
support, run the db2support command while the system is experiencing the
problem. That way the tool will collect timely information, such as operating
system performance details. If you are unable to run the utility at the time of the
problem, you can still issue the db2support command after the problem has
stopped since some first occurrence data capture (FODC) diagnostic files are
produced automatically.
The following basic invocation is usually sufficient for collecting most of the
information required to debug a problem (note that if the -c option is used the
utility will establish a connection to the database):
db2support <output path> -d <database name> -c
The output is conveniently collected and stored in a compressed ZIP archive,
db2support.zip, so that it can be transferred and extracted easily on any system.
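For example, an invocation that also gathers the detailed system information
described later in this section might look like the following sketch (the output
directory and database name are placeholders):
db2support /tmp/db2support_out -d sample -c -s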
The type of information that db2support captures depends on the way the
command is invoked, whether or not the database manager has been started, and
whether it is possible to connect to the database.
The db2support utility collects the following information under all conditions:
v db2diag.log
v All trap files
v locklist files
v Dump files
v Various system related files
v Output from various system commands
v db2cli.ini
The HTML report db2support.html will always include the following information:
v Problem record (PMR) number (if -n was specified)
v Operating system and level (for example, AIX 5.1)
v DB2 release information
v An indication of whether it is a 32- or 64-bit environment
The following information appears in the db2support.html file when the -s option
is specified:
v Detailed disk information (partition layout, type, LVM information, and so on)
v Detailed network information
v Kernel statistics
v Firmware versions
v Other operating system-specific commands
The db2support.html file contains the following additional information if DB2 has
been started:
v Client connection state
v Database and Database Manager Configuration (Database Configuration requires
the -d option)
v CLI config
v Memory pool info (size and consumed). Complete data is collected if the -d
option is used
v The result of the LIST ACTIVE DATABASES command
v The result of the LIST DCS APPLICATIONS command
The db2support.html file contains the following information if the -c has been
specified and a connection to the database is successfully established:
v Number of user tables
v Approximate size of database data
v Database snapshot
v Application snapshot
v Buffer pool information
v The result of the LIST APPLICATIONS command
v The result of the LIST COMMAND OPTIONS command
v The result of the LIST DATABASE DIRECTORY command
v The result of the LIST INDOUBT TRANSACTIONS command
v The result of the LIST DATABASE PARTITION GROUPS command
v The result of the LIST DBPARTITIONNUMS command
v The result of the LIST ODBC DATA SOURCES command
v The result of the LIST PACKAGES/TABLES command
v The result of the LIST TABLESPACE CONTAINERS command
v The result of the LIST TABLESPACES command
v The result of the LIST DRDA IN DOUBT TRANSACTIONS command
When you extract the db2support.zip file, the following files and directories are
present:
v DB2CONFIG/ - Configuration information (for example, database, database
manager, BP, CLI, and Java developer kit, among others)
v DB2DUMP/ - db2diag.log file contents for the past 3 days
v DB2MISC/ - List of the sqllib directory
v DB2SNAP/ - Output of DB2 commands (for example, db2set, LIST TABLES, LIST
INDOUBT TRANSACTIONS, and LIST APPLICATIONS, among others)
v db2supp_opt.zip - Diagnostic information for optimizer problems
v db2supp_system.zip - Operating system information
v db2support.html - Diagnostic information formatted into HTML sections
v db2support.log - Diagnostic log information for db2support collection
v db2support_options.in - Command line options used to start the db2support
collection
The amount of information gathered by a trace grows rapidly. When you take the
trace, capture only the error situation and avoid any other activities whenever
possible. When taking a trace, use the smallest scenario possible to reproduce a
problem.
DB2 Customer Support should provide the following information when traces are
requested:
v Simple, step by step procedures
v An explanation of where each trace is to be taken
v An explanation of what should be traced
v An explanation of why the trace is requested
v Backout procedures (for example, how to disable all traces)
Trace information is not always helpful in diagnosing an error. For example, it
might not capture the error condition in the following situations:
v The trace buffer size you specified was not large enough to hold a complete set
of trace events, and useful information was lost when the trace stopped writing
to the file or wrapped.
v The traced scenario did not re-create the error situation.
v The error situation was re-created, but the assumption as to where the problem
occurred was incorrect. For example, the trace was collected at a client
workstation while the actual error occurred on a server.
DB2 Traces
This section discusses how to obtain a DB2 trace using db2trc, an internal utility.
Once a DB2 trace file is created from the trace data in the trace buffer, you need to
format the output to make it readable. The information within that file can be used by
DB2 Support to address your particular problem.
Keep in mind that there is added overhead when a trace is running, so enabling the
trace facility might impact your system's performance.
In general, DB2 Support and development teams use DB2 traces for
troubleshooting. You might run a trace to gain information about a problem that
you are investigating, but its use is rather limited without knowledge of the DB2
source code.
Note: You need SYSADM, SYSCTRL, or SYSMAINT authority to use db2trc.
To get a general idea of the options available, execute the db2trc command without
any parameters:
C:\>db2trc
Usage: db2trc (chg|clr|dmp|flw|fmt|inf|off|on) options
For more information about a specific db2trc command parameter, use the -u
option. For example, to see more information about turning the trace on, execute
the following command:
db2trc on -u
This will provide information about all of the additional options (labeled as
"facilities") that can be specified when turning on a DB2 trace.
When turning the trace on, the most important option is -l. This option specifies the
size of the memory buffer that will be used to store the information being traced. The
buffer size can be specified in either bytes or megabytes. (To specify megabytes,
append either "M" or "m" to the value.) The trace buffer size must be a power of
2 megabytes. If you specify a size that does not meet this requirement, the
buffer size is automatically rounded down to the nearest power of 2.
If you are tracing a larger operation, or if a lot of work is going on at the
same time, a larger trace buffer might be required.
On most platforms, tracing can be turned on at any time and works as described
above. However, there are certain situations to be aware of:
1. On multiple database partition systems, you must run a trace for each physical
(as opposed to logical) database partition.
2. On HP-UX, Linux and Solaris platforms, if the trace is turned off after the
instance has been started, a very small buffer will be used the next time the
trace is started regardless of the size specified. For example, yesterday you
turned trace on by using db2trc on -l 8m, then collected a trace, and then
turned the trace off (db2trc off). Today you wish to run a trace with the
memory buffer set for 32 megabytes (db2trc on -l 32m) without bringing the
instance down and restarting. You will find that in this case trace will only get
a small buffer. To effectively run a trace on these platforms, turn the trace on
with the buffer size that you need before starting the instance, and “clear” the
buffer as necessary afterwards.
While the trace is running, you can use the clr option to clear out the trace buffer.
All existing information in the trace buffer will be removed.
C:\>db2trc clr
Trace has been cleared
Once the operation being traced has finished, use the dmp option followed by a
trace file name to dump the memory buffer to disk. For example:
C:\>db2trc dmp trace.dmp
Trace has been dumped to file
The trace facility will continue to run after dumping the trace buffer to disk. To
turn tracing off, use the off option:
C:\>db2trc off
Trace is turned off
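Putting these steps together, a complete capture on a single-partition instance might
look like the following sequence (the 8 MB buffer size and the file name trace.dmp
are only examples):
db2trc on -l 8m
(reproduce the problem)
db2trc dmp trace.dmp
db2trc off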
To verify that a trace file can be read, format the binary trace file to show the flow
control and send the formatted output to a null device. The following example
shows the command to perform this task:
db2trc flw example.trc nul
where example.trc is a binary file that was produced using the dmp option.
The output for this command will explicitly tell you if there is a problem reading
the file, and whether or not the trace was wrapped.
At this point, the dump file could be sent to DB2 Support. They would then format
it based on your DB2 service level. However, you might sometimes be asked to
format the dump file into ASCII format before sending it. This is accomplished via
the flw and fmt options. You must provide the name of the binary dump file along
with the name of the ASCII file that you want to create:
C:\>db2trc flw trace.dmp trace.flw
Total number of trace records : 18854
Trace truncated : NO
Trace wrapped : NO
Number of trace records formatted : 1513 (pid: 2196 tid 2148 node: -1)
Number of trace records formatted : 100 (pid: 1568 tid 1304 node: 0)
...
If this output indicates "Trace wrapped" is "YES", then this means that the trace
buffer was not large enough to contain all of the information collected during the
trace period. A wrapped trace might be okay depending on the situation. If you
are interested in the most recent information (this is the default information that is
maintained, unless the -i option is specified), then what is in the trace file might be
sufficient. However, if you are interested in what happened at the beginning of the
trace period or if you are interested in everything that occurred, you might want to
redo the operation with a larger trace buffer.
There are options available when formatting a binary file into a readable text file.
For example, you can use db2trc fmt -xml trace.dmp trace.fmt to convert the
binary data and output the result in an XML parsable format. Additional options
are shown in the detailed description of the trace command (db2trc).
Another thing to be aware of is that on Linux and UNIX operating systems, DB2
will automatically dump the trace buffer to disk when it shuts the instance down
due to a severe error. Thus if tracing is enabled when an instance ends abnormally,
a file will be created in the diagnostic directory and its name will be db2trdmp.###,
where ### is the database partition number. This does not occur on Windows
platforms. You have to dump the trace manually in those situations.
The db2drdat utility records the data interchanged between a DRDA Application
Requestor (AR) and a DB2 DRDA Application Server (AS) (for example, between
DB2 Connect and a host or System i database server).
Trace utility
The db2drdat utility records the data interchanged between the DB2 Connect
server (on behalf of the IBM data server client) and the host or System i database
server.
Output from db2drdat lists the data streams exchanged between the DB2 Connect
workstation and the host or System i database server management system. Data
sent to the host or System i database server is labeled SEND BUFFER and data
received from the host or System i database server is labeled RECEIVE BUFFER.
Trace output
The db2drdat utility writes the following information to tracefile:
v -r
– Type of DRDA reply/object
– Receive buffer
v -s
– Type of DRDA request
– Send buffer
v -c
– SQLCA
v TCP/IP error information
– Receive function return code
– Severity
– Protocol used
– API used
– Function
– Error number.
Note:
1. A value of zero for the exit code indicates that the command completed
successfully, and a non-zero value indicates that it did not.
2. The fields returned vary based on the API used.
3. The fields returned vary based on the platform on which DB2 Connect is
running, even for the same API.
4. If the db2drdat command sends the output to a file that already exists, the old
file will be erased unless the permissions on the file do not allow it to be
erased.
The first buffer contains the Exchange Server Attributes (EXCSAT) and Access RDB
(ACCRDB) commands sent to the host or System i database server management
system. It sends these commands as a result of a CONNECT TO database command.
The next buffer contains the reply that DB2 Connect received from the host or
System i database server management system.
Figure 2 on page 102 uses DB2 Connect Enterprise Edition Version 9.1 and DB2
Universal Database (UDB) for z/OS Version 8 over a TCP/IP connection.
(The trace in Figure 2 consists of a series of alternating SEND BUFFER(AR) and
RECEIVE BUFFER(AR) sections.)
Launching the Control Center with tracing enabled turns on the Control Center trace
and saves the output of the trace to the specified file. The output file is saved to
<DB2 install path>\sqllib\tools on Windows and to /home/<userid>/sqllib/tools on
UNIX and Linux.
Note: When the Control Center has been launched with tracing enabled, recreate
the problem using as few steps as possible. Try to avoid clicking on unnecessary or
unrelated items in the tool. Once you have recreated the problem, close the Control
Center (and any other GUI tools which you opened to recreate the problem).
The resulting trace file will need to be sent to DB2 Support for analysis.
JDBC traces
Depending on the type of the JDBC driver you are using, there are different ways
to obtain trace files for the applications or stored procedures you are running.
These different ways are presented here.
This type of trace is applicable for situations where a problem is encountered in:
v a JDBC application which uses the DB2 JDBC Type 2 Driver for Linux, UNIX
and Windows (DB2 JDBC Type 2 Driver)
v DB2 JDBC stored procedures.
Note: There are many keywords that can be added to the db2cli.ini file that can
affect application behavior. These keywords can resolve or be the cause of
application problems. There are also some keywords that are not covered in the
CLI documentation; those are only available from DB2 Support. If you have
keywords in your db2cli.ini file that are not documented, it is likely that they
were recommended by DB2 Support. Internally, the DB2 JDBC Type 2 Driver
makes use of the DB2 CLI driver for database access. For example, the Java
getConnection() method is internally mapped by the DB2 JDBC Type 2 Driver to
the DB2 CLI SQLConnect() function. As a result, Java developers might find a DB2
CLI trace to be a useful complement to the DB2 JDBC trace.
1. Create a path for the trace files. It is important to create a path that every user
can write to.
For example, on Windows:
mkdir c:\temp\trace
On Linux and UNIX:
mkdir /tmp/trace
chmod 777 /tmp/trace
2. Update the CLI configuration keywords. There are two methods to accomplish
this:
v Manually edit the db2cli.ini file. The location of the db2cli.ini file might
change based on whether the Microsoft ODBC Driver Manager is used, the
type of data source names (DSN) used, the type of client or driver being
installed, and whether the registry variable DB2CLIINIPATH is set. For more
information, see the “db2cli.ini initialization file” topic in the Call Level
Interface Guide and Reference, Volume 1.
a. Open up the db2cli.ini file in a plain text editor.
b. Add the following section to the file (if the COMMON section already
exists, just append the variables):
[COMMON]
JDBCTrace=1
JDBCTracePathName=<path>
JDBCTraceFlush=1
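The second method is to use the UPDATE CLI CFG command; because these keywords
are stored in the db2cli.ini file, the command can set them for you. A minimal
sketch of equivalent settings (assuming the /tmp/trace path created in step 1):
db2 UPDATE CLI CFG FOR SECTION COMMON USING JDBCTrace 1
db2 UPDATE CLI CFG FOR SECTION COMMON USING JDBCTracePathName /tmp/trace
db2 UPDATE CLI CFG FOR SECTION COMMON USING JDBCTraceFlush 1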
The CLI trace offers very little information about the internal workings of the DB2
CLI driver.
This type of trace is applicable for situations where a problem is encountered in:
v a CLI application
v an ODBC application (since ODBC applications use the DB2 CLI interface to
access DB2)
v DB2 CLI stored procedures
v JDBC applications and stored procedures
The DB2 JDBC Type 2 Driver for Linux, UNIX and Windows (DB2 JDBC Type 2
Driver) depends on the DB2 CLI driver to access the database. Consequently, Java
developers might also want to enable DB2 CLI tracing for additional information
on how their applications interact with the database through the various software
layers. DB2 JDBC and DB2 CLI trace options (though both set in the db2cli.ini file)
are independent of each other.
Note: There are many keywords that can be added to the db2cli.ini file that can
affect application behavior. These keywords can resolve or be the cause of
application problems. There are also some keywords that are not covered in the
CLI documentation; those are only available from DB2 Support. If you have
keywords in your db2cli.ini file that are not documented, it is likely that they
were recommended by the DB2 Support team.
By default, the location of the DB2 CLI/ODBC configuration keyword file is in the
sqllib directory on Windows operating systems, and in the sqllib/cfg directory
of the database instance running the CLI/ODBC applications on Linux and UNIX
operating systems. The location of the db2cli.ini file might change based on
whether the Microsoft ODBC Driver Manager is used, the type of data source
names (DSN) used, the type of client or driver being installed, and whether the
registry variable DB2CLIINIPATH is set. For more information, see the “db2cli.ini
initialization file” topic in the Call Level Interface Guide and Reference, Volume 1.
1. Create a path for the trace files.
It is important to create a path that every user can write to. For example, on
Windows:
mkdir c:\temp\trace
On Linux and UNIX:
mkdir /tmp/trace
chmod 777 /tmp/trace
2. Update the CLI configuration keywords.
This can be done by either manually editing the db2cli.ini file or using the
UPDATE CLI CFG command.
Option A: Manually Editing the db2cli.ini file.
a. Open up the db2cli.ini file in a plain text editor.
b. Add the following section to the file (or if the COMMON section already exists,
just append the variables):
[COMMON]
Trace=1
TracePathName=path
TraceComm=1
TraceFlush=1
TraceTimeStamp=1
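Option B, using the UPDATE CLI CFG command, sets the same keywords without editing
the file directly. A minimal sketch (assuming the /tmp/trace path created in step 1):
db2 UPDATE CLI CFG FOR SECTION COMMON USING Trace 1
db2 UPDATE CLI CFG FOR SECTION COMMON USING TracePathName /tmp/trace
db2 UPDATE CLI CFG FOR SECTION COMMON USING TraceComm 1
db2 UPDATE CLI CFG FOR SECTION COMMON USING TraceFlush 1
db2 UPDATE CLI CFG FOR SECTION COMMON USING TraceTimeStamp 1
Setting Trace back to 0 by either method turns the trace off again when you have
finished collecting data.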
When you use the trace facility to diagnose application issues, keep in mind that it
does have an impact on application performance and that it affects all applications,
not only your test application. This is why it is important to remember to turn it
off after the problem has been identified.
The initial call to the CLI function shows the input parameters and the values
being assigned to them (as appropriate).
When CLI functions return, they show the resultant output parameters, for
example:
SQLAllocStmt( phStmt=1:1 )
<--- SQL_SUCCESS Time elapsed - +4.444000E-003 seconds
The following trace entry shows the preparation of the SQL statement ('?' denotes a
parameter marker):
SQLPrepare( hStmt=1:1, pszSqlStr=
"select * from employee where empno = ?",
cbSqlStr=-3 )
---> Time elapsed - +1.648000E-003 seconds
( StmtOut="select * from employee where empno = ?" )
SQLPrepare( )
<--- SQL_SUCCESS Time elapsed - +5.929000E-003 seconds
The following trace entry shows the binding of the parameter marker as a CHAR
with a maximum length of 7:
SQLBindParameter( hStmt=1:1, iPar=1, fParamType=SQL_PARAM_INPUT,
fCType=SQL_C_CHAR, fSQLType=SQL_CHAR, cbColDef=7, ibScale=0,
rgbValue=&00854f28, cbValueMax=7, pcbValue=&00858534 )
---> Time elapsed - +1.348000E-003 seconds
SQLBindParameter( )
<--- SQL_SUCCESS Time elapsed - +7.607000E-003 seconds
The dynamic SQL statement is now executed. The rgbValue="000010" shows the
value that was substituted for the parameter marker by the application at run time:
SQLExecute( hStmt=1:1 )
---> Time elapsed - +1.317000E-003 seconds
( iPar=1, fCType=SQL_C_CHAR, rgbValue="000010" - X"303030303130",
pcbValue=6, piIndicatorPtr=6 )
sqlccsend( ulBytes - 384 )
sqlccsend( Handle - 14437216 )
sqlccsend( ) - rc - 0, time elapsed - +1.915000E-003
sqlccrecv( )
sqlccrecv( ulBytes - 1053 ) - rc - 0, time elapsed - +8.808000E-003
SQLExecute( )
<--- SQL_SUCCESS Time elapsed - +2.213300E-002 seconds
(This time value indicates the time spent in the application since the last CLI API
call.)
SQLAllocStmt( phStmt=1:1 )
<--- SQL_SUCCESS Time elapsed - +4.444000E-003 seconds
(Since the function has completed, this time value indicates the time spent in DB2,
including the network time.)
Another way to capture timing information is to use the CLI keyword
TraceTimeStamp. This keyword generates a timestamp for every invocation and
result of a DB2 CLI API call. The keyword has four display options: no timestamp
information, processor ticks and ISO timestamp, processor ticks, or ISO timestamp.
This can be very useful when working with timing related problems such as
CLI0125E - function sequence errors. It can also be helpful when attempting to
determine which event happened first when working with multithreaded
applications.
It is also possible that you could see a CLI function call return an "Option value
changed" or a "Keyset Parser Return Code". This is a result of the keyset cursor
displaying a message, such as when the cursor is being downgraded to a static
cursor for some specific reason.
SQLExecDirect( )
<--- SQL_SUCCESS_WITH_INFO Time elapsed - +1.06E+001 seconds
In the above CLI trace, the keyset parser has indicated a return code of 1100, which
indicates that there is not a unique index or primary key for the table, and
therefore a keyset cursor could not be created. These return codes are not
externalized and thus at this point you would need to contact DB2 Support if you
wanted further information about the meaning of the return code.
If you need to know what the actual thread id is, this information can be seen in
the CLI Trace Header:
[ Process: 3500, Thread: 728 ]
[ Date & Time: 02/17/2006 04:28:02.238015 ]
[ Product: QDB2/NT DB2 v9.1.0.190 ]
...
You can also trace a multithreaded application to one file by using the CLI keyword
TraceFileName. This method generates one file of your choice, but it can be
cumbersome to read, because certain APIs in one thread can be executed at the same
time as an API in another thread, which can potentially cause some confusion when
reviewing the trace.
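For example, to direct the trace for all threads to a single file, you might add the
keyword to the same [COMMON] section of the db2cli.ini file (the file name shown is
only an assumption):
[COMMON]
TraceFileName=/tmp/trace/clitrace.txt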
Platform-specific tools
There are troubleshooting commands, performance monitoring utilities, and other
methods of gathering diagnostic information that are associated with the platforms
you are using. These tools are presented as part of your Windows operating
system, or your Linux and UNIX operating systems.
The following AIX system commands are useful for DB2 troubleshooting:
errpt The errpt command reports system errors such as hardware errors and
network failures.
v For an overview that shows one line per error, use errpt
v For a more detailed view that shows one page for each error, use errpt
-a
v For errors with an error number of "1581762B", use errpt -a -j 1581762B
v To find out if you ran out of paging space in the past, use errpt | grep
SYSVMM
v To find out if there are token ring card or disk problems, check the errpt
output for the phrases "disk" and "tr0"
lsps The lsps -a command monitors and displays how paging space is being
used.
lsattr This command displays various operating system parameters. For example,
use the following command to find out the amount of real memory on the
database partition:
lsattr -l sys0 -E
xmperf
For AIX systems using Motif, this command starts a graphical monitor that
collects and displays system-related performance data. The monitor
displays three-dimensional diagrams for each database partition in a single
window, and is good for high-level monitoring. However, if activity is low,
the output from this monitor is of limited value.
spmon
If you are using system partitioning as part of the Parallel System Support
Program (PSSP), you might need to check if the SP Switch is running on all
workstations. To view the status of all database partitions, use one of the
following commands from the control workstation:
v spmon -d for ASCII output
v spmon -g for a graphical user interface
Alternatively, use the command netstat -i from a database partition
workstation to see if the switch is down. If the switch is down, there is an
asterisk (*) beside the database partition name. For example:
css0* 65520 <Link>0.0.0.0.0.0
The following system commands are for all Linux and UNIX systems, including
AIX, unless otherwise noted.
df The df command lets you see if file systems are full.
v To see how much free space is in all file systems (including mounted
ones), use df
v To see how much free space is in all file systems with names containing
"dev", use df | grep dev
v To see how much free space is in your home file system, use df /home
v To see how much free space is in the file system "tmp", use df /tmp
v To see if there is enough free space on the machine, check the output
from the following commands: df /usr , df /var , df /tmp , and df
/home
truss This command is useful for tracing system calls in one or more processes.
pstack Available for Solaris 2.5.1 or later, the /usr/proc/bin/pstack command
displays stack traceback information. The /usr/proc/bin directory contains
other tools for debugging processes that appear to be suspended.
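Both commands are typically pointed at the process ID of a suspect DB2 process, for
example (12345 is a hypothetical process ID):
truss -p 12345
/usr/proc/bin/pstack 12345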
The following tools are available for monitoring the performance of your system.
vmstat
This command is useful for determining whether a process is suspended or just
taking a long time. You can monitor the paging rate, found under the page
in (pi) and page out (po) columns. Other important columns are the
amount of allocated virtual storage (avm) and free virtual storage (fre).
Sample invocations for these monitoring commands follow this list.
iostat This command is useful for monitoring I/O activities. You can use the read
and write rate to estimate the amount of time required for certain SQL
operations (if they are the only activity on the system).
netstat
This command lets you know the network traffic on each database
partition, and the number of error packets encountered. It is useful for
isolating network problems.
system file
Available for Solaris operating system, the /etc/system file contains
definitions for kernel configuration limits such as the maximum number of
users allowed on the system at a time, the maximum number of processes
per user, and the inter-process communication (IPC) limits on size and
number of resources. These limits are important because they affect DB2
performance on a Solaris operating system machine.
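As noted in the vmstat entry, these monitors are usually run with a sampling
interval, for example (the 5-second interval is only illustrative):
vmstat 5
iostat 5
netstat -i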
Once you have a clear understanding of what the problem situation is, you need to
compile a list of search keywords to increase your chances of finding the existing
solutions. Here are some tips:
1. Use multiple words in your search. The more pertinent search terms you use,
the better your search results will be.
2. Start with specific results, and then go to broader results if necessary. For
example, if too few results are returned, then remove some of the less pertinent
search terms and try it again. Alternatively, if you are uncertain which
keywords to use, you can perform a broad search with a few keywords, look at
the type of results that you receive, and be able to make a more informed
choice of additional keywords.
3. Sometimes it is more effective to search for a specific phrase. For example, if
you enter: "administration notification file" (with the quotation marks) you will
get only those documents that contain the exact phrase in the exact order in
which you type it. (As opposed to all documents that contain any combination
of those three words).
4. Use wildcards. If you are encountering a specific SQL error, search for
"SQL5005<wildcard>", where <wildcard> is the appropriate wildcard for the
resource you're searching. This is likely to return more results than if you had
merely searched for "SQL5005" or "SQL5005c ".
5. If you are encountering a situation where your instance ends abnormally and
produces trap files, search for known problems using the first two or three
functions in the trap or core file's stack traceback. If too many results are
returned, try adding keywords "trap", "abend" or "crash".
6. If you are searching for keywords that are operating-system-specific (such as
signal numbers or errno values), try searching on the constant name, not the
value. For example, search for "EFBIG" instead of the error number 27.
Troubleshooting resources
A wide variety of troubleshooting and problem determination information is
available to assist you in using DB2 database products.
Refer to the DB2 Technical Support Web site if you are experiencing problems and
want help finding possible causes and solutions. The Technical Support site has
links to the latest DB2 publications, TechNotes, Authorized Program Analysis
Reports (APARs), fix packs and other resources. You can search through this
knowledge base to find possible solutions to your problems.
A DB2 fix pack contains updates and fixes for problems (Authorized Program
Analysis Reports, or "APARs") found during testing at IBM, as well as fixes for
problems reported by customers. The APARLIST.TXT file describes the fixes
contained in each fix pack, and it is available for download at
http://www-01.ibm.com/support/docview.wss?rs=71&uid=swg21293566.
Fix packs are cumulative. This means that the latest fix pack for any given version
of DB2 contains all of the updates from previous fix packs for the same version of
DB2.
Restrictions
v A DB2 Version 9.5 fix pack can only be applied to DB2 Version 9.5 general
availability (GA) or fix pack level copies.
v All DB2 instances, DAS, and applications related to the DB2 copy being updated
must be stopped before installing a fix pack.
v In a partitioned database environment, prior to installing the fix pack, you must
stop the database manager on all database partition servers. You must install the
fix pack on the instance-owning database partition server and all other database
partition servers. All computers participating in the instance must be upgraded
to the same fix pack level.
v On Linux or UNIX operating systems:
– If you have DB2 database products on a Network File System (NFS), you
must ensure the following are stopped completely before installing the fix
pack: all instances, the DB2 administration server (DAS), interprocess
communications (IPC), and applications on other machines using the same
NFS mounted installation.
– If the system commands fuser or lsof are not available, the installFixPack
command cannot detect loaded DB2 files. You must ensure no DB2 files are
loaded and provide an override option to install the fix pack. On UNIX, the
fuser command is required to check for loaded files. On Linux, either the
fuser command or lsof command is required.
For details on the override option, see the installFixPack command.
v On client applications, after a fix pack has been applied, to perform autobind of
applications, the user must have bind authority.
v Installation of a DB2 fix pack will not service IBM Data Studio Administration
Console or IBM Data Studio.
What to do next
Check the log file for any post-installation steps, or error messages and
recommended actions.
If you have multiple DB2 copies on the same system, those copies can be at
different version and fix pack levels. If you want to apply a fix pack to one or
more DB2 copies, you must install the fix pack on those DB2 copies one by one.
The modified DB2 code that resolves the problem described in the APAR can be
delivered in fix packs, interim fix packs, and test fixes.
Fix pack
A fix pack is a cumulative collection of APAR fixes. In particular, fix packs
address the APARs that arise between new releases of DB2. They are
intended to allow you to move up to a specific maintenance level. Fix
packs have the following characteristics:
v They are cumulative. Fix packs for a particular release of DB2 supersede
or contain all of the APAR fixes shipped in previous fix packs and
interim fix packs for that release.
v They are available for all supported operating systems and DB2 database
products.
v They contain many APARs.
v They are published on the DB2 Technical Support Web site and are
generally available to customers who have purchased products under
the Passport Advantage® program.
v They are fully tested by IBM.
It is recommended that you keep your DB2 environment running at the latest fix
pack level to ensure problem-free operation.
To learn more about the role and purpose of DB2 fixes and fix packs, see the
support policy statement.
Each test fix has specific prerequisites. Refer to the Readme that accompanies the
test fix for details.
If national languages have been installed, you might also require a separate
national language test fix. The national language test fix can only be applied if it is
at the same test fix level as the installed DB2 product. If you are applying a
universal test fix, you must apply both the universal test fix and the national
language test fix to update the DB2 products.
Obtain the test fix from DB2 Customer Support and follow the instructions in the
Readme with respect to installing, testing and removing (if necessary) the test fix.
When installing a test fix in a multi-partition database partition environment, the
system must be offline and all computers participating in the instance must be
upgraded to the same test fix level.
Note: The DB2 Information Center topics are updated more frequently than either
the PDF or the hard-copy books. To get the most current information, install the
documentation updates as they become available, or refer to the DB2 Information
Center at ibm.com®.
You can access additional DB2 technical information such as technotes, white
papers, and IBM Redbooks publications online at ibm.com. Access the DB2
Information Management software library site at http://www.ibm.com/software/
data/sw-library/.
Documentation feedback
We value your feedback on the DB2 documentation. If you have suggestions for
how to improve the DB2 documentation, send an email to db2docs@ca.ibm.com.
The DB2 documentation team reads all of your feedback, but cannot respond to
you directly. Provide specific examples wherever possible so that we can better
understand your concerns. If you are providing feedback on a specific topic or
help file, include the topic title and URL.
Do not use this email address to contact DB2 Customer Support. If you have a DB2
technical issue that the documentation does not resolve, contact your local IBM
service center for assistance.
If you would like to help IBM make the IBM Information Management products
easier to use, take the Consumability Survey: http://www.ibm.com/software/
data/info/consumability-survey/.
Although the tables identify books available in print, the books might not be
available in your country or region.
The form number increases each time a manual is updated. Ensure that you are
reading the most recent version of the manuals, as listed in the following table.
Note: The DB2 Information Center is updated more frequently than either the PDF
or the hard-copy books.
Table 10. DB2 technical information
Name                                                      Form Number    Available in print   Last updated
Administrative API Reference                              SC23-5842-03   Yes                  December, 2010
Administrative Routines and Views                         SC23-5843-03   No                   December, 2010
Call Level Interface Guide and Reference, Volume 1        SC23-5844-03   Yes                  December, 2010
Call Level Interface Guide and Reference, Volume 2        SC23-5845-03   Yes                  December, 2010
Command Reference                                         SC23-5846-03   Yes                  December, 2010
Data Movement Utilities Guide and Reference               SC23-5847-03   Yes                  December, 2010
Data Recovery and High Availability Guide and Reference   SC23-5848-03   Yes                  December, 2010
Data Servers, Databases, and Database Objects Guide       SC23-5849-03   Yes                  December, 2010
Database Security Guide                                   SC23-5850-03   Yes                  December, 2010
Developing ADO.NET and OLE DB Applications                SC23-5851-02   Yes                  April, 2009
Developing Embedded SQL Applications                      SC23-5852-02   Yes                  April, 2009
Developing Java Applications                              SC23-5853-03   Yes                  December, 2010
Developing Perl and PHP Applications                      SC23-5854-02   No                   April, 2009
Developing User-defined Routines (SQL and External)       SC23-5855-03   Yes                  December, 2010
Printed versions of many of the DB2 books available on the DB2 PDF
Documentation DVD can be ordered for a fee from IBM. Depending on where you
are placing your order from, you may be able to order books online, from the IBM
Publications Center. If online ordering is not available in your country or region,
you can always order printed DB2 books from your local IBM representative. Note
that not all books on the DB2 PDF Documentation DVD are available in print.
Note: The most up-to-date and complete DB2 documentation is maintained in the
DB2 Information Center at http://publib.boulder.ibm.com/infocenter/db2luw/
v9r5.
To invoke SQL state help, open the command line processor and enter:
? sqlstate or ? class code
where sqlstate represents a valid five-digit SQL state and class code represents the
first two digits of the SQL state.
For example, ? 08003 displays help for the 08003 SQL state, and ? 08 displays help
for the 08 class code.
For DB2 Version 9.7 topics, the DB2 Information Center URL is http://
publib.boulder.ibm.com/infocenter/db2luw/v9r7/.
For DB2 Version 9.5 topics, the DB2 Information Center URL is http://
publib.boulder.ibm.com/infocenter/db2luw/v9r5.
For DB2 Version 9.1 topics, the DB2 Information Center URL is http://
publib.boulder.ibm.com/infocenter/db2luw/v9/.
For DB2 Version 8 topics, go to the DB2 Information Center URL at:
http://publib.boulder.ibm.com/infocenter/db2luw/v8/.
Note: Adding a language does not guarantee that the computer has the
fonts required to display the topics in the preferred language.
– To move a language to the top of the list, select the language and click the
Move Up button until the language is first in the list of languages.
3. Clear the browser cache and then refresh the page to display the DB2
Information Center in your preferred language.
v To display topics in your preferred language in a Firefox or Mozilla browser:
1. Select the button in the Languages section of the Tools —> Options —>
Advanced dialog. The Languages panel is displayed in the Preferences
window.
2. Ensure your preferred language is specified as the first entry in the list of
languages.
– To add a new language to the list, click the Add... button to select a
language from the Add Languages window.
– To move a language to the top of the list, select the language and click the
Move Up button until the language is first in the list of languages.
3. Clear the browser cache and then refresh the page to display the DB2
Information Center in your preferred language.
On some browser and operating system combinations, you might have to also
change the regional settings of your operating system to the locale and language of
your choice.
To update the DB2 Information Center installed on your computer or intranet server:
1. Stop the DB2 Information Center.
v On Windows, click Start → Control Panel → Administrative Tools → Services.
Then right-click on DB2 Information Center service and select Stop.
v On Linux, enter the following command:
/etc/init.d/db2icdv95 stop
2. Start the Information Center in stand-alone mode.
v On Windows:
a. Open a command window.
b. Navigate to the path where the Information Center is installed. By
default, the DB2 Information Center is installed in the
Program_files\IBM\DB2 Information Center\Version 9.5 directory,
where Program_files represents the location of the Program Files directory.
c. Navigate from the installation directory to the doc\bin directory.
d. Run the help_start.bat file:
help_start.bat
v On Linux:
a. Navigate to the path where the Information Center is installed. By
default, the DB2 Information Center is installed in the /opt/ibm/db2ic/V9.5
directory.
b. Navigate from the installation directory to the doc/bin directory.
c. Run the help_start script:
help_start
The system's default Web browser launches to display the stand-alone
Information Center.
3. Click the Update button. On the right-hand panel of the Information
Center, click Find Updates. A list of updates for existing documentation
displays.
4. To initiate the installation process, check the selections you want to install, then
click Install Updates.
5. After the installation process has completed, click Finish.
6. Stop the stand-alone Information Center:
v On Windows, navigate to the installation directory's doc\bin directory, and run
the help_end.bat file:
help_end.bat
Note: The help_end batch file contains the commands required to safely
terminate the processes that were started with the help_start batch file. Do
not use Ctrl-C or any other method to terminate help_start.bat.
v On Linux, navigate to the installation directory's doc/bin directory, and run
the help_end script:
help_end
The updated DB2 Information Center displays the new and updated topics.
DB2 tutorials
The DB2 tutorials help you learn about various aspects of DB2 products. Lessons
provide step-by-step instructions.
You can view the XHTML version of the tutorial from the Information Center at
http://publib.boulder.ibm.com/infocenter/db2help/.
Some lessons use sample data or code. See the tutorial for a description of any
prerequisites for its specific tasks.
Personal use: You may reproduce these Publications for your personal, non
commercial use provided that all proprietary notices are preserved. You may not
distribute, display or make derivative work of these Publications, or any portion
thereof, without the express consent of IBM.
Commercial use: You may reproduce, distribute and display these Publications
solely within your enterprise provided that all proprietary notices are preserved.
You may not make derivative works of these Publications, or reproduce, distribute
or display these Publications or any portion thereof outside your enterprise,
without the express consent of IBM.
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the Publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country/region or send inquiries, in
writing, to:
The following paragraph does not apply to the United Kingdom or any other
country/region where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply
to you.
This document may provide links or references to non-IBM Web sites and
resources. IBM makes no representations, warranties, or other commitments
whatsoever about any non-IBM Web sites or third-party resources that may be
referenced, accessible from, or linked from this document. A link to a non-IBM
Web site does not mean that IBM endorses the content or use of such Web site or
its owner.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information that has been exchanged, should contact:
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information may contain examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious, and any similarity to the names and addresses used by an actual
business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work must
include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.
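For illustration only, the following sketch shows one way such a notice might appear at the top of a source file derived from the Sample Programs. The file name, company name, years, and surrounding code are hypothetical placeholders and are not part of the IBM Sample Programs; substitute your own values.
   /* mysample.c -- hypothetical file derived from the IBM Sample Programs.
    * The notice below follows the required wording shown above; replace the
    * company name and years with your own values.
    *
    * © (your company name) (year). Portions of this code are derived from
    * IBM Corp. Sample Programs.
    * © Copyright IBM Corp. _enter the year or years_. All rights reserved.
    */
   #include <stdio.h>

   int main(void)
   {
       /* Placeholder body; a real derived sample would contain your code. */
       printf("derived sample\n");
       return 0;
   }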
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web at "Copyright and
trademark information" at www.ibm.com/legal/copytrade.shtml.
J
Java Database Connectivity (JDBC)
   applications
      trace facility configuration 106, 108
      traces 106

L
License Center
   compliance report 50
licenses
   compliance report 50
Linux
   listing DB2 products 79
log files
   administration 11
   checking validity 67

N
notices 135
notify level configuration parameter
   updating 11

O
ODBC (open database connectivity)
   applications
      trace facility configuration 109
operating system
   troubleshooting tools 113
optimization
   guidelines
      troubleshooting 39
optimization profiles
   troubleshooting 39
ordering DB2 books 128

P
parameters
   PRDID 99
partitioned database
   troubleshooting 38
PRDID parameter 99
problem determination
   connection 57
   diagnostic tools
      overview 61
   information available 132

R
receive buffer 98
return codes
   internal 65

S
scripts
   troubleshooting 45
searching
   techniques 117
SECCHK command 99
send buffer
   tracing data 98
SQL statements
   displaying help 129
SQL0965 error code 61
SQL0969 error code 61
SQL1338 error code 61
SQL30020 error code 61
SQL30060 error code 61
SQL30061 error code 61
SQL30073 error code 61
SQL30081N error code 61
SQL30082 error code 61
SQL5043N error code 61
SQLCA (SQL communication area)
   buffers of data 98
   SQLCODE field 98
SQLCODE
   field in SQLCA 98
SRVNAM object 99
storage keys
   troubleshooting 42
system commands
   dbx (UNIX) 25
   gdb (Linux) 25
   xdb (HP-UX) 25
system core files
   identification 25
   Linux 25
   UNIX 25

T
TCP/IP
   ACCSEC command 99
   SECCHK command 99
terms and conditions
   use of publications 133
test fixes
   applying 123
   description 121
   types 123
threads 45
Tivoli System Automation for Multiplatforms
   troubleshooting installation 36
tools
   diagnostic
      overview 114
      Windows 113
trace facility
   CLI applications 109
   Control Center traces 106
   DB2 traces 95, 96
   DRDA traces 101, 105
   JDBC applications 108
   trace option configuration 106
   troubleshooting overview 94
trace utility (db2drdat) 98
traces
   buffer information for DRDA traces 105
   CLI 108
      analyzing 110, 111, 112, 113
   data between DB2 Connect and the server 98
   DRDA
      interpreting 98
      output file 98, 99
      output file samples 101
   overview 94
trap files 23
   formatting (Windows) 23
troubleshooting 1, 27
   CLI and ODBC applications 108, 109
   connect 57, 58
   Control Center traces 106
   creating
      database 39
   current release 35
   DB2 Connect 61
   db2cklog command 67
   description 35
   diagnostic data
      automatic collection 3
      collecting base set 47
      configuring collection 4
      directory path 13
      for DAS or instance management 48
      for data movement 47
      for installation 47
      manual collection 3
      splitting by database partition server, database partition, or both 14
   DRDA 101, 105
   gathering information 47, 52, 57, 75, 81, 90
   high-availability problems 36
   installation problems 36, 37
   introduction 1
   JDBC applications 106, 108
   known problems 37, 38, 59
   log files 67
   online information 132
   overview 1, 57
   problem recreation 76
   resources 118
   searching for solutions to problems 117
   storage keys 42
   tools 65
   trace facilities 94, 95
   tutorials 132
troubleshooting FCM problems 38
tutorials
   problem determination 132
   troubleshooting 132
   Visual Explain 132

U
UNIX
   listing DB2 products 79
updates
   DB2 Information Center 130
utilities
   db2drdat 98
   process status 99
   ps (process status) 61, 99
   trace 98

V
Visual Explain
   tutorial 132

Z
ZRC return codes
   description 65
Printed in USA
GI11-7857-03