MySQL To DB2 Redbook
MySQL to DB2
Conversion Guide
Complete guide to convert MySQL
database and application to DB2
Application conversion
with detailed examples
Whei-Jen Chen
Angela Carlson
ibm.com/redbooks
Draft Document for Review October 16, 2009 10:22 am 7093edno.fm
October 2009
SG24-7093-01
Note: Before using this information and the product it supports, read the information in
“Notices” on page ix.
This edition applies to DB2 9.7 for Linux, UNIX, and Windows, and MySQL 5.1.
Contents
Notices
Trademarks
Preface
The team who wrote this book
Become a published author
Comments welcome
Chapter 5. Installation
5.1 DB2 Express-C 9.7 on Linux
5.1.1 System requirements
5.1.2 Installation procedure
5.1.3 Instance creation
5.1.4 Client setup on Linux
5.2 Other software products
5.2.1 Apache2 installation with DB2 support
5.2.2 PHP installation with DB2 support
5.3 IBM Data Movement Tool installation and usage
5.3.1 IBM Data Movement Tool prerequisites
5.3.2 IBM Data Movement Tool installation
Index
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. These and other IBM trademarked
terms are marked on their first occurrence in this information with the appropriate symbol (® or ™),
indicating US registered or common law trademarks owned by IBM at the time this information was
published. Such trademarks may also be registered or common law trademarks in other countries. A current
list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AMD, the AMD Arrow logo, and combinations thereof, are trademarks of Advanced Micro Devices, Inc.
Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and
other countries.
SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and
other countries.
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation
and/or its affiliates.
Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S. and
other countries.
VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in
the United States and/or other jurisdictions.
EJB, Enterprise JavaBeans, J2EE, Java, Java runtime environment, JavaBeans, JavaServer, JDBC, JSP,
MySQL, Solaris, Sun, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the
United States, other countries, or both.
Access, ActiveX, Expression, Microsoft, MS, SQL Server, Visual Basic, Visual Studio, Windows Mobile,
Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Intel Pentium, Intel Xeon, Intel, Itanium-based, Itanium, Pentium, Intel logo, Intel Inside logo, and Intel
Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United
States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Preface
If you picked up this book, you are most likely considering converting to DB2 and
are probably aware of some of the advantages of converting to the DB2 data
server. In this IBM Redbooks® publication we discuss in detail how you can take
advantage of this industry-leading database server.
This book is an informative guide that describes how to convert the database
system from MySQL 5.1 to DB2 9.7 on Linux®, and the steps involved in
enabling the applications to use DB2 instead of MySQL.
This guide also presents the best practices in conversion strategy and planning,
conversion tools, porting steps, and practical conversion examples. It is intended
for technical staff involved in a MySQL to DB2 conversion project.
Boris Bialek
Program Director, Information Management Partner Technologies, IBM Canada
Irina Delidjakova
Information Management Emerging Partnerships and Technologies, IBM Canada
Vlad Barshai
Information Management Emerging Partnerships and Technologies, IBM Canada
Martin Schlegel
Information Management Partner Technologies, IBM Canada
Emma Jacob
International Technical Support Organization, San Jose Center
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you will develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
Summary of changes
This section describes the technical changes made in this edition of the book and
in previous editions. This edition may also include minor corrections and editorial
changes that are not identified.
Summary of Changes
for SG24-7093-01
for MySQL to DB2 Conversion Guide
as created or updated on October 16, 2009.
New information
Features and functions of DB2 for Linux, UNIX®, and Windows® Versions
9, 9.5, and 9.7
IBM Data Movement Tool
MySQL 5.1 features
Changed information
DB2 and MySQL features and functions
Conversion scenarios and examples
Executive summary
This IBM Redbook describes how to migrate MySQL 5.1 to DB2 Version 9.7 on Linux
and enable your applications on DB2. To further ease your migration, this informative
guide covers best practices in migration strategy and planning, as well as
step-by-step directions, tools, and practical conversion examples. After completing
this book, it will be clear to the technical reader that a MySQL to DB2 migration is
easy and straightforward.
DB2 Express-C offers the same high-quality, reliable, scalable features that you
would expect from an IBM enterprise database, at no charge. Fixed Term License
support is available as well, at a lower price than the competition. The decision to
migrate becomes simple when you consider that DB2 can be easily deployed in the
development stack while offering many additional features and ease of use.
Enterprise class features aimed to lower the total cost of ownership can be found in
every edition of DB2. DB2 has powerful autonomics which make installation,
configuration, maintenance and administration virtually hands free. DB2 9.7's
compression features help companies manage rising energy costs and reduce
datacenter sprawl by reducing storage requirements and improving I/O efficiency.
IBM is committed to providing products to our customers that are powerful and
affordable. DB2 provides industry leading features, such as pureXML, Workload
Management, and Granular Security. Using DB2 pureXML®, you can make XML data
processing even faster, more flexible, and more reliable. Manage workloads with the
new threshold, priority, and OS integration features in DB2 9.7. Keep data secure from
internal and external threats using the unparalleled security controls in DB2 9.7.
Start taking advantage of these exciting new features and help your business
manage costs and simplify application development. Migrate your database systems
and applications today and discover why DB2 9.7 is a smarter product for a smarter
planet.
Arvind Krishna
General Manager
IBM Information Management
1.1 Introduction
IBM has an extremely strong history of database innovation and has developed a
number of highly advanced data servers. It started in the 1960s, when IBM
developed the Information Management System (IMS™), a hierarchical
database management system. IMS was used to maintain inventory for the
Saturn V moon rocket and the Apollo space vehicle. In the 1970s IBM invented the
relational model and the Structured Query Language (SQL). In the 1980s IBM
introduced DB2 for the mainframe (DB2 for z/OS®), the first database
to use relational modeling and SQL. DB2 for distributed platforms (DB2 for
Linux, UNIX, and Windows) was introduced in the 1990s. Since then, IBM has
continued to develop DB2 for both mainframe and distributed platforms. Although
the relational data model has become more prevalent in the industry, IBM still
recognizes that the hierarchical data model is important. Therefore, in July 2006 IBM
launched the first hybrid (also known as multi-structured) data server.
The release of the DB2 for Linux, UNIX, and Windows Version 9 (DB2 9) data server
brought the most exciting and innovative database features to the market; these
features were further enhanced with the releases of DB2 9.5 and 9.7. DB2 9
introduced many important features for both database administrators and application
developers. These features included pureXML®, autonomics, table partitioning,
data compression, and label-based access control. DB2 9.5 enhanced the
manageability of the DB2 data server by introducing the threaded engine, easier
integration with HADR, workload management, enhancements to autonomics,
and more. With the release of DB2 9.7, the focus is on providing unparalleled
reliability and scalability for the changing needs of your business. Therefore, DB2 9.7
introduces enhancements to Version 9 and Version 9.5 features, such as
improvements to data compression, performance, workload management,
security, and application development.
When this book was written, DB2 9.7 had just been released, in June 2009. DB2
9.7 is the database version we use throughout the book. DB2 9.7 is a highly
scalable, easy to install and manage hybrid data server. DB2 was developed to
meet the demands of even the most critical database applications. DB2 provides
a highly adaptable database environment while optimizing data storage through
backups and deep data row compression. This is managed through various
autonomics capabilities, such as self-tuning memory management and automatic
storage. DB2 deep embedded capabilities allow for ubiquitous deployment in
user directories and administrative installations for any size server. In a single
database, DB2 provides native storage and processing of both transactional XML
data, in a pre-parsed tree format, and relational data, using pureXML technology.
DB2 Personal Edition (PE) provides a single-user database engine ideal for
deployment to PC-based users. PE includes remote management, the
pureXML feature, and the SQL replication feature, making it the
perfect choice for deployment in occasionally connected or remote office
implementations that do not require multi-user capability, such as point-of-sale
systems.
PE does not accept remote database requests; however, it contains DB2
client components and serves as a remote client to a DB2 server. DB2
Personal Edition can also be used for connecting to and managing other DB2
data servers on the network.
The Personal Edition includes most of the features of DB2 Express
Edition and runs on either 32-bit or 64-bit Intel® or AMD™ workstations with
either the Windows or Linux operating system.
DB2 Express-C
DB2 Express-C is the no-charge community version of the DB2 data server. It
is targeted towards developers and Independent Software Vendors (ISVs) to
allow the development and deployment of applications, including the free
distribution of DB2 Express-C itself. All applications developed with this
version of DB2 can be moved to a higher edition of DB2 for Linux, UNIX, and
Windows, and even to DB2 for z/OS, without any application changes if they use
the common SQL API set of the DB2 family.
This version of DB2 is free for download and is therefore perfectly suited for
DB2 educators and students. DB2 Express-C does not restrict the database
size and can be used in a 64-bit memory model. The code is optimized to use
up to a maximum of 2 CPU cores and 2 GB of memory. No fixpack updates
are available for this edition, however, new versions of Express-C are updated
and freely available for download at any time.
While this version does not include all the features of higher editions of DB2,
such as storage optimization, replication services, or high availability, it comes
with the award-winning pureXML technology, which lets both relational data
and natively stored XML data coexist in a single database. Some of these
features can be activated by purchasing the DB2 Express Fixed Term License
(FTL). Obtaining the FTL provides one year of 24x7 support plus the ability to
use high availability and disaster recovery.
At any point users of DB2 Express-C can receive advice on the IBM DB2
Express Forum, monitored by IBM DB2 developers, by accessing the
following link:
http://www.ibm.com/developerworks/forums/forum.jspa?forumID=805
DB2 Express-C runs on Windows or Linux for both Intel and AMD on 32-bit or
64-bit architecture as well as Linux on Power (IBM System p® and System
i®).
DB2 Workgroup
DB2 Workgroup Server Edition is used primarily in small to medium
business environments. It includes support for a per-authorized-user or
per-processor licensing model designed to provide an attractive price
point for smaller installations while still providing a fully functional data server
on a wider range of platforms compared to lower editions of DB2.
Workgroup Server now includes the High Availability Disaster Recovery
(HADR) feature, TSA MP and Online Table Reorganization. The following
feature packs are available:
– Query Optimization including Materialized Query Tables (cached tables)
– Multidimensional Clustering and Query Parallelism
– pureXML
This DB2 edition can be deployed on systems with up to 400 processor value
units and 16 GB of memory.
While the DB2 Express edition runs only on Windows, Linux, or Solaris, the
Workgroup Server edition adds support for AIX®, HP-UX on Itanium64, and
Solaris on x86 and Sparc.
DB2 Enterprise
DB2 Enterprise Server Edition meets the database server needs for any size
business. This product is the ideal foundation for building data warehouses,
transaction processing, or Web-based solutions, as well as a back-end for
packaged solutions like ERP, CRM, and SCM. In addition, the DB2 Enterprise
Server Edition offers connectivity and integration for other enterprise DB2 and
Informix data sources.
The Enterprise Server Edition does not pose limits to the maximum memory
or number of CPU cores. It can be licensed with either authorized user
licenses or processor value unit licenses.
In addition to features offered in the Workgroup Server Edition, the following
features are also available: Query Parallelism, Multidimensional Clustering,
Materialized Query Tables, Table Partitioning, and Connection Concentration.
With the DB2 feature pack the following can also be added: Performance
Optimization Feature, Advanced Access® Control Feature, Storage
Optimization Feature and Geodetic Data Management Feature.
DB2 Enterprise Server Edition runs on Windows (32 and 64-bit), Linux (Intel/
AMD 64-bit, System i, System p, System z®), AIX, Solaris (Sparc and x64)
and HP-UX (ia64).
InfoSphere™ Warehouse
InfoSphere Warehouse (formerly known as DB2 Warehouse) is a powerful
platform for building business intelligence (BI) solutions. InfoSphere
Warehouse comes as a single integrated software package, using DB2 to
manage the data. In this section we discuss the DB2 engine architecture and
the database objects used to maintain your database.
From a client-server perspective, the client code and the server code are
separated into different address spaces. The application code runs in the client
process, while the server code runs in a separate process. The client process
can run on either the same machine as the data server or a different one,
accessing the data server through a programming interface. The memory units
are allocated for database managers, databases, and applications.
Since DB2 runs with a threaded architecture, all threads within the engine
process share the same address space, meaning all threads can immediately
access the same shared memory.
Instances
A DB2 instance represents the database management system. It controls
how data is manipulated and manages system resources assigned to it. Each
instance is a complete, fairly independent environment containing all the
database partitions defined for a given parallel database system. An instance
can have its own set of databases (which other instances cannot access
directly), and all database partitions share the same system directories. Each
instance has separate security from other instances on the same machine
(system), allowing for situations where both production and development
environments are run on the same machine without interference. In order to
connect to a database, any database client must first establish a network
connection to the instance.
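The instance concept above can be illustrated with a few DB2 commands. This is a minimal sketch, assuming a Linux system with DB2 9.7 installed under /opt/ibm/db2/V9.7; the user names db2inst1 and db2fenc1 are examples only:

```
# Create an instance named db2inst1 with fenced user db2fenc1 (run as root)
/opt/ibm/db2/V9.7/instance/db2icrt -u db2fenc1 db2inst1

# List all instances defined on this machine
db2ilist

# As the instance owner, show the current instance and start it
db2 get instance
db2start
```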
Databases
A database is a structured collection of data, which is stored within tables.
Since DB2 9, data within tables, can be stored as both relational data and
XML documents natively in a pre-parsed tree format within a table column.
Each database includes a set of system catalog tables that describe the
logical and physical structure of the objects in the database, a configuration file
containing the parameter values configured for the database, and a recovery
log. Figure 1-4 on page 12 shows the relationship between instances,
databases, and tables.
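As a minimal sketch (the database name testdb is an example), a database is created and inspected from the DB2 command line as follows:

```
db2 CREATE DATABASE testdb
db2 CONNECT TO testdb
db2 GET DB CFG FOR testdb
```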
In an SMS (System Managed Space) table
space, each container is a directory in the file system of the operating system.
This type of table space allows the operating system's file manager to
control the storage space. In a DMS (Database Managed Space) table space,
each container is either a resizable file or a pre-allocated physical device, such
as a disk, which the database manager must control.
When using SMS or DMS in combination with container files, you can choose
how DB2 handles these files. For example, you can enable various
optimization features, if supported by the operating system, such as Direct I/O
(to bypass file system caching; always enabled with raw and block devices),
Vector I/O (reading contiguous data pages from disk into contiguous portions
of memory), and Async I/O (non-sequential processing of read and write
requests across multiple disks to avoid delays from synchronous events).
When using the Automatic Storage feature in DB2, you can simply specify
folders where the database can automatically create and manage DMS table
spaces. When more space is required, the database manager automatically
allocates it, and table spaces can be automatically resized. This feature
provides a convenient and worry-free operation scenario: table spaces can be
created without having to specify container files.
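The three table space types described above can be sketched in DDL as follows; the paths, names, and sizes are illustrative only:

```
-- SMS table space: each container is a file system directory
CREATE TABLESPACE sms_ts MANAGED BY SYSTEM
   USING ('/db2/cont1', '/db2/cont2');

-- DMS table space: each container is a file or device of a fixed size (in pages)
CREATE TABLESPACE dms_ts MANAGED BY DATABASE
   USING (FILE '/db2/dms1.dat' 10000);

-- Automatic storage table space: no containers specified; DB2 manages them
CREATE TABLESPACE auto_ts MANAGED BY AUTOMATIC STORAGE;
```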
Containers
A container is a physical storage device. It can be identified by a directory
name, a device name, or a file name. A container is assigned to a table
space. A single table space can span many containers, but each container
can belong to only one table space.
Buffer pools
A buffer pool is the amount of memory allocated to cache table and index
data pages. The purpose of the buffer pool is to improve system performance.
When using the Deep Data Row Compression feature, DB2 transparently
compresses and decompresses table rows (for each table with compression
turned on). This feature can save 45-80% of space on disk. Rows in a
compressed table remain compressed when prefetched into buffer pool
memory and stay in a compressed state until they are actually used. Although
decompressing the data when it is fetched adds a slight overhead, I/O-bound
workloads see a performance gain. This is due to the reduced amount of data
that actually needs to be read from and written to disk, as well as the memory
saved.
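As a sketch (the buffer pool and table names are examples), a buffer pool is created and deep row compression is enabled on a table as follows:

```
-- Create a buffer pool of 10000 pages with a 4 KB page size
CREATE BUFFERPOOL bp4k SIZE 10000 PAGESIZE 4K;

-- Turn on deep row compression for an existing table, then rebuild
-- the table so the compression dictionary is created
ALTER TABLE sales COMPRESS YES;
REORG TABLE sales RESETDICTIONARY;
```

Note that REORG is a command issued through the DB2 command line processor (db2 REORG TABLE ...), not an SQL statement.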
Views
A view provides a different way of looking at data from one or more tables; it is
a named specification of a result table. The specification is a SELECT
statement that runs whenever the view is referenced in a SQL statement. A
view has columns and rows just like a base table. All views can be used just
like base tables for data retrieval. Figure 1-8 on page 17 shows the
relationship between tables and views.
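For example (the table and column names are illustrative), a view is defined and queried as follows:

```
-- A view exposing a subset of the columns and rows of a base table
CREATE VIEW active_customers AS
   SELECT custno, name, city
   FROM customers
   WHERE status = 'ACTIVE';

-- The view is then queried exactly like a base table
SELECT name FROM active_customers WHERE city = 'Toronto';
```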
Indexes
An index is a set of keys, each pointing to rows in a table. For example, table
A has an index based on the first column in the table (Figure 1-9 on page 18).
This key value provides a pointer to the rows in the table: value 19 points to
record KMP. If searching for this particular record, a full table scan can be
avoided since we have an index defined. Except for changes in performance,
users of this table are unaware that an index is being used. DB2 decides
whether to use the index or not. DB2 also provides tools, such as the Design
Advisor, that can help decide what indexes would be beneficial.
An index allows efficient access when selecting a subset of rows in a table by
creating a direct path to the data through pointers. The DB2 SQL Optimizer
chooses the most efficient way to access data in tables. The optimizer takes
indexes into consideration when determining the fastest access path.
Indexes have both benefits and disadvantages. One should be careful when
defining indexes and take into consideration costs associated with update,
delete, and insert operations and maintenance such as reorganization and
recovery.
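As an illustration (the names are examples), an index is created and statistics are collected so the optimizer can consider it:

```
-- Create an index on the column used for lookups
CREATE INDEX ix_custno ON customers (custno);

-- Collect table and index statistics for the SQL optimizer
-- (RUNSTATS is issued through the DB2 command line processor)
RUNSTATS ON TABLE myschema.customers AND INDEXES ALL;
```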
The catalog is automatically created with the database. The base tables are
owned by the SYSIBM schema and stored in the SYSCATSPACE table space. On
top of the base tables, the SYSCAT and SYSSTAT views are created. SYSCAT
views are read-only views that contain the object information and are found in
the SYSCAT schema. SYSSTAT views are updatable views containing statistical
information and are found in the SYSSTAT schema. The complete DB2 catalog
views can be found in DB2 SQL Reference Volume 1 and 2, available for
download at the following link:
http://www.ibm.com/support/docview.wss?rs=71&uid=swg27015148
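For example, the catalog views can be queried like ordinary tables (the schema and table names below are examples):

```
-- List user tables and their table spaces from the read-only SYSCAT views
SELECT tabname, tbspace
FROM syscat.tables
WHERE tabschema = 'MYSCHEMA';

-- SYSSTAT views are updatable, for example to model a larger table
-- so the optimizer plans for future data volumes
UPDATE sysstat.tables SET card = 1000000
WHERE tabschema = 'MYSCHEMA' AND tabname = 'CUSTOMERS';
```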
View database manager settings: db2 get dbm cfg show detail
Change a database manager setting: db2 update dbm cfg using health_mon off
Note: The Control Center and its associated components have been
deprecated in Version 9.7 and might be removed in a future release. It is
recommended to use the new suite of GUI tools for managing DB2 data and
data-centric applications. These new tools include the IBM Data Studio, the
Optim Development Studio, and the Optim Database Administrator.
and
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0804zikopoulos
The IBM Data Server Runtime Client offers basic client functionality and
includes drivers for ODBC, CLI, ADO.NET, OLE DB, PHP, Ruby, Perl-DB2, JDBC,
and SQLJ. This client already includes the drivers and the capability to define
data sources. Furthermore, Lightweight Directory Access Protocol (LDAP)
support is available as well.
Additionally, the IBM Data Server Client provides a vast amount of sample code
in various languages, header files for application development, and graphical
administration and development tools, such as the DB2 Control Center, the IBM
Data Studio, the MS® Visual Studio® tools, and more.
Figure 1-10 on page 22 illustrates how to connect to a DB2 data server using the
IBM Data Server clients.
Built on the extensible Eclipse framework, this IDE includes a number of plug-ins
to support programming languages such as Java, C/C++, PHP, Ruby, Perl, and
so on. Other plug-ins are available to maintain the written application sources in
various source code repositories, for example the Concurrent Versions System
(CVS) or IBM Rational® ClearCase®, from within IBM Optim Data Studio.
If running mixed versions of DB2 servers and clients, it is good to know that DB2
clients from DB2 UDB Version 8, DB2 9.1, or 9.5 for Linux, UNIX, and Windows
are still supported and able to connect to a DB2 9.7 data server. In the reverse
direction, the newer IBM Data Server clients from Version 9.7 can also connect to
the earlier DB2 9.1 and DB2 UDB Version 8 servers using the IBM Data Server
Driver for ODBC, CLI, and .NET. In this case, however, new DB2 Version 9.7
functionality is not available.
As of DB2 Version 9.5, both clients and drivers are decoupled from the server
release schedule and can be downloaded separately. The IBM Data Server
Driver for JDBC and SQLJ is already included in the IBM Data Server Runtime
Client. It provides support for JDBC 3 and 4 compliant applications, as well as for
Java applications using static SQL (SQLJ). Support is also provided for
pureXML, SQL/XML, and XQuery. All this and other features, such as connection
concentration and automatic client reroute, are provided within a single
package called db2jcc4.jar. The IBM Data Server Driver for ODBC, CLI, and
.NET is a lightweight deployment solution for Windows applications that provides
runtime support without the need to install the Data Server Client or the Data
Server Runtime Client. On Windows, the driver comes as an installable image
including merge modules to easily embed it in a Windows Installer-based
installation. On Linux and UNIX there is another easy deployment solution,
called the IBM Data Server Driver for ODBC and CLI, which is available in tar
format.
Communication protocols
The primary DB2 communication protocols are:
– TCP/IP (IPv4 and IPv6) and Named Pipes (Windows only) for remote
connections
– Interprocess Communication (IPC) for local connections within a DB2
instance
For client-server communication, DB2 supports TCP/IP and Named Pipes for
remote or local loopback connections and uses IPC for client connections that
are local to the DB2 server instance. Local and remote DB2 connections are
illustrated in Figure 1-11.
From the command line this information can be then updated in the database
manager with the following DB2 command:
db2 UPDATE DBM CFG USING SVCENAME db2icdb2
These tasks can also be performed using the DB2 Configuration Assistant utility.
At the client side the database information is configured using either the CATALOG
command or using the Configuration Assistant. The databases are configured
under a node, which describes host information such as protocol use, port
The service name registered in the server or the port number can be specified in
the SERVER option. To catalog a database under this node, the command used
is:
db2 CATALOG DATABASE database-name AS alias-name AT NODE node-name
When using the Configuration Assistant GUI tool to add a database connection,
a database discovery can be started to find the desired database.
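As a command line sketch, cataloging a node and then a database on the client might look like this (the host name db2host, port 50000, and the database, alias, and user names are placeholders):

```shell
# Catalog the remote server as a TCP/IP node
db2 CATALOG TCPIP NODE mynode REMOTE db2host SERVER 50000

# Catalog the remote database under that node
db2 CATALOG DATABASE sample AS mysample AT NODE mynode

# Verify the new connection
db2 CONNECT TO mysample USER db2inst1
```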
Note: The DB2 Discovery method is enabled at the instance level using the
DISCOVER_INST parameter, and at the database level using the
DISCOVER_DB parameter.
IBM recognizes that in many cases there may be a need for accessing data
from a variety of distributed data sources rather than one centralized
database. The data sources can be from IBM, such as DB2 or Informix, or
non-IBM databases, such as Oracle®, or even non-relational data, such as
files or spreadsheets. As illustrated in the last scenario in Table 1-6 on
page 26, IBM offers the most comprehensive business integration solution by
allowing federated access to a variety of distributed data sources.
embedded SQL statements into DB2 run-time API calls that a host compiler
can process to create a bind file. The bind command creates a package in the
database. This package then contains the SQL operation and the access plan
that DB2 will use to perform the operations.
Dynamic SQL
Dynamic SQL statements in an application are built and executed at runtime.
For a dynamically prepared SQL statement, the syntax has to be checked and
an access plan has to be generated during the program execution.
Examples of embedded static and dynamic SQL can be found in the DB2
home directory: sqllib/samples/.
When using DB2 CLI, the application passes dynamic SQL statements as
function arguments to the database manager for processing. Because of this,
applications use common access packages provided with DB2, so DB2 CLI
applications do not need to be precompiled or bound; only compiling and linking
of the application is needed. Before DB2 CLI or ODBC applications can access
DB2 databases, the DB2 CLI bind files, which come with the IBM Data Server
Client, must be bound to each DB2 database that will be accessed. This occurs
automatically with the execution of the first statement.
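If the automatic bind cannot complete (for example, because the connecting user lacks the required privileges), the CLI bind files can also be bound manually. A sketch, assuming a database named sample and that the command is run from the directory containing the bind files:

```shell
# Connect to the target database
db2 CONNECT TO sample

# Bind the list of CLI bind files shipped with the client
db2 BIND @db2cli.lst BLOCKING ALL GRANT PUBLIC

# Close the connection
db2 CONNECT RESET
```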
DB2 Data Server 9 offers different ways of creating Java applications, either
using a type 2 or type 4 JDBC driver:
Type 2 driver:
With a type 2 driver, calls to the JDBC application driver are translated to Java
native methods. The Java applications that use this driver must run on a DB2
client, through which JDBC requests flow to the DB2 server.
Tip: If prototyping CLI calls before placing them in a program, use the
db2cli.exe (Windows) or db2cli (Linux) file in the sqllib/samples/cli
directory.
Type 4 driver:
The JDBC type 4 driver can be used to create both Java applications and
applets. To run an applet that is based on the type 4 driver, a Java enabled
browser is required, which downloads the applet and the JDBC driver
(db2jcc4.jar). To run a DB2 application with a type 4 driver, only an entry for
the JDBC driver in the class path is required; no DB2 client is needed.
The JDBC driver is included in the IBM Data Server Driver for JDBC
and SQLJ and is architected as an abstract JDBC processor that is
ADO.NET
DB2 supports the Microsoft ADO.NET programming interface through a native
managed provider. Such applications can use the DB2 .NET, OLE DB .NET, or
ODBC .NET Data Provider. High-performing Windows Forms, Web Forms, and
Web Services can be developed using the ADO.NET API. DB2 supports a
collection of features that integrate seamlessly into Visual Studio 2003, 2005 and
2008 to make it easier to work with DB2 servers and develop DB2 procedures,
functions and objects.
The IBM Data Server Provider for .Net extends data server support for the
ADO.NET interface and delivers high-performing, secure access to IBM data
servers:
DB2 Version 9 (or later) for Linux, UNIX, and Windows
DB2 Universal Database™ Version 8 for Windows, UNIX, and Linux
DB2 for z/OS and OS/390® Version 6 (or later), through DB2 Connect
DB2 for i5/OS Version 5 (or later), through DB2 Connect
DB2 Universal Database Version 7.3 (or later) for VSE and VM, through DB2
Connect
IBM Informix Dynamic Server, Version 11.10 or later
IBM UniData®, Version 7.1.11 or later
IBM UniVerse, Version 10.2 or later
When used in conjunction with stored procedures and the federated database
capabilities of DB2 data servers and DB2 Connect servers, this data access can
be extended to include a wide variety of other data sources, including non-DB2
mainframe data, Informix Dynamic Server (IDS), Microsoft SQL Server®, Sybase
and Oracle databases as well as any data source that has an OLE DB Provider
available.
For more information about developing ADO.NET and OLE DB refer to DB2 for
Linux, UNIX, and Windows Developing ADO.NET and OLE DB Applications,
SC23-5851-01 available at:
http://www.ibm.com/support/docview.wss?uid=pub1sc23585101
Perl DBI
DB2 supports the Perl Database Interface (DBI) specification for data access
through the DBD::DB2 driver. The Perl DBI module uses an interface that is
similar to the CLI and JDBC interfaces, which makes it easy to port Perl
prototypes to CLI and JDBC. As of DB2 9.5 the Perl DBI comes with support for
DB2 pureXML technology. This allows you to insert XML documents without the
need to parse or validate XML. The Perl driver also supports multi-byte character
sets, which means your application does not have to deal with the conversion
itself when interacting with the database.
PHP
The “PHP: Hypertext Preprocessor” is a modular and interpreted programming
language intended for the development of Web applications. Its functionality can
be customized through the use of extensions. DB2 supports PHP through an
extension called pdo_ibm, which allows DB2 access through the standard PHP
Data Objects (PDO) interface. In addition, the ibm_db2 extension offers a
procedural API that, beyond the usual create, read, update, and delete
database operations, also offers extensive access to the database metadata.
The most up to date versions of ibm_db2 and pdo_ibm are available from the
PHP Extension Community Library (PECL):
http://pecl.php.net/
For DB2 data servers on Linux, UNIX, and Windows, pre-compiled and
easy-to-install packages called “Zend Core for IBM” are available for download.
Zend Core for IBM features tight integration with the DB2 and Informix Dynamic
Server drivers and can be used in a development or production system. More
information about Zend Core for IBM can be found at:
http://www.ibm.com/software/data/info/zendcore/?S_TACT=appdmain&S_CMP=ibm_im
Ruby on Rails
DB2 9.5 and later provide drivers for Ruby, an object-oriented programming
language, and for Rails, the open source Web application framework built on
Ruby.
For more information about IBM Ruby projects and the RubyForge open source
community, refer to the following website:
http://rubyforge.org/projects/rubyibm/
DB2 offers a no-charge community edition (DB2 Express-C) of the DB2 data
server. This edition of DB2 is completely free to develop, deploy, and to
distribute. DB2 Express-C is a full-function relational and XML data server and
has the same reliability, flexibility, and power of the higher editions of DB2. DB2
also offers the DB2 Express + Fixed Term License option, which is comparable
in price to MySQL Enterprise Gold. For more details on each of the DB2
editions available, refer to 1.2.1, “DB2 Data Server Editions for the production
environment” on page 3.
MySQL was initially developed for UNIX and Linux applications. It became
popular when Internet Service Providers (ISP) discovered that MySQL could be
offered free of charge to their Internet customers providing all of the storage and
retrieval functionality that a dynamic Web application would need. It was also
advantageous since ISPs primarily use Linux or UNIX as their base operating
system, in combination with APACHE as their favorite Web server environment.
Today, MySQL is also used as an integrated or embedded database for various
applications running on almost all platform architectures.
always connects using TCP/IP. If the client is running on the local server any of
the supported connection protocols can be used.
Figure 2-1 illustrates the conceptual architecture of the MySQL database. The
next several sections cover the functionality of the integrated components in
more detail.
The client layer is the front-end component with which users interact. Three
types of users interact with the server through this component: query users,
administrators, and applications.
Query users
Query users can interact with the database server through a query interface
called mysql, which allows users to issue SQL statements and view the results
returned from the server using the command line. There is also a graphical tool
called MySQL Query Browser that provides a graphical interface to create and
execute database queries.
In DB2, you can use the command line processor for the same functionality.
Use the db2 or db2cmd command to start the command line processor.
Administrators
Administrators use the administrative interface and utilities, such as mysqladmin
and MySQL Administrator. These tools can be used for creating or dropping
databases and users, as well as for managing the MySQL server. They connect
to the database server using the native C client library. There are also utilities
that serve administrative purposes but do not connect to the MySQL server;
instead, they work directly with the database files. These tools are myisamchk,
for table analysis, optimization, and crash recovery, and myisampack, for
creating read-only compressed versions of MyISAM tables.
DB2 offers a rich set of database management GUI tools, such as the DB2
Control Center, the Optim Database Administrator, and IBM Optim Data
Studio. These tools simplify database administration by providing a single tool
to manage your entire database environment. You can also use these tools to
query the database. The GUI tools are discussed in detail in 9.8,
“Database management tools” on page 312.
Applications
Applications communicate with the database server through MySQL APIs
available for various programming languages such as C++, PHP, Java, Perl,
.NET, and so on. We discuss this in more detail later in the chapter.
Connection pool
The connection pool assigns user connections to the database and
corresponding memory caches. The utilities and programs that are included with
MySQL connect using the Native C API. Other applications can connect using
the standard drivers MySQL offers, such as C++, PHP, Java, Perl, .NET, and so
on. MySQL supports TCP/IP, UNIX socket file, named pipe and shared memory
networking protocols depending on the type of operating system used. For more
details on application programming interfaces, see 2.5, “MySQL application
programming interfaces (API)” on page 51.
SQL interface
The SQL interface accepts and conveys the SQL statements from the
connecting user or MySQL application. This layer is independent of the storage
engine layer, and therefore SQL statement support does not depend on the
type of storage engine being used. The SQL statement is then passed to the
SQL parser for further processing.
SQL parser
The parser analyzes the SQL statement and validates the query syntax. It
breaks up the statement, creates a parse tree structure, and prepares the
statement for the optimizer.
SQL optimizer
The SQL optimizer verifies that the tables exist and that the records can be
accessed by the requesting user. After security checking, the query is analyzed
and optimized to improve the performance of query processing.
Physical resource
This is the bottom layer of the MySQL architecture, and represents the
secondary storage or physical disk. This layer is accessed through the storage
engines to store or retrieve data. The actual data stored in this layer is:
Data files (user data)
Data dictionary (metadata)
Indexes
Log information
Statistical data
In the next section, we cover how the database objects and data are physically
stored on the server.
In the example in Figure 2-2 on page 42 there are two databases on this MySQL
server. The first database is the mysql database, which by default holds the
security information. The second database is the sample database inventory,
which is discussed in more detail in Chapter 4, “Conversion scenario” on
page 75.
The following files are created for each database directory in the MySQL home
directory:
Files with the frm extension contain the structure definition of tables and
views, known as the schema.
Files with the MYD extension contain the table data.
Files with the MYI extension contain the table indexes.
If there are triggers, there are also files with the TRN and TRG
extensions.
Example 2-1 shows the files created for each table in our sample database.
Log files, by default, are created in the MySQL home directory. The security
data tables in the mysql database are in the directory
/<mysql home directory>/mysql.
Table 2-2 lists the files of the security data tables.
By default DB2 uses a better approach for the logical and physical distribution of
database objects. Different from MySQL, DB2 stores all database objects in
table spaces. The benefits of table spaces are increased performance and
simplified management. To take advantage of this advanced database
distribution, refer to 6.2.1, “Database manipulation” on page 123, where we
discuss in detail the conversion of a MySQL database structure to DB2.
table definitions in a file with the .frm extension upon creation. MySQL storage
engines can be split into two different categories:
Non-transaction-safe
Transaction-safe
Transaction-safe storage engines have commit and rollback capabilities and can
be recovered from a crash. Therefore these storage engines guarantee the
Atomicity, Consistency, Isolation, and Durability (ACID properties) of a database.
Non-transaction-safe storage engines are faster and require less memory and
disk space. However, they do not guarantee that the database is left in a
consistent state, as they do not support the ACID properties.
The following lists the storage engines in each category; each engine is
explained in this section.
Non-transaction-safe tables can be managed by the following storage
engines:
– MyISAM
– Memory
– Merge
– Archive
– CSV
– Federated
– Blackhole
Transaction-safe tables can be managed by the following storage engines:
– InnoDB
MySQL supports transactions with the InnoDB and NDB transactional storage
engines. Though both provide full ACID compliance, the performance and
throughput may be a concern and it is often a reason to convert to DB2.
In DB2, all tables support transactions. Therefore, tables that are managed by
the MySQL InnoDB, MyISAM, ARCHIVE, and CSV storage engines can all be
converted to DB2 regular tables. Details on converting MySQL tables to DB2
tables are discussed in 6.2.2, “Table manipulation” on page 128.
InnoDB was acquired by Oracle in 2005, and this resulted in a development
effort by MySQL AB to build its own transaction-safe storage engine, which was
still under development at the time this book was written.
Transactions
The MySQL default storage engine MyISAM does not support transactions.
For this storage engine the developers of MySQL followed the “atomic
operations” data integrity model. Auto-commit is always enabled by default for
MyISAM, therefore every time a statement is executed the changes are
committed to the database, as shown in Example 2-2.
MySQL supports transactions with the InnoDB and NDB transactional storage
engines. Both engines provide full ACID compliance. In contrast, all tables in
DB2 support transactions and provide full ACID compliance.
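As an illustration of DB2 transaction behavior from the command line processor, the +c option disables auto-commit so a change can be rolled back or committed explicitly (the table and column names here are hypothetical):

```shell
# Run an update without auto-commit (+c keeps the transaction open)
db2 +c "UPDATE employee SET salary = salary * 1.10 WHERE workdept = 'A00'"

# Either undo the change ...
db2 ROLLBACK

# ... or make it permanent
db2 COMMIT
```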
Referential integrity
Referential integrity is the state in which all values of all foreign keys are valid.
The relationship between some rows of the DEPT and EMP tables, shown in
Figure 2-3 on page 48, illustrates referential integrity concepts and
terminology. For example, referential integrity ensures that every foreign key
value in the DEPT column of the EMP table matches a primary key value in
the DEPTNO column of the DEPT table.
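In DB2 DDL, the DEPT/EMP relationship described above is declared with a foreign key clause; a minimal sketch (the column types are illustrative, not taken from the sample schema):

```shell
# DEPT is the parent table; DEPTNO is its primary key
db2 "CREATE TABLE dept (deptno CHAR(3) NOT NULL PRIMARY KEY,
                        deptname VARCHAR(36))"

# Every non-null DEPT value in EMP must match a DEPTNO value in DEPT
db2 "CREATE TABLE emp (empno CHAR(6) NOT NULL PRIMARY KEY,
                       lastname VARCHAR(15),
                       dept CHAR(3) REFERENCES dept (deptno))"
```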
DB2 has a full set of utilities available to work with and manage the database
environment. DB2 utilities are described in 1.4, “DB2 utilities” on page 18.
Setup programs
The remaining programs are used to set up operations during the installation
or upgrade of the MySQL server:
mysql_install_db
mysql_fix_privilege_tables
make_binary_distribution
mysqlbug
comp_err
make_win_bin_dist
mysql_secure_installation
mysql_tzinfo_to_sql
mysql_upgrade
The first approach is to connect the Java application using JDBC and the
Connector/J, which is officially supported by MySQL. This connector is written in
Java and does not use the C client library to implement the client/server
communication protocol.
The third approach is to connect using the Connector/ODBC for applications that
use ODBC standards. This connector uses the embedded C client libraries to
implement the client/server communication protocol. This approach is officially
supported by MySQL.
The fourth approach is to use the third-party APIs provided by the programming
languages like PHP, Perl, or Python. These APIs will use the embedded C client
libraries to implement the client/server communication protocol. The third-party
APIs are not officially supported by MySQL. The following lists some of the APIs
available for MySQL:
C API
The API to connect from C programs to a MySQL database. More details can
be found at:
http://dev.mysql.com/doc/refman/5.1/en/c.html
C++ API
The API to connect to a MySQL database from C++. Information can be
found at:
http://forge.mysql.com/wiki/Connector_C%2B%2B_Binary_Builds
PHP API
PHP contains support for accessing several databases including MySQL.
Information about MySQL access can be found in the PHP documentation,
which can be downloaded at:
http://www.php.net/download-docs.php
PERL API
The Perl API consists of a generic Perl interface and a special database
driver. The generic interface in Perl is called Database Interface (DBI) and for
MySQL the driver is called DBD::mysql. Information about the DBI can be
found at:
http://dbi.perl.org/
Python API
The API to connect to MySQL for Python is called MySQLdb, and can be
found at:
http://sourceforge.net/projects/mysql-python/
Ruby API
The API for accessing MySQL servers from Ruby programs. It can be found
at:
http://tmtm.org/en/mysql/ruby/
DB2 supports the most frequently used MySQL programming languages. These
languages include PHP, Java, Perl, Python, Ruby, C#, C/C++, and Visual
Basic®. With proper planning and knowledge these applications can be
converted to DB2 with minimal effort. In Chapter 8, “Application conversion” on
page 207 we discuss and provide examples for converting applications from
MySQL to DB2.
We also provide information about how IBM conversion specialists can support
you in your conversion project in any of the steps involved and the available
conversion tools such as the IBM Data Movement Tool.
Based on the application profile, you can plan software and hardware needs for
the target system. The planning stage is also a good time to consider many of the
rich functions and features of the DB2 product family, which may increase
productivity and reduce maintenance costs. While the most common reason for
a one-to-one conversion is the use of advanced features that MySQL does not
provide, significantly reduced transaction runtimes are also a benefit. The
advanced DB2 optimizer has shown improvements of approximately 20x for
real production workloads in various projects.
You also need to know which skills and resources are required and available
for the conversion project. IBM provides a variety of DB2 courses to help
IBM customers learn DB2 quickly. For more information go to:
http://www.ibm.com/developerworks/
http://www.ibm.com/industries/education/index.jsp
and
http://www.ibm.com/developerworks/data/bootcamps
A conversion assessment provides you with the overall picture of the conversion
tasks and efforts needed. From a conversion assessment a conversion project
plan can be created to manage each conversion step.
To execute typical conversion steps, various tools are available to help you save
time in your conversion project. IBM offers the free conversion tool, the IBM
Data Movement Tool, for converting from various relational database systems to
DB2.
Application porting
Basic administration
Testing and tuning
User education
Experienced IBM specialists can support you during any phase of the conversion
project with special conversion offerings provided by IBM Worldwide.
DB2 offers industry-leading scalability, with transaction processing and
business intelligence scaling up to 1000 nodes. For more information about
scalability, see:
http://www.ibm.com
Integrated support for native environments
DB2 conforms to many standards, including broad operating system support. It
maps closely onto operating system resources for performance and scalability,
making it reliable and tightly integrated.
Deep compression
Businesses with large volumes of data know how expensive storage can be.
DB2 can dramatically reduce this cost with industry-leading data compression
technologies that compress rows, indexes, temporary tables, LOBs, XML, and
backup data, with compression rates that can reach over 80%. This allows
DB2 to keep more data in memory, avoiding performance-robbing disk I/O,
which in turn increases database performance considerably. For
more details on deep compression see:
http://www.ibm.com/software/data/db2/compression/
Security
Unauthorized data access is an ever present threat that can cost businesses
considerable sums of money, their reputation, or both. DB2 offers a
comprehensive suite of security features that effectively and decisively
minimize this threat. DB2 provides additional peace of mind with lightweight
security audit mechanisms that can be used to detect any unauthorized data
access.
http://www.ibm.com/software/data/db2/9/editions_features_advaccess.html
pureXML
DB2 pureXML revolutionizes the management of XML data with breakthrough
native XML support provided only by DB2. It eliminates much of the work
typically involved in the management of XML data and it serves data at
unmatched speeds. If you work with XML data, then you need to know about
DB2 pureXML. Now DB2 9.7 for Linux, UNIX, and Windows opens new
opportunities to efficiently analyze XML data in data warehouses.
http://www.ibm.com/software/data/db2/xml/
Integrated system management tools
DB2 has a number of tools for managing the database system. IBM Optim
Data Studio is a rich and extensible solution to help you develop DB2
applications and manage databases. The Health Monitor and the Health
Center help to easily capture and monitor the overall health of the database.
The Replication Center is a tool used to set up and administer a replication
assist your organization regarding any phase of the conversion process. These
phases include:
Assessment of the database and application conversion efforts
Project planning
System planning
Database design
Porting preparation and DB2 installation
Database structure and data conversion
Application conversion
Basic Administration
Testing and tuning of DB2 data server
There is no cost to join PartnerWorld. You can find more information and
register by visiting the following link:
http://www.ibm.com/partnerworld
If you are a customer and have a conversion project in mind, use one of the
following contacts according to your geography:
In North America and Latin America, contact: db2mig@us.ibm.com
In UK, Europe, Middle East and Africa, contact: emeadbct@uk.ibm.com
In Japan, India and Asia Pacific, contact: dungi@hkl.ibm.com
More information about the DB2 conversion team can be found at the Software
Migration Project Office (SMPO) Web site:
http://www.ibm.com/software/solutions/softwaremigration/dbmigteam.html
You can find the most up to date details about current offerings, success stories,
literature, and other information on the DB2 Migrate Now! Web site:
http://www.ibm.com/software/data/db2/migration/
3.1.3 Education
DB2 provides an easy-to-use, feature-rich environment. Therefore, it is important
that those individuals involved in the conversion process be appropriately trained
to take full advantage of its offerings.
For further information regarding DB2 training, visit the DB2 Web site at:
http://www.ibm.com/software/data/education/
This DB2 for Linux, UNIX, and Windows conversion Web site can help you find
the information you need to port an application and its data from other database
management systems to DB2. The porting and conversion steps, which are
described in this chapter, appear in the order that they are commonly performed.
In addition to the technical information on this site, IBM customers and IBM
Business partners should check out the Information for IBM customers and
Information for IBM partners links:
http://www.ibm.com/developerworks/db2/zones/porting/partners.html
and
http://www.ibm.com/developerworks/db2/zones/porting/customers.html.
Here you will find additional links and information regarding assistance or
available resources for your port.
http://www.ibm.com/developerworks/db2/zones/porting/index.html
You need to understand how your application works and what resources are
needed. There are probably a lot of characteristics within your application that
will influence system planning and the scope of the conversion effort.
– CPU
– Memory
– Hard disk
With DB2 support for different hardware platforms and multiple operating
systems, such as Linux, Windows, and AIX, platform limitations should not be
an issue. You can select which system the converted application should run on,
based on the application nature and future enhancement requirements.
As shown in Figure 3-1 on page 63, the target system can be the same system
as the source system, or a different one with a different operating system and
hardware. You might even want to make your database server a separate
machine from the machine your application runs on (creating a two-tier
architecture).
If you decide to use a new machine for the conversion project, you need to plan
what kind of hardware you want to use and which operating system you want to
install on it.
In either case you should check if the hardware of your target system meets the
minimum requirements, paying particular attention to the following:
Operating system
DB2
Application
Data
Conversion tools (if used)
3.3.1 Software
You must determine which software must be installed on your target system.
This can include the following:
Operating system (Linux, AIX, UNIX, Windows, others)
DB2 version
Application to be converted
Conversion tools (if used and installed on target system)
Any software that you have on your source system, which is required by your
application to run properly. This can include, but is not limited to:
– HTTP-server
– Web application server
– Development environment
– Additional software (like LDAP or others)
Be sure to have the latest versions and fix packs of the planned products. Ensure
the chosen software is supported on the chosen operating system.
3.3.2 Hardware
When starting the conversion process it is important to have a target platform
that meets the minimum requirements of all the software that will be installed on
it. Check the supported hardware platforms depending on the chosen software.
Your application also requires hardware resources. Be sure to have enough disk
space for your application and transformed data.
When deciding to use a tool, be sure that it fulfills the requirements appropriate
for your platform.
The tool can be used to extract the data from the source database into flat files,
generate scripts to create the database objects, and import the data using the
DB2 LOAD utility. At the time this book was written, the Data Movement Tool
supported the following database objects for a MySQL to DB2 conversion:
Tables, constraints, indexes, primary keys, and foreign keys (with InnoDB)
At the time this book was written, the Data Movement Tool did not support the
following database objects for a MySQL database conversion:
Views, procedures, functions, triggers, packages
The IBM Data Movement tool is available in both GUI and command line form.
Graphical user interface (GUI)
The GUI offers the IBM Data Movement Tool conversion functionality through a
Java interface. It provides an easy-to-use interface for beginners.
Command line
The command line interface offers a way to operate the IBM Data Movement
Tool from the command line. It is intended for experienced users who want to
run end-to-end conversions without user interaction.
For our conversion scenario we use the new IBM Data Movement Tool to convert
database objects and data from MySQL to DB2. If you want to download the IBM
Data Movement Tool or receive more information about it, refer to:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/
index.html
For more information on installation of DB2 and the IBM Data Movement Tool,
refer to Chapter 5, “Installation” on page 87.
Tools such as IBM Data Movement Tool can do most of this task automatically.
Regardless of the tool used, you should verify the output results. This step can
also be performed manually, however, you need to make sure to cover all
database objects.
If you plan to change the logical model of your database structure to enhance
your application and take advantage of DB2 functions and features, the DDL
should be modified in this step.
These actions are supported by tools such as IBM Data Movement Tool. Make
sure to check that all database objects like tables, keys, indexes and functions
are created successfully.
In this step, you create scripts to create the required users and grant them
access privileges to the DB2 database and objects based on the MySQL user
data.
The scripts with the DB2 commands for creating users and granting privileges
should be run.
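A minimal sketch of such a script, assuming a database named sample, a table inventory.products, and an operating system user appuser (all placeholder names; DB2 authenticates against users defined at the operating system level, so the user must already exist there):

```shell
db2 CONNECT TO sample

# Allow the user to connect to the database
db2 GRANT CONNECT ON DATABASE TO USER appuser

# Grant table-level privileges as required by the application
db2 "GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE inventory.products TO USER appuser"
```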
All of the steps described in this section are performed on the MySQL sample
database and explained in great detail in Chapter 7, “Data conversion” on
page 167.
The extent to which you have to change the application code depends on the
database interface used in the source application. When a database access
layer is used, the adaptation is not that complicated; otherwise, the effort to
enable the application will likely be higher.
You might find features that are used in MySQL but not natively supported in
DB2, so a concept must be established to allow the application to behave in the
same way as before the conversion.
Database interface
Regardless of the interface used between the application and the database,
the database access must be changed, because the database server has
changed.
If standardized interfaces such as ODBC or JDBC are used, the changes will be
less significant than if the application uses the native API of the database
product.
Condition handling
Depending on the implementation of your application, there might be some
changes in the condition handling part of the application.
Additional considerations
DB2 offers rich, robust functions, which you can take advantage of in your
applications. Some of these features that you may want to consider using in your
application, which are different in MySQL are:
Concurrency
Locking
Isolation level transactions
Logging
National language support
XML support
The steps listed in this section are performed with various sample application
code and explained in great detail in Chapter 8, “Application conversion” on
page 207.
Every database has its own way for backup and recovery, since these are
common and vital tasks in database administration. The database should be
backed up regularly, and the data retention period should be defined based on
the business requirements.
If you have backup and recovery tasks defined on the source system, you
probably want to convert these tasks as well. Be sure to port any existing scripts
for backup tasks to support DB2.
Both the database backup and recovery functions should be tested to ensure a
safe environment for your application.
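As a sketch of what a ported backup task might look like, the following DB2 commands back up and restore a database. The database name invent and the directory /db2backup are examples for illustration, and the commands require a running DB2 instance:

```shell
# Take an offline backup of the invent database to /db2backup
# (database name and path are examples only).
db2 backup database invent to /db2backup

# Restore the database from the most recent backup image in /db2backup.
# A specific image can be chosen with the TAKEN AT <timestamp> clause.
db2 restore database invent from /db2backup
```

In a converted environment, an existing cron job that called mysqldump would typically be replaced by a script invoking db2 backup on the same schedule.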
Log files
DB2 logs differently than MySQL, so database administrators should be aware of
the logging level that can be set, where log information is stored, and how to read
these logs.
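For example, the logging-related database configuration parameters and the location of the DB2 diagnostic log (db2diag.log) can be inspected with the following commands. The database name invent is an example, and a running DB2 instance is assumed:

```shell
# Show the log-related database configuration parameters
# (log file size, number of primary/secondary logs, log path).
db2 get db cfg for invent | grep -i log

# Show where DB2 writes its diagnostic log, db2diag.log.
db2 get dbm cfg | grep -i diagpath
```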
If testing succeeds, you can then proceed to tune your database and application
in order to speed up your application.
Data checking
Aside from basic data checks that should be performed when exporting and
importing data, verifying that your application handles your data correctly and
manipulates the expected fields on inserts, updates, and deletes is a vital part.
This checking can be done manually, or a script can be used to check the data.
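A minimal sketch of such a script is shown below. It compares the row counts of two delimited export files; the file names and contents here are purely illustrative stand-ins for a MySQL export and the corresponding DB2 export of the same table:

```shell
# Create two example export files standing in for the MySQL and DB2
# exports of the same table (in a real check these files already exist).
printf '1,widget\n2,gadget\n3,gizmo\n' > mysql_inventory.del
printf '1,widget\n2,gadget\n3,gizmo\n' > db2_inventory.del

# Compare the row counts of both exports.
mysql_rows=$(wc -l < mysql_inventory.del)
db2_rows=$(wc -l < db2_inventory.del)

if [ "$mysql_rows" -eq "$db2_rows" ]; then
  echo "inventory: row counts match ($mysql_rows rows)"
else
  echo "inventory: row count mismatch (MySQL $mysql_rows, DB2 $db2_rows)"
fi
```

The same pattern extends naturally to per-column checksums or a diff of sorted exports when an exact comparison is needed.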
Troubleshooting
Whenever the conversion leads to a problem, such as wrong data or incorrect
application behavior, you have to determine what the problem is in order to fix it.
You should understand error messages from the application as well as DB2 error
messages. The troubleshooting process includes studying the DB2 log files.
See the DB2 technical support Web site for help with specific problems:
http://www.ibm.com/software/data/support/
Basic tuning
Once your new system is working perfectly, you might want to tune it for even
better performance. With the correct database configurations, and hints from
DB2 tuning tools, you can speed up your queries quite easily.
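For example, refreshing table statistics and asking the DB2 Design Advisor for index recommendations are two common first steps. The database name invent, the schema db2inst1, and the query text below are assumptions for illustration, and a running DB2 instance is required:

```shell
# Refresh the optimizer statistics for a table so the DB2 optimizer
# has current information to plan queries with.
db2 connect to invent
db2 "runstats on table db2inst1.inventory with distribution and detailed indexes all"

# Ask the Design Advisor to recommend indexes for a given workload query.
db2advis -d invent -s "select * from db2inst1.inventory where location = 'LAB'"
```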
When a user enters the Web site, the login page (Figure 4-2) provides three
functions:
User Login: To log in as an already registered user
New Users: To create a new user account
Reset Password: To reset the password to a new default password
With User Login, a registered user uses their user ID and password to log in. The
application verifies the username, password and user permissions against the
registered users in the database. The lowest level of permissions allows the user
to view, edit, and create their inventory and service requests. If the user has
permissions to view, create, and edit groups and users they will be given extra
functionality to do so.
From New Users, a new user can create an account. Completing the registration
form (Figure 4-3 on page 78) creates a new user account in the application. The
application verifies the username provided by the user is unique. By default new
users have the lowest level of permissions. A user who is allowed to edit groups
can add a user to a group with more than default level permissions.
Using Reset Password, users can reset their account password by entering their
first name, last name, and user name, as shown in Figure 4-4 on page 79. The new
password will be displayed to the user.
Once successfully logged in, the management options are presented to the user.
A set of management options are displayed, and may vary, depending on the
type of permissions held by the logged in user. Figure 4-5 shows the welcome
page options for a user with the highest level of permissions.
Using the View/Edit Account Info, users can view account details as shown in
Figure 4-6. Users can update their details by editing the fields and submitting the
form.
With Add Inventory, the user can associate new inventory with their user
account. Figure 4-7 on page 80 shows a typical filled out Add Inventory form.
Using View/Edit Inventory List, users have the ability to view their assigned
inventory and other users’ inventory using owner, location, inventory type, or
service created against the inventory (Figure 4-8 on page 81). From this page
users can select to update inventory records by selecting Edit. To see the Edit
button the user must have the permission to update the inventory record.
Using Create Service Tickets (Figure 4-9 on page 81), users can open service
tickets against their assigned inventory.
From View/Edit Service Tickets, users have the ability to view their created and
assigned service tickets, as shown in Figure 4-10. A user can also view all
service tickets based on user, inventory location, inventory type, and service
type. From this page the user can select to update the service ticket by selecting
Edit. To see the Edit button the user must have the permission to update the
inventory record.
With View Group Users, users with administration permissions can view all users
within a specific group. From this page the user can update any given user
account by selecting Edit. Figure 4-11 on page 83 shows a user viewing the
manager group user details.
Using Create/Edit Group (Figure 4-12 on page 83), a user with administration
permissions can create and edit groups.
For detailed table information see Figure 4-13 on page 85. We discuss the data
type conversion between MySQL and DB2 in detail in Chapter 6, “Database
conversion” on page 115.
In our conversion scenario we set up two servers. The original server, our
source, had the following software installed on the VMware image:
SUSE® Linux 10 SP2
MySQL 5.1.36 Community (MySQL AB)
Apache 2.0
PHP 5.3.0
The second VMware image, the destination server, has the following software
installed on the VMware image:
SUSE Linux 10 SP2
DB2 9.7 Express-C
Apache 2.0
PHP 5.3.0
IBM Data Movement Tool
For more information about VMware workstation and working with VMware
images go to:
http://vmware.com/
DB2 9.7 Express-C for Linux, UNIX and Windows can be downloaded from:
http://www.ibm.com/software/data/db2/express/
The IBM Data Movement Tool is used to simplify and greatly decrease the time it
takes to convert from MySQL to DB2. This tool is available free of charge from IBM at
the following URL:
the following URL:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/
index.html
With the IBM Data Movement Tool, database objects such as tables and data
types, as well as the data itself, can be converted automatically into equivalent
DB2 database objects.
The installation and configuration of DB2 and the IBM Data Movement Tool is
covered in the next chapter.
Chapter 5. Installation
In this chapter we discuss the target system environment setup. For the
database server, we guide you through the installation process of DB2 9.7 for
Linux including the hardware and software prerequisites. The application server
has to be examined to ensure that the existing software has proper DB2 support.
If this is a completely new system setup, make sure that all the required software
is included in the installation list. Furthermore, we describe the download and
steps required to setup the IBM Data Movement Tool (DMT).
Hardware requirements
DB2 products are supported on the following hardware:
HP-UX
– Itanium® based HP Integrity Series Systems
Linux
– x86 (Intel Pentium®, Intel Xeon®, and AMD) 32-bit Intel and AMD
processors
– x64 (64-bit AMD64 and Intel EM64T processors)
– POWER® (IBM eServer™ OpenPower®, iSeries, pSeries®, System i,
System p, and POWER Systems that support Linux)
– eServer System z or System z9®
Solaris
– UltraSPARC or SPARC64 processors
– Solaris x64 (Intel 64 or AMD64)
Windows
– All Intel and AMD processors capable of running the supported Windows
operating systems (32-bit and 64-bit base systems)
For more information on DB2 9.7 system requirements and the system
requirements of other DB2 releases, check:
http://www.ibm.com/software/data/db2/9/sysreqs.html
You can choose a typical, compact, or custom installation. The following are the
sizes for a DB2 Express-C installation:
Typical: Requires 500 to 610 MB
With the Typical installation type, DB2 is installed with most of the features
and functionality, including graphical tools such as the Control Center and the
DB2 Instance Setup wizard.
Compact: Requires 450 to 550 MB
With the Compact installation type, only basic DB2 features and functions are
installed, minimal configuration is performed, and graphical tools are not
included.
Custom: Requires 450 to 1080 MB
With the Custom installation type, you can select the features you want to install.
The disk space needed varies based on the selected features.
When you install DB2 Enterprise Server Edition or Workgroup Server Edition
using the DB2 setup wizard, size estimates are dynamically provided by the
installation program based on installation type and component selection.
If the space required for the installation type and components exceeds the space
found in the path specified, the setup program issues a warning about insufficient
space. The installation is allowed to continue. If the space for the files being
installed is in fact insufficient, installation will stop, and the setup program will
need to be aborted if additional space cannot be provided.
On Linux and UNIX operating systems, 2 GB of free space in the /tmp directory is
recommended.
For IBM data server client support, these memory requirements are for a
base of five concurrent client connections. You will need an additional 16 MB
of RAM per five client connections.
Memory requirements are affected by the size and complexity of your
database system, as well as by the extent of database activity and the
number of clients accessing your system.
For DB2 server products, the self-tuning memory feature simplifies the task of
memory configuration by automatically setting values for several memory
configuration parameters. When enabled, the memory tuner dynamically
distributes available memory resources among several memory consumers
including sort memory, the package cache, lock list memory, and buffer
pools.
Additional memory may be required for non-DB2 software that may be
running on your system.
Specific performance requirements may determine the amount of memory
needed.
On Linux operating systems, swap space at least twice as large as RAM is
recommended.
Communication requirements
When using TCP/IP as the communication protocol, no additional software is
needed for connectivity. For other supported communication protocols, refer to
DB2 manual Quick Beginnings for DB2 Servers 9.5, GC10-4246 at:
http://publibfp.boulder.ibm.com/epubs/pdf/c2358642.pdf
or visit the IBM DB2 Database for Linux, UNIX and Windows Information Center
at:
https://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
For this particular project or any other conversion project in general, the DB2
Data Server Client is required. It provides libraries for application development. If
the application server and the database server are to be placed on the same
system, you can install both the DB2 server and the Data Server Client in one
step by selecting the Custom installation type.
For this project we perform the following steps to install DB2 9.7 on Linux:
1. Log on to Linux as a root user.
2. Download DB2 9.7 Express-C from:
http://www.ibm.com/software/data/db2/express/download.html
3. Save the tar file to /usr/local/src directory.
4. Change to the /usr/local/src directory:
cd /usr/local/src
5. Extract the tar file:
tar -xzf db2exc_970_LNX_x86.tar.gz
6. Change to the newly created expc directory:
cd /usr/local/src/expc
7. Launch the DB2 Setup wizard, which opens the window shown in Figure 5-1 on page 93:
./db2setup
Note: Starting in DB2 9.5 you can also do non-root installation of DB2.
9. Select Install New under the option to install the server to launch the DB2
setup wizard.
10.Go to the Software License Agreement panel and read the Software License
Agreement, as shown in Figure 5-3 on page 95. If you agree with the
agreement, select Accept and click Next to go to the next screen.
11.In the Installation Type panel click Custom as shown in Figure 5-4 on page 96
and click Next.
12.Click Next again to get the Features panel. Select Application Development
tools option, as shown in Figure 5-5 on page 97, and click Next.
Figure 5-5 DB2 custom installation with application development tools selected
13.In the Languages panel choose the type of languages to install and click
Next.
14.In the Documentation panel choose where to access the DB2 Information
Center. You can choose to install it as part of this process or you can access
the online DB2 Information Center at any time. Click Next.
15.In the Database Administration Server (DAS) panel fill out the DAS user
information. Linux group and user accounts do not have to be created prior to
this step; DB2 will create the required Linux system group and user
automatically. For the example installation, we use the default name dasusr1
and choose a password for this user, as shown in Figure 5-6 on page 98 and
click Next.
16.In the Instance setup panel, you can choose whether you would also like to
set up an instance during the DB2 installation. By selecting Create a DB2
instance and clicking Next, we let DB2 create the instance for us.
17.Fill out the instance owner information in the Instance Owner panel. Linux
group and user accounts do not have to be created prior to this step; DB2 will
create the Linux group and user. For the example installation, we use the
default db2inst1 settings and create a password for this user, as shown in
Figure 5-7 on page 99 and click Next.
18.In the Fenced user panel, we allow DB2 to create the ID for us, as shown in
Figure 5-8 on page 100, and click Next.
21.At the end, the setup wizard provides a summary of the installation options
selected. Review it and click Finish to start the installation.
If the log file option is not specified, the db2setup.log and db2setup.err are stored
in the /tmp directory on a Linux operating system. Example 5-1 shows an
example of the db2setup.log.
DB2 Setup log file started at: Fri Jul 24 15:20:15 2009 EDT
============================================================
If you have chosen not to create an instance during the DB2 installation, or you
need to add another instance after installation, there are two options to create
instances manually.
The db2isetup command starts a graphical tool for creating and configuring
instances as shown in Figure 5-11 on page 103. It allows you to specify all the
required configuration parameters such as the instance owner and
communication protocol in an easy and guided fashion. The command can be
found in /opt/ibm/db2/V9.7/instance on a Linux operating system.
The command to create the DAS user is dascrt and is used in the following way:
dascrt -u dasadm1
As part of the GUI instance creation, the installer suggests three users identified
as db2inst1, db2fenc1, and dasadm1. These are default names for the instance
users. If you do not want to use the default names, you can choose your own by
creating the system user IDs and groups ahead of time and inputting these
parameters in the wizard when prompted. The installer will also add the following
entry to the /etc/services file in order to allow communication from DB2 clients:
db2c_db2inst1 50000/tcp
Where db2c_db2inst1 indicates the service name and 50000 indicates the port
number. DB2 allows multiple instances on one server installation to support
various environments, that is, test, production, development, and so on.
Subsequent instances may be created on the same server simply by using one
of the methods introduced above.
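For instance, a second instance for testing could be created from the command line and the defined instances listed as follows. The commands are run as root, and the instance and fenced user names are examples:

```shell
# Create a second instance owned by user db2test, with db2fenc1
# as the fenced user (both system users must already exist).
/opt/ibm/db2/V9.7/instance/db2icrt -u db2fenc1 db2test

# List all instances defined for this DB2 installation.
/opt/ibm/db2/V9.7/instance/db2ilist
```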
All clients are supported on Linux, AIX, HP-UX, Solaris, and Windows operating
systems.
To access a remote DB2 database, you can either run the easy to use graphical
tool Configuration Assistant or use the catalog commands to provide entries for
the following three directories:
NODE directory: A list of remote DB2 instances
ADMIN NODE directory: A list of remote DB2 Administration servers
DATABASE directory: A list of databases
To use the command-line tools, first catalog the DB2 node (the server where the
database resides), and then catalog the database. See Example 5-2.
--
-- catalog the DAS on the remote node
--
CATALOG ADMIN TCPIP NODE db2das REMOTE SERVER1
--
-- catalog database
--
CATALOG DATABASE invent AS inventdb AT NODE db2node
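Because the database directory entry in Example 5-2 refers to the node db2node, the remote instance itself must also be cataloged. A complete sequence from the client command line might look as follows; the host name server1, the port 50000, and the user and database names are assumptions for illustration:

```shell
# Catalog the remote DB2 instance (node) by host name and port.
db2 catalog tcpip node db2node remote server1 server 50000

# Catalog the remote DB2 Administration Server.
db2 catalog admin tcpip node db2das remote server1

# Catalog the remote database under a local alias.
db2 catalog database invent as inventdb at node db2node

# Refresh the directory cache and test the connection.
db2 terminate
db2 connect to inventdb user db2inst1
```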
After installing your DB2 Client, you should configure it to access a remote DB2
server using the Configuration Assistant. The graphical interface can be
launched through the DB2 Control Center or run on its own by using the
command db2ca. For more details refer to the IBM DB2 manual Quick
Beginnings for DB2 Clients, GC10-4242.
The sample application in this book is written in PHP with Apache2. Therefore,
the next two sections discuss how to prepare the target system for Apache2 and
PHP.
Installation steps
The following steps explain how we installed Apache2 on SUSE 10 SP2:
1. Downloading the Apache package.
The source code for Apache package is available at:
http://httpd.apache.org/download.cgi
In our conversion scenario we used Version 2.2.11 of Apache, and the
package we downloaded was httpd-2.2.11.tar.gz.
2. Changing the working directory.
Use the cd command to make your working directory the directory you
downloaded the tar file to:
db2server: # cd /usr/local/src/
3. Uncompress the source package.
The following command decompresses the contents of the source package into
a directory called httpd-2.2.11:
db2server:/usr/local/src # tar -xzf httpd-2.2.11.tar.gz
4. Changing the working directory.
Use the cd command to make the newly created directory your working
directory:
db2server:/usr/local/src # cd httpd-2.2.11/
5. Specify the configuration options for the Apache source.
8. Installing Apache
Once Apache has compiled successfully, it can be installed as the root user:
db2server:/usr/local/src/httpd-2.2.11 # make install
9. Add the apachectl script to the following directories.
Use the ln command to create a link to the apachectl file in the /usr/bin and
/etc/init.d directories:
ln -s /usr/local/apache2/bin/apachectl /usr/bin/apachectl
ln -s /usr/local/apache2/bin/apachectl /etc/init.d/.
10.Starting Apache httpd server.
Use the following command to start the apache httpd server:
apachectl start
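The configuration and compilation steps between steps 5 and 8 are not shown above; a minimal sketch is given below. The installation prefix and the shared-module option are assumptions, though --enable-so is needed later so that PHP can be built as an Apache module with apxs:

```shell
# Configure the Apache source tree with an installation prefix and
# shared module (DSO) support, which the PHP apxs build relies on.
./configure --prefix=/usr/local/apache2 --enable-so

# Compile the server.
make
```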
Note: All commands and procedure descriptions provided in this section refer
to SUSE Linux Enterprise Server 10 SP2. This can vary for other versions or
Linux distributions.
Installation steps
In order to use the IBM DB2 libraries, PHP has to be recompiled. These
installation steps explain how to update an existing PHP installation or install
PHP from scratch:
1. Backup the httpd.conf and php.ini files.
To ensure the configuration files for Apache and PHP are not lost when
installing the new PHP version, we recommend backing up the
/etc/httpd/httpd.conf and if you have a previous version of PHP installed,
back up the /etc/php.ini files.
2. Downloading the PHP package.
The source code for PHP is available at:
http://www.php.net/downloads.php
In our conversion scenario we used Version 5.3.0 of PHP, and the package
we downloaded was php-5.3.0.tar.gz.
Download the ibm_db2 PECL extension at:
http://pecl.php.net/package/ibm_db2
In our conversion scenario we used Version 1.8.2, and the package we
downloaded was ibm_db2-1.8.2.tgz.
Download the PDO_IBM PECL extension at:
http://www.pecl.php.net/package/PDO_IBM
In our conversion scenario we used Version 1.3.0, and the package we
downloaded was PDO_IBM-1.3.0.tgz.
3. Uncompress the source package.
The following command decompresses the contents of the source package
into a directory called php-5.3.0:
db2server:/usr/local/src # tar -xzf php-5.3.0.tar.gz
4. Add the PECL extensions to the PHP install directory.
Use the mv command to move the compressed files to the extension directory
in the install directory:
db2server:/usr/local/src # mv ibm_db2-1.8.2.tgz php-5.3.0/ext/.
db2server:/usr/local/src # mv PDO_IBM-1.3.0.tgz php-5.3.0/ext/.
5. Changing the working directory.
Use the cd command to make the ext directory your working directory:
db2server:/usr/local/src # cd php-5.3.0/ext/
6. Uncompressing PECL extension packages.
The following command decompresses the contents of the extension
packages into directories called ibm_db2-1.8.2 and PDO_IBM-1.3.0:
db2server:/usr/local/src/php-5.3.0/ext # gzip -d < ibm_db2-1.8.2.tgz | tar -xvf -
db2server:/usr/local/src/php-5.3.0/ext # gzip -d < PDO_IBM-1.3.0.tgz | tar -xvf -
7. Rename the extension directories.
Use the mv command to rename the extension directories:
db2server:/usr/local/src/php-5.3.0/ext # mv ibm_db2-1.8.2 ibm_db2
db2server:/usr/local/src/php-5.3.0/ext # mv PDO_IBM-1.3.0 pdo_ibm
8. Changing the working directory.
Use the cd command to make the PHP install directory your working
directory:
db2server:/usr/local/src # cd /usr/local/src/php-5.3.0/
9. Remove the configure file.
Use the rm command to remove the PHP configure script:
db2server:/usr/local/src/php-5.3.0 # rm configure
10.Rebuild the configure file.
The buildconf command rebuilds the configure file to include the new
extensions:
db2server:/usr/local/src/php-5.3.0 # ./buildconf --force
11.Verify the extensions are now within the configure file.
Use the following commands to verify that buildconf worked successfully:
db2server:/usr/local/src/php-5.3.0 # ./configure --help | grep with-ibm-db2
db2server:/usr/local/src/php-5.3.0 # ./configure --help | grep pdo-ibm
12.Specify the configuration options for the PHP source.
A list of the possible configuration options can be seen by issuing the
following command:
db2server:/usr/local/src/php-5.3.0 # ./configure --help
13.Run the configure script.
The configure script builds the Makefile. For our purposes we specified the
configure command as follows:
db2server:/usr/local/src/php-5.3.0 # ./configure \
--prefix=/usr/local/apache2/php \
--with-IBM_DB2=/opt/ibm/db2/V9.7 \
--with-pdo-ibm=/opt/ibm/db2/V9.7 \
--with-pdo-odbc=ibm-db2,/home/db2inst1/sqllib \
--with-ibm-db2=/opt/ibm/db2/V9.7 \
--with-apxs2=/usr/local/apache2/bin/apxs \
--with-config-file-path=/usr/local/apache2/php
Where:
--prefix specifies the PHP install directory
--with-IBM_DB2 is for the ibm_db2 extension
--with-pdo-ibm is for the PDO_IBM extension
--with-pdo-odbc is for the PDO_ODBC extension
--with-ibm-db2 is for the Unified ODBC extension
--with-apxs2 is for the Apache apxs tool, which allows you to build extension
modules to add to Apache’s functionality
--with-config-file-path specifies the directory where PHP looks for the php.ini file
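After configure completes, the remaining build and setup steps are not shown above. A typical sequence, assuming the paths from the configure command, might be:

```shell
# Compile PHP and install it under the configured prefix; apxs also
# installs the Apache module and updates httpd.conf.
make
make install

# Copy a template php.ini into the configured php.ini directory
# (PHP 5.3 ships php.ini-production and php.ini-development templates).
cp php.ini-production /usr/local/apache2/php/php.ini

# Restart Apache so the newly built PHP module is loaded.
apachectl restart
```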
For the IBM Data Movement user guide and a free download of the IBM Data
Movement Tool visit:
http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovement/
Software requirements
The software required to use the IBM Data Movement Tool is described in this
topic.
General requirements
In general, you should have the following:
Latest version of the IBM Data Movement Tool.
MySQL: Ensure that MySQL is running (usually the daemon can be started
with the command mysqld_safe & from an account with root permissions).
DB2 V9.7 should be installed on the target server. Use the command
db2start to ensure the DB2 Server is up and running.
Java version 1.5 or higher must be installed on your target server. To verify
your current Java version, run java -version command. By default, Java is
installed as part of DB2 for Linux, UNIX, and Windows in
<install_dir>\SQLLIB\java\jdk (Windows) or /opt/ibm/db2/V9.7/java/jdk
(Linux).
You must have the JDBC drivers for the MySQL source database
(mysql-connector-java-5.1.8-bin.jar or latest driver) and the DB2 target
database (db2jcc.jar, db2jcc_license_cu.jar or db2jcc4.jar,
db2jcc4_license_cu.jar) installed on the server with the IBM Data Movement
Tool.
Operating system
DMT supports the following operating systems:
Windows
z/OS
AIX
Linux
Solaris
HP-UX
Mac
For the purpose of this document we have used the IBM Data Movement Tool
with DB2 Version 9.7 and MySQL Version 5.1.36. An installation on the DB2
server side is recommended to achieve the best data movement performance.
In this chapter we discuss the process of converting the database structure from
MySQL 5.1 server to DB2 9.7 server. Before doing this we must look into the
differences between MySQL and DB2 database structure.
In the first section we discuss data type mapping, taking a closer look at MySQL
and DB2 data types and the differences between them. Following this section,
Data Definition Language (DDL) differences are described providing a basic
syntax comparison between MySQL and DB2.
Every column in a database table has an associated data type, which determines
the values that the column can contain. DB2 supports both built-in data types
and user-defined data types (UDT), whereas MySQL only supports built-in data
types. Figure 6-1 shows the built-in data types of MySQL.
Figure 6-2 on page 117 shows built-in data types supported by DB2.
MySQL data types are grouped into three categories, and can be converted to
DB2 data types following the rules suggested as follows:
Numeric type
– TINYINT
This is a single byte integer in MySQL that can be mapped to a DB2
SMALLINT for similar functionality.
– SMALLINT
A small integer is a two-byte integer with a precision of five digits. With
MySQL, the range of signed small integers is (-32768 to 32767) making it
replaceable by DB2 SMALLINT. For unsigned MySQL small integers the
range is (0 to 65535) making it replaceable by DB2 INTEGER.
– BIT, BOOL, and BOOLEAN
These are synonyms for TINYINT(1). Instead of BIT, BOOL, and
BOOLEAN, DB2 uses SMALLINT with a check constraint.
– MEDIUMINT
Table 6-1 shows the default data type mappings between the two databases
used by the IBM Data Movement Tool. This is the mapping we used for our
sample conversion.
MySQL data type                DB2 data type
TINYINT                        SMALLINT
TINYINT UNSIGNED               SMALLINT
SMALLINT                       SMALLINT
SMALLINT UNSIGNED              INTEGER (Optional: SMALLINT)
BIT                            SMALLINT
BOOLEAN                        SMALLINT
MEDIUMINT                      INTEGER
MEDIUMINT UNSIGNED             INTEGER
INTEGER/INT                    INTEGER
INTEGER/INT UNSIGNED           BIGINT (Optional: INTEGER)
BIGINT                         BIGINT
BIGINT UNSIGNED                DECIMAL (Optional: BIGINT)
FLOAT                          DOUBLE
FLOAT UNSIGNED                 DOUBLE
DOUBLE                         DOUBLE
DOUBLE UNSIGNED                DECIMAL (Optional: DOUBLE)
REAL                           DOUBLE
REAL UNSIGNED                  DOUBLE
NUMERIC, DECIMAL, DEC          DECIMAL(31,0)
NUMERIC(P), NUMERIC(P,0),
DECIMAL(P), DECIMAL(P,0),
DEC(P), DEC(P,0)               DECIMAL(min(P,31),0)
DECIMAL UNSIGNED               DECIMAL
NUMERIC UNSIGNED               DECIMAL
DATE                           DATE
DATETIME                       TIMESTAMP (Optional: TIME)
TIMESTAMP                      TIMESTAMP
TIME                           TIME
YEAR                           CHAR(4) (Optional: SMALLINT)
CHAR                           CHAR
VARCHAR                        VARCHAR
BINARY                         CHAR(n) FOR BIT DATA
TINYBLOB                       BLOB(255) (Optional: VARCHAR(255))
TINYTEXT                       CLOB(255)
BLOB                           BLOB(65535)
TEXT                           CLOB(65535)
MEDIUMBLOB                     BLOB(16777215)
MEDIUMTEXT                     CLOB(16777215)
LONGBLOB                       BLOB(2G)
LONGTEXT                       CLOB(2G)
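As an illustration of the BOOL/BOOLEAN mapping above, a MySQL BOOLEAN column can be represented in DB2 as SMALLINT restricted by a check constraint. The table and column names below are hypothetical, and a DB2 connection is assumed:

```shell
# A hypothetical DB2 table in which the MySQL BOOLEAN column in_stock
# becomes SMALLINT limited to 0 and 1 by a check constraint.
db2 "CREATE TABLE inventory_flags (
       item_id  INTEGER NOT NULL,
       in_stock SMALLINT NOT NULL DEFAULT 0,
       CONSTRAINT chk_in_stock CHECK (in_stock IN (0, 1))
     )"
```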
Both MySQL and DB2 follow Structured Query Language (SQL), the
standardized language used to access databases and their objects, as defined
by ANSI/ISO.
The data definition language is a set of SQL statements that can be used for a
variety of tasks, including the creation or deletion of databases and database
objects (tables, views, and indexes), the definition of column types, and the
definition of referential integrity rules.
On Linux machines, MySQL can use the file system mounting options; however,
in most cases, MySQL uses symbolic links. This can be done by creating a
directory on a file system where you have extra space:
bash> cd <file system with space>
bash> mkdir mysqldata
MySQL users can distribute tables using symbolic linking or the data and index
directory options of the CREATE TABLE statement.
MySQL stores data in single files, multiple files, or table spaces, depending on
the table type being used. Figure 6-3 shows examples of storage engines that fall
under one of these three types.
The tables on the left side of the diagram are managed by the MyISAM storage
engine. For MyISAM tables, MySQL creates a .MYD file for data and a .MYI file
for indexes, one file for each. The tables in the middle of the diagram are
managed by the Merge storage engine. With Merge tables, the .MRG file
contains the names of the tables that should be used, and a .FRM file holds the
table definition. In a Merge table various tables are used, each of them having
its own data file, so as a whole, a Merge table uses multiple data files. The
tables on the right side of the diagram are managed by the InnoDB storage
engine. For InnoDB tables, MySQL stores data in a table space identified by the
path parameter innodb_data_file_path. Multiple data files can be used for
InnoDB.
MySQL also has a feature called user-defined partitioning, which allows for table
data to be horizontally split across file systems depending on a specific set of
data defined by the user. For each partition created there is a corresponding
.MYD file for data and .MYI file for the index.
In contrast to MySQL, DB2 stores everything in table spaces. Table spaces are
logical representations of physical containers on the file system. DB2 uses a
better approach for the logical and physical distribution of the database and the
database elements in different sectors, as shown in Figure 6-4. After completing
the conversion of your database from MySQL to DB2, you can use these features
to enhance the performance of your application.
Instance
A DB2 server can have more than one instance. One instance can have multiple
databases. One instance per application database has the advantage that the
application and database support do not have to coordinate with one another to
take the database or the instance offline. For conversion purposes, a single
instance can be created for your database application environment using the
db2icrt command:
bash> db2icrt -u db2fenc1 db2inst1
Database
A database represents your data as a collection of data in structured fashion. It
includes a set of system catalog tables that describe the logical and physical
structure of the data, a configuration file containing environment parameter
values used by the database, and a recovery log with ongoing and archivable
transactions.
Database Partition
A database partition is part of a database containing its own data, indexes,
configuration files, and transaction logs. A database partition is sometimes called
a node or a database node. A partitioned database environment is a database
installation that supports the distribution of data across database partitions. This
can be used if you want to spread your DB2 database across multiple servers in
a cluster or along multiple nodes. There are no database partition group design
considerations when using a non-partitioned database. The database partition
group can be created within a database using the following command:
db2> CREATE DATABASE PARTITION GROUP MaxGroup ON ALL DBPARTITIONNUMS
Creating a database
The database in DB2 can be created simply by issuing the following command:
db2>CREATE DATABASE invent
This command generates a new database with a default path, and table spaces.
It creates three initial table spaces and the system tables, and creates the
recovery log.
You can use the create database statement with options to personalize the
database and take advantage of DB2 advanced features such as automatic
storage which simplifies storage management for table spaces as shown in
Example 6-1. When using automatic storage, it is possible to specify a group of
storage devices for DB2 to use for your database. This allows DB2 to allocate
and grow this specified space as table spaces are created and populated.
Automatic storage is turned on by default when creating a database.
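As a sketch of such an options-based invocation, a database can be created over a group of automatic storage paths as follows. The storage and database paths shown are examples only:

```shell
# Create the invent database with automatic storage over two paths;
# DB2 allocates and grows table space containers on these paths itself.
db2 "CREATE DATABASE invent AUTOMATIC STORAGE YES
     ON /db2data1, /db2data2
     DBPATH ON /home/db2inst1"
```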
Dropping a database
In MySQL you can drop the database using:
mysql> DROP DATABASE [if exists] inventory
This removes all the database files (.BAK, .DAT, .HSH, .ISD, .ISM, .MRG,
.MYD, .MYI, .db, .frm) from your file system.
In DB2, a database is dropped using:
db2> DROP DATABASE invent
This command deletes the database contents and all log files for the database,
uncatalogs the database, and deletes the database subdirectory.
Alter database
The MySQL ALTER DATABASE command allows you to change the overall characteristics of a
database. For example, the character set clause changes the database
character set, and the collation clause changes the database collation. The basic
syntax for altering the database is:
mysql>ALTER DATABASE inventory CHARACTER SET charset_name COLLATE
collation_name
In DB2 you can use the UPDATE DATABASE CONFIGURATION and UPDATE
DATABASE MANAGER CONFIGURATION commands to set the database and
database manager configuration parameters. These allow modification of various
configuration parameters such as log file size, log file path, heap size, cache
size, and many others. You can take advantage of the DB2 autonomic features
by enabling the automatic maintenance and self tuning memory manager
features. Automatic maintenance allows for scheduling database backups,
keeping statistics current, and reorganizing tables and indexes. The self tuning
memory manager provides constant tuning of your database without the need for
DBA intervention. The following are examples of how these can be set in DB2:
db2> UPDATE DATABASE MANAGER CONFIGURATION using diaglevel 3
db2> UPDATE DATABASE CONFIGURATION for invent
using auto_maint on
auto_tbl_maint on
auto_runstats on
auto_reorg on
self_tuning_mem on
The example above does not show all the parameters available in DB2. For more
information on how to set up automatic maintenance and the self tuning memory
manager, visit:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
In addition, these commands can be used to change the physical and logical
partitioning of a database, to allocate table spaces and paging configurations.
Table space
A table space is a storage structure containing tables, indexes, large objects, and
long data. Table spaces reside in database partition groups and allow
assignment of database location and table data directly onto containers. DB2
allows for two types of table spaces: System Managed Space (SMS), where the
operating system allocates and manages the space where the tables are stored,
and Database Managed Space (DMS), where the database administrator
decides which devices or files to use and DB2 manages the space within
them. Another option is to enable automatic storage for the table spaces. No
container definitions are needed in the latter case because the DB2 database
manager assigns and manages the container automatically.
Any DB2 database has at least the following three table spaces:
A catalog table space (SYSCATSPACE) for the system catalog tables
A user table space (by default, USERSPACE1) for user-defined tables
A system temporary table space (by default, TEMPSPACE1) for work areas such as sorts
Schema
A schema is an identifier such as a user ID that helps group tables and other
database objects. A schema can be owned by an individual, and the owner can
control access to the data and the objects within it. A schema is also an object in
the database. It may be created automatically when the first object in a schema
is created. We can create a schema using:
db2>CREATE SCHEMA inventschema AUTHORIZATION inventUser
MySQL tables
As shown in Figure 6-5 on page 129, MySQL supports two types of tables:
transaction-safe tables and non-transaction-safe tables. Transaction-safe tables
(managed by the InnoDB or NDB storage engines) are crash safe and can take part
in transactions, providing concurrency features that allow commit and rollback.
On the other hand, non-transaction-safe tables (managed by the MyISAM,
MEMORY, MERGE, ARCHIVE, or CSV storage engines) are less safe but are
much faster, and consume less space and memory.
Example 6-4 shows how to create a table using the MyISAM storage engine.
Example 6-5 is the DB2 conversion. Notable changes are:
Changes in the data types according to the data type mapping.
Instead of AUTO_INCREMENT, GENERATED BY DEFAULT AS IDENTITY is used.
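The AUTO_INCREMENT change in particular is mechanical enough to script. The following is a minimal sketch (not part of any IBM tool) of how such a rewrite might look; a production conversion would need a real SQL parser rather than a regular expression:

```python
import re

def convert_auto_increment(ddl: str) -> str:
    """Replace MySQL's AUTO_INCREMENT column attribute with the
    DB2 identity clause, case-insensitively."""
    return re.sub(r"\bAUTO_INCREMENT\b",
                  "GENERATED BY DEFAULT AS IDENTITY",
                  ddl, flags=re.IGNORECASE)

print(convert_auto_increment("id INT NOT NULL auto_increment"))
# id INT NOT NULL GENERATED BY DEFAULT AS IDENTITY
```

Applied to each column definition in the dumped DDL, this performs the identity substitution described above.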
Example 6-6 is a MySQL table creation example using the ARCHIVE engine.
Example 6-7 is the DB2 conversion using row compression.
Example 6-8 shows how to create a MySQL table with partitioning. Example 6-9
is the DB2 conversion; again, no major change is required.
Example 6-8 Creating MySQL table using partitioning with the default MyISAM storage
engine
mysql>CREATE TABLE partsales (id INT, item VARCHAR (20) )
PARTITION BY RANGE (id)(
Alter table
Alter table is a statement used to modify one or more properties of a table. The
syntax of the ALTER TABLE statement for MySQL and DB2 is quite similar and
is shown in Example 6-10.
db2>ALTER TABLE partsales alter column status set data type varchar(20)
ALTER TABLE in DB2 now supports dropping columns directly; in earlier
releases this required rebuilding the table through a temporary table. The syntax
of ALTER TABLE for MySQL and DB2 is similar, as shown in Example 6-11.
Drop table
Tables can easily be deleted from the database by issuing the DROP TABLE
statement as shown below.
For MySQL:
DROP [TEMPORARY] TABLE [IF EXISTS] tbl_name [, tbl_name,...] [RESTRICT
|CASCADE]
For DB2:
DROP table tbname
DB2 uses updateable UNION ALL views to achieve this feature. UNION ALL
views are commonly used for logically combining different but semantically
related tables. UNION ALL views are also used to unify like tables for better
performance, manageability, and integration of federated data sources.
An example of using the UNION ALL command for views is:
db2>CREATE VIEW UNIONVIEW as SELECT * FROM table1 UNION ALL SELECT * FROM
table2
MEMORY table
A MEMORY table is a hashed index that is always stored in memory. MEMORY
tables are fast but not crash safe. When MySQL crashes or is rebooted, the
MEMORY table definition still exists on restart, but its data is lost.
A MySQL Memory table can be created using the following command:
mysql> CREATE TABLE memtable type=MEMORY SELECT * FROM table1;
MySQL MEMORY tables can be converted to DB2 as temporary tables,
materialized query tables, or indexes depending upon your requirements.
– Temporary table
DB2 temporary tables store data in non-persistent, in-memory,
session-specific tables. Once a session is over, the definition of the
table is lost. When your application uses a MEMORY
table in this fashion, temporary tables can be declared in your application
by issuing the following statements:
• Create user temporary table space if it does not exist using:
db2>CREATE USER TEMPORARY TABLESPACE discompose MANAGED BY SYSTEM
using ('usertemp1')
• Declare temporary table in the application:
db2>DECLARE GLOBAL TEMPORARY TABLE distemper LIKE table1 ON COMMIT
DELETE ROWS NOT LOGGED IN discompose
– Materialized query table
Materialized query tables (MQTs), also known as summary tables, can be
used to improve query performance. An MQT is a table whose
definition is based on the results of a query, and whose data is in the form
of pre-computed results. If the SQL compiler determines that a query will
run more efficiently against a materialized query table than a base table or
tables, the query executes against the materialized query table:
db2>CREATE TABLE sales AS (SELECT * FROM table1) DATA INITIALLY
DEFERRED REFRESH DEFERRED
We discuss MQT in more detail in 11.4, “Materialized query tables (MQT)”
on page 391.
In addition, DB2 also supports tables for clustering and query performance
enhancement. These tables can also be used according to various
requirements:
Multidimensional clustering (MDC) tables
Multidimensional clustering (MDC) tables have a physical cluster on more
than one key or dimension at the same time. An MDC table maintains
clustering over all dimensions automatically and continuously, thus
eliminating the need to reorganize the table in order to restore the physical
order of the data.
When MDC tables are used, the performance of many queries might improve
because the optimizer can apply additional optimization strategies. Advantages
of MDC tables include faster and sparser scans through dimension block
indexes, faster lookups, block-level index ANDing and ORing, and faster
retrieval.
The following command can be issued to create it:
db2>CREATE TABLE tblmdc
(col1 int, col2 int, col3 int, col4 char(10))
ORGANIZE BY DIMENSIONS(col1,col2,col3)
Views
Views are the named specification of a result table. This specification is a
select statement that is run whenever the view is referenced in an SQL
statement. It can be used just like a base table.
A simple view can be created by issuing a CREATE VIEW statement, as shown
in Example 6-14.
MySQL supports both single column indexes and multi-column indexes. MySQL
has five types of indexes:
Primary key
Unique
Non-unique
Fulltext
Spatial
DB2 supports all the index types supported by MySQL, using similar
terminology, which allows them to map directly during conversion.
Create Index
The following is the MySQL CREATE INDEX syntax:
CREATE [ONLINE|OFFLINE] [UNIQUE|FULLTEXT|SPATIAL] INDEX index_name
[index_type]
ON tbl_name (index_col_name,...)
index_col_name:
col_name [(length)] [ASC | DESC]
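The optional (length) prefix on index columns in the syntax above has no DB2 counterpart and must be removed during conversion. A minimal sketch, assuming a simple comma-separated column list with no function expressions:

```python
import re

def strip_prefix_lengths(create_index: str) -> str:
    """Drop MySQL index prefix lengths such as name(10); DB2
    indexes always cover the full column value."""
    # A word character immediately followed by a parenthesized
    # number is treated as a prefixed index column.
    return re.sub(r"(\w)\s*\(\d+\)", r"\1", create_index)

print(strip_prefix_lengths("CREATE INDEX idx1 ON owners (name(10), id)"))
# CREATE INDEX idx1 ON owners (name, id)
```

The index name idx1 and columns here are illustrative only; run such a rewrite only over CREATE INDEX statements, since the same pattern would also match type lengths such as VARCHAR(20) in table DDL.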
Drop index
Drop index statement for MySQL and DB2:
mysql> DROP INDEX index_name ON tbl_name;
db2> DROP INDEX index_name
There are two types of triggers: statement and row triggers. Statement triggers
are executed in response to a single INSERT, UPDATE, or DELETE statement.
Row triggers are executed for each row affected by the INSERT, UPDATE, or
DELETE statement. Triggers have many benefits: they can log information about
changes to a table, validate an insert, restrict access to specific data, or make
data modifications and comparisons when data changes.
DB2 supports both row based and statement based triggers; MySQL supports
only row-based triggers. Both MySQL and DB2 support INSERT, UPDATE, and
DELETE triggered events and can be defined to fire using BEFORE and AFTER.
DB2 has an additional firing classification called INSTEAD OF which allows the
triggers to perform INSERT, UPDATE, or DELETE on views. MySQL does not
support a trigger on a view.
Multiple servers
In some cases, multiple MySQL servers are placed on the same machine.
Possible reasons include user management, testing, or separating applications.
MySQL provides options to run multiple servers on the same machine using
different operating parameters.
DB2 also supports the creation of multiple instances on the same machine,
even with different DB2 versions.
Figure 6-7 MySQL application using multiple DBs instead of multiple schema
The tables created in a particular schema can be accessed using the fully
qualified name schema.tablename.
Table placement
MySQL does not support table spaces for managing physical location and page
size or for distributing tables onto different table spaces, except with the optional
InnoDB storage engine, which supports multiple table spaces by distributing
tables into different files.
DB2 supports table spaces to establish the relationship between the physical
storage devices used by your database system and the logical containers used
to store data. Table spaces reside in database partition groups. They allow you
to assign the location of table data directly onto containers.
List information
MySQL provides a show command to list the information about databases,
tables, columns, or status information about the server.
DB2 provides commands for getting information about instances, databases,
table spaces, and other objects. The DB2 system catalogs contain all necessary
information about tables, columns, indexes, and other objects. You can use the
DESCRIBE and LIST commands to display database and table structure, or use
a SELECT statement against the catalog views to get the details of a table
definition.
Referential integrity refers to the constraints defined on the table and its columns,
which help you to control the relationship of data in different tables. Essentially,
this involves primary keys, foreign keys, and unique keys.
Primary keys and unique keys are treated similarly in MySQL and DB2.
However, MySQL currently only parses foreign key syntax in CREATE
TABLE statements and does not use or store the foreign key information,
except in InnoDB tables, which support checking of foreign key constraints,
including CASCADE, ON DELETE, and ON UPDATE.
DB2 provides full support for foreign keys. With the full referential integrity
functionality from DB2, your application can be released from the job of taking
care of the data integrity issues. Example 6-16 shows creation and usage of
foreign key constraints in DB2.
We discuss foreign key creation in more detail in 6.5.2, “Manual database
object conversion and enhancements” on page 158.
Database schema conversion can be done in various ways, however, the most
common approaches are:
Automatic conversion using porting tools
Manual porting
Metadata transport
In general, all of the above approaches use existing MySQL databases as input
and pass them through the following functional steps:
Capture database schema information from MySQL
Modify schema information for DB2
Create the database in DB2 with structure
the new DB2 database. With the IBM Data Movement Tool, you can
automatically convert data types, tables, columns, and indexes into equivalent
DB2 database elements. The IBM Data Movement Tool provides database
administrators (DBAs) and application programmers the functionality needed to
automate the conversion task. The strength of this tool is shown in large scale
data movement projects. This tool has been used to move up to 4TB of data in
just three days with good planning and procedures.
You can reduce the downtime, eliminate human error, and cut back on person
hours and other resources associated with the database conversion by using the
following features found in the IBM Data Movement Tool:
Extract of DDL statements from the MySQL source database.
Extract data from the MySQL source database.
Generate and run DB2 DDL conversion scripts.
Automate the conversion of database object definitions.
View and refine conversion scripts.
Efficiently implement converted objects using the deployment options
(interactive deployment or automated deployment).
Generate and run data movement scripts.
Track the status of object conversions and data movement, including error
messages, error location, and DDL change reports, using the detailed
conversion log file and report.
Instructions on download, installation, and usage for the IBM Data Movement
Tool are described in 5.3, “IBM Data Movement Tool installation and usage” on
page 111.
For the rest of the chapter we discuss how to convert the database structure to
DB2. The next chapter discusses how to convert the data to DB2. The following
briefly describes the database structure conversion process when using the IBM
Data Movement Tool:
1. Specify source and DB2 database server connection information.
2. Test the connection to the source and target database. Click Connect to
MySQL to test the connection and Connect to DB2 to test the DB2
connection.
3. Specify the working directory where DDL and data are to be extracted to.
4. With the IBM Data Movement Tool you have the option to extract only the
database objects, the data or both. Choose if you want DDL and/or DATA.
5. Click the Extract Data button to extract the DDL and/or DATA and
automatically convert to DB2 syntax. You can monitor progress in the console
window.
6. After the data extraction is completed successfully, go through the result
output files for the status of the data movement, warnings, errors, and other
potential issues. Optionally, you can click on the View Script/Output to
check the generated scripts, DDL, data or the output log file.
7. Click the Deploy Data button to automatically create tables, indexes in DB2
and load data that was extracted from the source database. Optionally, you
can click Interactive Deploy to deploy database objects one by one.
In 6.1, “Data type mapping” on page 116 and 6.2, “Data definition language
differences” on page 122, we demonstrated syntax and semantic differences
between MySQL and DB2, which may require manual conversion. We have also
discussed the creation, deletion, and alteration of various database objects such
as databases, tables, indexes, and views, and how these are related when
converting from MySQL to DB2. There are several steps involved in completing
the manual process, which are discussed in more detail below:
Capturing the database schema information from MySQL
MySQL offers a utility called mysqldump which extracts the database
structure and deposits this information into a text file. The structure is
represented in DDL, and can be used to recreate these database elements
for your DB2 server. Syntax for mysqldump is as shown below:
bash> mysqldump DatabaseName > mysqlobjects.ddl
When using this tool on a very large database, it is recommended to use the
--quick or --opt option. Without these options, mysqldump loads the whole result
set into memory before dumping the results.
For further information on this utility refer to the MySQL Reference Manual at:
http://dev.mysql.com/doc/refman/5.1/en/
Modify schema information for DB2
Now that you have captured the source of your MySQL database structure
using the mysqldump utility, it is time to modify the schema information and
make it work for DB2. Manual changes include:
– DDL changes
The first step in schema modification is the conversion of the create
statement for various database objects such as database, tables, views,
indexes, and others. Refer to 6.2, “Data definition language differences” on
page 122 for these conversions. Also, a DDL script must be
written for creating the database and table spaces.
– Data type changes
Check the data types used in the table definitions. Change the MySQL data
types to DB2 data types. Refer to 6.1, “Data type mapping” on page 116.
– Reserved words conversion
There are many reserved words in MySQL and DB2 that cannot be used as
valid names for columns and other database elements. Refer to the MySQL
Reference Manual (http://dev.mysql.com/doc/refman/5.1/en/) and, for details
about DB2 reserved words, the DB2 Information Center at:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
Change any conflicting names in the DDL statements.
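Before running the converted DDL against DB2, a simple pre-check can flag identifiers that clash with reserved words. The sketch below uses only a small illustrative subset of DB2 reserved words; consult the complete lists in the manuals referenced above:

```python
# Illustrative subset only; see the DB2 and MySQL manuals for
# the complete reserved-word lists.
DB2_RESERVED = {"COMMENT", "CONNECT", "CURRENT", "FUNCTION", "ORDER"}

def conflicting_names(identifiers):
    """Return, sorted, the identifiers that clash with the
    sample reserved-word set (comparison is case-insensitive)."""
    return sorted(n for n in identifiers if n.upper() in DB2_RESERVED)

print(conflicting_names(["id", "comment", "status"]))
# ['comment']
```

Any names this check reports would then be renamed in the DDL before deployment.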
– Create database and database objects
Now that the DDL statements are modified, the DB2 database and
database objects themselves need to be created. This can be done from
the command line processor (CLP).
connect to invent;
disconnect invent;
To create the database, invoke the above SQL script from the DB2 Command
Line Window or a bash shell:
db2>@create-database.sql
bash>db2 -f create-database.sql
Now we execute the DDL scripts created above from the bash shell, as shown in
Example 6-18.
Although this is a good technique, it is very costly since it requires modeling tools
for the conversions.
For this example we use the following command to create our new DB2
database:
db2> CREATE DATABASE invent AUTOMATIC STORAGE YES ON
'/home/db2inst1/invent'
For our conversion scenario we use the IBM Data Movement Tool GUI.
<IBM Data Movement Tool Installation directory>/IBMDataMovementTool.sh
db2inst1@db2server:/opt/ibm/IBMDataMovementTool>
./IBMDataMovementTool.sh
This launches IBM Data Movement window as shown in Figure 6-8.
You can also run the IBM Data Movement Tool in command line mode. The
tool will automatically switch to command line mode if it is not able to start the
GUI. If you want to run the tool in command line mode you can run the
following command:
<IBM Data Movement Tool Installation directory>/IBMDataMovementTool.sh
-console
db2inst1@db2server:/opt/ibm/IBMDataMovementTool>
./IBMDataMovementTool.sh -console
Specifying the source and target database information:
Enter the connection information in the “Source Database” and “DB2
Database” fields. Since the IBM Data Movement Tool used in this example is
installed on the DB2 server, we connect to localhost for the DB2 server and
specify the MySQL server IP address. You must know the following
information:
– IP Address or host name of the source and DB2 servers
– Port numbers to connect
Figure 6-9 Testing database connection using IBM Data Movement Tool
After the data extraction has completed, go through the result output files for the
status of the data movement, warnings, errors and other potential issues. You
can click on the View Script/Output button from the Extract/Deploy window to
check the generated scripts, DDL, data or the output log file.
Table 6-3 shows the command scripts that are re-generated each time you run
the tool in GUI mode. These scripts can also be issued in console mode without
the GUI. This is helpful when you want to embed this tool as part of a batch
process to accomplish an automated data movement.
Filename Description
IBMExtract.properties This file contains all input parameters that you specified
through the GUI or command line input values. You can edit
this file manually to modify or correct parameters. Note that
this file is overwritten each time you run the GUI.
geninput This is the first data movement step where you will create an
input file that contains the names of the tables to move. You
can edit this file manually to exclude tables you do not want to
move.
unload This is the last step of data movement. This script unloads
data from the source database server to flat files. DB2 LOAD
scripts will be generated after running this script. Note that if
you did not choose to separate DDL from DATA, the genddl
content is included in the unload script.
rowcount This file will be used after you have moved the data to do a
sanity check for the row count for tables in source and target
database servers.
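The rowcount sanity check amounts to matching per-table counts between the two servers. A hypothetical sketch of that comparison (the real script's file formats may differ), given two name-to-count mappings:

```python
def compare_rowcounts(source: dict, target: dict) -> list:
    """Return (table, source_count, target_count) tuples for every
    table whose row counts disagree between the two servers."""
    mismatches = []
    for table, count in sorted(source.items()):
        if target.get(table) != count:
            mismatches.append((table, count, target.get(table)))
    return mismatches

print(compare_rowcounts({"severity": 5, "owners": 10},
                        {"severity": 5, "owners": 9}))
# [('owners', 10, 9)]
```

An empty result means every table moved with the expected number of rows.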
Choose an option that works best for your conversion project. The interactive
deployment mode is best for deploying database objects that contain triggers,
functions, and/or procedures. In most MySQL conversions the first two options
will suffice, since the conversion of the database objects will be performed
outside of the IBM Data Movement Tool. For our example, we select the
Interactive Deploy mode to better explain the conversion process by separating
the database object and data deployment. Figure 6-11 on page 154 shows the
Interactive Deployment window.
From the Interactive Deploy window you can perform a number of tasks:
Refresh Database Object List
Select the refresh button (circled in Figure 6-12 on page 155) to refresh the
list of database objects in the Database Objects view on the left side of the
window.
Edit the Object Definition
You can select the database object you would like to modify and edit in the
right side panel, as shown in Figure 6-12 on page 155. To save and deploy
changes, deploy the object before selecting a new object. After deployment you
can return to refine any objects that failed to deploy.
You can also edit the scripts that were extracted into the conversion directory.
To change the table definition, edit the db2tables.sql file, as follows:
db2inst1@db2server:/opt/ibm/IBMDataMovementTool/migr> vi db2tables.sql
Example 6-19 shows the converted DB2 table creation file.
TERMINATE;
Note: You should not reduce the size of any field because it may cause an
error while converting the data.
Views
Manual conversion is required to port views from MySQL to DB2. The MySQL
view definition can be extracted from the MySQL database using the mysqldump
utility or by selecting from the INFORMATION_SCHEMA.VIEWS table.
The syntax for a view in MySQL and DB2 is very similar, which makes it simple to
convert the DDL for this database object. Example 6-20 shows the CREATE
VIEW syntax for the MySQL views.
For our example, we alter the CREATE VIEW commands to match DB2 syntax,
as shown in Example 6-21.
o.phoneNum,
(SELECT count(*) FROM inventory i, owners o
WHERE o.id = newID and o.id = i.ownerID) as inventNum
FROM owners o, locations l
WHERE o.groups = 'manager' and o.locID = l.id;
Trigger conversion
Triggers also require manual conversion to port them from MySQL to DB2. The
MySQL trigger definition can be extracted from the MySQL database using the
mysqldump utility or by selecting from the
INFORMATION_SCHEMA.TRIGGERS table.
You should also note the change in the date function. You must find the
equivalent functions in DB2 when converting DDL and DML statements that
contain built-in functions. MySQL and DB2 built-in functions and operators are
discussed and compared in more detail in 8.1.10, “Built-in functions and
operators” on page 223.
The syntax for a procedure in MySQL and DB2 is similar. Example 6-25 on
page 161 shows the CREATE PROCEDURE syntax for the MySQL procedure
and Example 6-26 on page 162 shows the CREATE PROCEDURE for the DB2
procedure. You may notice that the only difference between the statements is
the date function used to determine the number of days between the open date
and the close date. For a description of MySQL built-in functions and DB2 equivalent functions
or solution refer to 8.1, “Data Manipulation Language differences and similarities”
on page 208.
BEGIN
UPDATE severity SET avgDays = (SELECT SUM(Datediff(closeDate,
openDate))/Count(*)
FROM services where severity = sevLevel and status = 7)
WHERE id = sevLevel;
END
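The MySQL Datediff call above maps in DB2 to a difference of day values, typically DAYS(closeDate) - DAYS(openDate). The day arithmetic itself can be sanity checked outside the database; a minimal sketch:

```python
from datetime import date

def datediff(close: date, open_: date) -> int:
    """Day difference, matching MySQL's DATEDIFF(close, open)
    and DB2's DAYS(close) - DAYS(open)."""
    return (close - open_).days

print(datediff(date(2009, 10, 16), date(2009, 10, 1)))
# 15
```

Such a check is useful when verifying that a converted procedure produces the same averages as the original.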
Foreign keys
Now any additional enhancements can be added to your database using DB2
features that may not have been supported by your existing MySQL storage
engine. One example is referential integrity, which is essential because it
ensures consistency of data values between related columns in different tables.
Referential integrity is usually maintained by using the primary key, unique key,
and foreign keys. MySQL only supports foreign keys in the InnoDB engine.
Primary and unique keys are successfully converted by the IBM Data Movement
Tool, but at the time of writing this book, foreign keys are not supported for a
MySQL conversion project. If you want to create foreign keys in your database,
or convert your foreign keys from an InnoDB database, you must add them
manually.
Now that we have completed the DDL modification, we execute the changed
scripts to create the DB2 database and the objects, as shown in Example 6-28.
Automatic maintenance
Performing maintenance activities on your databases is essential to ensuring
that they are optimized for performance and recoverability. The database
manager provides automatic maintenance capabilities for performing database
backups, keeping statistics current and reorganizing tables and indexes as
necessary.
and runs only the required maintenance activities during the next available
maintenance window (a user-defined time period for the running of automatic
maintenance activities). Example 6-29 shows the activities that can be controlled
by the database manager’s automatic maintenance feature.
This chapter also discusses the differences in specific data formats and data
types and ways in which they can be converted from MySQL to DB2.
We also describe how user account management (user data, access rights, and
privileges) is implemented in MySQL and how this information can be ported to
implement secure database access within DB2.
Finally, the steps for how we did the data conversion in our sample project are
discussed in detail.
Database systems provide commands and tools for unloading and loading data.
In MySQL the mysqldump tool is used to unload a database. DB2 provides the
LOAD and IMPORT commands for loading data from files into the database.
You have to be aware of the differences in how specific data types are
represented by different database systems. For example, the representation of
date and time values may be different in different databases, and this often
depends on the local settings of the system.
If the source and the target databases use different formats, you must convert
the data either automatically using tools or manually. Otherwise, the loading tool
would not be able to understand the data it needs to load due to improper
formatting.
The IBM Data Movement Tool automates the use of the MySQL SELECT
statements and the DB2 LOAD commands.
For a complete description of this tool, run mysqldump --help. Some important
command line options to keep in mind are:
--no-data
This option ensures that no data is extracted from the database, just the SQL
statements for creating the tables and indexes; therefore, it is used for
extracting DDL statements only.
--no-create-info
This option ensures that no SQL statements for creating the exported table
are extracted from the database. Therefore, it is used for exporting data only.
The output file containing the data can be loaded into a DB2 table at a later
time.
--tab=<outFilePath>
This option creates a text file with the DDL (<tablename>.sql) and a tab
separated text file with the data (<tablename>.txt) in the given path for each
specified table. This option works only when the utility is run on the same
machine as the MySQL daemon. If this option is not specified, INSERT
statements for each row are created.
Example 7-1 shows the usage and output of the mysqldump command using only
the --user and --password options. The output includes DDL statements for table
creation and INSERT statements to insert data into the table.
Example 7-1 Usage of mysqldump with only the --user and --password options
mysqlServer:~ # mysqldump --user root --password inventory severity
--
-- Table structure for table `severity`
--
--
-- Dumping data for table `severity`
--
UNLOCK TABLES;
Example 7-2 shows the usage and output of the mysqldump command with the
--no-create-info option but without the --tab option. The output has only INSERT
statements for each row to insert data into the table.
Example 7-2 Usage of mysqldump with the --no-create-info but without the --tab option
mysqlServer:~# mysqldump --no-create-info --user root --password inventory severity
--
-- Dumping data for table `severity`
--
UNLOCK TABLES;
Example 7-3 shows the usage and output of the mysqldump command with the
--no-create-info and the --tab options. This outputs a file in the current directory
named <tableName>.txt that contains only the exported MySQL data. This file
can be read by the DB2 LOAD command.
Example 7-3 Usage of mysqldump with the --no-create-info and the --tab option
mysqlServer:~ # mysqldump --no-create-info --tab=. --user root --password
inventory severity
2 high-med \N 4 4
4 low-med \N 10 10
3 medium \N 7 7
5 low \N 14 12
1 high \N 1 2
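The rows above are tab separated, with \N marking NULL values. The IBM Data Movement Tool handles this for you, but if you reshape such a file by hand, one possible sketch (assuming no embedded tabs or quoting is needed) turns a row into the comma-delimited DEL form that the DB2 LOAD command reads by default:

```python
def tab_line_to_del(line: str) -> str:
    """Turn one tab-separated mysqldump row into a DEL row:
    \\N (MySQL NULL) becomes an empty field, and fields are
    joined with commas. Character data would also need quoting."""
    fields = line.rstrip("\n").split("\t")
    return ",".join("" if f == r"\N" else f for f in fields)

print(tab_line_to_del("2\thigh-med\t\\N\t4\t4"))
# 2,high-med,,4,4
```

Applied line by line to <tableName>.txt, this yields a file suitable for LOAD ... OF DEL.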
Example 7-4 shows the usage and output of mysqldump without the
--no-create-info option but with the --tab option. The output is two files: one
containing the DDL statements for table creation (<tableName>.sql), the other
containing the exported MySQL data (<tableName>.txt), both in the current
directory. The second file can be read by the DB2 LOAD command.
Example 7-4 Usage of mysqldump without the --no-create-info but with the --tab option
mysqlServer:~ # mysqldump --tab=. --user root --password inventory severity
--
-- Table structure for table `severity`
--
Example 7-5 illustrates the generated inventory.tables file for our sample
conversion.
There are a few other scripts that are generated during the DDL extraction phase
that are used to extract the data from the MySQL database and load it into DB2.
The unload script is used to unload the data from the MySQL database and store
it in <schema>_<tableName>.txt files in the <migration output directory>/data.
The db2load.sh script can then be used to load these files into the DB2
database. The db2load.sql script is executed from the db2gen.sh script which
also executes other generated .sql scripts to port the database. You can also
execute these using the GUI, which we discuss later in the chapter.
In general, the LOAD utility is faster than the IMPORT utility, because it writes
formatted pages directly into the database, while the IMPORT utility performs SQL
insert statements. The LOAD utility validates the uniqueness of the indexes, but
does not fire triggers, and does not perform referential or table constraints
checking.
See Example 7-6 for a simplified syntax diagram for the LOAD command. For a
complete syntax description, visit the Information Center at:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
The IBM Data Movement Tool uses the LOAD utility for loading the application
data into DB2. Example 7-7 shows an example of the LOAD command used to
load the data into the Severity table.
See Example 7-8 for a simplified syntax diagram for the IMPORT command. For a
complete syntax description, visit the Information Center at:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
Binary large objects (the BLOB data type) usually contain binary data, which
cannot be exported into text files. If your BLOBs contain binary data, you
must convert them in a different way than by exporting and loading. The IBM
Data Movement Tool handles the conversion of BLOB data for you.
The DB2 LOAD command lets you specify the file-type-modifier clause
TIMESTAMPFORMAT, which determines the formatting of the TIMESTAMP
values. If you want to import MySQL TIMESTAMP values, you must change the
LOAD command in the deploy.sh script accordingly.
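A hedged sketch of such a modified LOAD command follows; the file name, table name, and format string are illustrative only:

```sql
-- Sketch: a TIMESTAMPFORMAT modifier matching MySQL's default
-- 'YYYY-MM-DD HH:MM:SS' timestamp representation.
LOAD FROM "/tmp/services.txt" OF DEL
  MODIFIED BY TIMESTAMPFORMAT="YYYY-MM-DD HH:MM:SS"
  INSERT INTO ADMIN.SERVICES;
```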
If you are using the IBM Data Movement Tool, you do not have to worry about
this format; the tool takes care of it for you.
When extracting the data with the GUI tool, make sure to select No when asked
whether to recreate the conversion directory and the <database>.tables file.
Otherwise, your changes will be overwritten.
User accounts
When assessing your application, be sure to distinguish between the following
user types:
Application user accounts
These users log onto the application, but do not exist on the database level.
Database access is through the application with the application’s database
user ID. As the information about application users is usually stored in custom
application tables, the porting of application user account data is done when
porting the MySQL application data. In the sample inventory database this
would be our owners table.
Database user accounts
Database users connect directly to the MySQL database to retrieve and
manipulate data. At least one database user must exist for applications to
connect to the database. Database user accounts are created within the
MySQL server and allow you to grant and restrict access to portions of the
MySQL server. A database user account is associated with a host name and a
user name. The user account information is stored in the mysql.user table
and must be ported in the data conversion step. Access rights and privileges
for these users are stored in the mysql.db, mysql.host, mysql.tables_priv,
mysql.columns_priv, and mysql.procs_priv tables.
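To see which accounts must be considered for porting, you can list them directly; this is a hedged sketch, run as a MySQL administrative user:

```sql
-- List the database user accounts and the hosts they may connect from.
SELECT user, host FROM mysql.user;
```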
Passwords
Database users have associated passwords, which are stored encrypted in the
mysql.user table. Encrypted passwords cannot be ported and must be reset on
the new system. The password of the database user, which is used by an
application to access the database, is typically stored in a profile with restricted
rights.
Access rights
When accessing a MySQL database there are two levels of access control.
When you first connect to a MySQL server, you provide a user name and a
password, and the host you connect from is checked as well. This access
information is stored in the mysql.user table in the fields user, password,
and host. The MySQL wildcard % is often used in the host field to specify
that a user can connect from any host. The wildcard underscore (_) is also
sometimes used to match single characters.
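The % wildcard in the host field behaves much like a shell glob. The following analogy is a sketch only (the host name is fabricated); it mimics matching a connecting host against a pattern such as %.ibm.com:

```shell
# Hedged analogy: MySQL's % host wildcard acts like the shell's * glob.
host="remoteHost.ibm.com"
case "$host" in
  *.ibm.com) echo "host matches %.ibm.com" ;;
  *)         echo "no match" ;;
esac
```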
In Example 7-10 the user user1 can connect from any host, the user user2 can
connect from just the host remoteHost.ibm.com, and db2inst1 can connect from
localhost, myServer and 127.0.0.1.
Refer to the MySQL Reference Manual for a complete description of the MySQL
connection verification at:
http://dev.mysql.com/doc/refman/5.1/en/index.html
You can also find information on how the entries in the mysql.user table are
ordered when the provided connection data matches more than one entry.
Privileges
The second level of access control occurs once a connection to the MySQL
database is established. Each time a command is run against the database,
MySQL checks whether the connected user has sufficient privileges to run
that command.
Privileges exist for selecting, inserting, updating, and deleting data; for
creating, altering, and dropping database objects; and for other operations
performed at the database level.
Privileges can be granted to users with the MySQL GRANT command; they can be
revoked with the REVOKE command.
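For example, a hedged sketch of granting and revoking a table privilege in MySQL (the user and table names are illustrative):

```sql
-- Grant and then revoke a table privilege for a MySQL account.
GRANT SELECT, INSERT ON inventory.severity TO 'user1'@'localhost';
REVOKE INSERT ON inventory.severity FROM 'user1'@'localhost';
```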
For more information about MySQL privileges, see the MySQL Reference
Manual, at:
http://dev.mysql.com/doc/refman/5.1/en/
User accounts
Creating a user for DB2 means creating a user in the server's operating
system, assigning the user to a group, and granting specific database
privileges to the user or group.
On Linux systems you must have root access to the system to create groups and
users. Group information is stored in the file /etc/group, user information in the
file /etc/passwd.
For example, if you want to create a new group db2app1 with one user db2usr1
to access a specific DB2 table, the necessary steps are:
1. Log on to the Linux system with root privileges.
2. Create the group. Make sure that the provided group name does not already
exist and ensure that it is no longer than eight characters:
groupadd [-g 995] db2app1
3. Create the user and assign it to the previously created group. Make sure that
the ID for the user does not already exist and is not longer than eight
characters:
useradd [-u 1001] -g db2app1 -m -d /home/db2usr1 db2usr1 [-p passwd1]
If the user is going to access the DB2 database locally, then continue with the
next two steps:
4. Edit the profile of the created user:
vi /home/db2usr1/.profile
5. Add the following line to the profile. Be sure to specify the path of your
DB2 instance owner's home directory and to include a blank between the dot
and the command:
. /home/db2inst1/sqllib/db2profile
Passwords
The passwords used for DB2 are the operating system passwords of the users. To
set a password on Linux, use the passwd <username> command as the root user.
Access rights
The first component in the DB2 security model is authentication. Access to DB2
databases is restricted to users that exist on the DB2 system. When connecting
to a DB2 database you have to provide a user name and password that is valid
against the server's system. Authentication can occur at the DB2 server or DB2
client using operating system authentication, Kerberos or an external security
manager.
The second component is authorization: authorities and privileges, which are
granted on an individual or group basis. Together, these control access to the
database manager and its database objects. Users can access only those objects
for which they have the appropriate authorization, that is, the required
privilege or authority.
Administrative authority
A user or group can have one or more of the following administrative authorities:
System-level authorization
The system-level authorities provide varying degrees of control over
instance-level functions:
SYSADM authority
The SYSADM (system administrator) authority provides control over all the
resources created and maintained by the database manager. The system
administrator also possesses the SYSCTRL, SYSMAINT, and SYSMON
authorities. The user who has SYSADM authority is responsible for
both controlling the database manager and ensuring the safety and integrity
of the data.
SYSCTRL authority
The SYSCTRL authority provides control over operations that affect system
resources. For example, a user with SYSCTRL authority can create, update,
start, stop, or drop a database. This user can also start or stop an instance,
but cannot access table data. Users with SYSCTRL authority also have
SYSMON authority.
SYSMAINT authority
The SYSMAINT authority controls maintenance operations on all databases
associated with an instance. A user with SYSMAINT authority can update the
database configuration, back up a database or table space, restore an existing
database, and monitor a database. Like SYSCTRL, SYSMAINT does not
provide access to table data. Users with SYSMAINT authority also have
SYSMON authority.
SYSMON authority
The SYSMON (system monitor) authority controls the usage of the database
system monitor.
Figure 7-1 illustrates the instance level authorities that can be granted to a user
or role.
Database-level authorization
The database level authorities provide control within a database:
DBADM (database administrator)
The DBADM authority level provides administrative authority over a single
database. This database administrator possesses the privileges required to
create objects and issue database commands.
The DBADM authority can only be granted by a user with SECADM authority
and cannot be granted to PUBLIC.
SECADM (security administrator)
The SECADM authority level manages security-related objects within a
database, such as roles, audit policies, security labels, and trusted
contexts, and controls the granting and revoking of database authorities
and privileges.
Figure 7-2 on page 185 illustrates the database level authorities that can be
granted to a user or role.
Privileges
A privilege is a permission to perform an action or task. Authorized users can
create objects, have access to objects they own, and can pass on privileges on
their own objects to other users by using the GRANT statement.
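A hedged sketch of passing on a privilege in DB2 (the table and user names are illustrative):

```sql
-- The owner of ADMIN.SEVERITY passes SELECT on to another user,
-- allowing that user to grant the privilege further in turn.
GRANT SELECT ON TABLE ADMIN.SEVERITY TO USER db2usr1 WITH GRANT OPTION;
```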
Implicit
When a user creates an object, the user may implicitly receive privileges on
it. For example, the creator of a view is implicitly granted the CONTROL
privilege on the view only if the creator holds the CONTROL privilege on all
tables, views, and nicknames identified in the fullselect.
Indirect
When a user has the privilege to execute a package or routine, they do not
necessarily require specific access privileges on the objects handled in the
package or routine. If the package or routine contains static SQL or XQuery
statements, the privileges of the owner of the package are used for those
statements. If the package or routine contains dynamic SQL or XQuery
statements, the authorization ID used for privilege checking depends on the
setting of the DYNAMICRULES BIND option of the package issuing the
dynamic query statements, and whether those statements are issued when
the package is being used in the context of a routine.
A user or group can be authorized for any combination of individual privileges
or authorities. When a privilege is associated with a resource, that resource
must exist. For example, a user cannot be given the SELECT privilege on a
table unless that table has previously been created.
Note: Care must be given to granting authorities and privileges to user names
that do not exist in the system yet. At some later time, this user name can be
created and automatically receive all of the authorities and privileges
previously granted.
Schema level
– CREATEIN privilege
– ALTERIN privilege
– DROPIN privilege
Table space level
– USE privilege
Table and view level
– CONTROL privilege
– SELECT privilege
– INSERT privilege
– UPDATE privilege
– DELETE privilege
– INDEX privilege
– ALTER privilege
– REFERENCES privilege
– ALL PRIVILEGES privilege
Row or column level
– SECURITY LABEL privilege
– LBAC Rule Exemption privileges
Other privileges
– Package privileges
– Index level
– Procedure, function and method privileges
– Sequence privileges
LBAC controls access to table objects by attaching security labels to rows and
columns. Users attempting to access an object must have a certain security label
granted to them. Only users who have the proper labels when accessing the row
are allowed to retrieve the data. Other users do not receive any indication that
they could not retrieve the row. This form of security is different from normal
SELECT privileges, where users who attempt to access a table that they are not
allowed to access receive an SQL error message.
All privileges and labels can be granted to users or groups with the GRANT
command; they can be revoked using the REVOKE command.
Note: Catalogs contain, among other things, statistics about data distribution
in a table, which are needed by the query optimizer to determine the best way
to execute a query. Users may indirectly gain knowledge about some data
values by accessing the catalogs. When using Label-Based Access Control
(LBAC) the new RESTRICT_ACCESS option can be added on the CREATE
DATABASE command to create a database where access to the catalogs is
not granted to PUBLIC. Access to the catalogs can then be granted on a
“need-to-know” basis.
Table 7-1 shows the mapping of MySQL privileges to DB2 privileges, assuming
that different MySQL databases are mapped to different DB2 schemas. For
example, an INSERT privilege granted in MySQL on the global level means that
you have to grant the INSERT privilege on all existing tables in the DB2 database
to the specified user. If you create a new table in DB2, you have to grant the
INSERT privilege on this table to the user.
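For instance, a hedged sketch of mapping a MySQL global INSERT privilege (the schema, table, and user names are illustrative; repeat the grant for every table in the database):

```sql
-- A MySQL global-level INSERT maps to per-table grants in DB2.
GRANT INSERT ON TABLE ADMIN.SEVERITY TO USER db2usr1;
GRANT INSERT ON TABLE ADMIN.INVENTORY TO USER db2usr1;
```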
[Table 7-1 fragment: at both the database and table levels, the corresponding
MySQL privilege maps to SELECT on SYSCAT.VIEWS in DB2.]
Example 7-11 shows retrieving users with access to the sample project
database.
Example 7-11 Retrieve users with access to the sample project database
>mysqlaccess % % inventory -b -U root -P
mysqlaccess Version 2.06, 20 Dec 2000
……
Sele Inse Upda Dele Crea Drop Relo Shut Proc File Gran Refe Inde Alte Show Supe Crea
Lock Exec Repl Repl Crea Show Crea Alte Crea Even Trig Ssl_ Ssl_ X509 X509 Max_ Max_
Max_ Max_ | Host,User,DB
---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
---- ---- + --------------------
N N N N N N N N N N N N N N N N N N N N N N N N N N N N ? ? ? ? 0 0 0 0 |
%,root,inventory
Y Y Y Y Y Y N N N N N Y Y Y Y N Y Y Y Y Y Y Y Y Y Y Y Y ? ? ? ? 0 0 0 0 |
%,inventAppUser,inventory
N N N N N N N N N N N N N N N N N N N N N N N N N N N N ? ? ? ? 0 0 0 0 |
%,user01,inventory
N N N N N N N N N N N N N N N N N N N N N N N N N N N N ? ? ? ? 0 0 0 0 |
%,user02,inventory
N N N N N N N N N N N N N N N N N N N N N N N N N N N N ? ? ? ? 0 0 0 0 |
%,ANY_NEW_USER,inventory
Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y ? ? ? ? 0 0 0 0 |
localhost,root,inventory
Y Y Y Y Y Y N N N N N Y Y Y Y N Y Y Y Y Y Y Y Y Y Y Y Y ? ? ? ? 0 0 0 0 |
localhost,inventAppUser,inventory
N N N N N N N N N N N N N N N N N N N N N N N N N N N N ? ? ? ? 0 0 0 0 |
localhost,user01,inventory
N N N N N N N N N N N N N N N N N N N N N N N N N N N N ? ? ? ? 0 0 0 0 |
localhost,user02,inventory
N N N N N N N N N N N N N N N N N N N N N N N N N N N N ? ? ? ? 0 0 0 0 |
localhost,ANY_NEW_USER,inventory
The user accounts user01, user02, and ANY_NEW_USER (which is similar to the
logical DB2 group PUBLIC) do not have any privileges on our sample database,
so we do not need to port these users.
The only remaining user is inventAppUser, which is the account our
application uses to connect to the MySQL database and manipulate data.
The user's privileges are set to SELECT, INSERT, UPDATE, DELETE,
CREATE, and DROP at the database level for inventory, so we map these
privileges to SELECT, INSERT, UPDATE, and DELETE on all tables in the DB2
schema inventory, and to CREATEIN and DROPIN on the schema.
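These mapped privileges can be sketched as DB2 GRANT statements; the DB2 user name (invapp) is hypothetical and the table-level grants are shown for one table only (repeat them for each table in the schema):

```sql
-- Table-level privileges for the application account, one table shown.
GRANT SELECT, INSERT, UPDATE, DELETE
  ON TABLE INVENTORY.SEVERITY TO USER invapp;
-- Schema-level privileges replacing MySQL's database-level CREATE and DROP.
GRANT CREATEIN, DROPIN ON SCHEMA INVENTORY TO USER invapp;
```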
#!/bin/sh
# Usage: <script> <user> <group>; the HOMEDIR and DB2DIR values are assumptions.
HOMEDIR=/home
DB2DIR=/home/db2inst1
groupadd $2
useradd -g $2 -m -d $HOMEDIR/$1 $1
passwd $1
echo '. '${DB2DIR}'/sqllib/db2profile' >> $HOMEDIR/$1/.profile
The creation of our user and group was done by the root user as shown in
Example 7-13.
The granting of the privileges was done by the instance owner db2inst1 with the
DB2 command shown in Example 7-14.
The next step is to extract the MySQL application data using the IBM Data
Movement Tool. In 7.1.1, “Data porting commands and tools” on page 168 we
discuss how to create data extraction and transfer files using the IBM Data
Movement Tool, and in 7.1.2, “Differences in data formats” on page 175 we
discuss how to modify the extraction scripts to extract special data. Now we
must execute these scripts and extract the data from the MySQL database. To do
this, open the Extract/Deploy window, clear the DDL checkbox, select the Data
Movement checkbox, and click Extract DDL/data, as shown in Figure 7-3 on
page 196. If you have made changes to the extraction scripts, be sure to
select No when asked whether to recreate the output directory or
<tableName>.tables file.
Now you can go into your project directory and check the extracted data files.
The data output files extracted from the MySQL database are located under
<migration output directory>/data. Example 7-15 shows the extracted files for
our inventory scenario.
Each of the files is tab-delimited, containing the data from the corresponding
MySQL table. This format can be read by the DB2 LOAD command.
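A quick sanity check on such a file can be sketched in the shell; the file name and sample rows below are fabricated for illustration (real files come from the unload script):

```shell
# Create a small stand-in for an extracted <schema>_<tableName>.txt file
# and verify that every row carries the same number of tab-separated fields.
printf '1\thigh\t1\t2\n5\tlow\t14\t12\n' > /tmp/admin_severity.txt
awk -F'\t' 'NR==1 {n=NF} NF!=n {bad=1} END {exit bad}' /tmp/admin_severity.txt \
  && echo "column count consistent"
```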
Example 7-16 shows the DB2 LOAD commands generated by the IBM Data
Movement Tool for our sample project.
Example 7-16 DB2 LOAD commands for loading the data into the DB2 database
CONNECT TO INVENT;
--#SET :LOAD:ADMIN:GROUPS
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_groups.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_groups.txt"
METHOD P (1,2,3,4,5) MESSAGES "/opt/ibm/IBMDataMovementTool/migr/msg/admin_groups.txt"
REPLACE INTO "ADMIN"."GROUPS"("GROUPNAME", "EDITUSER", "EDITGRANTUSERPERM", "EDITINVT",
"EDITSERVICE") --STATISTICS YES WITH DISTRIBUTION AND DETAILED INDEXES ALL
NONRECOVERABLE INDEXING MODE AUTOSELECT;
--#SET :LOAD:ADMIN:LOCATIONS
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_locations.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_locations.txt"
METHOD P (1,2,3,4) MESSAGES "/opt/ibm/IBMDataMovementTool/migr/msg/admin_locations.txt"
REPLACE INTO "ADMIN"."LOCATIONS" ("ID", "ROOMNAME", "FLOORNUM", "PASSCODE")
--STATISTICS YES WITH DISTRIBUTION AND DETAILED INDEXES ALL NONRECOVERABLE
INDEXING MODE AUTOSELECT ;
--#SET :LOAD:ADMIN:SEVERITY
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_severity.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_severity.txt"
METHOD P (1,2,3,4,5) MESSAGES "/opt/ibm/IBMDataMovementTool/migr/msg/admin_severity.txt"
REPLACE INTO "ADMIN"."SEVERITY" ("ID", "TITLE", "NOTES", "ESTDAYS", "AVGDAYS"
)--STATISTICS YES WITH DISTRIBUTION AND DETAILED INDEXES ALL NONRECOVERABLE INDEXING
MODE AUTOSELECT;
--#SET :LOAD:ADMIN:OWNERS
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_owners.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_owners.txt"
METHOD P (1,2,3,4,5,6,7,8,9,10,11) MESSAGES
"/opt/ibm/IBMDataMovementTool/migr/msg/admin_owners.txt" REPLACE INTO "ADMIN"."OWNERS"
("ID", "FIRSTNAME", "LASTNAME", "EMAIL", "LOCID", "CUBENUM",
"PHONENUM", "LOGINNAME", "PASSWORD", "FAXNUM", "GROUPS" ) --STATISTICS YES WITH
DISTRIBUTION AND DETAILED INDEXES ALL NONRECOVERABLE INDEXING MODE AUTOSELECT;
--#SET :LOAD:ADMIN:INVENTORY
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_inventory.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_inventory.txt"
METHOD P (1,2,3,4,5,6,7,8) MESSAGES
"/opt/ibm/IBMDataMovementTool/migr/msg/admin_inventory.txt"
REPLACE INTO "ADMIN"."INVENTORY"("ID", "ITEMNAME", "MANUFACTURER", "MODEL", "YEAR",
"SERIAL", "LOCID", "OWNERID")--STATISTICS YES WITH DISTRIBUTION AND DETAILED INDEXES ALL
NONRECOVERABLE INDEXING MODE AUTOSELECT ;
--#SET :LOAD:ADMIN:SERVICES
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_services.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_services.txt"
METHOD P (1,2,3,4,5,6,7,8,9) MESSAGES
"/opt/ibm/IBMDataMovementTool/migr/msg/admin_services.txt" REPLACE INTO
"ADMIN"."SERVICES" ("ID", "INVENTID", "DESCRIPTION", "SEVERITY", "SERVICEOWNER",
"OPENDATE", "CLOSEDATE", "TARGETCLOSEDATE", "STATUS" )--STATISTICS YES WITH DISTRIBUTION
AND DETAILED INDEXES ALL NONRECOVERABLE INDEXING MODE AUTOSELECT;
--#SET :LOAD:ADMIN:STATUS
LOAD FROM
"/opt/ibm/IBMDataMovementTool/migr/data/admin_status.txt" OF DEL
MODIFIED BY CODEPAGE=1208 COLDEL~ ANYORDER USEDEFAULTS CHARDEL"" DELPRIORITYCHAR
NOROWWARNINGS --DUMPFILE="/opt/ibm/IBMDataMovementTool/migr/dump/admin_status.txt"
METHOD P (1,2,3) MESSAGES "/opt/ibm/IBMDataMovementTool/migr/msg/admin_status.txt"
REPLACE INTO "ADMIN"."STATUS" ("ID", "TITLE", "NOTES" )--STATISTICS YES WITH
DISTRIBUTION AND DETAILED INDEXES ALL NONRECOVERABLE INDEXING MODE AUTOSELECT;
TERMINATE;
You can execute this load in the GUI as shown in Figure 7-3 on page 196, or
from the command line by running the db2load.sh script. If you have made
changes to the extraction scripts, be sure to select No when asked whether to
recreate the output directory or <tableName>.tables file; otherwise your
changes will be overwritten.
After importing the data into the DB2 tables, the RUNSTATS command should be
executed to collect statistics on the tables and indexes. The statistics
information is used by the query optimizer. The IBM Data Movement
Tool generates a custom RUNSTATS script for the new database called
db2runstats.sql. Example 7-17 shows the db2runstats.sql script for our sample
conversion. You can run this in the IBM Data Movement Tool GUI or command
line.
Example 7-17 DB2 RUNSTATS commands for recreating the statistics information
CONNECT TO INVENT;
RUNSTATS ON TABLE "ADMIN"."GROUPS" ON ALL COLUMNS WITH DISTRIBUTION
ON ALL COLUMNS AND DETAILED INDEXES ALL ALLOW WRITE ACCESS ;
Example 7-18 Log file information about the DB2 LOAD command
Number of rows read = 140
Number of rows skipped = 0
Number of rows loaded = 140
Number of rows rejected = 0
TERMINATE
DB20000I The TERMINATE command completed successfully.
CONNECT TO INVENT
ADMIN.GROUPS
---------------------------------
6.
1 record(s) selected.
ADMIN.LOCATIONS
---------------------------------
140.
1 record(s) selected.
ADMIN.SEVERITY
---------------------------------
5.
1 record(s) selected.
ADMIN.OWNERS
---------------------------------
502.
1 record(s) selected.
ADMIN.INVENTORY
---------------------------------
703.
1 record(s) selected.
ADMIN.SERVICES
---------------------------------
808.
1 record(s) selected.
ADMIN.STATUS
---------------------------------
7.
1 record(s) selected.
TERMINATE
DB20000I The TERMINATE command completed successfully.
Make sure that the number of rows read equals the number of rows committed.
This should also equal the number of records in the MySQL source table.
Example 7-19 shows the MySQL command for the record count.
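The example itself is not reproduced here; a hedged sketch of such a count on the MySQL side, using the severity table from our sample database:

```sql
-- Record count on the MySQL side, to compare with the DB2 load summary.
SELECT COUNT(*) FROM severity;
```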
After you have checked that all the records were loaded into DB2, you should
spot-check the data in each ported table. Pay attention to ensure that the
values are correct, especially date, time, and decimal values.
Example 7-20 shows the table content checking.
The STRAIGHT_JOIN keyword forces the MySQL optimizer to join tables in the
order specified. In DB2 the join order is always determined by the optimizer. The
optimizer choices can be limited by changing the default query optimization class
to a lower level using SET CURRENT QUERY OPTIMIZATION. This, however,
does not guarantee that the optimizer will evaluate the join in the order stated
within the SQL statement since the DB2 cost-based optimizer usually chooses
the best access path for a given query. For additional information see 10.5.5,
“SQL execution plan” on page 365.
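A hedged sketch of lowering the optimization class for one session; the class value 0 and the query are illustrative only:

```sql
-- Restrict the optimizer to minimal join enumeration for this session.
SET CURRENT QUERY OPTIMIZATION 0;
SELECT * FROM ADMIN.INVENTORY i, ADMIN.OWNERS o WHERE i.OWNERID = o.ID;
```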
Options prefixed with SQL are MySQL specific and do not require a DB2
equivalent.
DB2 has a similar operator to guide SQL optimizer decisions with a different
syntax as shown in Example 8-3.
A NATURAL join, as its name implies, can be invoked when two or more tables
share exactly the same columns needed for a successful equijoin. It is
semantically equivalent to DB2 INNER JOIN or LEFT OUTER JOIN with the
respective join criteria specified in the ON clause.
According to the ANSI SQL standard, when you join tables that naturally share
more than one column, the JOIN ... USING syntax must be used. An equivalent
join can be composed with the DB2 supported join syntax by listing the join
criteria in the ON clause.
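For example, a hedged sketch of converting a two-column USING join; the table and column names are illustrative:

```sql
-- MySQL/ANSI form:
--   SELECT * FROM t1 JOIN t2 USING (a, b);
-- Equivalent DB2 form with explicit ON criteria:
SELECT * FROM t1 INNER JOIN t2 ON t1.a = t2.a AND t1.b = t2.b;
```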
Table 8-1 fragment, restated: MySQL aggregate and grouping constructs and
their DB2 equivalents:
– AVG([DISTINCT] expression). MySQL: SELECT a, AVG(b) FROM t1 GROUP BY a.
DB2 uses AVG([DISTINCT | ALL] expression): db2 "SELECT a, AVG(b) FROM t1
GROUP BY a". Returns the average of a set of numbers.
– SUM([DISTINCT] expression). MySQL: SELECT a, SUM(b) FROM t1 GROUP BY a.
DB2 uses SUM([DISTINCT | ALL] expression): db2 "SELECT a, SUM(b) FROM t1
GROUP BY a". Returns the sum of a set of numbers.
– GROUP BY on an alias. MySQL: SELECT a AS x FROM t1 GROUP BY x. In DB2, use
the column name for grouping: db2 "SELECT a FROM t1 GROUP BY a".
– GROUP BY on a position. MySQL: SELECT a FROM t1 GROUP BY 1. In DB2, use
the column name for grouping: db2 "SELECT a FROM t1 GROUP BY a".
– HAVING on an alias. MySQL: SELECT a AS x FROM t1 GROUP BY a HAVING x > 0.
In DB2, use the column name in the HAVING clause: db2 "SELECT a FROM t1
GROUP BY a HAVING a > 0".
8.1.6 Strings
Unless MySQL is started in ANSI mode (using mysqld --ansi), it behaves
differently from DB2. As Example 8-7 illustrates, MySQL accepts single as well
as double quotes as string delimiters when started in default mode.
+---------+-----------+-------------+----------+
| redbook | 'redbook' | ''redbook'' | red"book |
+---------+-----------+-------------+----------+
1 row in set (0.00 sec)
DB2 is designed and implemented according to the ANSI standard and therefore
accepts single quotes as a string delimiter. Double quotes are used in DB2 for
delimiting SQL identifiers. Example 8-8 shows how DB2 handles strings. Similar
results will be achieved when MySQL runs in ANSI mode.
Table 8-2 provides an overview of a few of the MySQL string related functions,
and how these can be converted to DB2. For a full list of MySQL string functions
and the DB2 equivalent, refer to A.2, “String functions” on page 400.
Table 8-2 fragment, restated: selected MySQL string functions and their DB2
equivalents:
– ASCII(string). MySQL: SELECT ascii('a'); returns 97. DB2: db2 "VALUES
ascii('a')" also returns 97. Returns the ASCII code value.
– REPLACE(string1, string2, string3). MySQL: SELECT REPLACE('DINING', 'N',
'VID'); returns DIVIDIVIDG. DB2: db2 "VALUES REPLACE('DINING', 'N', 'VID')"
returns the same. Returns a string with all occurrences of string2 in
string1 replaced with string3.
– TRIM([BOTH | LEADING | TRAILING] [substring FROM] string). MySQL: select
trim(trailing from trim(LEADING FROM ' abc ')) as OUTPUT; returns abc. DB2:
db2 "VALUES trim(trailing from trim(LEADING FROM ' abc '))" returns the
same. Removes blanks or occurrences of another specified character from the
end or the beginning of a string expression.
Prior to Version 9.7, strong typing was used during comparisons and
assignments. Strong typing requires matching data types, which means that you
must explicitly convert one or both data types to a common data type prior to
performing comparisons or assignments.
In Version 9.7, the rules used during comparisons and assignments have been
relaxed. If reasonable interpretation can be made between two mismatched data
types, implicit casting is used to perform comparisons or assignments. Implicit
casting is also supported during function resolution. When the data types of the
arguments of a function being invoked cannot be promoted to the data types of
the parameters of the selected function, the data types of the arguments are
implicitly cast to those of the parameters.
Implicit casting reduces the number of SQL statements that you must modify
when enabling applications that run on data servers other than DB2 data servers
to run on DB2 9.7. In many cases, you no longer have to explicitly cast data
types when comparing or assigning values with mismatched data types.
Example 8-9 shows how MySQL implicitly casts the character value 5 to an
integer value to resolve the query.
C1
-----------
5
1 record(s) selected.
For DB2 versions prior to 9.7 explicit casting of the character value to an integer
value is required as illustrated in Example 8-11.
SQL0401N The data types of the operands for the operation "=" are not
compatible. SQLSTATE=42818
db2 => select * from t1 where c1 = cast ('5' as int)
C1
--------
5
1 record(s) selected.
Example 8-12 illustrates how MySQL implicitly casts numeric values and DATE,
TIME, or TIMESTAMP values to strings when concatenated.
Example 8-12 MySQL implicit casting using concatenation for strings and DATE
mysql> select concat('ITSOSJ',1234) as stringcol from t1;
+------------+
| stringcol  |
+------------+
| ITSOSJ1234 |
+------------+
1 row in set (0.02 sec)
Example 8-13 illustrates how DB2 9.7 implicitly casts numeric values, as well
as DATE, TIME, or TIMESTAMP values, to strings when concatenated.
Example 8-13 DB2 9.7 casting character strings and DATE implicitly
db2 => select concat('ITSOSJ',1234) from t1
1
----------
ITSOSJ1234
1 record(s) selected.
STRINGDATE
----------------
ITSOSJ08/31/2009
1 record(s) selected.
DB2 9.5 and earlier require compatible arguments for the concatenation
built-in functions, as shown in Example 8-14. If the arguments are not
compatible, for example, calling a function that expects a character data type
argument with a numeric data type, the concatenation fails with the error
message SQL0440N.
Example 8-14 DB2 9.5 and prior - casting character strings and DATE explicitly
db2 => select concat('ITSOSJ',1234) from t1
SQL0440N No authorized routine named "CONCAT" of type "FUNCTION" having
compatible arguments was found. SQLSTATE=42884
db2 => select concat('ITSOSJ','1234') as stringcol from t1
STRINGCOL
-----------
ITSOSJ1234
1 record(s) selected.
STRINGDATE
------------------------
ITSOSJ01/23/2004
1 record(s) selected.
+---------------------+
| This is an example. |
+---------------------+
1 row in set (0.00 sec)
DB2 follows the ANSI92 standard for concatenation of multiple strings. DB2 also
has a CONCAT(string1, string2) which can be used for concatenation of two
strings. Example 8-16 shows how DB2 handles concatenating strings.
1
-------------------
This is an example.
1 record(s) selected.
db2 => VALUES ('This ' || 'is ' || 'an ' || 'example.')
1
-------------------
This is an example.
1 record(s) selected.
The ANSI-92 standard states that if you concatenate a NULL value onto an
existing string, the result is NULL. Example 8-17 shows the behavior of
MySQL.
+----------------------------------+
| nullstring |
+----------------------------------+
| abc |
+----------------------------------+
1 row in set (0.00 sec)
As shown, MySQL behaves in an ANSI-92 compliant manner, and therefore gives
you the same result sets as DB2, shown in Example 8-18.
NULLSTRING
-----
-
1 record(s) selected.
NULLSTRING
-----
abc
1 record(s) selected.
+------+
2 rows in set (0.02 sec)
mysql> rollback ;
ERROR 1196: Warning: Some non-transactional changed tables couldn't be rolled
back
mysql> select * from t1;
Empty set (0.00 sec)
The TRUNCATE option is primarily used to quickly delete all records from a table
when no recovery of the deleted rows is required. As of DB2 9.5 you can enable
the support of the TRUNCATE statement using the
DB2_COMPATIBILITY_VECTOR registry variable.
These features ease the task of converting applications written for other
relational database vendors to DB2 Version 9.5 or later. This DB2 registry
variable is represented as a hexadecimal value, and each bit in the variable
enables one of the DB2 compatibility features. To enable the TRUNCATE
statement set DB2_COMPATIBILITY_VECTOR registry variable to 8.
Example 8-20 shows the syntax to set the DB2_COMPATIBILITY_VECTOR registry variable and execute the TRUNCATE command. More information on the DB2_COMPATIBILITY_VECTOR registry variable can be found in the DB2 Information Center at:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
db2inst1@db2rules:~> db2set
DB2_COMPATIBILITY_VECTOR=8
DB2RSHCMD=/usr/bin/ssh
DB2COMM=tcpip
db2inst1@db2rules:~> db2start
C1
-----------
5
10
15
20
4 record(s) selected.
C1
-----------
0 record(s) selected.
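Because DB2_COMPATIBILITY_VECTOR is a hexadecimal bit mask, enabling a feature amounts to setting its bit. The Java sketch below illustrates only the bit arithmetic; the value 8 for TRUNCATE comes from the text above, and any other feature-to-bit assignments are outside the scope of this example:

```java
public class CompatVector {
    // Bit assigned to the TRUNCATE feature, per the registry value 8 used above.
    static final int TRUNCATE_BIT = 0x08;

    static boolean truncateEnabled(int vector) {
        return (vector & TRUNCATE_BIT) != 0;
    }

    public static void main(String[] args) {
        // Value from "db2set DB2_COMPATIBILITY_VECTOR=8", parsed as hexadecimal.
        int vector = Integer.parseInt("8", 16);
        System.out.println(truncateEnabled(vector)); // true
    }
}
```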
You may also turn off logging with the following ALTER TABLE statement to achieve a similar behavior.
ALTER TABLE <tablename> ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE
The DBI interface can also be used to access DB2 using the DBD::DB2 driver.
Information about the DBI interface and the DBD::DB2 driver, as well as
installation instructions can be found at the following Web sites:
http://www.ibm.com/software/data/db2/perl/
http://search.cpan.org/~ibmtordb2/DBD-DB2-1.74/DB2.pod
http://www.perl.com/CPAN/modules/by-module/DBD/
http://aspn.activestate.com/ASPN/Modules/
Figure 8-1 on page 224 illustrates how the Perl interfaces and pluggable drivers connect to MySQL and DB2 databases.
Connecting to a database
Use the following code within your Perl application to connect to a MySQL
database:
$dsn = "dbi:mysql:$database:$host:$port";
$dbh = DBI->connect($dsn, $user, $password);
The connection statement consists of the data source name, user ID, and password. The data source name consists of the vendor-specific database driver, the database name, the host name, and the port. Optionally, the host name and port can be left out of the data source name. Example 8-21 on page 225 shows the connect syntax for a MySQL database. For simplicity, error handling is not included in the following examples.
my $dsn = "DBI:mysql:database=$database;host=$host;port=$port";
$dbh = DBI->connect("$dsn","$user","$password");
The DB2 connect statement may additionally take a fourth argument, \%attr, a hash reference containing the connection attributes, as shown in Example 8-23.
$dbh->disconnect;
The following sections cover porting PHP applications from MySQL to DB2. For more information about developing PHP applications with DB2, see Developing PHP Applications for IBM Data Servers, SG24-7218.
If you are using either the mysql or mysqli functions to access your MySQL database from PHP, it is relatively straightforward to convert your application to use the ibm_db2 interface. You may also decide to perform a more complex code rewrite and use PDO instead.
Connecting to a database
Connecting to a MySQL database with the mysql extension consists of two parts: first a connection to the MySQL server is established, and then a database is chosen.
The function used to connect to the MySQL server has the following declaration:
resource mysql_connect ( [string server [, string username [, string
password [, bool new_link [, int client_flags]]]]])
The server variable in the mysql_connect() function contains the host name or
the IP address of the server that is hosting the MySQL database.
Example 8-26 shows the connection part of our sample application using MySQL
database.
As with mysqli, connecting to a DB2 database with ibm_db2 is done with a single db2_connect() call.
Example 8-28 shows the converted connection part of the sample application. In our connection script, the command after the connection statement sets the current schema used when querying the database. This step is needed because, unlike DB2, MySQL has neither schemas nor instances.
Note: The DSN is the database name which is registered in the DB2 catalog.
Because this database is cataloged, no server information has to be declared
in the connect statement.
$sqlResults = mysql_query($sql)
or die ("Couldn't execute query. " . $sql . "\n");
$result = mysql_fetch_array($sqlResults);
$username = $result[7];
You can map the mysqli_query() function directly to the db2_exec() function, and db2_exec() also corresponds closely to the mysql_query() function. The difference is that db2_exec() requires two parameters: the connection resource returned by the db2_connect() statement and the SQL statement:
resource db2_exec ( resource $connection , string $statement [, array
$options ] )
In Example 8-32, Example 8-33, and Example 8-34, the difference between the
MySQL and ibm_db2 functions for the INSERT statement is shown.
mysql_close($conn);
mysqli_close($conn);
db2_close($conn);
The three functions above perform the same task of disconnecting from the database, and all return true on success and false on failure. The return value of the mysql_close() function is rarely used (see the note above), so conversions can mostly be done by simply replacing the function or inserting the new function at the end of program execution.
mysqli_stmt_prepare     db2_prepare
mysqli_stmt_execute     db2_execute
mysql_field_name        db2_field_name
Connecting to a database
The syntax for connecting to a database using PDO is as follows:
$conn = new PDO( string $dsn [, string $username [, string $password [,
array $driver_options ]]] )
Example 8-38 and Example 8-39 show the difference between the syntax to
connect to a MySQL and DB2 database using PDO.
Note: The DSN is the database name, which is registered in the DB2 catalog.
Since this database is cataloged, no server information has to be declared in
the connect statement.
$conn = null;
Using the Unified ODBC support in PHP applications does not require loading special library files, because the support is integrated during the compilation of PHP. A complete overview of the MySQL and the Unified ODBC functions can be found in the PHP manual, available at:
http://www.php.net/docs.php
When we talk about ODBC in this section, we always mean Unified ODBC and refer to the native DB2 driver. The wide similarity between the syntax of Unified ODBC and other ODBC implementations, together with the performance advantages of the Unified ODBC support, is the reason the application conversion is discussed using the Unified ODBC support.
Connecting to a database
When connecting to a DB2 database with ODBC, the connection is done in a
single ODBC command (odbc_connect()).
resource odbc_connect ( string dsn, string user, string password [, int
cursor_type])
Example 8-42 shows how to connect to our sample application database using the odbc_connect() command.
Note: The DSN is the database name, which is registered in the DB2 catalog.
Since this database is cataloged, no server information has to be declared in
the connect statement.
$conn = odbc_connect($database,$user,$password)
or die("Could not connect ". odbc_errormsg());
Example 8-44 illustrates the ODBC functions for the INSERT statement. Refer to
Example 8-32 on page 232 and Example 8-33 on page 232 for the MySQL
INSERT statements.
if($updateOutput){
$textOutput = "Service Ticket updated. \n";
}else{
$textOutput = "Service Ticket update failed. \n";
}
echo $textOutput . "\n";
Example 8-45 shows the disconnect function using the ODBC library. Refer to
Example 8-28 for the MySQL disconnect function.
odbc_close($conn);
There are two approaches to connecting to a MySQL database from Ruby:
MySQL/Ruby API
This API is built on top of the MySQL C API.
Ruby/MySQL API
This API is written entirely in Ruby.
The two APIs provide the same functions as the MySQL C API.
Collectively known as the IBM_DB gem, the IBM_DB Ruby driver and Rails adapter allow Ruby applications to access the following database management systems:
DB2 Version 9 for Linux, UNIX, and Windows
DB2 Universal Database (DB2 UDB) Version 8 for Linux, UNIX, and Windows
DB2 UDB Version 5, Release 1 (and later) for AS/400® and iSeries, through
DB2 Connect
DB2 for z/OS, Version 8 and Version 9, through DB2 Connect
Connecting to a database
Connecting to a MySQL database with the MySQL/Ruby API consists of two parts: first a connection to the MySQL server is established using the connect() function, and then a database is chosen using the select_db() function.
conn = Mysql.init()
conn.connect('localhost', 'user', 'password')
conn.select_db('test')
The corresponding IBM_DB Ruby query is shown in Example 8-49 on page 240. The MySQL query() function can be mapped directly to the DB2 exec() function. Both functions have similar functionality. The difference is that exec() requires two parameters: the connection resource returned by the connect() statement and the SQL statement:
resource IBM_DB::exec ( resource connection, string statement [, array
options] )
Example 8-50 and Example 8-51 show the disconnect functions using the
MySQL/Ruby API and the IBM_DB API functions.
For more information on developing DB2 applications with Ruby on Rails, see the Developing Perl, PHP, Python, and Ruby on Rails Applications manual, available at:
http://www.ibm.com/support/docview.wss?rs=71&uid=swg27015148
DB2 provides the IBM Data Server Driver for JDBC and SQLJ. This is a single driver that supports the most demanding Java applications, and it can be used in either type 4 or type 2 mode. This section provides an overview of JDBC, SQLJ, and the conversion of existing MySQL Java applications to DB2.
A JDBC application can establish a connection to a data source using the JDBC DriverManager interface. In the following sections we discuss the changes required within the code of a Java application when converting from MySQL to DB2.
In order to use the DB2 JDBC type 2 driver you need the following properties:
drivername="COM.ibm.db2.jdbc.app.DB2Driver"
URL="jdbc:db2:dbname"
The user ID and password are implicitly selected according to the DB2 client
setup.
Note: The DB2 JDBC Type 2 Driver for Linux, UNIX and Windows will not
be supported in future releases. You should therefore consider switching to
the IBM Data Server Driver for JDBC and SQLJ described below.
In this section we provide you with information on Java program conversion from
MySQL to DB2.
Connecting to a database
In this part, the Java program tries to establish a connection to the given database. This is done by calling DriverManager.getConnection with the proper URL values, as discussed in the driver description in “IBM JDBC driver for DB2” on page 242. After this call, DriverManager selects the appropriate driver from the set of registered drivers in order to connect to the database. Example 8-52 and Example 8-53 show these steps for MySQL and DB2 respectively.
//...
}
}
}
}
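As a sketch of the URLs involved (the host, port, and database names below are placeholders, and no driver is actually loaded), a type 4 URL for the IBM Data Server Driver for JDBC and SQLJ has the form jdbc:db2://server:port/database, while the legacy type 2 form is jdbc:db2:database:

```java
public class Db2Urls {
    // Type 4 (network) URL form for the IBM Data Server Driver for JDBC and SQLJ.
    static String type4Url(String host, int port, String db) {
        return "jdbc:db2://" + host + ":" + port + "/" + db;
    }

    // Legacy type 2 URL form; the database must be cataloged locally.
    static String type2Url(String db) {
        return "jdbc:db2:" + db;
    }

    public static void main(String[] args) {
        // Placeholder values for illustration only; 50000 is the conventional DB2 port.
        System.out.println(type4Url("db2rules", 50000, "invent")); // jdbc:db2://db2rules:50000/invent
        System.out.println(type2Url("invent"));                    // jdbc:db2:invent
        // A real program would pass the URL to DriverManager.getConnection(url, user, password).
    }
}
```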
The JDBC API does not put any restrictions on the kind of SQL statements that can be executed. It is therefore the responsibility of the application to pass SQL statements that are compatible with the database. The connection obtained in Example 8-52 and Example 8-53 can be used with one of the following three types of statements, depending upon the requirements:
Statement - simple single SQL statement
The statement can be created by using the createStatement method of the Connection. Example 8-54 shows the usage of executeQuery with the change between MySQL and DB2. It is evident that only changes to the SQL statement are required.
conn.close();
Since MySQL does not enforce strict type conversions, the Java programmer has to guard against data loss caused by round-off, overflow, or precision loss. For more details on how MySQL data types are mapped to Java data types, refer to:
http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-type-conversions.html
DB2, on the other hand, adheres to the JDBC specification and provides a default and recommended data type mapping, as shown in Table 8-4.
Apart from this, DB2 supports C/C++ for server-side programming, for creating:
Stored procedures on the DB2 server
User-defined functions (UDFs) on the DB2 server
DB2 provides precompilers for C, C++, COBOL, Fortran, and REXX to support
embedded SQL applications. Embedded SQL applications support both static
and dynamic SQL statements. Static SQL statements require information about
all SQL statements, tables, and data types used at compile time. The application
needs to be precompiled, bound, and compiled prior to execution. In contrast, dynamic SQL statements can be built and executed at runtime.
Converting applications
MySQL C API and DB2 CLI are similar in functionality and in the mechanisms used to access databases. Both use function calls to pass dynamic SQL statements and do not need to be precompiled. We recommend converting MySQL C applications to DB2 CLI. This section describes conversion changes for various levels of the application:
int main(){
if(!mysql_real_connect (
conn, /* pointer to connection handler */
NULL, /* host to connect, default localhost*/
"mysql", /* user name, default local user*/
Figure 8-4 on page 250 shows a similar task using DB2 CLI. It shows the initialization tasks, which consist of allocating and initializing the environment and connection handles, creating the connection, processing transactions, and terminating the connection and freeing the handles.
Example 8-57 shows the implementation of the task defined by the figure above.
exit(0);
}
Processing a query
A typical MySQL C API program involves three steps in query processing:
Query construction
Depending upon your requirement you can construct a null terminated string
or counted length string for the query:
char *query;
Execution of the query
For executing the query you can use mysql_real_query() for a counted length
query string or the mysql_query() for a null terminated query string.
Example 8-58 shows the processing of a query with both mysql_real_query()
and mysql_query() method calls.
Processing of the returned results
After executing the query, the final step is to process the results. Statements other than SELECT, SHOW, DESCRIBE, and EXPLAIN do not return results. For these statements, MySQL provides the mysql_affected_rows() function for accessing the number of rows affected.
If your query returns a result set, follow these steps for the result processing:
a. Generate the result set using mysql_store_result() or mysql_use_result().
b. Fetch each row using mysql_fetch_row().
c. Release the result set using mysql_free_result().
Example 8-58 shows both kinds of MySQL queries: some that return a result set and others that do not.
mysql_free_result(result);
}
}
On the other hand, DB2 CLI provides a more comprehensive set of APIs for performing similar tasks. One of the essential parts of DB2 CLI is transaction processing, which is supported by all tables in DB2. Figure 8-5 on page 253 shows the typical order of function calls in query processing.
Example 8-59 DB2 CLI prepared statement with column binding, auto commit on
SQLHANDLE hstmt; /* statement handle */
SQLCHAR firstName [TEXT_SIZE];
SQLCHAR lastName [TEXT_SIZE];
SQLCHAR email [TEXT_SIZE];
SQLINTEGER id = 501;
/* set AUTOCOMMIT on */
ret = SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_ON,
SQL_NTS);
if (ret != SQL_SUCCESS) {
/* handle error */
}
if(ret == SQL_NO_DATA_FOUND){
printf("No data found");
}
while(ret != SQL_NO_DATA_FOUND){
printf("First name: %s \n",firstName);
printf("Last name: %s \n",lastName);
Example 8-60 DB2 CLI prepare/execute in one step with SQLGetData and manual commit
SQLHANDLE hstmt; /* statement handle */
SQLCHAR firstName [TEXT_SIZE];
SQLCHAR lastName [TEXT_SIZE];
SQLCHAR email [TEXT_SIZE];
/* set AUTOCOMMIT on */
ret = SQLSetConnectAttr(hdbc,
SQL_ATTR_AUTOCOMMIT, (SQLPOINTER)SQL_AUTOCOMMIT_OFF, SQL_NTS);
if (ret != SQL_SUCCESS) {
/* handle error */
}
if(ret == SQL_NO_DATA_FOUND) {
printf("No data found");
}
int count = 0;
while(ret != SQL_NO_DATA_FOUND) {
ret=SQLFetch(hstmt);
}
ret = SQLEndTran( SQL_HANDLE_DBC, hdbc, SQL_ROLLBACK );
if (ret != SQL_SUCCESS) {
/* handle error */
return 1;
}
For applications using MySQL Connector/C++, you may want to consider converting them to DB2 CLI as well. The typical conversion process remains the same, because MySQL C and MySQL C++ applications use the same program flow. Since DB2 CLI is also based on the ODBC specification, and you can build ODBC applications without using any ODBC driver manager, the application conversion is quite easy: you simply link your application with libdb2, DB2's ODBC driver library. The DB2 CLI driver also acts as an ODBC driver when loaded by an ODBC driver manager. DB2 CLI conforms to ODBC 3.51.
Figure 8-6 on page 258 shows the MySQL driver and DB2 ODBC driver in the
ODBC scenario. This shows the simplicity of converting an application using an
ODBC driver to another driver. It also shows various components involved in the
ODBC application and how they are mapped from MySQL Connector/ODBC to
DB2 ODBC.
Figure 8-6 ODBC application conversion from MyODBC to DB2 ODBC Driver
Driver manager
The IBM Data Server CLI and ODBC Driver does not come with an ODBC driver manager. When using an ODBC application, you must ensure that an ODBC driver manager is installed and that the users of ODBC have access to it. The following ODBC driver managers can be configured to work with the IBM Data Server CLI and ODBC Driver:
– unixODBC driver manager
The unixODBC Driver Manager is an open source ODBC driver manager
supported for DB2 ODBC applications on all supported Linux and UNIX
operating systems.
– Microsoft ODBC driver manager
The Microsoft ODBC driver manager can be used for connections to remote DB2 databases over a TCP/IP network.
– DataDirect ODBC driver manager
The DataDirect ODBC driver manager for DB2 can be used for
connections to the DB2 database.
Connector/ODBC
As shown in Figure 8-6 on page 258, Connector/ODBC is no longer required; you use the DB2 ODBC driver instead.
ODBC Configuration
The ODBC Driver Manager uses two initialization files:
– /etc/unixODBC/odbcinst.ini, in which you must add the following:
[IBM DB2 ODBC DRIVER]
Driver=/home/<instance name>/sqllib/lib/db2.o
– /home/<instance name>/.odbc.ini, in which you must configure the data
source; for setting up a data source you must add the following:
In [ODBC Data Source] stanza add
invent= IBM DB2 ODBC DRIVER
Add [invent] stanza with
Driver=/home/<instance name>/sqllib/lib/libdb2.so
Description= invent DB2 ODBC Database
MySQL Server
The MySQL database server is replaced by the DB2 server; this is discussed in detail in previous chapters.
You may optionally configure the DB2 ODBC driver to modify its behavior. This can be done by changing the db2cli.ini file.
However, the values for SQLCODE are IBM-defined. To achieve the highest portability of applications, you should only build dependencies on the subset of DB2 SQLSTATEs that are defined by the ODBC Version 3 and ISO SQL/CLI specifications. Whenever you build your exception handling on IBM-supplied SQLSTATEs or SQLCODEs, the dependencies should be carefully documented.
The specifications can be found using the search words ISO/IEC and standards
9075-1, 9075-2, and 9075-3 for SQL Foundation.
For example, if your application receives SQLSTATE 23000, the DB2 description reports an integrity constraint violation, which is similar to MySQL's ER_NON_UNIQ_ERROR or ER_DUP_KEY descriptions. Hence, condition handling for both database management systems can execute almost the same code.
i = 1;
printf("-------------------------\n");
}
Note: The code snippets provided in this chapter are for illustration purposes
only. utilcli.c is sample code shipped with DB2 and can be found in the
SQLLIB/samples directory.
You can use the getErrorCode method to retrieve SQL error codes and the
getSQLState method to retrieve SQLSTATEs.
For exception handling in Java it is important to know that DB2 provides several
types of JDBC drivers with slightly different characteristics. With the DB2
Universal JDBC Driver, you can retrieve the SQLCA. For the DB2 JDBC type 2
driver for Linux, UNIX, and Windows (DB2 JDBC type 2 driver), use the standard
SQLException to retrieve SQL error information.
SQLException under the IBM Data Server Driver for JDBC and
SQLJ
As in all Java programs, error handling is done using try/catch blocks. Methods
throw exceptions when an error occurs, and the code in the catch block handles
those exceptions.
JDBC provides the SQLException class for handling errors. All JDBC methods
throw an instance of SQLException when an error occurs during their execution.
According to the JDBC specification, an SQLException object contains the
following information:
A string object that contains a description of the error or null
A string object that contains the SQLSTATE or null
An integer value that contains an error code
A pointer to the next SQLException or null
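Because java.sql.SQLException is part of the JDK, these four pieces of information can be examined without a database connection. In the following sketch the SQLSTATE and error code values are sample values chosen for illustration, not output from a real DB2 server:

```java
import java.sql.SQLException;

public class SqlExceptionDemo {
    public static void main(String[] args) {
        // Sample values: SQLSTATE 23505 and SQLCODE -803 model a duplicate-key error.
        SQLException first = new SQLException("duplicate key", "23505", -803);
        first.setNextException(new SQLException("undefined name", "42704", -204));

        // Walk the exception chain exactly as a catch block would.
        for (SQLException e = first; e != null; e = e.getNextException()) {
            System.out.println(e.getMessage() + " / " + e.getSQLState()
                    + " / " + e.getErrorCode());
        }
    }
}
```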
IBM DB2 Driver for JDBC and SQLJ provides an extension to the SQLException
class, which gives you more information about errors that occur when DB2 is
accessed. If the JDBC driver detects an error, the SQLException class provides
you the same information as the standard SQLException class. However, if DB2
detects the error, the SQLException class provides you the standard information,
along with the contents of the SQLCA that DB2 returns. If you plan to run your
JDBC applications only on a system that uses the IBM DB2 Driver for JDBC and
SQLJ, you can use this extended SQLException class.
Under the IBM DB2 Driver for JDBC and SQLJ, SQLExceptions from DB2
implement the com.ibm.db2.jcc.DB2Diagnosable interface. An SQLException
from DB2 contains the following information:
A java.lang.Throwable object that caused the SQLException or null if no such
object exists. The java.lang.Throwable class is the superclass of all errors
and exceptions in the Java language.
The information that is provided by a standard SQLException
An object of DB2-defined type DB2Sqlca that contains the SQLCA. This
object contains the following objects:
– An INT value that contains an SQL error code
– A String object that contains the SQLERRMC values
– A String object that contains the SQLERRP value
– An array of INT values that contains the SQLERRD values
– An array of CHAR values that contains the SQLWARN values
– A String object that contains the SQLSTATE
The basic steps for handling an SQLException in a JDBC program that runs
under the IBM DB2 Driver for JDBC and SQLJ are:
1. Import the required classes for DB2 error handling,
com.ibm.db2.jcc.DB2Diagnosable for getting diagnostic data, and
com.ibm.db2.jcc.DB2Sqlca for receiving error messages.
2. In your code catch SQLException and use it to get SQLCA. This is allowed
only if the exception thrown is an instance of the DB2Diagnosable class.
3. Once you have the DB2Sqlca, it can be used to get the SQLCODE, messages, SQL errors, and warnings, as shown in Example 8-63.
//...
try {
    // Code that could throw SQLExceptions
}
catch(SQLException sqle) {
while(sqle != null) {
if (sqle instanceof DB2Diagnosable) {
DB2Sqlca sqlca = ((DB2Diagnosable)sqle).getSqlca();
if (sqlca != null) {
System.err.println ("SqlCode: " + sqlca.getSqlCode());
System.err.println ("SQLERRMC: " + sqlca.getSqlErrmc());
System.err.println ("SQLERRP: " + sqlca.getSqlErrp() );
String[] sqlErrmcTokens = sqlca.getSqlErrmcTokens();
for (int i=0; i< sqlErrmcTokens.length; i++) {
System.err.println (" token " + i + ": " + sqlErrmcTokens[i]);
}
int[] sqlErrd = sqlca.getSqlErrd();
char[] sqlWarn = sqlca.getSqlWarn();
System.err.println ("SQLSTATE: " + sqlca.getSqlState());
System.err.println ("message: " + sqlca.getMessage());
}
}
sqle=sqle.getNextException();
}
}
The statement WHENEVER must be used prior to the SQL statements that will
be affected. Otherwise, the precompiler does not know that additional
error-handling code should be generated for executable SQL statements. You
can have any combination of the three basic forms active at any time. The order
in which you declare the three forms is not significant.
To avoid an infinite looping situation, ensure that you undo the WHENEVER
handling prior to execution of any SQL statements within the handler. You can do
this by using the WHENEVER SQLERROR CONTINUE statement.
After executing each SQL statement, the system returns a code in both SQLCODE and SQLSTATE. SQLCODE is an integer value that summarizes the execution of the statement, and SQLSTATE is a character field that provides common error codes across IBM's relational database products. SQLSTATE also conforms to the ISO/ANSI SQL92 and FIPS 127-2 standards.
Note that if SQLCODE is less than 0, an error has occurred and the statement has not been processed. If SQLCODE is greater than 0, a warning has been issued, but the statement was still processed.
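This sign convention can be captured in a small helper; classify is a hypothetical name, and the thresholds come directly from the convention above:

```java
public class SqlcodeDemo {
    // SQLCODE < 0: error, > 0: warning, == 0: success, per the convention above.
    static String classify(int sqlcode) {
        if (sqlcode < 0) return "error";
        if (sqlcode > 0) return "warning";
        return "success";
    }

    public static void main(String[] args) {
        System.out.println(classify(-803)); // error (for example, a duplicate key)
        System.out.println(classify(100));  // warning (SQLCODE +100: no row found)
        System.out.println(classify(0));    // success
    }
}
```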
#include "sqlca.h"
extern struct sqlca sqlca;
When DB2 raises a condition that matches a declared condition, DB2 passes control to the condition handler. The condition handler performs the action indicated by handler-type and then executes SQL-procedure-statement.
You can also use the DECLARE statement to define your own condition for a
specific SQLSTATE.
Example 8-64 shows the general flow of the condition handler in a stored
procedure.
statement4;
end;
statement1;
statement2;
End
Example 8-65 shows a CONTINUE handler for delete and update operations on
a table named EMP. Again, note that this code is solely intended for illustration
purposes.
This information has to be ported into a DB2 table. When a user attempts to access the data in the DB2 database, the application verifies each user's database access rights, along with the information about the host system the user is connecting from.
Two tables are needed for our DB2 conversion: one to store the user privilege information ported from MySQL, and one as a working table. The table definitions and some sample values are shown in Example 8-67 on page 268.
-- table ACCESSLIST
-- it stores access rights for specific users connecting from specific hosts
-- remark: there should be different access-flags for different functions
-- fields:
-- username, whom access to the function should be granted
-- hostname or ip-address, from which the user must connect
-- select access flag (Y/N), if SELECT is granted
-- insert access flag (Y/N), if INSERT is granted
-- table APPLACCESS
-- it stores the info about users and their host asking for access
-- this table is filled automatically by the sample application
-- fields:
-- username, who asks for access to the function
-- hostname, from which the user connects
-- ip-address, from which the user connects
-- timestamp, when the user asks for access
The application code is listed in Example 8-68; remember that the code is just for
demonstration purposes.
{
// in our example we use fixed values; in practice these should be variables
private static final String DB2DB = "invent"; // database name
private static final String DB2USR = "db2inst1"; // database user
private static final String DB2PWD = "password"; // database password
db2Conn=DriverManager.getConnection("jdbc:db2:"+DB2DB,DB2USR,DB2PWD);
Both methods have their pros and cons, but by far the most popular method is
the latter approach. Both MySQL and DB2 follow this approach to various
degrees of sophistication and implementation differences.
Because the isolation level determines how data is isolated from other processes while the data is being accessed, you should select an isolation level that balances the requirements of concurrency and data integrity.
Note: With the currently committed semantics introduced in Version 9.7 for the Cursor Stability isolation level, only committed data is returned, as was the case in previous releases, but now readers do not wait for updaters to release row locks. Instead, readers return data that is based on the currently committed version; that is, data prior to the start of the write operation.
As one can see, the isolation levels listed in Table 8-5 are ordered in descending order according to the number and duration of locks held during the transaction, and therefore according to the degree of locking required to ensure the desired level of data integrity. However, too much locking drastically reduces concurrency. Poor application design and coding may cause locking problems such as:
Deadlocks
Lock waits
Lock escalation
Lock time-outs
By default DB2 operates with the isolation level cursor stability. Transaction
isolation can be specified at many different levels as discussed in 8.3.5,
“Specifying the isolation level in DB2” on page 277. For good performance, verify
the lowest isolation level required for your converted application.
8.3.4 Locking
Some MySQL applications when ported to DB2 appear to behave identically, and
the topic of concurrency can be ignored. However, if your applications involve
frequent access to the same tables you may experience a different behavior. By
default, MySQL runs in a mode that is called autocommit. This means that
MySQL considers each and every SQL statement as an atomic unit of work or
transaction.
Another matter causing heated discussions among experts is the level of locking
required for implementation on the database level. Should the locking approach
be implemented with the lowest level of overhead, and therefore maintain locks
on a table level? Or, is it better to lock on a lower level for example on page
level? Should the granularity be even finer and locking occur on row level?
MySQL development decided to take the multiple storage engine approach and implemented lock levels based on the type of table. Table types can be mixed within a database and even within a statement, and types can be altered. The default storage engine for MySQL supports only table-level locking: MyISAM, MERGE, and MEMORY tables use table-level locking. The InnoDB storage engine was released as a transactional table handler for MySQL with a lock manager providing row-level locking. Hence, InnoDB tables are the MySQL tables most similar to DB2 tables. Table 8-6 gives a superficial comparison of the different flavors of MySQL tables with DB2 tables:
Lock level: Row level, table level only on explicit request / None or table level / Row level and table level
However, let us attempt to summarize the concurrency issues that may arise
when converting a MySQL application to DB2 based on the two MySQL table
types, which we consider significant:
MyISAM tables provide a high level of concurrency since SQL processing
occurs in autocommit mode and no row level locks are maintained. When
converting to DB2 ensure your application operates in autocommit mode.
Verify the lowest isolation level required for your application and MyISAM
tables.
InnoDB tables provide concurrency control very similar to DB2. Note that the default transaction isolation level for InnoDB is repeatable read.
The isolation level that you specify is in effect for the duration of the unit of work.
The isolation level can be specified in several different ways. The following
heuristics are used to determine which isolation level will be used when
compiling an SQL or XQuery statement:
Static SQL:
– If an isolation clause is specified in the statement, then the value of that
clause is used.
– If no isolation clause is specified in the statement, then the isolation level
used is the one specified for the package at the time when the package
was bound to the database.
Dynamic SQL:
– If an isolation clause is specified in the statement, then the value of that
clause is used.
– If no isolation clause is specified in the statement, and a SET CURRENT
ISOLATION statement has been issued within the current session, then
the value of the CURRENT ISOLATION special register is used.
– If no isolation clause is specified in the statement, and no SET CURRENT
ISOLATION statement has been issued within the current session, then
the isolation level used is the one specified for the package at the time
when the package was bound to the database.
For static or dynamic XQuery statements, the isolation level of the
environment determines the isolation level that is used when the XQuery
expression is evaluated.
If you create a bind file at precompile time, the isolation level is stored in the
bind file. If you do not specify an isolation level at bind time, the default is the
isolation level used during precompilation.
If you do not specify an isolation level, the default of cursor stability is used.
REXX and the command line processor connect to a database using a default
isolation level of cursor stability. Changing to a different isolation level does not
change the connection state.
To determine the isolation level that is being used by a REXX application, check
the value of the SQLISL predefined REXX variable. The value is updated each
time that the CHANGE ISOLATION LEVEL command executes.
An isolation level specified at the statement level overrides the isolation level
that is specified for the package in which the statement appears. You can
specify an isolation level for the following SQL statements:
DECLARE CURSOR
Searched DELETE
INSERT
SELECT
SELECT INTO
Searched UPDATE
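As an illustrative sketch, the statement-level isolation clause is written with the WITH keyword; the table name here is hypothetical:

```
-- Request Uncommitted Read for this query only, overriding the
-- isolation level of the package:
SELECT COUNT(*) FROM stock WITH UR

-- For dynamic SQL, the special register can be set for the session:
SET CURRENT ISOLATION = RR
```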
Note: JDBC and SQLJ are implemented over CLI on DB2, which means that the
db2cli.ini settings might affect applications written and run using JDBC and SQLJ.
Using the DB2 profile registry allows for centralized control of the environment
variables. Through use of different profiles, different levels of support are
provided. Remote administration of the environment variables is also available
when using the DB2 Administration Server.
This registry contains a list of all instance names associated with the current
copy. Each installation has its own list. You can see the complete list of all
instances available on the system by running db2ilist.
The variables can be set using the db2set command. The command immediately
stores the updated variables in the profile registry. Example 9-1 shows different
modes in which the db2set command can be used.
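As a brief sketch, typical invocations of the db2set command look like the following (the instance name db2inst1 is taken from the examples used elsewhere in this chapter):

```
db2set -all                        # display registry values at all levels
db2set DB2COMM=tcpip               # set a variable at the instance level
db2set -g DB2COMM=tcpip            # set a variable at the global level
db2set -i db2inst1 DB2COMM=tcpip   # set a variable for a specific instance
db2set DB2COMM=                    # unset a variable
```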
DB2 configures the operating environment by checking for registry values and
environment variables, and resolves them in the following order:
1. Environment variables set using the set command (or the export command
on UNIX platforms).
2. Registry values set with the instance node-level profile (using the db2set -i
<instance name> <nodenum> command).
3. Registry values set with the instance-level profile (using the db2set -i
command).
4. Registry values set with the global-level profile (using the db2set -g
command).
Figure 9-1 illustrates the two DB2 configuration files and additional operating
system configurations.
Configuration tools
IBM provides tools to assist with configuring your database server. Two of these
are the Configuration Assistant and IBM Data Studio.
You can use the DB2 Configuration Assistant to configure and maintain the
database objects that you or your application will be using. The Configuration
Assistant is a graphical tool tightly integrated with the DB2 Control Center. It
allows you to update both the DB2 Profile Registry and the DB2 database
manager configuration parameters on the local machine as well as remotely. It
can be launched from the Control Center or by calling the db2ca utility. The
Configuration Assistant also has an advanced view, which uses a notebook to
organize connection information by object: Systems, Instance Nodes,
Databases, Database Connection Services (DCS), and Data Sources. Figure 9-2
shows how to change the database manager configuration using the
Configuration Assistant.
Note: The Configuration Assistant has been deprecated in Version 9.7 and
might be removed in a future release. We recommend the IBM Integrated
Management solutions for managing DB2 for Linux, UNIX, and Windows data
and data-centric applications.
The IBM Data Studio is part of the IBM Integrated Management solutions for
managing your DB2 database. Data Studio simplifies the process of managing
your database objects by supporting instance and database management and by
providing the ability to run database commands and utilities. It provides a simple
user interface to invoke the database administration commands that you use to
maintain and manage your database environment. Figure 9-3 shows how to
change the database manager and database configuration using Data Studio.
IBM offers a number of automatic tools and DB2 features to make database
administration effortless. The Configuration Advisor can be used to assist with
parameter configuration and configuring your database for optimal performance.
The Configuration Advisor looks at your current database, asks for user input on
the database workload and suggests the best configuration parameters for buffer
pool size, database configuration, and database manager configuration.
Figure 9-4 shows the suggested output for our sample inventory database.
These log files are important if you need to recover data that is lost or damaged.
The three files work in the following ways:
Recovery log files
The recovery log files are used to recover from application or system errors.
In combination with the database backups, they are used to recover the
consistency of the database right up to the point in time when the error
occurred. There are recovery log files for each database on the server.
Recovery history files
The recovery history files contain a summary of the backup information that
can be used to determine recovery options if all or part of the database must
be recovered to a given point in time. They are used to track recovery-related
events such as backup and restore operations, among others. This file is
located in the database directory.
Table space change history file
The table space change history files are also located in the database
directory. These files contain information that can be used to determine which
log files are required for the recovery of a particular table space.
There are two types of databases for backup and recovery, non-recoverable and
recoverable databases. As a database administrator you need to decide which
category your database falls under.
Non-recoverable database
Data that is easily re-created can be stored in a non-recoverable database.
This includes data that is used for read-only applications and tables that are
not often updated. For these types of databases, the small amount of logging
involved does not justify the added complexity of managing log files and
rolling forward after a restore operation.
Database backup
To back up a DB2 database, database partition, or a selected table space, you
can use the DB2 backup command. This command can be used to create a
backup to disk, tape, or named pipes in UNIX. DB2 supports both offline and
online backup.
db2 backup database invent to /home/db2inst1/backup
You can back up an entire database, database partition, or only selected table
spaces.
In addition to backing up the entire database every time, DB2 also supports
incremental backups, where you can back up large databases incrementally on
a regular basis. This requires the trackmod database configuration parameter to
be set to yes. An incremental backup can be a cumulative backup, which stores
the data changes since the last successful full backup, or a delta backup, which
stores the data changes since the last successful backup of any type, whether
that backup was full, cumulative, or delta. Figure 9-6 on page 290 and
Example 9-2 on page 290 show the cumulative and delta backup techniques.
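As a sketch of the technique, assuming the sample invent database used earlier in this chapter and trackmod set to yes, cumulative and delta incremental backups can be taken as follows:

```
# Full backup as the base
db2 backup database invent to /home/db2inst1/backup

# Cumulative incremental: changes since the last full backup
db2 backup database invent incremental to /home/db2inst1/backup

# Delta: changes since the last successful backup of any type
db2 backup database invent incremental delta to /home/db2inst1/backup
```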
Database recovery
The recover utility performs the necessary restore and roll forward operations to
recover a database to a specified time, based on information found in the
recovery history file. When you use this utility, you specify that the database be
recovered to a point in time or to the end of the log files. The utility will then
select the most suitable backup image and perform the recovery operations.
Example 9-3 shows how to use the RECOVER DATABASE command.
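As an illustrative sketch of the RECOVER DATABASE command for the sample invent database (the point-in-time value is hypothetical):

```
# Recover to the end of the logs
db2 recover database invent to end of logs

# Recover to a specific point in time (local time)
db2 recover database invent to 2009-09-11-19.25.01
```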
Rollforward Status
Node number = 0
Rollforward status = not pending
Next log file to be read =
Log files processed = S0000000.LOG - S0000001.LOG
Last committed transaction = 2009-09-11-19.25.01.000000 Local
Figure 9-11 shows how you can use Data Studio to recover a DB2 database.
Database restore
Restoring a DB2 database is as easy as backing it up. This can be done
by using the RESTORE utility. The restore database command rebuilds the
database data or table space to the state that it was in when the backup copy
was made. This utility can overwrite a database with a different image or restore
the backup copy to a new database. The restore utility in DB2 Version 9.7 can
also restore backup images that were taken on DB2 Universal Database
Version 8, DB2 Version 9.1, or DB2 Version 9.5.
The RESTORE utility supports full and incremental database restore. Incremental
database restore can be automatic or manual. Example 9-4 shows automatic
incremental restore, and Example 9-5 shows manual incremental restore.
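As a hedged sketch for the sample invent database (the backup timestamps are hypothetical), the two variants look like this:

```
# Automatic incremental restore: DB2 locates the required images
# using the recovery history file
db2 restore database invent incremental automatic taken at 20090915101523

# Manual incremental restore: each required image is restored
# explicitly, starting and ending with the target image
db2 restore database invent incremental taken at 20090915101523
db2 restore database invent incremental taken at 20090910080000
db2 restore database invent incremental taken at 20090915101523
```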
If the database was enabled for rollforward recovery at the time of the backup
operation, the database can be brought to its previous state by invoking the
following ROLLFORWARD command after successful completion of a restore
operation:
db2inst1@db2server:~ > db2 ROLLFORWARD DATABASE invent COMPLETE
Tip: DB2 backup, restore, and rollforward operations can also be performed
using Data Studio. Figure 9-12 on page 296 shows the restore database window.
IBM provides two different solutions that you can use to replicate data from and
to relational databases: SQL replication and Q replication. IBM also provides a
solution called event publishing for converting committed source changes into
messages in an XML or delimited format and for publishing those messages
across WebSphere MQ queues to applications.
Replication is supported not only between two DB2 systems running on different
platforms, but also with several non-DB2 databases. SQL replication supports
replicating between DB2 on Linux, UNIX, Windows, z/OS, and iSeries; Informix;
Microsoft SQL Server; Oracle; Sybase; and Teradata (target only). Q replication
supports DB2 for Linux, UNIX, and Windows, DB2 for z/OS, Informix (target
only), Microsoft SQL Server (target only), Oracle (target only), and Sybase
(target only).
IBM provides three tools that can assist with setting up replication.
Replication Center
The Replication Center is a graphical user interface that you can use to
define, operate, and monitor your replication and publishing environments. It
comes with the DB2 Administration Client and runs on Linux and Windows
systems. The Replication Center provides a single interface for administering
your replication environments on different platforms across multiple systems.
Among its features:
– Launch pads that show you step by step how to configure basic replication
and publishing environments.
– Wizards that help you set up simple to highly customized replication and
publishing configurations.
– Profiles that you can customize that let you create replication objects with
schemas, names, and other attributes that conform to your own
conventions and storage requirements.
The Replication Center can be invoked through the Control Center or by using
the db2rc command.
ASNCLP command-line program
The ASNCLP program generates SQL scripts for defining and changing
replication and publishing environments. The program runs on Linux, UNIX,
Windows, and UNIX System Services (USS) for z/OS. The ASNCLP program
does not run natively on z/OS or System i.
You can use the ASNCLP to administer SQL replication, Q replication,
Classic replication, event publishing, and the Replication Alert Monitor. You
can build ASNCLP input files and run them to generate SQL scripts, or you
can run ASNCLP commands interactively from an operating system prompt.
You can also run the ASNCLP program in execute-immediately mode, which
is useful for operational commands such as START QSUB, STOP QSUB, or
LIST.
Replication Dashboard
The Replication Dashboard is a Web-based, graphical user interface that
helps you monitor and manage the health of replication and event publishing.
The dashboard provides an overall summary of replication and publishing
configurations in a convenient tabular format with high-level status indicators.
You can drill down for more detailed information on queues and queue depth,
subscriptions, latency, and exceptions, and generate detailed reports to help
track performance or diagnose problems. You can also view up to 24 moving
graphs for near-real-time performance information.
You can change program parameters, start, stop, and reinitialize
subscriptions, and start and stop queues. The dashboard also provides a
convenient way to view alerts from the Replication Alert Monitor program.
EXPORT utility
DB2 EXPORT is a powerful tool for quickly exporting DB2 data to the external
file system. DB2 EXPORT uses an SQL SELECT or XQuery statement to
export tables, views, large objects, or typed tables to one of three external file
formats:
– .DEL: delimited ASCII format file
– .WSF: worksheet format, as used by Lotus 1-2-3®
– .IXF: integrated exchange format
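As a brief sketch of the EXPORT syntax, assuming a hypothetical table invent.stock:

```
db2 export to /home/db2inst1/export/stock.ixf of ixf \
    messages stock.msg "select * from invent.stock"
```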
db2move utility
Another data movement option is the db2move utility. This utility queries the
system catalog tables for a specified database, and exports the table structure
and contents of each table found to a PC/IXF formatted file. These files can be
used to populate another DB2 database. It can be run in three modes:
EXPORT mode
In this mode, the db2move utility invokes the DB2 EXPORT utility to extract
data from one or more tables and write it to PC/IXF formatted files. It also
creates a file named db2move.lst that contains the names of all exported
tables and the names of the files that the table data was written to. EXPORT
mode can be used as follows:
db2inst1@db2server:~ > db2move invent EXPORT
IMPORT mode
In this mode, the db2move utility invokes the DB2 IMPORT utility to recreate
tables and indexes from data stored in PC/IXF formatted files. The file
db2move.lst generated in EXPORT mode can be used to get information
about tables in the exported files. The above exported files can be imported
using:
db2inst1@db2server:~ > db2move invent IMPORT
LOAD mode
In this mode, the db2move utility invokes the DB2 LOAD utility to populate
tables that already exist with data stored in PC/IXF formatted files. The file
db2move.lst generated in EXPORT mode can be used to get information
about tables. The above exported files can be loaded using:
db2inst1@db2server:~ > db2move invent LOAD -l /home/db2inst1/export
There are four different modes you can execute the LOAD utility in:
INSERT
In this mode, load appends input data to the table without making any
changes to the existing data.
REPLACE
In this mode, load deletes existing data from the table and populates it with
the input data.
RESTART
In this mode, an interrupted load is resumed. In most cases, the load is
resumed from the phase it failed in. If that phase was the load phase, the load
is resumed from the last successful consistency point.
TERMINATE
In this mode, a failed load operation is rolled back.
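The four modes above can be sketched with the following commands, assuming a hypothetical PC/IXF input file and target table:

```
db2 load from stock.ixf of ixf insert into stock     # append rows
db2 load from stock.ixf of ixf replace into stock    # delete, then reload
db2 load from stock.ixf of ixf restart into stock    # resume a failed load
db2 load from stock.ixf of ixf terminate into stock  # roll back a failed load
```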
adapters can be used. As of DB2 9.7 the standby server can also be used for
read-only transactions, therefore utilizing all of the hardware at all times.
For the automatic failover, the DB2-integrated component called Tivoli System
Automation for Multiplatforms (TSA MP), often referred to as the cluster
manager, comes into play. This component monitors all resources involved on
both systems - that is, DB2 instances, databases, network interfaces, and others
- and initiates the database takeover. At the same time, DB2 is aware of TSA MP
and notifies it when a planned outage occurs, for example, when the database is
manually stopped by an administrator. This way a database administrator can,
for the most part, use the usual DB2 commands without having to deal with the
cluster manager. Every DB2 edition supporting HADR includes all necessary
software packages as well as licenses for TSA MP and offers to install these
during the DB2 setup. It also performs the configuration of TSA MP.
For more information and implementation steps, see High Availability and
Disaster Recovery Options for DB2 on Linux, UNIX, and Windows, SG24-7363.
9.6 Autonomics
Automated task management allows automation of database management jobs
by scheduling activities according to specific requirements.
The Self-Tuning Memory Manager (STMM) has a master switch: to turn it on, set
the database configuration parameter self_tuning_mem to ON (the default for
newly created databases). Then you can choose which buffer pools and
memory-related database parameters to tune. The following memory-related
database configuration parameters can be automatically tuned:
database_memory - Database shared memory size
locklist - Maximum storage for lock list
maxlocks - Maximum percent of lock list before escalation
pckcachesz - Package cache size
sheapthres_shr - Sort heap threshold for shared sorts
Example 9-6 on page 308 shows how to enable STMM and configure each
parameter to be managed by the STMM.
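As a hedged sketch for the sample invent database, STMM can be enabled and the parameters listed above handed over to it as follows (locklist and maxlocks must be set to AUTOMATIC together):

```
db2 update db cfg for invent using self_tuning_mem on
db2 update db cfg for invent using database_memory automatic
db2 update db cfg for invent using locklist automatic maxlocks automatic
db2 update db cfg for invent using pckcachesz automatic
db2 update db cfg for invent using sheapthres_shr automatic

# Buffer pools are tuned once their size is set to AUTOMATIC:
db2 "alter bufferpool ibmdefaultbp size automatic"
```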
Automatic Storage
The automatic storage feature simplifies storage management for table spaces.
When you create an automatic storage database, you specify the storage paths
where the database manager will place your table space data. Then, the
database manager manages the container and space allocation for the table
spaces as you create and populate them. By default automatic storage is turned
on.
Automatic storage databases
Automatic storage is intended to make storage management easier. Rather
than managing storage at the table space level using explicit container
definitions, storage is managed at the database level and becomes the
responsibility of the database manager.
Automatic Maintenance
The database manager provides automatic maintenance capabilities for
performing database backups, keeping statistics current and reorganizing
tables and indexes as necessary. Performing maintenance activities on your
databases is essential in ensuring that they are optimized for performance
and recoverability.
Configuration Advisor
You can use the Configuration Advisor to obtain recommendations for values of
the buffer pool size, database configuration parameters, and database manager
configuration parameters.
You can display the recommended values or apply them by using the APPLY
option of the CREATE DATABASE command. The recommendations are based
on input that you provide and system information that the advisor gathers.
When you create a database, this tool is automatically run to determine and set
the database configuration parameters and the size of the default buffer pool
(IBMDEFAULTBP). The values are selected based on system resources and the
intended use of the system. This initial automatic tuning means that your
database performs better than an equivalent database that you could create with
the default values. It also means that you will spend less time tuning your system
after creating the database.
You can run the Configuration Advisor at any time (even after your databases
are populated) to have the tool recommend and optionally apply a set of
configuration parameters to optimize performance based on the current system
characteristics. You can use the graphical database administration tools to run
the Configuration Advisor. Figure 9-4 on page 287 shows the Data Studio
Configuration Advisor wizard.
The values suggested by the Configuration Advisor are relevant for only one
database per instance. If you want to use this advisor on more than one
database, each database must belong to a separate instance.
Data compression
Both tables and indexes can be compressed to save storage. Compression is
fully automatic; once you specify that a table or index should be compressed
using the COMPRESS YES clause of the CREATE TABLE, ALTER TABLE,
CREATE INDEX or ALTER INDEX statements, there is nothing more you must
do to manage compression. Temporary tables are compressed automatically;
indexes for compressed tables are also compressed automatically, by default.
Data compression is discussed further in 11.2, “Data compression” on
page 383.
Utility throttling
This feature regulates the performance impact of maintenance utilities so that
they can run concurrently during production periods. Although the impact policy
for throttled utilities is defined by default, you must set the impact priority if you
want to run a throttled utility. The throttling system ensures that the throttled
utilities run as frequently as possible without violating the impact policy.
Currently, you can throttle statistics collection, backup operations, rebalancing
operations, and asynchronous index cleanup.
As of DB2 Data Server 9.5, Workload Management (WLM) is built right into the
DB2 engine, allowing administrators to monitor and control database activities,
such as DDL and DML statements, over their full life cycle. Through definable
rules, the engine can be programmed to automatically filter certain workloads
and apply execution priorities to control concurrency or activity through the
setup of thresholds. WLM can explicitly control CPU usage among executing
work, and can detect and prevent so-called “runaway” queries, which, for
example, exceed the predicted or configured number of rows returned, the
execution time, or the estimated execution costs. On the AIX platform
specifically, DB2 workload management does not stop at the database level and
can be tightly integrated with the operating system's workload management.
For more details and setup information refer to the IBM DB2 9.7 for Linux, UNIX,
Windows, Information Center at:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
DB2 provides an impressive suite of tools for database management and
services. All daily database operations can be done easily and effectively using
the following GUI tools and wizards:
Control Center
Optim Development Studio
Optim Database Administrator
IBM Data Studio
Data Studio Administration Console
DB2 Control Center provides a common interface for managing DB2 databases
on different platforms. You can run DB2 commands, create DDL statements, and
execute DB2 utilities. The Control Center provides point-and-click navigation
capabilities that make it easy to find objects, whether you have hundreds or tens
of thousands in your database environment. It can be used to administer
systems, instances, tables, views, indexes, triggers, user-defined types,
user-defined functions, packages, aliases, users, and groups.
It is tightly coupled with other DB2 tools. Figure 9-17 shows a hierarchy of
database objects in the left-hand panel, and details in the right-hand panel. The
Control Center can be started by calling the db2cc command.
The following tools can be launched from the Control Center Tools menu:
Replication Center
Satellite Administration Center
Command Center
Command Editor
Task Center
Health Center
Journal
License Center
Configuration Assistant
DB2 Control Center also provides a set of wizards for completing specific
administration tasks by taking you through each step one at a time. The following
DB2 wizards are available through the Control Center:
Add database partitions launch pad
Backup wizard
Create database wizard
Create database with automatic maintenance
Create table space wizard
Create table wizard
Design Advisor
Load wizard
Configuration Advisor
Restore data wizard
Configure database logging wizard
Set up activity monitor wizard
Set up high availability disaster recovery databases
Configure automatic maintenance
A benefit of these new tools is that you have one single environment in which to
develop and manage your database, tasks which required separate tools in the
past. These tools also support multiple database servers, such as DB2 for Linux,
UNIX, and Windows, DB2 for i5/OS and z/OS, Apache Derby, Informix IDS, and
others, which makes managing multiple databases much easier.
The IBM Optim and Data Studio tool suite is built on the Eclipse platform and, as
such, is said to be an Eclipse-based development environment. The Eclipse
platform is a framework that allows integrated development environments (IDEs)
to be created; plug-ins exist to allow development in Java, C/C++, PHP, COBOL,
Ruby, and more.
There are two paid versions, Optim Database Administrator and Optim
Development Studio, and one free version, IBM Data Studio.
Optim Database Administrator 2.2 is based on Eclipse 3.4; here are some of the
products it can shell-share with:
Optim Development Studio 2.2
Optim Query Tuner 2.2
InfoSphere Data Architect 7.5.x.x
The Optim Development Studio provides the following key features for database
object management. Typically these tasks are done on test databases that you
are using to test your applications.
If you are working on a large team, you can use the following features to enable
team members to share resources:
You can share data development projects using supported source code
control systems.
You can share database connection information by importing and exporting
this information to XML files.
You can customize the user interface to enable and disable visible controls
and defaults.
Optim Development Studio 2.2 is based on Eclipse Version 3.4; here are some
of the products it can shell-share with:
Rational Application Developer for WebSphere Software 7.5.x.x
Optim Database Administrator 2.2
InfoSphere Data Architect 7.5.x.x
You can also use the Data Studio Administration Console to monitor Q
replication and event publishing, generate replication health reports, and perform
basic replication operations.
You can launch Data Studio Administration Console from the IBM Data Studio
Developer user interface so that you can monitor IBM data servers for status
including database availability, dashboards, and alerts. Figure 9-19 on page 319
shows the Data Studio Administration Console dashboard.
We also provide information on how you can check that system behavior has not
changed in an undesired way.
Further, we discuss the methods and tools available in DB2 for tuning the
database to achieve optimal performance.
It is always best to tie all test dates directly to their related conversion activity
dates. This prevents the test team from being perceived as the cause of a delay.
For example, if system testing is to begin after delivery of the final build, then
system testing begins the day after delivery. If the delivery is late, system testing
starts from the day of delivery, not on a specific date. This is called dependent or
relative dating.
Figure 10-1 shows the test phases during a typical conversion project. The
definition of the test plans happens at a very early stage. The test cases, and all
subsequent tasks, must be done for all test phases.
Prepare Infrastructure
The time required for testing depends on the availability of an existing test plan
and already-prepared test items. The effort also depends on the degree of
change during the application and database conversion.
Note: Test efforts can range between 50% and 70% of the total conversion
effort.
The testing process should detect if all rows were imported into the target
database, ensure that all data type conversions were successful, and check
random data byte-by-byte. The data checking process should be automated by
appropriate scripts. When testing data conversion results you should:
Check IMPORT/LOAD messages for errors and warnings.
Count the number of rows in source and target databases and compare them.
Prepare scripts that perform data checks.
Involve data administration staff familiar with the application and its data to
perform random checks.
SQL3148W A row from the input file was not inserted into the table. SQLCODE "-545" was
returned.
SQL0545N The requested operation is not allowed because a row does not
satisfy the check constraint "DB2INST1.TABLE01.SQL090915100543100".
SQLSTATE=23513
SQL3185W The previous error occurred while processing data from row "2" of the input
file.
SQL3117W The field value in row "3" and column "1" cannot be converted to a SMALLINT
value. A null was loaded.
SQL3125W The character data in row "4" and column "2" was truncated because the data is
longer than the target database column.
SQL3110N The utility has completed processing. "4" rows were read from the input file.
SQL3149N "4" rows were processed from the input file. "3" rows were
successfully inserted into the table. "1" rows were rejected.
As shown in the summary, during the import process one record from the input
file was rejected, and three were inserted into the database. To understand the
nature of the warnings, you should look into the data source file and the table
definition (db2look command). For Example 10-1 on page 324, the table
definition is presented in Example 10-2, and the data file in Example 10-3.
The first row from the input file (Example 10-3) was inserted without any
warnings. The second row was rejected because it violated check constraints
(warnings SQL3148W, SQL0545N, SQL3185W). A value of 32768 from the third
row was changed to null because it was out of SMALLINT data type range
(warning SQL3117W) and string abcd from the last row was truncated to abc
because it was longer than the relevant column definition (warning SQL3125W).
SQL3117W The field value in row "F0-3" and column "1" cannot be converted to a SMALLINT
value. A null was loaded.
SQL3125W The character data in row "F0-4" and column "2" was truncated
because the data is longer than the target database column.
SQL3110N The utility has completed processing. "4" rows were read from the
input file.
SQL3515W The utility has finished the "LOAD" phase at time "09/15/2009
10:15:23.836019".
A table that has been created with constraints is left by the LOAD command in
check pending state. Accessing the table with SQL queries generates a warning:
SQL0668N Operation not allowed for reason code "1" on table <TABLE_NAME>.
SQLSTATE=57016.
The SET INTEGRITY SQL statement should be used to move loaded tables into
a usable state. Example 10-5 shows a way to validate constraints. All rows that
violated constraints will be moved to exception table table01_e.
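A minimal sketch of such a statement, for the table and exception table named above:

```
db2 "SET INTEGRITY FOR db2inst1.table01 IMMEDIATE CHECKED
     FOR EXCEPTION IN db2inst1.table01 USE db2inst1.table01_e"
```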
The SET INTEGRITY statement has many options like turning integrity on only
for new data, turning integrity off, or specifying exception tables with additional
diagnostic information. To read more about the SET INTEGRITY command refer
to:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
The IBM Data Movement tool also generates a script called db2checkpending.sql
in the conversion project output directory, which when executed runs a SET
INTEGRITY on each of the tables.
For each table you should count the number of rows and store the information in
the CK_ROW_COUNT table. The following INSERT statement can be used for
that purpose:
INSERT INTO ck_row_count SELECT 'tab_name', COUNT(*), 'MYS', sysdate() FROM
tab_name
The table ck_row_count and its data can be manually converted to the target
DB2 database. Example 10-7 presents the DB2 version of the table.
On the DB2 system, you should repeat the counting process with the equivalent
INSERT statement:
INSERT INTO ck_row_count SELECT 'tab_name', COUNT(*), 'DB2', CURRENT
TIMESTAMP FROM tab_name
Keeping the row count information in an SQL table is convenient because with a
single query you can get the names of the tables that contain a different number
of rows in the source and target databases:
SELECT tab_name FROM (SELECT DISTINCT tab_name, row_count FROM
ck_row_count) AS t_temp GROUP BY t_temp.tab_name HAVING(COUNT(*) > 1)
The presented approach for comparing the number of rows can be extended for
additional checking such as comparing the sum of numeric columns. Here are
the steps that summarize the technique:
1. Define check sum tables on the source database and characterize scope of
the computation.
2. Perform the computation and store the results in the appropriate check sum
tables.
3. Convert the check sum tables as other user tables.
4. Perform equivalent computations on the target system, and store the
information in the converted check sum tables.
5. Compare the computed values.
Table 10-1 provides computations for selected database types. The argument for
the DB2 SUM() function is converted to DECIMAL type. This occurs because in
most cases the SUM() function returns the same data type as its argument,
which can cause arithmetic overflow. For example, when calculating the sum on
an INTEGER column, if the result exceeds the INTEGER data type range, error
SQL0802N is generated: Arithmetic overflow or other arithmetic exception
occurred. Converting the argument to DECIMAL eliminates the error.
Along with the functional testing, the application should also be checked against
performance requirements. Since there are many architectural differences
between MySQL and DB2, some SQL operations might require further
optimization. Observing the performance differences in the early testing stages
increases the chance of preparing more optimal code for the new environment.
Before going into production, the converted database should be verified under
high volumes and loads. These tests should emulate the production
environment, and can determine if further application or database tuning is
necessary. The stress load can also reveal other hidden problems such as
locking issues, which can be observed only in a production environment.
MySQL users and privileges are resolved in DB2 with operating system users
and groups. A list of MySQL users should be compared to the equivalent DB2
operating system users. All of DB2's authorities should be verified to allow the
correct individuals to connect to the database. Privileges for all database objects
should also be verified.
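One way to review database-level authorities on DB2 is to query the SYSCAT.DBAUTH catalog view, for example:

```
SELECT grantee, granteetype, connectauth, dbadmauth
FROM syscat.dbauth;
```

The GRANTEETYPE column distinguishes users (U) from groups (G), which is helpful when mapping MySQL users to operating system users and groups.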
10.4 Troubleshooting
The first step of problem determination is to know what information is available to
you. When DB2 performs an operation, an associated return code is returned.
The return code is displayed to the user in the form of an informational or error
message. These messages are logged into diagnostic files depending on the
diagnostic level set in the DB2 configuration. In this section we discuss the DB2
diagnostic logs, error message interpretations, and tips which may help with
problem determination, troubleshooting, and resolutions to specific problems.
The following actions should be taken when experiencing a DB2 related problem:
Check related messages
Explain error codes
Check documentation
Search through available Internet resources
Here is the complete list of DB2 error message prefixes for your reference:
ASN: Replication messages
CCA: Client Configuration Assistant messages
CLI: Call Level Interface messages
DB2: Command Line Processor messages
DBA: Control Center and Database Administration Utility messages
DBI: Installation or configuration messages
EXP: Explain utility messages
FLG: Information Catalog Manager messages
LIC: DB2 license manager messages
SAT: Satellite messages
SPM: Synch Point Manager messages
SQJ: Embedded SQLJ in Java messages
SQL: Database Manager messages
DB2 also provides detailed information for each message. The full error
message describes the nature of the problem in detail along with potential user
responses. To display the full message for a DB2 return code, you can use the
DB2 command db2 ? error-code in Linux or AIX. Since ? (question mark) is a
special character to the shell, you must enclose the question mark and the error
code in double quotes ("). See Example 10-10.
Explanation:
2 All the containers assigned to this DMS table space are full.
This is the likely cause of the error.
[...]
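For instance, the explanation text above could be retrieved with a command of this form (the message number SQL0289 is used here only for illustration):

```
db2 "? SQL0289"
```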
You can find full information about the DB2 message format, and a listing of all
the messages in Messages Reference, Volumes 1 and 2, SC27-2450-00, and
SC27-2451-00, available online at:
http://www.ibm.com/support/docview.wss?rs=71&uid=swg27015148
DB2 is not the only component that can write to the notification logs; tools such
as the Health Monitor, the Capture and Apply programs, and user applications
can also write to these logs using the db2AdminMsgWrite API function.
db2diag.log
The db2diag.log is the most often used file for DB2 problem investigation. You
can find this file in the DB2 diagnostic directory, defined by the DIAGPATH
variable in the database manager configuration. If the DIAGPATH parameter is
not set, by default the directory is located at:
Most of the time, the default value is sufficient for problem determination. In
some cases, especially on development or test systems, you can set the
parameter to 4 in order to collect all informational messages. However, pay
attention to the database activity and the space available on the file system,
since the large amount of data recorded in the file may cause performance
issues. Setting DIAGLEVEL to 4 may also make the file very large and harder to
read.
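The diagnostic level can be checked and changed through the database manager configuration, for example:

```
db2 get dbm cfg | grep -i diag
db2 update dbm cfg using DIAGLEVEL 4
```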
Explanations of the db2diag.log entries are included below. The bolded numbers
in the example correspond to the following fields:
1. A timestamp and timezone for the message.
2. The record ID field. The recordID of the db2diag log files specifies the file
offset at which the current message is being logged (for example, “27204”)
and the message length (for example, “655”) for the platform where the DB2
diagnostic log was created.
3. The diagnostic level associated with the message: for example, Info,
Warning, Error, Severe, or Event.
4. The process ID
5. The thread ID
6. The process name
7. The name of the instance generating the message.
8. For multi-partition systems, the database partition generating the message.
(In a non-partitioned database, the value is “000”.)
9. The database name
10. The application handle. This value aligns with that used in db2pd output and
lock dump files. It consists of the coordinator partition number followed by the
coordinator index number, separated by a dash.
11. Identification of the application for which the process is working. In this
example, the process generating the message is working on behalf of an
application with the ID 9.26.54.62.45837.070518182042.
A TCP/IP-generated application ID is composed of three sections:
i. IP address: It is represented as a 32-bit number displayed as a
maximum of 8 hexadecimal characters.
ii. Port number: It is represented as 4 hexadecimal characters.
iii. A unique identifier for the instance of this application.
12. The authorization identifier.
13. The engine dispatchable unit identifier.
14. The name of the engine dispatchable unit.
15. The product name (“DB2”), component name (“data management”), and
function name (“sqlInitDBCB”) that is writing the message (as well as the
probe point (“4820”) within the function).
16. The information returned by a called function. There may be multiple data
fields returned.
Trap files
The database manager generates a trap file if it cannot continue processing due
to a trap, segmentation violation, or exception.
All signals or exceptions received by DB2 are recorded in the trap file. The trap
file also contains the function sequence that was running when the error
occurred. This sequence is sometimes referred to as the “function call stack” or
“stack trace.” The trap file also contains additional information about the state of
the process when the signal or exception was caught.
A trap file is also generated when an application is forced while running a fenced
thread-safe routine. The trap occurs as the process is shutting down. This is not
a fatal error and it is nothing to be concerned about.
The files are located in the directory specified by the DIAGPATH database
manager configuration parameter.
On all platforms, the trap file name begins with a process identifier (PID),
followed by a thread identifier (TID), followed by the partition number (000 on
single partition databases), and concluded with .trap.txt.
There are also diagnostic traps, generated by the code when certain conditions
occur which do not warrant crashing of the instance, but where it may be useful
to verify values within the stack. These traps are named with the PID in decimal
format, followed by the partition number (0 in a single partition database).
The following resembles a trap file with a process identifier (PID) of 6881492,
and a thread identifier (TID) of 2.
6881492.2.000.trap.txt
The following is an example of a trap file whose process and thread is running on
partition 10.
6881492.2.010.trap.txt
You can generate trap files on demand using the db2pd command with the -stack
all or -dump option. In general, though, this should only be done as requested by
IBM Software Support.
Dump files
When DB2 determines that extra information is required for collection due to an
error, it often creates binary dump files in the diagnostic path. These files are
named with the process or thread ID that failed and the node where the problem
occurred, and end with the .dump.bin extension, as shown in the example below.
6881492.2.010.dump.bin
When a dump file is created or appended, an entry is made in the db2diag log file
indicating the time and the type of data written.
Core files
Memory access violations, illegal instructions, bus errors, and user-generated
quit signals cause core files to be dumped.
These files are located in the directory specified by the DIAGPATH database
manager configuration parameter.
Maintenance version
The db2level utility can be used to check the current version of DB2. As
presented in Figure 10-2, the utility returns information about the installed
maintenance updates (FixPaks), the word length used by the instance (32-bit or
64-bit), the build date, and other code identifiers. It is recommended to
periodically check if the newest available FixPaks are installed. DB2
maintenance updates are freely available at:
ftp://ftp.software.ibm.com/ps/products/db2/fixes
db2support utility
The db2support utility is designed to automatically collect all DB2 and system
diagnostic data. This program generates information about a DB2 server,
including that related to configuration and system environment.
In one simple step, the tool can gather database manager snapshots,
configuration files, and operating system parameters, which should make the
problem determination quicker. Below is a sample call of the utility:
db2support . -d invent -c
The dot represents the current directory where the output file is stored. The rest
of the command options are not required and can be omitted. The -d and -c
clauses instruct the utility to connect to the invent database, and to gather
information about database objects such as table spaces, tables, or packages.
The site offers the most recent copies of documentation, a knowledge base to
search for technical recommendations or DB2 defects, links for product updates,
the latest support news, and other useful DB2 related links.
To find related problems, prepare words that describe the issues, such as the
commands that were run, symptoms, or tokens from the diagnostics messages.
You can use these as terms in the DB2 Knowledge Base. The Knowledge Base
offers an option to search through DB2 documentation, TechNotes, and DB2
defects (APARs). TechNotes is a set of recommendations and solutions for
specific problems.
On the DB2 support site, you can search for closed, open, and HIPER APARs. A
closed status for an APAR indicates that a resolution for a problem has been
achieved and included in a specific FixPak. Open APARs
represent DB2 defects that are currently being worked on or waiting to be
included in the next available FixPak. HIPER APARs (High-Impact or
PERvasive) are critical problems that should be reviewed to assess the potential
impact of staying at a particular FixPak level.
The DB2 Technical Support site offers e-mail notification of critical or pervasive
DB2 customer support issues including HIPER APARs and FixPak alerts. To
subscribe to it, follow the DB2 Alert link on the Technical Support main page.
Guidelines and reference materials (which you may need when calling IBM
support), as well as the telephone numbers are available in the IBM Software
Support Guide at:
http://techsupport.services.ibm.com/guides/handbook.html
When using these table functions in a database partitioned environment, you can
choose to receive data for a single partition or for all partitions. If you choose to
receive data for all partitions, the table functions return one row for each
partition. Using SQL, you can sum the values across partitions to obtain the
value of a monitor element across partitions.
Monitor table functions can be broken up into three categories, depending on the
information they monitor:
Example 10-12 shows how the MON_GET_TABLE function can be used to
retrieve the rows read, inserted, updated, and deleted for all tables in the
ADMIN schema.
ORDER BY total_rows_read
DESC"
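Only a fragment of Example 10-12 appears above; a query of roughly this shape (the column list and alias are assumptions consistent with the fragment) returns the row activity for tables in the ADMIN schema:

```
db2 "SELECT varchar(tabname,20) AS tabname,
            rows_read AS total_rows_read,
            rows_inserted, rows_updated, rows_deleted
     FROM TABLE(MON_GET_TABLE('ADMIN', NULL, -2)) AS t
     ORDER BY total_rows_read DESC"
```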
Snapshot monitoring
Snapshot monitoring describes the state of database activity at a particular point
in time when a snapshot is taken. Snapshot monitoring is useful in determining
the current state of the database and its applications. Taken at regular intervals,
they are useful for observing trends and foreseeing potential problems.
Snapshots can be taken from the command line, using custom API programs or
through SQL using table functions. Example 10-13 shows an extract from a
sample snapshot invoked from the command line.
Database Snapshot
[...]
[...]
[...]
In the example above, the snapshot has collected database level information for
the INVENT database. Some of the returned parameters display point-in-time
values such as the number of currently connected applications:
Some parameters can contain historical values such as the maximum number of
concurrent connections that have been observed on the database:
High water mark for connections = 9
Cumulative or historical values relate to the point in time when the counters were
last initialized. The counters can be reset to zero by the RESET MONITOR
command, or by the appropriate DB2 event. In Example 10-13 on page 343,
database deactivation and activation reset all the database level counters.
Example 10-14 shows how to reset monitors for an entire instance and for a
specific database.
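The reset commands take roughly this form (the database name invent is illustrative):

```
db2 reset monitor all
db2 reset monitor for database invent
```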
The monitor switches can be turned on at the instance level or the application
level. To switch the monitors at the instance level, modify the appropriate
database manager parameter. After modifying the DFT_MON_BUFPOOL
parameter, as shown in Example 10-16, all users with SYSMAINT, SYSCTRL, or
SYSADM authorities are able to collect buffer pool statistics on any database in
the instance.
To switch the monitors at the application level, issue the UPDATE MONITOR
SWITCHES command from the command line. The changes are only applicable
to that particular session. Example 10-17 shows how to update the suitable
monitor switch for collecting buffer pool information.
The complete list of monitor switches and related database manager parameters
is presented on Table 10-2.
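The commands referenced in Example 10-16 and Example 10-17 take roughly the following form, instance level first and application level second:

```
db2 update dbm cfg using DFT_MON_BUFPOOL ON
db2 update monitor switches using bufferpool on
```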
Sample snapshots
The database manager snapshot (Example 10-18 on page 346) captures
information specific to the instance level. The information centers around the
total amount of memory allocated to the instance and the number of agents that
are currently active on the system.
The table snapshot (Example 10-20) contains information on the usage and
creation of all tables. This information is useful in determining how much work is
being run against a table and how much the table data changes. This information
can be used to decide how your data should be laid out physically.
The table space and buffer pool snapshots (Example 10-21) contain similar
information. The table space snapshot returns information regarding the layout of
the table space and the amount of space being used. The buffer pool snapshot
contains information on the amount of space currently allocated for buffer pools,
and space required when the database is next reset. Both snapshots contain a
summary of the way in which data is accessed from the database. This access
can be done from a buffer pool, directly from tables on disk, or through a direct
read or write for LOBs or LONG objects.
The dynamic SQL snapshot (Example 10-22 on page 347) is used extensively to
determine how well SQL statements are performing. This snapshot summarizes
the behavior of the different dynamic SQL statements that are run. The snapshot
does not capture static SQL statements, so anything that was pre-bound does
not show up in this list. The snapshot is an aggregation of the information
concerning the SQL statements. If an SQL statement is executed 102 times, then
there will be one entry, which encapsulates a summary of the behavior of all
102 executions.
Example 10-23 and Example 10-24 show how to get similar monitoring
information using the table functions and views, as we did in Example 10-17
using the GET SNAPSHOT command. Example 10-23 demonstrates a query
that captures the snapshot of lock information for the currently connected
database. Example 10-24 is a query that captures a snapshot of lock information
about the SAMPLE database for the current connected database partition.
Table 10-3 lists the snapshot table functions, administrative views and return
information that can be used to monitor your database system. All administrative
views belong to the SYSIBMADM schema.
For the following list of snapshot table functions, if you enter NULL for the
currently connected database, you get snapshot information for all databases in
the instance:
SNAP_GET_DB_V95
SNAP_GET_DB_MEMORY_POOL
SNAP_GET_DETAILLOG_V91
SNAP_GET_HADR
SNAP_GET_STORAGE_PATHS
SNAP_GET_APPL_V95
SNAP_GET_APPL_INFO_V95
SNAP_GET_AGENT
SNAP_GET_AGENT_MEMORY_POOL
SNAP_GET_STMT
SNAP_GET_SUBSECTION
SNAP_GET_BP_V95
SNAP_GET_BP_PART
The database name parameter does not apply to the database manager level
snapshot table functions; they have an optional parameter for database partition
number.
Event monitoring
Event monitors are used to monitor the performance of DB2 over a fixed period
of time. The information that can be captured by an event monitor is similar to
that of the snapshots, but in addition to snapshot-level information, event
monitors also
examine transition events in the database, and consider each event as an object.
Event monitors can capture information about DB2 events in the following areas:
Statements: A statement event is recorded when an SQL statement ends. The
monitor records the statement's start and stop time, CPU used, the text of
dynamic SQL, the return code of the SQL statement, and other metrics such as
fetch count.
Connections: A connection event is recorded whenever an application
disconnects from the database. The connection event records all application
level counters.
Database: A database event is recorded when the last
application disconnects from the database. This event records all database
level counters.
Buffer pools: A buffer pool event is recorded when the last application
disconnects from the database. The information captured contains the type
and volume of use of the buffer pool, use of pre-fetchers and page cleaners,
and whether or not direct I/O was used.
Table spaces: A table space event is recorded when the last application
disconnects from the database. This monitor captures information regarding
counters for buffer pool, prefetchers, page cleaners and direct I/O for each
table space.
Tables: All active table events are recorded when the last application
disconnects from the database. An active table is one which has been altered
or created since the database was activated. The monitor captures the
number of rows read and written to the table.
Event monitors are created with the CREATE EVENT MONITOR SQL
statement. Information about event monitors is stored in the system catalog
table, and it can be reused later.
Example 10-25 on page 352 shows a sequence of statements that illustrate how
to collect Event Monitor information using commands.
[...]
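A minimal sequence along the lines of Example 10-25 might look like this (the monitor name and output path are hypothetical):

```
-- Create a statement event monitor that writes to a file
CREATE EVENT MONITOR stmtmon FOR STATEMENTS WRITE TO FILE '/tmp/stmtmon';
-- Activate the monitor, run the workload, then deactivate it
SET EVENT MONITOR stmtmon STATE 1;
SET EVENT MONITOR stmtmon STATE 0;
```

The collected data can then be formatted with the db2evmon tool.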
The access plan acquired from Visual Explain helps to understand how individual
SQL or XQuery statements are executed. The information displayed in the Visual
Explain graph can be used to tune SQL and XQuery queries to optimize
performance.
You can start Visual Explain from the Control Center or the Optim Data Studio
toolset. From Data Studio, create or open an SQL or XQuery statement. In the
main panel view, right-click and select Open Visual Explain, as shown in
Figure 10-3 on page 354.
A configuration window appears where you can specify the general settings and
the values for Visual Explain to use for special registers when fetching explain
data.
Figure 10-4 on page 355 shows an example of an access plan graph. To get the
details right-click the desired graph element.
which are turned on by default with DB2. However, there are still a few settings
you may need to modify to fit your environment. This section focuses on a
number of DB2 performance tuning tips that may be used for initial configuration.
For very simple databases, the default configurations may be sufficient for your
needs. However, in most cases you may want to add additional table spaces.
The benefit of multiple table spaces is that you can assign different database
objects to different table spaces and assign the table spaces to dedicated
physical devices. This allows each table object to utilize the hardware allocated
to its table space. It also essentially allows for table-level backup.
You can create an SMS table space using the MANAGED BY SYSTEM clause
in the create table space definition. The benefit of using a SMS table space is
that it does not require initial storage; space is not allocated by the system until it
is required. Creating a table space with SMS requires less initial work, because
you do not have to predefine the containers.
However, with SMS table spaces, the file system of the operating system
decides where each logical file page is physically stored. This means that pages
may not be stored contiguously on disk - the storage placement depends on the
file system algorithm and on the level of activity on the file system. This may
cause the performance of an SMS table space to be negatively affected.
Therefore, SMS table spaces are ideal for small databases that require low
maintenance and monitoring, and that grow or shrink rapidly.
With DMS, the database manager can ensure that pages are physically
contiguous, since it bypasses operating system I/O and interfaces with the disk
directly. This can improve performance significantly. You can create a DMS
table space by using the MANAGED BY DATABASE clause in the create table
space definition.
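For illustration, both variants might be created as follows (the table space names, container paths, and container size are assumptions):

```
-- SMS: the file system allocates space on demand
CREATE TABLESPACE sms_ts MANAGED BY SYSTEM USING ('/db2/smscont');

-- DMS: storage is preallocated in a file container of 10000 pages
CREATE TABLESPACE dms_ts MANAGED BY DATABASE USING (FILE '/db2/dmscont' 10000);
```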
The down side is that a DMS table space requires more effort in tuning and
administration, since you must add more storage containers as the table space
fills with data. However, you can easily add new containers, drop, or modify the
size of existing containers. The database manager then automatically
rebalances existing data into all the containers belonging to the table space.
Therefore, DMS table spaces are ideal for performance-sensitive applications,
particularly ones that involve a large number of INSERT operations.
For optimal performance, large volume data and indexes should be placed within
DMS table spaces; if possible, split to separate raw devices. Initially, system
temporary table spaces should be of SMS type. In an OLTP environment, there
is no need for creating large temporary objects to process SQL queries, so the
SMS system temporary table space is a good starting point. The easiest way to
optimize your table spaces is to use table spaces managed by automatic
storage.
As shown in Figure 10-5, all data modifications are not only written to table space
containers, but are also logged to ensure recoverability. Since every INSERT,
UPDATE, or DELETE statement is replicated in the transactional log, the
flushing speed of the logical log buffer can be crucial for the entire database
performance. To understand the importance of logical log placement, you should
keep in mind that the time necessary to write data to disk depends on the
physical data distribution on disk. The more random reads or writes that are
performed, the more disk head movements are required, and therefore, the
slower the writing speed. Flushing the logical log buffer to disk is sequential by
nature and should not be interfered with by other operations. Locating logical log
files on separate devices isolates them from other processes, and ensures
uninterrupted sequential writes.
To change logical log files to a new location you must modify the NEWLOGPATH
database parameter as shown in Example 10-27. The logs are relocated to the
new path on the next database activation (this can take some time to create the
files).
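The command in Example 10-27 takes roughly this form (the database name and path are illustrative):

```
db2 update db cfg for invent using NEWLOGPATH /db2/logs
```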
When creating a DMS table space with many containers, DB2 automatically
distributes the data across them in a round-robin fashion, similar to the striping
method available in disk arrays. To achieve the best possible performance, each
table space container should be placed on a dedicated physical device. For
parallel asynchronous writes and reads from multiple devices, the number of
database page cleaners (NUM_IO_CLEANERS) and I/O servers
(NUM_IOSERVERS) should be adjusted. The best value for these two
parameters depends on the type of workload and available resources. You can
start your configuration with the following values:
NUM_IOSERVERS = Number of physical devices, not less than three and no
more than five times the number of CPUs.
NUM_IO_CLEANERS = Number of CPUs
However, the most effective way to configure these parameters is to set them to
automatic and let DB2 manage them, as shown in Example 10-28.
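The commands in Example 10-28 take roughly this form (the database name is illustrative):

```
db2 update db cfg for invent using NUM_IOSERVERS AUTOMATIC
db2 update db cfg for invent using NUM_IO_CLEANERS AUTOMATIC
```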
If this registry variable is set, and the prefetch size of the table is not
AUTOMATIC, the degree of parallelism of the table space is the prefetch size
divided by the extent size. If this registry variable is set, and the prefetch size of
the table space is AUTOMATIC, DB2 automatically calculates the prefetch size
of a table space. Table 10-4 summarizes the different options available and how
parallelism is calculated for each situation:
Initially, set the total size of buffer pools to 10% to 20% of available memory. You
can monitor the system later and correct it. DB2 allows changing buffer pool
sizes without shutting down the database. The ALTER BUFFERPOOL statement
with the IMMEDIATE option takes effect right away, except when there is not
enough reserved space in the database-shared memory to allocate new space.
This feature can be used to tune database performance according to periodical
changes in use, for example, switching from daytime interactive use to nighttime
batch work.
Once the total available size is determined, this area can be divided into different
buffer pools to improve utilization. Having more than one buffer pool can
preserve data in the buffers. For example, let us suppose that a database has
many frequently used smaller tables, which normally reside in the buffer in their
entirety, and thus would be accessible quickly. Now let us suppose that there is a
query running against a very large table using the same buffer pool and involving
reads of more pages than total buffer size. When this query runs, the pages from
the small, very frequently used tables will be lost, making it necessary to re-read
them when they are needed again.
At the start you can create additional buffer pools for caching data and leave the
IBMDEFAULTBP for system catalogs. Creating an extra buffer pool for system
temporary data can be also valuable for the system performance, especially in
an OLTP environment where the temporary objects are relatively small. Isolated
temporary buffer pools are not influenced by the current workload, so it should
take less time to find free pages for temporary structures, and it is likely that the
modified pages will not be swapped out to disk. In a warehousing environment,
the operations on temporary table spaces are considerably more intensive, so the
buffer pools should be larger, or combined with other buffer pools if there is not
enough memory in the system (one pool for caching data and temporary
operations).
Example 10-29 shows how to create buffer pools, assuming that an additional
table space DATASPACE for storing data and indexes has already been created
and that there is enough memory in the system. You can take this as a starting
buffer pool configuration for a 2 GB RAM system.
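A sketch consistent with the results below (the buffer pool names match the output; the sizes assume roughly 2 GB of RAM):

```
CREATE BUFFERPOOL data_bp SIZE 65536 PAGESIZE 4K;  -- 256 MB for data
CREATE BUFFERPOOL temp_bp SIZE 16384 PAGESIZE 4K;  --  64 MB for temp
ALTER TABLESPACE dataspace  BUFFERPOOL data_bp;
ALTER TABLESPACE tempspace1 BUFFERPOOL temp_bp;
```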
The results:
BPNAME NPAGES PAGESIZE TBSPACE
-------------------- ----------- ----------- --------------------
IBMDEFAULTBP 16384 4096 SYSCATSPACE
IBMDEFAULTBP 16384 4096 SYSTOOLSPACE
IBMDEFAULTBP 16384 4096 USERSPACE1
DATA_BP 65536 4096 DATASPACE
TEMP_BP 16384 4096 TEMPSPACE1
Though you can tune your buffer pools manually as described above, using
STMM is an easier and more effective way of tuning the buffer pools for optimal
performance. As we discussed in 9.6, “Autonomics” on page 306, STMM can
tune database memory parameters and buffer pools without any DBA
intervention. STMM works with buffer pools of multiple page sizes and can easily
trade memory between the buffer pools as needed. You can turn on STMM for a
specific buffer pool by issuing commands in Example 10-30 on page 362.
The first command in the example above turns STMM on, which is the default.
The second command tells DB2 to automatically tune the buffer pool BP32. You
can tune individual buffer pools or all buffer pools with STMM.
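The two commands described above take roughly this form (BP32 is the buffer pool name used in the example; the database name sample is an assumption):

```
db2 update db cfg for sample using SELF_TUNING_MEM ON
db2 "ALTER BUFFERPOOL BP32 SIZE AUTOMATIC"
```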
For databases with a heavy update transaction workload, you can generally
ensure that there are enough clean pages in the buffer pool by setting the
parameter value to be equal-to or less-than the default value. A percentage
larger than the default can help performance of your database if there are a small
number of very large tables. To change the default parameter, you can use the
following command:
db2 update db cfg for sample using CHNGPGS_THRESH 40
A single transaction must fit into the available log space to be completed; if it
does not fit, the transaction is rolled back by the system (SQL0964C The
transaction log for the database is full). To process transactions that modify
large numbers of rows, adequate log space is needed.
The total log space currently available for transactions can be calculated by
multiplying the size of one log file (database parameter LOGFILSIZ) by the
number of logs (database parameter LOGPRIMARY).
From a performance perspective, it is better to have a larger log file size because
of the cost for switching from one log to another. When log archiving is switched
on, the log size also indicates the amount of data for archiving. In this case, a
larger log file size is not necessarily better, since a larger log file size may
increase the chance of failure, or cause a delay in archiving or log shipping
scenarios. The log size and the number of logs should be balanced.
Locking is the mechanism that the database manager uses to control concurrent
access to data in the database by multiple applications. Each database has its
own list of locks (a structure stored in memory that contains the locks held by all
applications concurrently connected to the database). The size of the lock list is
controlled by the LOCKLIST database parameter.
The default storage for LOCKLIST on Windows and UNIX is set to AUTOMATIC.
On 32-bit platforms, each lock requires 48 or 96 bytes of the lock list, depending
on whether other locks are held on the object or not. On 64-bit platforms, each
lock requires 64 or 128 bytes of the lock list, depending on whether other locks
are held on the object or not.
When this parameter is set to AUTOMATIC, it is enabled for self tuning. This
allows the memory tuner to dynamically size the memory area controlled by this
parameter as the workload requirements change. Since the memory tuner trades
memory resources among different memory consumers, there must be at least
two memory consumers enabled for self tuning in order for self tuning to be
active.
Automatic tuning of this configuration parameter only occurs when self tuning
memory is enabled for the database (the SELF_TUNING_MEM database
configuration parameter is set to ON).
When the maximum number of lock requests has been reached, the database
manager replaces existing row-level locks with table locks (lock escalation). This
operation reduces the requirements for lock space, because transactions will
hold only one lock on the entire table instead of many locks on every row. Lock
escalation has a negative performance impact because it reduces concurrency
on shared objects. Other transactions must wait until the transaction holding the
table lock commits or rolls back its work. Setting LOCKLIST to AUTOMATIC avoids
this situation, as the lock list grows dynamically to avoid lock escalation
or a “lock list full” situation.
The snapshot collects the requested information at the time the command is
issued. Issuing the get snapshot command later can produce different results,
because in the meantime applications may have committed their transactions and
released their locks. To check for lock escalation occurrences, look at the
db2diag.log file.
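For example, a lock snapshot can be taken as follows (the database name invent is illustrative):

```shell
# Turn on the lock monitor switch for this session, then take the snapshot
db2 update monitor switches using lock on
db2 get snapshot for locks on invent
```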
Log buffer
Log records are written to disk when one of the following occurs:
- A transaction commits, or a group of transactions commit, as defined by the
  MINCOMMIT configuration parameter
- The log buffer is full
- Some other internal database manager event occurs
The log buffer size (database parameter LOGBUFSZ) must also be less than or
equal to the DBHEAP parameter. Buffering the log records results in more
efficient log file I/O, because the log records are written to disk less
frequently and more log records are written out each time.
The default size of the log buffer is 256 4 KB pages. In most cases the log
records are written to disk when one of the transactions issues a COMMIT, or
when the log buffer is full. It is recommended to increase the size of this
buffer area if there is considerable read activity on a dedicated log disk, or
if there is high disk utilization.
Increasing the size of the log buffer may result in more efficient I/O operations,
especially when the buffer is flushed to disk. The log records are written to disk
less frequently and more log records are written each time.
When increasing the value of this parameter, you should also consider
increasing the DBHEAP parameter since the log buffer area uses space
controlled by the DBHEAP parameter.
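A sketch of such a change, with an assumed value of 512 pages for the log buffer:

```shell
# Double the log buffer (value in 4 KB pages) and let DB2 size the database heap
db2 update db cfg for invent using LOGBUFSZ 512
db2 update db cfg for invent using DBHEAP AUTOMATIC
```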
At a later time, you can use the snapshot for applications to check the current
usage of log space by transactions as presented in Example 10-33.
Before running the application snapshot, the Unit of Work monitor switch must be
turned on. At the time the snapshot was issued, you can see that there are
only three applications running on the system. The first transaction uses 478
bytes of log space, the second 21324 bytes, and the last 110865 bytes, which is
roughly 28 pages more than the default log buffer size. The snapshot gives only
the current values from the moment the command was issued. To get more valuable
information about the usage of log space by transactions, run the snapshot
several times.
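The monitor switch and snapshot commands behind Example 10-33 can be sketched as follows (the database name invent is from our sample environment):

```shell
db2 update monitor switches using uow on
db2 get snapshot for applications on invent | grep "UOW log space used"
```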
An access plan can be used to view statistics for selected tables, indexes, or
columns; properties for operators; global information such as table space and
function statistics; and configuration parameters relevant to optimization. With
Visual Explain, you can view the access plan for an SQL or XQuery statement in
graphical form.
After many changes to table data, logically sequential data might reside on
non-sequential data pages, so that the database manager must perform
additional read operations to access data.
Additional read operations are also required if many rows have been deleted. In
this case, you may consider reorganizing the table to match the index and to
reclaim space. You can also reorganize the system catalog tables.
Because reorganizing a table usually takes more time than updating statistics,
you could execute the RUNSTATS command to refresh the current statistics for
your data, and then rebind your applications. If refreshed statistics do not
improve performance, reorganization might help.
The RUNSTATS command can be executed against a table from the command
line. Example 10-34 shows how to execute the RUNSTATS command against our
sample inventory table.
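A RUNSTATS invocation comparable to Example 10-34 might look like this; the WITH DISTRIBUTION and DETAILED INDEXES options shown are one common choice, not the only one:

```shell
db2 connect to invent
db2 runstats on table admin.inventory with distribution and detailed indexes all
```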
It is also possible to update statistics using the Data Studio tool. Within the
Database Explorer view, connect to your database and drill down through the
database object folders until you find the table whose statistics you would like
to update. In our example we connected to the invent database, then expanded the
invent database folder, the Schemas folder, the ADMIN schema folder, and the
Tables folder, which brought us to a list of tables in the ADMIN schema of the
invent database. To pull up the table options, we right-clicked the INVENTORY
table icon, as illustrated in Figure 10-7 on page 367.
In some scenarios you may need more than current statistics on a table to
improve performance. The following factors can indicate if your database will
benefit from table reorganization:
- There has been a high volume of insert, update, and delete activity against
  tables that are accessed by queries.
- There have been significant changes in the performance of queries that use
  an index with a high cluster ratio.
- Executing the RUNSTATS command to refresh table statistics does not
  improve performance.
- Output from the REORGCHK command indicates a need for table
  reorganization.
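From the command line, the check and the reorganization itself can be sketched as follows (table name admin.inventory as in our sample environment):

```shell
# Ask DB2 whether the table (or its indexes) would benefit from a REORG
db2 reorgchk current statistics on table admin.inventory

# Reorganize the table, then refresh the statistics
db2 reorg table admin.inventory
db2 runstats on table admin.inventory
```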
You can access the table reorganization option from Data Studio by
right-clicking the table, as shown in Figure 10-7 on page 367. After selecting
the REORG Table... option in the drop-down menu, the table reorganization
wizard opens in the main view. Figure 10-9 on page 369 illustrates this wizard.
You can select the parameters for the REORG command and select Run to execute
the command.
DB2 comes with a very powerful query optimization algorithm. This cost-based
algorithm will attempt to determine the cheapest way to perform a query against
a database. Items such as the database configuration, database physical layout,
table relationships, and data distribution are all considered when finding the
optimal access plan for a query. To check the current execution plan, you can
use the Explain utility.
By default in DB2 9, any new databases will run the Configuration Advisor in the
background and have such configuration recommendations automatically
applied. To disable this feature, or to explicitly enable it, you must use the db2set
command, as Example 10-35 illustrates.
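The registry setting referred to in Example 10-35 can be sketched as follows:

```shell
db2set DB2_ENABLE_AUTOCONFIG_DEFAULT=NO    # disable the automatic run
db2set DB2_ENABLE_AUTOCONFIG_DEFAULT=YES   # explicitly enable it
db2set DB2_ENABLE_AUTOCONFIG_DEFAULT=      # revert to the default behavior
```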
In any case, the Configuration Advisor can be manually run at any time against a
database to update the current configuration, regardless of the
DB2_ENABLE_AUTOCONFIG_DEFAULT setting. All recommendations are
based on the input that you provide and system information that the
Configuration Advisor gathers. The generated recommendations can be applied
or simply displayed.
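From the CLP, the advisor is invoked with the AUTOCONFIGURE command; the input values below are examples only, not recommendations:

```shell
db2 connect to invent
# Show recommendations based on 60% of memory and a mixed workload,
# and apply them to the database configuration only
db2 autoconfigure using mem_percent 60 workload_type mixed apply db only
```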
It is important to point out that the values suggested by the Configuration Advisor
are relevant for only one database per instance. If you want to use this advisor
on more than one database, each database should belong to a separate
instance.
To invoke this wizard from the DB2 Control Center, expand the object tree until
you find the database that you want to tune. Select the icon for the database,
right-click and select Configuration Advisor. Through several dialog windows,
the wizard collects information about the percentage of memory dedicated to
DB2, type of workload, number of statements per transaction, transaction
throughput, trade-off between recovery and database performance, number of
applications, and isolation level of applications connected to the database.
Based on the supplied answers, the wizard proposes configuration changes and
gives the option to apply the recommendations or save them as a task for the
Task Center for later execution as shown in Figure 10-11 on page 372. The result
window is presented in Figure 10-12 on page 373.
[...]
[...]
To execute the index advisor against a specific database, we first must specify
the workload that will be run against the database. To do this from the command
line, we create a file that defines the workload. Example 10-37 shows the queries
that will be run against the SAMPLE database.
We can then run the db2advis command and specify the db2advis.in as the
workload input script. Example 10-38 shows the syntax and output to execute the
index advisor; for more options run db2advis -h from the command line.
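A sketch of the workload file and the advisor invocation; the file name db2advis.in follows the text, and the statement frequencies match the XML output shown below:

```shell
cat > db2advis.in <<'EOF'
--#SET FREQUENCY 100
SELECT COUNT(*) FROM EMPLOYEE;
SELECT * FROM EMPLOYEE WHERE LASTNAME='HAAS';
--#SET FREQUENCY 1
SELECT AVG(BONUS), AVG(SALARY) FROM EMPLOYEE GROUP BY WORKDEPT ORDER BY WORKDEPT;
EOF

db2advis -d sample -i db2advis.in
```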
--
-- UNUSED EXISTING INDEXES
-- ============================
-- DROP INDEX "DB2INST1"."XEMP2";
-- ===========================
--
-- ====ADVISOR DETAILED XML OUTPUT=============
-- ==(Benefits do not include clustering recommendations)==
--
--<?xml version="1.0"?>
--<design-advisor>
--<statement>
--<statementnum>0</statementnum>
--<statementtext>
-- SELECT COUNT(*) FROM EMPLOYEE
--</statementtext>
--<objects>
--<identifier>
--<name>EMPLOYEE</name>
--<schema>DB2INST1</schema>
--</identifier>
--<identifier>
--<name>XEMP2</name>
--<schema>DB2INST1</schema>
--</identifier>
--</objects>
--<benefit>0.000000</benefit>
--<frequency>100</frequency>
--</statement>
--<statement>
--<statementnum>1</statementnum>
--<statementtext>
-- SELECT * FROM EMPLOYEE WHERE LASTNAME='HAAS'
--</statementtext>
--<objects>
--<identifier>
--<name>EMPLOYEE</name>
--<schema>DB2INST1</schema>
--</identifier>
--</objects>
--<benefit>0.000000</benefit>
--<frequency>100</frequency>
--</statement>
--<statement>
--<statementnum>2</statementnum>
--<statementtext>
-- SELECT AVG(BONUS), AVG(SALARY)
-- FROM EMPLOYEE GROUP BY WORKDEPT ORDER BY
-- WORKDEPT
--</statementtext>
--<objects>
--<identifier>
--<name>EMPLOYEE</name>
--<schema>DB2INST1</schema>
--</identifier>
--<identifier>
--<name>XEMP2</name>
--<schema>DB2INST1</schema>
--</identifier>
--</objects>
--<benefit>0.000000</benefit>
--<frequency>1</frequency>
--</statement>
--</design-advisor>
-- ====ADVISOR DETAILED XML OUTPUT=============
--
11.1 pureXML
DB2 pureXML allows native storage of XML data in a pre-parsed tree format
within a database table. More specifically, this is done using a special XML
column data type. However, unlike other databases on the market, DB2 does not
store XML data simply as character strings or CLOBs, nor does it shred the data
into relational columns. When XML data is inserted into a DB2 database, it is
parsed and fragmented with node-level granularity, preserving the tree structure
of the original XML document. Moreover, during this process, you can have DB2
validate the XML data against an XML schema registered with the database, thus
making sure that the data is always in the format your applications require.
Figure 11-1 on page 381 illustrates a simple example of how relational and XML
data are integrated within a single table. As shown in the figure, when creating a
table within a DB2 database, you can specify both relational and XML data types
in separate column definitions. This creates a table, where for every row of
relational data, there will be an XML document associated with that row.
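A minimal sketch of such a table definition; the names are illustrative, and the sample CUSTOMER table used later in this chapter follows the same pattern:

```sql
-- One relational column plus one XML column per row
CREATE TABLE customer (
    cid   BIGINT NOT NULL PRIMARY KEY,
    info  XML
);

INSERT INTO customer (cid, info)
VALUES (1000, '<customerinfo><name>Kathy Smith</name></customerinfo>');
```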
Since large XML documents may be stored, they are physically stored in separate
objects by default; these objects are called XML Data Area (XDA) objects. For
most application scenarios, this provides
excellent performance. However, you do have the option of storing the XML
documents in the same physical space as the relational data by using the SET
INLINE LENGTH clause in the CREATE or ALTER statement. For more
information refer to the IBM DB2 Information Center at:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
In order to query and update SQL and XML data, SQL and XQuery statements
can be used (SQL/XML: ISO standard ISO/IEC 9075-14:2003). Several
operations are available to directly modify not only full documents, but also parts
or sub-trees of XML documents without having to read, modify and reinsert them.
Using the XQuery language you can directly modify single values and nodes
within the XML document. XQuery is a fairly new, standardized query language
supporting path-based expressions. More information about XQuery can be found
at:
http://www.w3.org/TR/2007/REC-xquery-20070123/.
With pureXML, applications are not only able to combine statements from both
languages to query SQL and XML data; you can express many queries in plain
XQuery, in SQL/XML, or XQuery with embedded SQL. In certain cases, one of
the options to express your query logic may be more intuitive than another. In
general, the “right” approach for querying XML data needs to be chosen on a
case-by-case basis, taking the application's requirements and characteristics
into account. Example 11-1 shows a simple XQuery command.
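The kind of XQuery behind Example 11-1 can be sketched as follows, assuming the sample CUSTOMER table with its INFO XML column:

```sql
-- Return the <name> element of every customer document
XQUERY db2-fn:xmlcolumn('CUSTOMER.INFO')/customerinfo/name;
```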
1
--------------------------------------------
<name>Kathy Smith</name>
<name>Kathy Smith</name>
<name>Jim Noodle</name>
<name>Robert Shoemaker</name>
<name>Matt Foreman</name>
<name>Larry Menard</name>
6 record(s) selected.
Example 11-3 shows how to use a combined XQuery / SQL statement to retrieve
XML data based on some relational data.
1
<name>Kathy Smith</name>
1 record(s) selected.
result in tremendous cost savings. These savings will also extend to backup disk
space and more.
Before you consider turning on row compression, you can inspect your tables
(for example, with the INSPECT command and its ROWCOMPESTIMATE option) to see
what savings to expect. Compressing and decompressing data is transparent to
your applications.
To enable row compression, you can use the COMPRESS YES keywords in
either the CREATE or ALTER TABLE statement, as shown in Example 11-4.
After row compression is enabled and 1-2 MB of data has been loaded or inserted,
DB2 automatically creates the compression dictionary. This dictionary maps
frequently occurring patterns to associated shorter 12-bit symbols, which then
replace the original data.
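A sketch of both variants; the table names are illustrative, and for a table that already contains data a classic REORG builds the dictionary:

```sql
-- At creation time
CREATE TABLE sales_history (
    id   INTEGER NOT NULL,
    txt  VARCHAR(200)
) COMPRESS YES;

-- On an existing table, followed by a dictionary (re)build
ALTER TABLE sales_archive COMPRESS YES;
REORG TABLE sales_archive RESETDICTIONARY;
```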
Figure 11-2 illustrates the mapping of repeating patterns in two table rows to
dictionary symbols representing those patterns. The end result is a compressed
data record shorter in length than the original uncompressed record - depicted by
the yellow rectangles representing the rows beneath the table.
As of DB2 9.7, data compression has been extended to include all temporary
tables. Data compression for temporary tables reduces the amount of temporary
disk space that is required for large and complex queries, increasing query
performance.
If compression is enabled on a table with an XML column, the XML data that is
stored in the XDA object is also compressed. A separate compression dictionary
for the XML data is stored in the XDA object. XDA compression is not supported
for tables whose XML columns were created prior to this version; for such tables,
only the data object is compressed.
disk and is ideal for larger databases used for data warehousing, data mining,
online analytical processing or working with online transaction processing
workloads.
Such systems can also be made highly available when using shared storages. In
this case two or more nodes can share the file systems holding the table spaces.
If an outage occurs, the surviving node can immediately access the failed node's
table spaces and continue processing.
To partition a table, specify the PARTITION BY RANGE clause and the
partitioning column or columns. You can specify multiple columns, including
generated columns. The partitioning columns must be base types (no LOBs, LONG
VARCHARs, and so on). Figure 11-4 on page 387 shows different syntax options for
creating the same table partitions. The first CREATE TABLE statement creates a
table with three partitions on the c1 column, one partition for each of the
following ranges: 1 to 33, 34 to 66, and 67 to 99. We refer to this as the short
form because it allows DB2 to create, name, and distribute the partitions over
the three table spaces. In the second CREATE TABLE statement, the user names the
partitions by using the PARTITION (or PART) keyword. In this example the user
also specifies the table space in which each partition is to be stored.
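The two statements shown in Figure 11-4 can be sketched as follows (table and table space names are illustrative):

```sql
-- Short form: DB2 creates, names, and distributes the three partitions
CREATE TABLE t1 (c1 INT)
    IN tbsp1, tbsp2, tbsp3
    PARTITION BY RANGE (c1)
    (STARTING FROM (1) ENDING (99) EVERY (33));

-- Long form: the user names each partition and assigns its table space
CREATE TABLE t2 (c1 INT)
    PARTITION BY RANGE (c1)
    (PARTITION part1 STARTING FROM (1)  ENDING (33) IN tbsp1,
     PARTITION part2 STARTING FROM (34) ENDING (66) IN tbsp2,
     PARTITION part3 STARTING FROM (67) ENDING (99) IN tbsp3);
```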
After the partitioned table is created, whenever you INSERT, UPDATE, or LOAD
data into the table, DB2 automatically places each row in the appropriate table
partition according to the specified ranges. If the inserted data does not fit
within the range of any of the partitions, DB2 produces an error.
Traditionally, in order to archive older data, data would need to be moved to the
archived locations and delete statements would be issued to remove the data
from the current table. This results in a full table scan to find all rows belonging to
the requested range. By using table partitioning, each table partition can be
quickly separated from the table using the DETACH PARTITION key words in
the ALTER TABLE statement. Example 11-5 describes the syntax for dropping a
particular table partition.
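The DETACH described above can be sketched as follows (table and partition names are illustrative):

```sql
ALTER TABLE t1 DETACH PARTITION part1 INTO TABLE t1_archive;
COMMIT;
-- After the COMMIT, t1_archive is an ordinary table holding the detached rows
```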
Physically, there is no impact on the system when using the ALTER TABLE ...
DETACH PARTITION statement. The operation is extremely fast, since no data
movement takes place. As you can see in Figure 11-5, the table containing the
detached partition resides in the same table space as the original table
partition. The DETACH only changes catalog entries to let DB2 know that the
table partition is no longer a part of the main table. Once the statement is
committed, the detached data is available under the new table name. From then
on, the detached table is a regular table, and you can do with it whatever you
need.
The ATTACH command is very similar to DETACH. For more details visit the
IBM DB2 Information Center at:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
Keep in mind that table partitioning can be used efficiently in combination with
database partitioning and multidimensional clustering.
The indexes for each dimension are block based, not record based, thus
reducing their size (and effort needed for logging and maintaining) dramatically.
Reorganization of the table in order to re-cluster is not necessary.
Example 11-6 shows the CREATE TABLE statement for an MDC table clustered on
three columns: itemId, nation, and orderDate. The block indexes for each
dimension are created automatically.
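A sketch of the MDC definition described above; the column data types are assumptions, while the dimension columns follow Example 11-6:

```sql
CREATE TABLE mdctable (
    itemId     INTEGER,
    nation     CHAR(3),
    orderDate  DATE,
    amount     DECIMAL(10,2)
)
ORGANIZE BY DIMENSIONS (itemId, nation, orderDate);
```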
Figure 11-7 shows the data clustering according to three dimensions as defined
in Example 11-6.
If an existing block is filled, a new block is allocated. All blocks with the
same combination of dimension values are grouped into what is called a cell.
With this internal organization DB2 can very quickly find data along dimensions
or find all rows for a specific combination of dimension values.
Benefits of MDC
Advantages of MDC tables:
- Dimension block index lookups can identify the required portions of the table
  and quickly scan only the required blocks.
- Block indexes are smaller than record identifier (RID) indexes, thus lookups
  are faster.
- Index ANDing and ORing can be performed at the block level and combined
  with RIDs.
- Data is guaranteed to be clustered on extents, which makes retrieval faster.
- Rows can be deleted faster when rollout can be used.
Note that MDC can be efficiently used in combination with database partitioning
and table partitioning.
Materialized query tables (MQTs) are a powerful way to improve response time
for complex queries, especially queries that might require some of the following
operations:
- Aggregated data over one or more dimensions
- Joined and aggregated data over a group of tables
- Data from a commonly accessed subset of data, that is, from a “hot”
  horizontal or vertical database partition
- Repartitioned data from a table, or part of a table, in a partitioned database
  environment
Knowledge of MQTs is integrated into the SQL and XQuery compiler. During
compilation, the query rewrite phase and the optimizer match queries with MQTs
to determine whether an MQT should be substituted for a query that accesses the
base tables. If an MQT is used, the EXPLAIN facility can provide information
about which MQT was selected.
Since MQTs behave like regular tables in many ways, the same guidelines apply to
them for optimizing data access: using appropriate table space definitions,
creating indexes, and issuing RUNSTATS.
large tables, one containing transaction items and the other identifying the
purchase transactions.
An MQT is created with the sum and count of sales for each level of the following
hierarchies:
Product
Location
Time, composed of year, month, day.
Many queries can be satisfied from this stored aggregate data. The following
example shows how to create an MQT that computes sum and count of sales
along the product group and line dimensions; along the city, state, and country
dimension; and along the time dimension. It also includes several other columns
in its GROUP BY clause. Example 11-7 is an example of the create MQT
statement.
)
DATA INITIALLY DEFERRED REFRESH DEFERRED;
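A hypothetical MQT of this shape, with illustrative table and column names, could look like the following sketch:

```sql
CREATE TABLE dba.sales_summary AS (
    SELECT l.country, l.state, l.city,
           p.prodline, p.prodgroup,
           YEAR(t.pdate)  AS sales_year,
           MONTH(t.pdate) AS sales_month,
           SUM(ti.amount) AS total_sales,
           COUNT(*)       AS sales_count
    FROM   transitem ti, trans t, loc l, pgroup p
    WHERE  ti.transid = t.id AND t.locid = l.id AND ti.pgid = p.id
    GROUP BY l.country, l.state, l.city, p.prodline, p.prodgroup,
             YEAR(t.pdate), MONTH(t.pdate)
)
DATA INITIALLY DEFERRED REFRESH DEFERRED;
```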
The cost of computing the answer using the MQT can be significantly less than
using a large base table, since a portion of the answer is already computed.
MQTs can reduce expensive joins, sorts, and aggregation of base data.
The larger the base tables become, the greater the potential improvement in
response time, because the MQT grows more slowly than the base tables. MQTs can
effectively eliminate overlapping work among queries: the computation is done
once when the MQTs are built and refreshed, and their content is reused for
many queries.
or typed view, respectively. For typed tables and typed views, the names and
data types of the attributes of the structured type become the names and data
types of the columns of this typed table or typed view. Rows of the typed table
or typed view can be thought of as a representation of instances of the
structured type. When used as a data type for a column, the column contains
values of that structured type (or values of any of that type's subtypes, as
defined below). Methods are used to retrieve or manipulate attributes of a
structured column object.
A structured type can be created using the CREATE TYPE statement. For
example, we can define a product and sku type, which can be used to create
typed tables as shown in Example 11-8. Figure 11-8 shows its hierarchy.
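A hypothetical sketch of such a hierarchy (the actual definitions are in Example 11-8; the names here are illustrative):

```sql
-- Root structured type and a subtype that adds an attribute
CREATE TYPE product_type AS (id INTEGER) MODE DB2SQL;
CREATE TYPE sku_type UNDER product_type AS (sku VARCHAR(20)) MODE DB2SQL;

-- A typed table based on the root type
CREATE TABLE product OF product_type (REF IS oid USER GENERATED);
```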
[Figure 11-8: type hierarchy — the root type has attribute ID; the subtype
SKU_type adds attribute SKU]
We can also use this type as a type for a column, as shown in the last
statement of Example 11-8 on page 394.
Distinct type
Reference type
A reference type is a companion type to a structured type. Similar to a distinct
type, a reference type is a scalar type that shares a common representation
with one of the built-in data types. This same representation is shared for all
types in the type hierarchy. The reference type representation is defined
when the root type of a type hierarchy is created. When using a reference
type, a structured type is specified as a parameter of the type. This parameter
is called the target type of the reference.
The target of a reference is always a row in a typed table or a typed view.
When a reference type is used, it may have a scope defined. The scope
identifies a table (called the target table) or view (called the target view) that
contains the target row of a reference value. The target table or view must
have the same type as the target type of the reference type. An instance of a
scoped reference type uniquely identifies a row in a typed table or typed view,
called the target row.
Array Type
A user-defined array type is a data type that is defined as an array with
elements of another data type. Every ordinary array type has an index with
the data type of INTEGER and has a defined maximum cardinality. Every
associative array has an index with the data type of INTEGER or VARCHAR
and does not have a defined maximum cardinality.
Row type
A row type is a data type that is defined as an ordered sequence of named
fields, each with an associated data type, which effectively represents a row.
A row type can be used as the data type for variables and parameters in SQL
PL to provide simple manipulation of a row of data.
Cursor data type
A user-defined cursor type is a user-defined data type defined with the
keyword CURSOR and optionally with an associated row type. A user-defined
cursor type with an associated row type is a strongly-typed cursor type;
otherwise, it is a weakly-typed cursor type. A value of a user-defined cursor
type represents a reference to an underlying cursor.
MySQL: AVG([DISTINCT] expression)
  mysql> SELECT a, AVG(b) FROM t1 GROUP BY a;
DB2: AVG([DISTINCT | ALL] expression)
  db2 "SELECT a, AVG(b) FROM t1 GROUP BY a"
Returns the average of a set of numbers.

MySQL: MAX([DISTINCT] expression)
  mysql> SELECT a, MAX(b) FROM t1 GROUP BY a;
DB2: MAX([DISTINCT | ALL] expression)
  db2 "SELECT a, MAX(b) FROM t1 GROUP BY a"
Returns the maximum value in a set of values.

MySQL: MIN([DISTINCT] expression)
  mysql> SELECT a, MIN(b) FROM t1 GROUP BY a;
DB2: MIN([DISTINCT | ALL] expression)
  db2 "SELECT a, MIN(b) FROM t1 GROUP BY a"
Returns the minimum value in a set of values.

MySQL: STDDEV(expression) / STDDEV_POP(expression)
  mysql> SELECT STDDEV(a), a FROM t1 GROUP BY a;
DB2: STDDEV([DISTINCT | ALL] expression)
  db2 "SELECT stddev(a), a FROM t1 GROUP BY a"
Returns the standard deviation (/n) of a set of numbers.

MySQL: SUM([DISTINCT] expression)
  mysql> SELECT a, SUM(b) FROM t1 GROUP BY a;
DB2: SUM([DISTINCT | ALL] expression)
  db2 "SELECT a, sum(b) FROM t1 GROUP BY a"
Returns the sum of a set of numbers.

MySQL: VAR_POP(expression) / VARIANCE(expression)
  mysql> SELECT VAR_POP(a) FROM t1 GROUP BY a;
DB2: VARIANCE([DISTINCT | ALL] expression)
  db2 "SELECT VARIANCE(a) FROM t1 GROUP BY a"
Returns the variance of a set of numbers.

MySQL: BIT_AND(expression) (an extension to the SQL standard)
  mysql> SELECT BIT_AND(a), a FROM t1 GROUP BY a;
DB2: no equivalent function; implement using a UDF. Refer to B.1, “Sample code
for BIT_AND” on page 412.
Returns the value of the bitwise logical AND operation.

MySQL: BIT_OR(expression) (an extension to the SQL standard)
  mysql> SELECT BIT_OR(a), a FROM t1 GROUP BY a;
DB2: no equivalent function; implement using a UDF. Refer to B.1, “Sample code
for BIT_AND” on page 412.
Returns the value of the bitwise logical OR operation.

MySQL: BIT_XOR(expression) (an extension to the SQL standard)
  mysql> SELECT BIT_XOR(a), a FROM t1 GROUP BY a;
DB2: no equivalent function; implement using a UDF. Refer to B.1, “Sample code
for BIT_AND” on page 412.
Returns the value of the bitwise logical XOR operation.
GROUP BY on an alias:
  MySQL: mysql> SELECT a as x FROM t1 GROUP BY x;
  DB2: use the column name for grouping:
       db2 "SELECT a FROM t1 GROUP BY a"

GROUP BY on a column position:
  MySQL: mysql> SELECT a FROM t1 GROUP BY 1;
  DB2: use the column name for grouping:
       db2 "SELECT a FROM t1 GROUP BY a"

HAVING on an alias:
  MySQL: mysql> SELECT a as x FROM t1 GROUP BY a HAVING x > 0;
  DB2: use the column name in the HAVING clause:
       db2 "SELECT a FROM t1 GROUP BY a HAVING a > 0"
MySQL: ASCII(string)
  mysql> SELECT ascii('a');   -> 97
DB2: ASCII(string)
  db2 "VALUES ascii('a')"     -> 97
Returns the ASCII code value.

MySQL: CHAR(int, ... [USING character_set])
  mysql> SELECT char(97);     -> a
DB2: CHR(integer)
  db2 "VALUES chr(97)"        -> a
Returns the character that has the ASCII code value specified by the argument.

MySQL: CONCAT_WS(separator, string, string, ...)
  mysql> SELECT CONCAT_WS('-', firstname, lastname, loginname) as FULLNAME
         from owners where id = 501;   -> Angela-Carlson-acarlson
DB2: use || to implement CONCAT with a separator:
  db2 "SELECT (firstName || '-' || lastName || '-' || loginName) as fullName
       from admin.owners where id = 501"   -> Angela-Carlson-acarlson
Returns the concatenation of the string arguments with a separator.

MySQL: CONCAT(string, string, ...)
  mysql> SELECT CONCAT(firstname, ' ', lastname) as FULLNAME
         from owners where id = 501;   -> Angela Carlson
DB2: use CONCAT(string, string) or || to implement CONCAT with a list of
arguments:
  db2 "SELECT (firstName || ' ' || lastName) as fullName
       from admin.owners where id = 501"   -> Angela Carlson
Returns the concatenation of the string arguments.
MySQL: INSTR(substring, string) / LOCATE(substring, string, [position]) /
       POSITION(substring, string)
  mysql> SELECT LOCATE('N', 'DINING');   -> 3
DB2: LOCATE(substring, string, [start], [CODEUNITS16 | CODEUNITS32 | OCTETS])
  db2 "SELECT LOCATE('N', 'DINING') FROM SYSIBM.SYSDUMMY1"   -> 3
Returns the starting position of the first occurrence of one string within
another string.

MySQL: LCASE(string) / LOWER(string)
  mysql> SELECT LCASE('JOB');   -> job
DB2: LCASE(string)
  db2 "SELECT LCASE('JOB') FROM SYSIBM.SYSDUMMY1"   -> job
Returns a string in which all characters have been converted to lowercase
characters.

MySQL: LOAD_FILE(dirString)
  mysql> update blobTBL SET data = LOAD_FILE('/tmp/AquaBlue.jpg') WHERE id = 6;
DB2: use the LOAD command with LOBS FROM <lob_directory>
Inserts the file into the database.

MySQL: LPAD(string, length, substring) / RPAD(string, length, substring)
  mysql> SELECT LPAD('TEST',6,'!!!');   -> !!TEST
DB2: no equivalent function; implement using a UDF. Refer to B.3, “Sample code
for RPAD and LPAD functions” on page 414.
Returns a string of the given length; if the length is longer than the string,
the substring characters are added to the left or right end.

MySQL: LTRIM(string)
  mysql> SELECT LTRIM('  Apple');   -> Apple
DB2: LTRIM(string)
  db2 "SELECT LTRIM('  Apple') FROM SYSIBM.SYSDUMMY1"   -> Apple
Removes blanks from the beginning of the string expression.

MySQL: QUOTE(string)
  mysql> SELECT quote(firstname) from owners where id = 501;   -> 'Angela'
DB2: SELECT with ||:
  db2 "select ('''' || firstname || '''') from admin.owners where id = 501"
                                                               -> 'Angela'
Returns the string enclosed in single quotes.
MySQL: REPLACE(string1, string2, string3)
  mysql> SELECT REPLACE ('DINING', 'N', 'VID');   -> DIVIDIVIDG
DB2: REPLACE(string1, string2, string3)
  db2 "VALUES REPLACE ('DINING', 'N', 'VID')"     -> DIVIDIVIDG
Returns a string with all occurrences of string2 in string1 replaced with
string3.

MySQL: RTRIM(string)
  mysql> SELECT RTRIM('PEAR   ');   -> PEAR
DB2: RTRIM(string)
  db2 "VALUES RTRIM('PEAR   ')"     -> PEAR
Removes blanks from the end of the string.

MySQL: SPACE(expression)
  mysql> SELECT space(30);
DB2: SPACE(expression)
  db2 "VALUES space(3)"
Returns a character string consisting of blanks, with the length specified by
the argument.

MySQL: STRCMP(string, string)
  mysql> SELECT STRCMP('test', 'testing');   -> -1
DB2: no equivalent function; implement using a CASE expression and the VALUES
statement.
Returns -1 if the first string is smaller, 0 if the strings are the same, and
1 if the second string is smaller.

MySQL: TRIM([[BOTH | LEADING | TRAILING] [substring] FROM] string)
  mysql> select trim(trailing from trim(LEADING FROM ' abc ')) as OUTPUT;
                                            -> abc
DB2: TRIM([[BOTH | LEADING | TRAILING] [substring] FROM] string)
  db2 "VALUES trim(trailing from trim(LEADING FROM ' abc '))"   -> abc
Removes blanks or occurrences of another specified character from the end or
the beginning of a string expression.

MySQL: UCASE(string) / UPPER(string)
  mysql> SELECT UPPER('jobs');   -> JOBS
DB2: UCASE(string) / UPPER(string)
  db2 "VALUES UPPER('jobs')"     -> JOBS
Returns a string in which all characters have been converted to uppercase
characters.
EXTRACT (unit FROM mysql> SELECT Use concatenate different date db2 "VALUES (YEAR('2009-08-31
expression) EXTRACT(YEAR_MON stripping functions (DAY, YEAR, 05:06:00') || MONTH('2009-08-31
TH from '2009-08-31 MONTH, DAYNAME, DAYOF 05:06:00'))"
05:06:00'); WEEK, and so on.
logical NOT as '!' in SELECT list
  DB2: VALUES CASE WHEN 1!=1 THEN 0 ELSE 1 END
  Implement using a CASE expression and a VALUES statement.

& (bitwise AND)
  Not available; implement using a UDF. Refer to B.1, “Sample code for BIT_AND” on page 412.

logical AND as '&&' in SELECT list
  Implement using a CASE expression and a VALUES statement.

Function = in SELECT list, for example select (1=1)
  Implement using a CASE expression and a VALUES statement.

<< and >> (bitwise shifts)
  No equivalent; implement using the POWER function:
  MySQL: SELECT (x>>y)    DB2: SELECT (x/power(2,y))
  MySQL: SELECT (x<<y)    DB2: SELECT (x*power(2,y))

BIT_COUNT
  No equivalent; implement using a UDF. Refer to B.6, “Sample code for BIT_COUNT” on page 429.

LEAST
  FnLeastN. See the UDF example in B.5, “Sample code for LEAST” on page 425.

LIKE in SELECT
  Implement using a CASE expression with LIKE and a VALUES statement.

LIKE ESCAPE in SELECT
  Implement using a CASE expression with LIKE and ESCAPE and a VALUES statement.

NOT BETWEEN in SELECT
  Implement using a CASE expression and a VALUES statement.

NOT LIKE in SELECT
  Implement using a CASE expression and a VALUES statement.
--
-- DB2 UDF(User-Defined Function) Samples for conversion
--
-- 2001/08/29
--
-- Name of UDF: BIT_AND (N1 Integer, N2 Integer)
--
-- Used UDF: None
--
-- Description: Returns bit by bit and of both parameters.
--
-- Author: TOKUNAGA, Takashi
--
--------------------------------------------------------------------------
CREATE FUNCTION BITAND (N1 Integer, N2 Integer)
RETURNS Integer
SPECIFIC BITANDMySQL
LANGUAGE SQL
CONTAINS SQL
NO EXTERNAL ACTION
DETERMINISTIC
RETURN
WITH
Repeat (S, M1, M2, Ans) AS
(Values (0, N1, N2, 0)
Union All
Select S+1, M1/2, M2/2, Ans+MOD(M1,2)*MOD(M2,2)*power(2,S)
From Repeat
Where M1 > 0
AND M2 > 0
AND S < 32
)
SELECT ANS
FROM Repeat
WHERE S = (SELECT MAX(S)
FROM Repeat)
;
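The recursive common table expression peels off one bit per step: MOD(M1,2)*MOD(M2,2) is the AND of the two low bits, weighted by power(2,S), after which both operands are halved. A small Python model of the same loop (an illustration, not from the book) reproduces the sample results:

```python
def bitand(n1, n2):
    # Same decomposition as the recursive WITH in the UDF:
    # AND the low bits via MOD(x, 2), weight by power(2, s),
    # then halve both operands and repeat (at most 32 steps).
    s, ans = 0, 0
    while n1 > 0 and n2 > 0 and s < 32:
        ans += (n1 % 2) * (n2 % 2) * 2 ** s
        n1 //= 2
        n2 //= 2
        s += 1
    return ans

assert bitand(10, 8) == 8
assert bitand(14, 3) == 2
assert bitand(1038, 78) == 14
```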
--------------------------------------------------------------------------
values bitand(10,8);
---------------------------------------------------
1
-----------
8
--------------------------------------------------------------------------
values bitand(14,3);
---------------------------------------------------
1
-----------
2
--------------------------------------------------------------------------
values bitand(1038,78);
---------------------------------------------------
1
-----------
14
Main_Loop:
WHILE XN > 0 DO
SET RetVal = SUBSTR(CHAR(MOD(XN,1000)),19,3) || RetVal;
SET XN = XN/1000;
IF XN > 0 THEN
SET RetVal = ',' || RetVal;
ELSE
LEAVE Main_Loop;
END IF;
END WHILE;
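This loop builds a comma-grouped number string: MOD(XN,1000) extracts the low three digits, the SUBSTR(CHAR(...)) call zero-pads the group, and a comma is prepended while higher digits remain. A Python sketch of the same idea (only this fragment of the UDF is shown in the draft, so the helper name and the trimming of the top group's zero padding are assumptions):

```python
def group_thousands(n):
    # Peel off three digits at a time with n % 1000, zero-pad each
    # group, and prepend a comma while higher digits remain.
    out = ""
    while n > 0:
        out = "%03d" % (n % 1000) + out
        n //= 1000
        if n > 0:
            out = "," + out
    # Assumed: the full UDF trims the zero padding on the top group.
    return out.lstrip("0") or "0"

assert group_thousands(120034) == "120,034"
assert group_thousands(1234567) == "1,234,567"
```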
N 2 3
-------------------------- ----------------------- -----------------------
12.34567 12.34 12.
-12.34567 -12.34 -12.
120034.56700 120,034.56 120,034.
23400123456789.00000 123,400,123,456,789.00 123,400,123,456,789.
4 record(s) selected.
--------------------------------------------------------------------------
CREATE FUNCTION RPAD (C1 VarChar(4000), N integer, C2 VarChar(4000))
RETURNS VARCHAR(4000)
LANGUAGE SQL
SPECIFIC RPADBase
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
substr(C1 ||
repeat(C2,((sign(N-length(C1))+1)/2)*(N-length(C1)+length(C2))/(length(C2)+1-sign(length(C2)))),1,N)
;
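The single RETURN expression packs the whole padding rule into arithmetic: SIGN(N-length(C1)) gates the repeat count to zero when C1 already has N characters, and the outer SUBSTR truncates the result to N. A Python model of the same formula (an illustration only; it covers the non-empty pad-string case) matches the sample outputs:

```python
def rpad(c1, n, c2):
    # Mirrors the UDF arithmetic: repeat c2 enough times (zero times
    # when c1 already has n characters), then truncate to n.
    def sign(x):
        return (x > 0) - (x < 0)
    lc1, lc2 = len(c1), len(c2)
    reps = ((sign(n - lc1) + 1) // 2) * (n - lc1 + lc2) // (lc2 + 1 - sign(lc2))
    return (c1 + c2 * reps)[:n]

assert rpad("ABCDE", 3, "*.") == "ABC"          # as in the sample output
assert rpad("927", 12, "*.") == "927*.*.*.*.*"  # as in the sample output
```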
--------------------------------------------------------------------------
SELECT char(rpad('ABCDE',3,'*.'),20) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------
ABC
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(rpad('ABCDE',20,'') || 'X',50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
ABCDE X
1 record(s) selected.
UDF RPAD with the third parameter omitted is shown in Example B-7.
Running the RPAD function gives you the results shown in Example B-8.
1
--------------------------------------------------
ABCDE X
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(rpad('ABCDE',3) || 'X',50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
ABCX
1 record(s) selected.
The RPAD function accepts different combinations of input arguments. Example B-9 shows two
more RPAD UDFs.
Example: B-10 Results of RPAD with an integer first parameter, using two and three parameters
1
--------------------------------------------------
927*.*.*.*.*
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(rpad(927,12,'') || 'X',50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
927 X
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(rpad(9021,3),20) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------
902
1 record(s) selected.
The counterparts to RPAD are the LPAD functions, which are shown in
Example B-11.
1
--------------------------------------------------
*.*.*.*.*.ABCDE
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(lpad('ABCDE',3,'*.'),50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
ABC
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(lpad('ABCDE',15,'') || 'X',50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
ABCDEX
1 record(s) selected.
Like RPAD, LPAD allows a different number and data type of input arguments.
Example B-13 shows LPAD without the third parameter.
The results of Example B-13 should look like those in Example B-14.
1
--------------------
ABCDE
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(lpad('ABCDE',3),20) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------
ABC
1 record(s) selected.
Two more LPAD UDFs with different characteristics are shown in Example B-15.
--------------------------------------------------------------------------
CREATE FUNCTION LPAD (I1 Integer, N Integer, C2 VarChar(4000))
RETURNS VARCHAR(4000)
LANGUAGE SQL
SPECIFIC LPADIntParm3
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
LPAD(rtrim(char(I1)),N,C2)
;
--------------------------------------------------------------------------
CREATE FUNCTION LPAD (I1 Integer, N integer)
RETURNS VARCHAR(4000)
LANGUAGE SQL
SPECIFIC LPADIntParm2
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
LPAD(rtrim(char(I1)),N,' ')
;
Example: B-16 Results of LPAD with an integer first parameter, using two and three parameters
1
--------------------------------------------------
*.*.*.*.*.*9021
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(lpad(9021,15,''),50) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------------------------------------
9021
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(lpad(9021,3),20) FROM SYSIBM.SYSDUMMY1;
---------------------------------------------------
1
--------------------
902
1 record(s) selected.
--
-- DB2 UDF(User-Defined Function) Samples for conversion
--
-- 2001/08/28, 08/29
--
-- Name of UDF: GREATEST (P1 VarChar(254), P2 VarChar(254), ...)
--
--
-- Used UDF: None
--
-- Description: Returns greatest value of list of data.
--
-- Author: TOKUNAGA, Takashi
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle2
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
CASE
WHEN P1 >= P2 THEN P1
ELSE P2
END
;
---------------------------------------------------
--
-- GREATEST function with three parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle3
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
CASE
WHEN P1 >= P2
THEN CASE
WHEN P1 >= P3 THEN P1
ELSE P3
END
ELSE CASE
WHEN P2 >= P3 THEN P2
ELSE P3
END
END
;
---------------------------------------------------
--
-- GREATEST function with four parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle4
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
CASE
WHEN P1 >= P2
THEN CASE
WHEN P1 >= P3
THEN CASE
WHEN P1 >= P4 THEN P1
ELSE P4
END
ELSE CASE
WHEN P3 >= P4 THEN P3
ELSE P4
END
END
ELSE CASE
WHEN P2 >= P3
THEN CASE
WHEN P2 >= P4 THEN P2
ELSE P4
END
ELSE CASE
WHEN P3 >= P4 THEN P3
ELSE P4
END
END
END
;
---------------------------------------------------
--
-- GREATEST function with five parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254), P5
VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle5
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
CASE
WHEN P1 >= P2
THEN CASE
WHEN P1 >= P3
THEN CASE
WHEN P1 >= P4
THEN CASE
WHEN P1 >= P5 THEN P1
ELSE P5
END
ELSE CASE
WHEN P4 >= P5 THEN P4
ELSE P5
END
END
ELSE CASE
WHEN P3 >= P4
THEN CASE
WHEN P3 >= P5 THEN P3
ELSE P5
END
ELSE CASE
WHEN P4 >= P5 THEN P4
ELSE P5
END
END
END
ELSE CASE
WHEN P2 >= P3
THEN CASE
WHEN P2 >= P4
THEN CASE
WHEN P2 >= P5 THEN P2
ELSE P5
END
ELSE CASE
WHEN P4 >= P5 THEN P4
ELSE P5
END
END
ELSE CASE
WHEN P3 >= P4
THEN CASE
WHEN P3 >= P5 THEN P3
ELSE P5
END
ELSE CASE
WHEN P4 >= P5 THEN P4
ELSE P5
END
END
END
END
;
---------------------------------------------------
--
-- GREATEST function with six parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle6
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
GREATEST(GREATEST(P1,P2,P3),GREATEST(P4,P5,P6))
;
---------------------------------------------------
--
-- GREATEST function with seven parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254), P7 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle7
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
GREATEST(GREATEST(P1,P2,P3,P4),GREATEST(P5,P6,P7))
;
---------------------------------------------------
--
-- GREATEST function with eight parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254), P7 VarChar(254), P8 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle8
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
GREATEST(GREATEST(P1,P2,P3,P4),GREATEST(P5,P6,P7,P8))
;
---------------------------------------------------
--
-- GREATEST function with nine parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254), P7 VarChar(254), P8 VarChar(254)
, P9 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle9
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
GREATEST(GREATEST(P1,P2,P3,P4,P5),GREATEST(P6,P7,P8,P9))
;
---------------------------------------------------
--
-- GREATEST function with ten parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION GREATEST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254), P7 VarChar(254), P8 VarChar(254)
, P9 VarChar(254),P10 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC GREATESTOracle10
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
GREATEST(GREATEST(P1,P2,P3,P4,P5),GREATEST(P6,P7,P8,P9,P10))
;
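Because a SQL UDF cannot take a variable-length argument list, the six- to ten-parameter versions delegate to the smaller ones. That composition is safe because the greatest of a list equals the greater of the greatest values of any partition of the list, as this Python sketch (an illustration only, using the sample strings from the queries that follow) confirms:

```python
# max over a list equals the max of the maxes of any partition,
# which is why GREATEST with 6-10 parameters can delegate to the
# 2-5 parameter versions.
vals = ["abcdefg", "defgh", "abcfgh", "endof...", "add on",
        "extra", "a bit of", "more", "more and ", " something"]
assert max(max(vals[:5]), max(vals[5:])) == max(vals)
assert max(max(vals[:3]), max(vals[3:])) == max(vals)
assert max(vals) == "more and "
```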
---------------------------------------------------
1
--------------------
abcfgh
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh'),20) FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
defgh
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...'),20) FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
endof...
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on'),20) FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
endof...
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on','extra'),20) FROM
sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
extra
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on','extra','a bit of'),20) FROM
sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
extra
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on','extra','a bit of','more'),20)
FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
more
1 record(s) selected.
--------------------------------------------------------------------------
1
--------------------
more and
1 record(s) selected.
--------------------------------------------------------------------------
SELECT char(greatest('abcdefg','defgh','abcfgh','endof...','add on','extra','a bit of','more','more
and ',' something'),20) FROM sysibm.sysdummy1;
---------------------------------------------------
1
--------------------
more and
1 record(s) selected.
--------------------------------------------------------------------------
CREATE FUNCTION LEAST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC LEASTOracle3
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
CASE
WHEN P1 <= P2
THEN CASE
WHEN P1 <= P3 THEN P1
ELSE P3
END
ELSE CASE
WHEN P2 <= P3 THEN P2
ELSE P3
END
END
;
---------------------------------------------------
--
-- LEAST function with four parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION LEAST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC LEASTOracle4
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
CASE
WHEN P1 <= P2
THEN CASE
WHEN P1 <= P3
THEN CASE
WHEN P1 <= P4 THEN P1
ELSE P4
END
ELSE CASE
WHEN P3 <= P4 THEN P3
ELSE P4
END
END
ELSE CASE
WHEN P2 <= P3
THEN CASE
WHEN P2 <= P4 THEN P2
ELSE P4
END
ELSE CASE
WHEN P3 <= P4 THEN P3
ELSE P4
END
END
END
;
---------------------------------------------------
--
-- LEAST function with five parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION LEAST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC LEASTOracle5
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
CASE
WHEN P1 <= P2
THEN CASE
WHEN P1 <= P3
THEN CASE
WHEN P1 <= P4
THEN CASE
WHEN P1 <= P5 THEN P1
ELSE P5
END
ELSE CASE
WHEN P4 <= P5 THEN P4
ELSE P5
END
END
ELSE CASE
WHEN P3 <= P4
THEN CASE
WHEN P3 <= P5 THEN P3
ELSE P5
END
ELSE CASE
WHEN P4 <= P5 THEN P4
ELSE P5
END
END
END
ELSE CASE
WHEN P2 <= P3
THEN CASE
WHEN P2 <= P4
THEN CASE
WHEN P2 <= P5 THEN P2
ELSE P5
END
ELSE CASE
WHEN P4 <= P5 THEN P4
ELSE P5
END
END
ELSE CASE
WHEN P3 <= P4
THEN CASE
WHEN P3 <= P5 THEN P3
ELSE P5
END
ELSE CASE
WHEN P4 <= P5 THEN P4
ELSE P5
END
END
END
END
;
---------------------------------------------------
--
-- LEAST function with six parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION LEAST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC LEASTOracle6
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
LEAST(LEAST(P1,P2,P3),LEAST(P4,P5,P6))
;
---------------------------------------------------
--
-- LEAST function with seven parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION LEAST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254), P7 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC LEASTOracle7
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
LEAST(LEAST(P1,P2,P3,P4),LEAST(P5,P6,P7))
;
---------------------------------------------------
--
-- LEAST function with eight parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION LEAST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254), P7 VarChar(254), P8 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC LEASTOracle8
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
LEAST(LEAST(P1,P2,P3,P4),LEAST(P5,P6,P7,P8))
;
---------------------------------------------------
--
-- LEAST function with nine parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION LEAST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254), P7 VarChar(254), P8 VarChar(254)
, P9 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC LEASTOracle9
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
LEAST(LEAST(P1,P2,P3,P4,P5),LEAST(P6,P7,P8,P9))
;
---------------------------------------------------
--
-- LEAST function with ten parameters
--
--------------------------------------------------------------------------
CREATE FUNCTION LEAST (P1 VarChar(254), P2 VarChar(254), P3 VarChar(254), P4 VarChar(254)
, P5 VarChar(254), P6 VarChar(254), P7 VarChar(254), P8 VarChar(254)
, P9 VarChar(254),P10 VarChar(254))
RETURNS VarChar(254)
LANGUAGE SQL
SPECIFIC LEASTOracle10
DETERMINISTIC
CONTAINS SQL
NO EXTERNAL ACTION
RETURN
LEAST(LEAST(P1,P2,P3,P4,P5),LEAST(P6,P7,P8,P9,P10))
;
set temp=substr(temp,locate(delimit,temp)+1);
set num=num-1;
end while;
if(n>0) then
return substr(In,1,pos);
else
return substr(In,pos+1);
end if;
end
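Only the tail of this UDF survives in the draft, but the LOCATE loop over a delimiter and the final SUBSTR calls follow the shape of MySQL's SUBSTRING_INDEX: return everything before the count-th occurrence of the delimiter for a positive count, or everything after it (counting from the right) for a negative count. A Python model of that behavior (an assumption about the missing part of the function, not the book's code):

```python
def substring_index(s, delim, count):
    # MySQL semantics: text before the count-th delimiter occurrence
    # for count > 0; text after the count-th from the right for count < 0.
    parts = s.split(delim)
    if count > 0:
        return delim.join(parts[:count])
    return delim.join(parts[count:])

assert substring_index("www.mysql.com", ".", 2) == "www.mysql"
assert substring_index("www.mysql.com", ".", -2) == "mysql.com"
```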
Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “How to get Redbooks” on
page 437. Note that some of the documents referenced here may be available in
softcopy only.
Developing PHP Applications for IBM Data Servers, SG24-7218
MySQL to DB2 UDB Conversion Guide, SG24-7093
Up and Running with DB2 on Linux, SG24-6899
Oracle to DB2 Conversion Guide for Linux, UNIX, and Windows, SG24-7048
Other publications
These publications are also relevant as further information sources:
IEEE Standard for Software Test Documentation (829-1998)
ISBN 0-7381-1444-8
Understanding DB2, Learning Visually with Examples, Second Edition,
ISBN-13:978-0-13-158018-3
Installing IBM Data Server Clients, GC27-2454-00
Installing DB2 Servers, GC27-2455-00
Getting Started with DB2 Installation and Administration on Linux and
Windows, GI11-9411-00
Database Administration Concepts and Configuration Reference,
SC27-2442-00
Database Monitoring Guide and Reference, SC27-2458-00
Database Security Guide, SC27-2443-00
Partitioning and Clustering Guide, SC27-2453-00
Online resources
These Web sites are also relevant as further information sources:
DB2
Database Management
http://www.ibm.com/software/data/management/
DB2
http://www.ibm.com/software/data/db2/
DB2 Express-C
http://www.ibm.com/software/data/db2/express/
http://www.ibm.com/developerworks/forums/forum.jspa?forumID=805
IBM DB2 Database for Linux, UNIX, and Windows Information Center
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/index.jsp
DB2 Application Development
http://www.ibm.com/software/data/db2/ad/
DB2 Bootcamp Training
http://www.ibm.com/developerworks/data/bootcamps
DB2 Linux Validation
http://www.ibm.com/software/data/db2/linux/validate/
DB2 9.7 Manuals
http://www1.ibm.com/support/docview.wss?rs=71&uid=swg27015148
DB2 9.7 Features and benefits
http://www-01.ibm.com/software/data/db2/9/features.html
DB2 Migration Now
http://www.ibm.com/db2/migration
IBM Data Movement Tool
http://www.ibm.com/developerworks/data/library/techarticle/dm-0906datamovem
ent/
DB2 Technical Support
http://www.ibm.com/software/data/db2/support/db2_9/
Integrated Data Management
http://www.ibm.com/software/data/optim/
IBM Developerworks
http://www.ibm.com/developerworks/
IBM PartnerWorld
http://www.ibm.com/partnerworld
Software Migration Project Office
http://www.ibm.com/software/solutions/softwaremigration/
Leveraging MySQL skills to learn DB2 Express: DB2 versus MySQL
administration and basic tasks
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0602tham2/
Leverage MySQL skills to learn DB2 Express: DB2 versus MySQL backup
and recovery, viewed June 28, 2008,
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0606tham/
Leverage MySQL skills to learn DB2 Express, Part 3: DB2 versus MySQL
graphical user interface, viewed June 28, 2008,
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0608tham/
Leverage MySQL skills to learn DB2 Express, Part 4: DB2 versus MySQL
data movement, viewed June 28, 2008,
http://www-128.ibm.com/developerworks/db2/library/techarticle/dm-0610tham/
Convert from MySQL or PostgreSQL to DB2 Express-C, viewed June 28, 2008,
http://www.ibm.com/developerworks/db2/library/techarticle/dm-0606khatri/
DB2 Basics: Fun with Dates and Times, viewed August 25th
http://www.ibm.com/developerworks/data/library/techarticle/0211yip/0211yip3
.html
MySQL
MySQL home page
http://www.mysql.com/
MySQL 5.1 Reference Manual
http://dev.mysql.com/doc/refman/5.1/en/index.html
PHP MyAdmin
http://www.phpmyadmin.net/home_page/index.php
Others
VMware
http://www.vmware.com/
SUSE Linux Enterprise
http://www.novell.com/linux/
PHP
http://www.php.net/
PHP PECL extension
http://pecl.php.net/
PHP Manual - Database extensions
http://ca2.php.net/manual/en/refs.database.php
APACHE
http://www.apache.org/
Perl
http://www.perl.org/
Comprehensive Perl Archive Network
http://www.cpan.org
Ruby
http://www.ruby-lang.org/en/
IBM and Ruby
http://rubyforge.org/projects/rubyibm
MySQL and Ruby
http://rubyforge.org/projects/mysql-ruby/
http://tmtm.org/en/ruby/mysql/README_en.html
http://www.tmtm.org/en/mysql/ruby/
Java.sql Package Documentation
http://java.sun.com/j2se/1.4.2/docs/api/java/sql/package-summary.html
UnixODBC
http://www.unixodbc.org/
Index

Symbols
.FRM file 124
.MRG file 124
.MYD file 124
.MYI file 124

Numerics
2-tier 25
32-bit 4
64-bit 4

A
access control 177
access package 29
access path 17
access plan 28, 353
access right 167
accessctrl authority 183, 187
accesslist table 269
account detail 79
ActiveX Data Object 32
address space 9
administrative interface 37
administrative views 347
ADO 32
ADO.NET 32
aggregate function 223
all privileges privilege 188
alter privilege 188
alterin privilege 188
AMD workstation 4
ANSI mode 212
ANSI92 standard 219
applaccess table 269
applet 30
application architecture 56, 62
application assessment 62
application client 26
application data 68
application developer 2
application development 92
application environment 56
application flow 76
application function 56
application interface 37, 56
application porting 57
application profile 56
application server 26
application user account 177
archive storage engine 46
ASNCLP command 298
assignment 215
async i/o 14
audit statement 183
authentication 180
autocommit 275
automated backup 5
automatic storage 127
autonomic computing daemon 10
autonomic management feature 5

B
background process 10
backup compression 5
bash shell 146
batch application 25
big integer 118
binary data 175
binary large object 175
bind command 28
bind commands 277
bindadd privilege 187
binding 27
blackhole storage engine 47
BLOB 175
block device 14
buffer pool 14, 164
buffers 39
built-in data 116
built-in function 160
business intelligence 6, 57

C
C client library 38
C/C++ 62

X
XML support 71
XSR object privilege 184
SG24-7093-01 ISBN